Patreon CEO Jack Conte calls AI copyright fair use claims 'bogus' at SXSW 2026

    Patreon CEO Jack Conte addressed thousands of attendees at SXSW 2026 in Austin and made a direct, pointed argument: AI companies' reliance on fair use as a legal defense for training on copyrighted creative work is bogus. His word, not a paraphrase. Conte, who was a musician and content creator before he became a tech executive, was specific about why he finds the fair use argument unconvincing, and the core of his case came down to one observable fact about how AI companies actually behave.

    If training on copyrighted content were genuinely fair use, AI companies would not be negotiating and paying for licensing deals with major rights holders. They are. OpenAI signed a deal with the Associated Press. Google has licensing arrangements with publishers through its AI training programs. Multiple AI music companies have reached agreements with Universal Music Group and Warner Music Group. The selective application of the fair use argument, claiming it for independent creators whose work was scraped while simultaneously paying large institutions for their content, is the contradiction Conte put on the table in front of an SXSW audience.


    The specific argument Conte made about payment and precedent

    Conte's argument works on a straightforward logical level. If fair use applied to AI training data, then paying Disney or Warner Music would be unnecessary. Those companies have legal teams that would not demand payment for something they had no right to charge for, and the AI companies have lawyers who would not authorize paying for something fair use already let them use for free. The fact that these deals exist is evidence that the companies entering into them, including the AI companies paying for the licenses, do not actually believe that training on copyrighted work is covered by fair use in all cases.

    What makes the situation inequitable for independent creators is that large rights holders have the leverage to negotiate. An illustrator with 50,000 followers on social media does not have a legal team, does not have a licensing department, and does not have the negotiating position to compel a deal with an AI company that scraped their portfolio to train an image generation model. The same content, used in the same way, generates no compensation for that illustrator while a major studio receives a licensing check.

    Why this argument lands differently coming from a tech CEO

    Conte's position is unusual because he runs a technology company with a direct financial interest in the health of the independent creator economy. Patreon's business model depends entirely on creators having work that audiences will pay to support. If AI tools reduce the economic viability of independent creative work by flooding the market with generated content that undercuts demand for human-made work, Patreon's subscription revenue base shrinks. Conte is not simply a creator advocate speaking emotionally. He has a balance sheet reason to take this position seriously.

    Patreon reported in early 2025 that creators on its platform collectively earned over $3.5 billion in 2024, with the fastest growth coming from writers, illustrators, and musicians, exactly the categories of creator whose work has been most heavily used in AI training datasets. The connection between AI training data practices and the long-term earning capacity of Patreon's creator base is not hypothetical for Conte. It is a business question he has to think about.

    The legal status of AI training and fair use

    The fair use question in AI training is genuinely unsettled in US courts. The New York Times filed a copyright lawsuit against OpenAI and Microsoft in December 2023 alleging that training on Times articles constituted infringement. OpenAI's primary defense is that training is transformative, which is one of the four factors courts evaluate in fair use analysis. The case has not reached a verdict.

    The strongest precedent in AI companies' favor is the Google Books case, in which the Second Circuit Court of Appeals ruled in 2015 that Google's scanning of copyrighted books to create a searchable index was transformative and therefore fair use. AI training advocates have pointed to that ruling consistently. Critics, Conte implicitly among them, argue that generating commercial products from training data is different in kind from creating a searchable index, and that the commercial output makes transformation a harder argument to sustain.

    Conte's view on creators adapting

    Despite his sharp critique of AI companies' legal positioning, Conte told the SXSW audience that he is genuinely optimistic about creators' ability to adapt and build sustainable careers through the current period of disruption. This is consistent with Patreon's public messaging, which has generally framed direct creator-to-audience relationships as the durable model that survives platform changes, algorithmic shifts, and now AI-driven content generation.

    His reasoning is that audiences pay for a relationship with specific creators, not just for content in the abstract. A podcast subscriber paying $8 per month to a particular host is not primarily paying for audio content they could get elsewhere. They are paying for access to that specific person's perspective, community, and output. AI can generate content that resembles any given style, but it cannot replicate the actual relationship between a specific creator and their specific audience, which is what subscription platforms sell.

    How the licensing deals with major studios actually work

    The licensing arrangements between AI companies and large rights holders are not uniform. Some cover training data use, allowing the AI company to use the rights holder's catalog to train models in exchange for a fee. Some cover output licensing, meaning the AI company gets rights to generate content in a particular style or using particular assets for commercial deployment. And some are structured as research partnerships with revenue sharing components tied to how the AI tools are eventually commercialized.

    Universal Music Group's deal with a major AI music company, announced in 2024, was reported to include both training data rights and a share of revenue from AI-generated music tools. Warner Music Group has entered similar arrangements. The financial terms have not been made public in detail, but the structure acknowledges that the rights holders' content has commercial value in AI training contexts, which is precisely the acknowledgment that independent creators are not receiving.

    What policy responses might actually help independent creators

    Conte did not lay out a specific legislative agenda at SXSW, but the broader policy conversation around AI and creator compensation has produced a few concrete proposals. The EU's AI Act, which took effect in phases through 2025, requires AI companies to disclose what copyrighted material was used in training their models and to honor opt-out requests from rights holders. That transparency requirement is a prerequisite for any compensation framework, since creators cannot negotiate for something they do not know was used.

    In the United States, the Copyright Office issued a report in early 2025 recommending that Congress consider a licensing framework specifically for AI training data, modeled loosely on the compulsory licensing system that governs music covers. Under compulsory licensing, anyone can record a cover of a song without seeking permission from the songwriter, but they must pay a statutory royalty rate. A similar system for AI training would allow AI companies to train on copyrighted work while requiring them to pay into a fund distributed to rights holders, including independent creators. No such legislation has been passed as of March 2026.
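    The mechanics of such a fund are simple to sketch. The following is a purely illustrative example, not drawn from any actual proposal or statutory rate: it assumes a hypothetical flat per-work fee and splits the resulting pool pro rata by how many of each rights holder's works were used in training.

    ```python
    # Illustrative sketch only: how a statutory-rate training-data fund
    # might be split pro rata among rights holders. Every name, rate,
    # and count below is hypothetical, not from any real proposal.

    STATUTORY_RATE = 0.002  # hypothetical fee per copyrighted work used

    # Hypothetical tally of works each rights holder contributed
    works_used = {
        "major_label": 800_000,
        "independent_illustrator": 1_200,
        "independent_writer": 3_400,
    }

    total_works = sum(works_used.values())
    fund = total_works * STATUTORY_RATE  # what the AI company pays in

    # Pro rata split: each holder's payout tracks their share of usage,
    # so independent creators are paid by the same formula as majors.
    payouts = {
        holder: round(fund * count / total_works, 2)
        for holder, count in works_used.items()
    }
    ```

    The point of the compulsory structure is visible in the arithmetic: the independent illustrator is paid under the same rate as the major label, without needing a legal team to negotiate the deal.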



    Frequently Asked Questions

    Q: Why does Conte say AI companies' fair use argument is contradicted by their own behavior?

    If training on copyrighted content were truly fair use, AI companies would have no legal obligation to pay rights holders for licensing. The fact that companies like OpenAI and Google have signed paid licensing deals with publishers and music labels suggests they do not fully believe the fair use defense holds, making it inconsistent to invoke that same defense when independent creators ask for compensation.

    Q: How much did Patreon creators collectively earn in 2024?

    Patreon reported in early 2025 that creators on its platform collectively earned over $3.5 billion in 2024, with the fastest growth coming from writers, illustrators, and musicians, the exact categories most heavily represented in AI training datasets.

    Q: What did the US Copyright Office recommend about AI training data licensing?

    The US Copyright Office issued a report in early 2025 recommending that Congress consider a compulsory licensing framework for AI training data, modeled on the statutory royalty system that governs music cover recordings. As of March 2026, no such legislation has been passed.

    Q: What does the EU AI Act require from AI companies regarding training data?

    The EU AI Act, which took effect in phases through 2025, requires AI companies to disclose what copyrighted material was used in training their models and to honor opt-out requests from rights holders. This transparency requirement is a foundation for any compensation system, since creators need to know their work was used before they can seek payment.

    Q: Why does Conte think creators can still build sustainable careers despite AI?

    Conte's argument is that audiences pay for relationships with specific creators, not just for content in the abstract. A paying subscriber is investing in access to a particular person's work and community, something AI cannot replicate regardless of how well it mimics a creative style. Patreon's subscription model is built on exactly that kind of direct creator-to-audience connection.
