Yesterday, popular authors including John Grisham, Jonathan Franzen, George R.R. Martin, Jodi Picoult, and George Saunders joined the Authors Guild in suing OpenAI, alleging that training the company's large language models (LLMs) used to power AI tools like ChatGPT on pirated versions of their books violates copyright laws and is "systematic theft on a mass scale."

"Generative AI is a vast new field for Silicon Valley's longstanding exploitation of content providers," Franzen said in a statement provided to Ars. "Authors should have the right to decide when their works are used to 'train' AI. If they choose to opt in, they should be appropriately compensated."

OpenAI has previously argued, in response to two lawsuits filed earlier this year by authors making similar claims, that the authors suing "misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

This latest complaint argued that OpenAI's "LLMs endanger fiction writers' ability to make a living, in that the LLMs allow anyone to generate, automatically and freely (or very cheaply), texts that they would otherwise pay writers to create."

Authors are also concerned that the LLMs fuel AI tools that "can spit out derivative works: material that is based on, mimics, summarizes, or paraphrases" their works, allegedly turning those works into "engines of" the authors' "own destruction" by harming the market for their books. Even worse, the complaint alleged, businesses are being built around opportunities to create allegedly derivative works:

Businesses are sprouting up to sell prompts that allow users to enter the world of an author's books and create derivative stories within that world. For example, a business called Socialdraft offers long prompts that lead ChatGPT to engage in 'conversations' with popular fiction authors like Plaintiff Grisham, Plaintiff Martin, Margaret Atwood, Dan Brown, and others about their works, as well as prompts that promise to help customers 'Craft Bestselling Books with AI.'

The authors claimed that OpenAI could have trained its LLMs exclusively on works in the public domain or paid authors "a reasonable licensing fee" but chose not to. They feel that without their copyrighted works, OpenAI "would have no commercial product with which to damage, if not usurp, the market for these professional authors' works."

"There is nothing fair about this," the authors' complaint said.

Their complaint noted that OpenAI chief executive Sam Altman claims to share their concerns, having told Congress that "creators deserve control over how their creations are used" and deserve to "benefit from this technology." But so far, the complaint added, Altman and OpenAI, which claimants allege "intend to earn billions of dollars" from their LLMs, have "proved unwilling to turn these words into actions."

Saunders said that the lawsuit, a proposed class action estimated to include tens of thousands of authors (some of multiple works) for which OpenAI could owe up to $150,000 per infringed work, was an "effort to nudge the tech world to make good on its frequent declarations that it is on the side of creativity." He also said that the stakes went beyond protecting authors' works.

"Writers should be fairly compensated for their work," Saunders said. "Fair compensation means that a person's work is valued, plain and simple. This, in turn, tells the culture what to think of that work and the people who do it. And the work of the writer (the human imagination, struggling with reality, trying to discern virtue and responsibility within it) is essential to a functioning democracy."

The authors' complaint said that as more writers have reported being replaced by AI content-writing tools, more authors feel entitled to compensation from OpenAI. The Authors Guild told the court that 90 percent of authors responding to an internal survey from March 2023 "believe that writers should be compensated for the use of their work in 'training' AI." On top of this, there are other threats, their complaint said, including that "ChatGPT is being used to generate low-quality ebooks, impersonating authors, and displacing human-authored books."

The authors claimed that despite Altman's public support for creators, OpenAI is intentionally harming them, noting that the company has admitted to training LLMs on copyrighted works and alleging that there's evidence OpenAI's LLMs "ingested" their books "in their entireties."

"Until very recently, ChatGPT could be prompted to return quotations of text from copyrighted books with a good degree of accuracy," the complaint said. "Now, however, ChatGPT generally responds to such prompts with the statement, 'I can't provide verbatim excerpts from copyrighted texts.'"

To the authors, this suggests that OpenAI is exercising more caution in the face of their growing complaints, perhaps because authors have alleged that the LLMs were trained on pirated copies of their books. They've accused OpenAI of being "opaque" and refusing to discuss the sources of its LLMs' data sets.

Authors have demanded a jury trial and asked a US district court in New York for a permanent injunction to prevent OpenAI's alleged copyright infringement, claiming that if OpenAI's LLMs continue to illegally leverage their works, they will lose licensing opportunities and risk being usurped in the book market.

Ars could not immediately reach OpenAI for comment. [Update: OpenAI's spokesperson told Ars that "creative professionals around the world use ChatGPT as a part of their creative process. We respect the rights of writers and authors, and believe they should benefit from AI technology. We're having productive conversations with many creators around the world, including the Authors Guild, and have been working cooperatively to understand and discuss their concerns about AI. We're optimistic we will continue to find mutually beneficial ways to work together to help people utilize new technology in a rich content ecosystem."]

Rachel Geman, a partner with Lieff Cabraser and co-counsel for the authors, said that OpenAI's "decision to copy authors' works, done without offering any choices or providing any compensation, threatens the role and livelihood of writers as a whole." She told Ars that "this is in no way a case against technology. This is a case against a corporation to vindicate the important rights of writers."

  • ryathal@sh.itjust.works

I think that fear is overblown; AI models are only as good as their training material. It still requires humans to create new content to keep models growing. Training AI on AI-generated content doesn't work out well.

    Models aren't good enough yet to actually fully create quality content. It's also not clear that the ability for them to do so is imminent; maybe one day it will be. Right now these tools are really only good for assisting a creator in making drafts, or identifying weak parts of the story.

    • damndotcommie@lemmy.basedcount.com

      Models aren't good enough yet to actually fully create quality content.

      Which is why I really hate the fact that the programmers and the media have dubbed this "intelligence." Bigger programs and more data don't just automatically make something intelligent.

      • Zormat@lemmy.blahaj.zone

        This is such a weird take, IMO. We've been calling agent behavior in video games "AI" since forever, but suddenly everyone has an issue when it's applied to LLMs.