OpenAI Levels Accusations Of AI Model Hacking Against New York Times In Copyright Lawsuit

OpenAI’s response to The New York Times’ copyright lawsuit has ignited a legal skirmish, with allegations of AI hacking and deceptive practices swirling in the courtroom. In a bid to thwart aspects of the lawsuit, OpenAI has petitioned a federal judge to dismiss certain claims, contending that the newspaper engaged in nefarious tactics by allegedly commissioning someone to manipulate AI models, including ChatGPT, to fabricate evidence.

The crux of OpenAI’s argument, outlined in a filing submitted to a Manhattan federal court, revolves around accusations that The NYT induced the AI technology to replicate its content through what it deems “deceptive prompts,” a violation of OpenAI’s terms of service. Notably, the company did not name the individual purportedly enlisted by The NYT, and it stopped short of accusing the newspaper of violating anti-hacking statutes.

According to the filing submitted by OpenAI:

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in this case, is that the Times paid someone to hack OpenAI’s products.” 

Countering OpenAI’s claims, Ian Crosby, lead counsel for The New York Times, rejected the hacking characterization, saying the newspaper had simply used OpenAI’s own products to search for evidence that they had copied and reproduced its copyrighted material.

The genesis of this legal clash dates back to December 2023, when The New York Times initiated legal action against OpenAI and its primary backer, Microsoft, asserting unauthorized use of millions of NYT articles to train chatbots. Drawing on constitutional provisions and copyright law, the lawsuit champions the integrity of NYT’s original journalism, while also singling out Microsoft’s Bing AI for allegedly generating verbatim excerpts from its content.

OpenAI Lawsuit Mirrors Trend Of Copyright Holders Suing Tech Companies

This lawsuit mirrors a broader trend where copyright holders, spanning authors, visual artists, and music publishers, pursue legal recourse against tech entities for purportedly exploiting their content in AI training endeavors.

OpenAI contends that training sophisticated AI models without copyrighted materials is impossible, citing the expansive reach of copyright law across virtually every form of human expression. This stance was articulated in a submission to the United Kingdom House of Lords, in which the company underscored the necessity of incorporating copyrighted materials in AI training.

Tech firms, echoing OpenAI’s sentiment, assert that their AI systems make fair use of copyrighted material, cautioning that lawsuits of this nature imperil the trajectory of a burgeoning multitrillion-dollar industry.

Amidst this legal quagmire, courts grapple with the contentious question of whether AI training qualifies as fair use under prevailing copyright statutes. Some infringement allegations tied to outputs generated by generative AI systems have already been dismissed because plaintiffs failed to show that those outputs sufficiently resembled their copyrighted works.