Technology in Court: AI Lawsuits to Look Out For
September 21, 2023
Madrid, Spain
Artificial Intelligence (AI) is on the cusp of a transformative era, poised to revolutionize various facets of our daily lives. From healthcare and education to finance and retail, AI’s potential to drive innovation and efficiency is unparalleled. However, as with any groundbreaking technology, the rapid evolution and integration of AI into mainstream applications come with their own set of challenges.
The novelty of AI, while being its strength, is also its Achilles’ heel. As industries rush to adopt AI-driven solutions, they are often met with unforeseen obstacles. Among these, security vulnerabilities stand out, with concerns about data breaches and unauthorized access gaining prominence. Ethical dilemmas, too, are at the forefront, as stakeholders grapple with questions about AI’s decision-making processes, potential biases, and the implications of its actions.
Yet, one of the most pressing and contentious issues emerging in the AI landscape is related to copyright infringement. The training of AI systems often requires vast amounts of data, and there’s a growing debate about the legitimacy of using copyrighted materials for this purpose. Over the past few years, this has culminated in a series of lawsuits, with companies and individuals alleging unauthorized use of copyrighted content to train AI models. These legal challenges not only highlight the complexities of intellectual property rights in the digital age but also underscore the urgent need for clearer regulations and guidelines. As AI continues to evolve, striking a balance between innovation and legal compliance will be paramount.
In conclusion, while AI holds the promise of a brighter, more efficient future, it is imperative that its development is guided by ethical considerations and respect for existing legal frameworks. Only then can we harness its full potential while safeguarding the rights and interests of all stakeholders.
Next, we will analyze the most relevant legal cases against AI systems filed for copyright infringement.
Authors Guild v OpenAI
Alphabet, OpenAI, Meta and Microsoft are being pursued for unlawful and unfair business practices by the Authors Guild, an advocacy association for the copyright protection of original works. The Authors Guild, representing authors such as John Grisham, George R.R. Martin and Margaret Atwood, among others, is pursuing a class-action lawsuit (as of June 2023) over the use of their copyrighted works without permission to train AI models. The plaintiffs are concerned that OpenAI’s GPT-3 could generate text that resembles copyrighted works, potentially infringing upon the authors’ intellectual property rights. The lawsuit involves complex legal questions related to fair use, transformative use (through generative AI) and the boundaries of copyright law. It is alleged that more than 30,000 of the plaintiffs’ books were used to train OpenAI’s GPT-3.
Young v Neocortext
Kyland Young, a reality TV actor, has brought a proposed class-action lawsuit against Neocortext (the creator of Reface), over its generative AI-powered app that lets users swap faces with well-known individuals and fictional characters. The plaintiff asserts that the app’s deepfake functionality commercially exploits celebrities’ and characters’ names, voices and images without their permission. Filed in May 2023, the suit claims that the outputs generated by Reface infringe the rights of publicity of the celebrities and other characters used by the generative AI app. Because the app allows a user to swap faces and generate “deepfake imagery”, it can be misused to spread disinformation attributed to the depicted individual. Deepfake technology poses a broader problem, as it can be used to sow international misunderstanding, especially when deployed to generate content involving politicians.
Andersen v Stability AI
In January 2023, artists Sarah Andersen, Kelly McKernan and Karla Ortiz filed a lawsuit on behalf of a class of artists against Stability AI (Ltd. & Inc.), Midjourney and DeviantArt for copyright infringement. It is alleged that the defendants’ generative AI systems have produced “derivative works” of authentic art, imitating or resembling the plaintiffs’ work. It is argued that the AI systems were trained using unlawfully obtained copyrighted pieces of art, infringing the artists’ rights of publicity and amounting to unfair competition. Since January, several more artists have joined the lawsuit, as it has been found that the AI image-generation tools are also capable of “copying the style” of an artist. As a case involving semi-publicly available content used to train generative AI technology, it is the first of its kind among the lawsuits filed against AI.
ACLU v Clearview AI
Clearview AI was sued by the American Civil Liberties Union for violating citizens’ rights under the Illinois Biometric Information Privacy Act (BIPA). In May 2020, the ACLU alleged that Clearview had collected individuals’ images and biometric data without their knowledge or consent. The central issue was that Clearview failed to inform affected individuals that their data was being collected. The plaintiff raised concerns about individual privacy and security, including the potential for Clearview AI to expose sensitive information if its database of faceprints were to fall into the wrong hands. The danger of collecting biometric information is that AI tools, or even perpetrators, can misuse facial recognition for unknown purposes. Upon assessing the risks in this case, the Illinois state court ruled that Clearview must comply with BIPA and banned it from making its faceprint database available to private entities. This is one of the most groundbreaking cases at the intersection of privacy law and AI.
TMT Area of ECIJA
Tel.: +34 91.781.61.60