Following a copyright lawsuit against an AI code generator and industry questions about who actually owns images made by AI text-to-image generators, we look at the legal issues (and others) surrounding generative AI.
The recent lawsuit, and the questions being asked by coders, artists, musicians, and other creatives, show that a key challenge is the current lack of clarity around ownership of the output of AI content-generating tools. There are many issues at the heart of the whole generative AI area, including:
– AI tools that generate images, code, text, and music are relatively new, and what they produce (and how they produce it) hasn’t yet been subject to much legal scrutiny.
– AI content-generating tools are built using algorithms that have been trained on previous work produced by humans and, once again, need more scrutiny.
– As noted by visual artists, the legality and ethics of AI that incorporates existing work need to be examined. Also, AI art tools that have been trained on work by specific artists can copy their style in the images they produce, which could have a negative impact on those artists’ income.
– It is not clear exactly who owns an image or other piece of content that generative AI tools produce. For example, is it the owner of the AI that trains the model, or the human who prompts the AI with words?
The Lawsuit – Who Owns AI Generated Code?
The recent class-action lawsuit filed in California was focused on an AI tool called GitHub Copilot which automatically writes working code as the programmer types. The coder who filed the lawsuit argued that the code-writing tool may be infringing copyright because it doesn’t provide any attribution for the open-source code it reproduces. Some open-source code, for example, is covered by a license that requires attribution.
It should be noted that GitHub’s CEO has since said that Copilot has a feature that can be enabled to prevent copying from existing code.
DALL-E Prompts Questions About Copyright And Ownership Of AI Generated Images
Another recent example of generative AI that has prompted industry questions relating to copyright and ownership is OpenAI’s DALL·E tool. DALL·E 2 is an AI system that can create realistic images and art from a description in natural language using a process called “diffusion” (see: https://openai.com/dall-e-2/). Although subscribers are given full usage rights to reprint, sell, and merchandise the images they create with the tool, creative professionals have been asking questions about generative AI ownership issues like the ones mentioned above.
Other Examples Of Generative AI Tools
GitHub Copilot and DALL·E are by no means the only AI generative tools available. Others (and there are many more) include:
– Images (text-to-image) – Starryai, Craiyon, and NightCafe.
– Video (text-to-video) – Synthesia, Lumen5, and Elai.
– Design – Khroma, Designs.ai, and Uizard.
– Audio (text-to-speech voice generators) – Replica, Speechify, and Play.ht.
– Music – AIVA, Jukebox, and Soundraw.
– Text – Jasper.ai, Peppertype, and Copy.ai.
– Code (text-to-code) – Tabnine, PyCharm, and Kite.
The Internet has always been a challenging area to police legally; nevertheless, some basic copyright rules apply. So much digital (and non-digital) work is continuously created that there is no single copyright register in the UK for the online world. Instead, the law simply states that a person automatically enjoys copyright protection when they create something original, e.g. literary, dramatic, musical, and artistic work (including illustration and photography). This automatic ownership also applies to creating original non-literary written work, such as software, web content, and databases.
If a person has copyright protection in the UK, it should mean that nobody else can copy, distribute (paid or free), rent, or lend copies of that work, make an adaptation of the work, or put that work on the Internet. However, AI content generating tools are blurring those lines and raising new ownership questions.
Some legal and tech commentators have pointed to the possible importance and relevance of the US copyright doctrine of ‘fair use’ in making decisions about (for example) the output of text-to-image generators. In Google LLC v. Oracle America, Inc. (2021), for instance, it was decided that Google’s use of Oracle’s code was ‘fair use’, and the focus of the decision wasn’t whether the material copied was protected by copyright.
What Does This Mean For Your Business?
This is a relatively new area where, as with so much of AI, the technology and its usage appear to be advancing faster than regulation and laws. This is generating more questions than clear answers, thereby creating uncertainty. For creatives such as musicians and artists, generative AI could be a threat, e.g. copying their style or work, as well as an opportunity.
For coders too, generative AI tools could represent a threat although, as with GitHub’s Copilot, features could be added to the tools to lessen it. However, generative AI is a growing and lucrative market with the potential to step on many toes, hence the inevitable lawsuits. Users of generative AI services may also have doubts about the legality of what they produce and publish using such services, e.g. it may not always be clear whether AI-produced text for blogs contains copied material or is even factually accurate.
It appears, however, that the courts in each country will be where disputes about infringements by generative AI are decided and settled. Generative AI tool producers will need to keep a very close eye on how their algorithms work and on the legal outcomes and implications of cases as they are decided. For businesses using generative AI tools (e.g. to create images or other content), the technology undoubtedly meets a need in a new and innovative way, can save time, add value, and be a source of new strengths and opportunities. For large, well-established photo/image retailers, these tools may currently represent a threat, so it remains to be seen how markets such as this react.