By Megan Howard and Skylr Martucci-Moore
Artificial Intelligence (AI) has become ubiquitous over the last few years, prompting many businesses and institutions to scramble to adopt policies, procedures and playbooks to avoid getting left behind by their competitors and to stay relevant with customers. It’s not just a passing fad, either – developments over the past several years make it clear that AI will inevitably become a mainstay of our working lives and will likely impact most, if not all, of the tasks a business and its employees perform on a daily basis. As with any new technology, however, businesses will need to make informed decisions about the ways in which they utilize AI and the ways in which they contract with third parties, to ensure that the business is protected and prepared to address AI issues as they arise. In this article, we address some of the most pressing concerns for businesses looking to integrate generative AI technology into their operations: namely, the ownership of AI outputs and selected considerations for drafting and negotiating contractual agreements.
What is AI?
The first hurdle a business must overcome is understanding what AI is. At its core, AI is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy. While this overarching definition is expansive and calls to mind Terminator-like displays of near-human sentience, AI is not a new innovation. AI first emerged in the 1950s and has been a constantly advancing technology ever since. In fact, we encounter AI every day in our emails, search engines, social media, facial recognition, and even recommendations about what to watch on streaming services. Although the term “AI” often makes people think about the likes of ChatGPT and Midjourney, AI is more of an overarching concept than a singular technology, and we see its functions in every aspect of our personal and professional lives.
Understandably, businesses are most concerned with the AI functions and technologies that result in creative outputs or other “generative” AI – that is, AI that digests material and creates new content based on its training data and learning model. Deep learning and large language model tools such as ChatGPT, DALL-E, Midjourney, Microsoft 365 Copilot, and, for lawyers, CoCounsel, are able to produce complex original content such as long-form text, high-quality images, realistic audio and video, and other outputs. These new works raise a plethora of issues for business users, including who owns the output, ethical considerations, potential liabilities, and still other unanticipated concerns. In an era of rapid change, where the capabilities of new AI technology are constantly evolving and improving, businesses must use all tools at their disposal, including contractual arrangements, to best position themselves today to take advantage of the technological developments of tomorrow.
Who owns the outputs of generative AI technologies?
In the United States, the ownership of works produced by generative AI technologies is governed by the existing US intellectual property regime – copyright law under the Copyright Act of 1976, trademark law under the Lanham Act and various state laws, patent law, and trade secret law. Under each of these regimes, except trade secret law (which we discuss in greater detail below), there is a clear, bright-line rule: raw AI output is not protectible, as such outputs are not the work of a human author. Protecting human authorship is the root of nearly all US intellectual property regimes[i], and, without a human, there can be no protectible authorship. This principle is perhaps most colorfully illustrated by the famous “monkey selfie case”, in which a macaque triggered a camera to capture a “selfie”-style photograph.[ii] The Ninth Circuit held[iii] – and the US Copyright Office later clarified[iv] – that copyright protections do not extend to works produced by non-human authors.
Works of authorship that are the result of the combined efforts of humans and AI technology may benefit from some level of legal protection, however. According to guidance provided by the US Copyright Office and the US Patent and Trademark Office (USPTO), works that have “sufficient” human authorship may be partially protectible.[v] Copyright protections apply to only the human-authored aspects of a work, which are independent of AI-generated material.[vi] In fact, the US Copyright Office has taken the position that registrations with the US Copyright Office must identify human-authored versus AI-generated content to prevent over-reaching, and misrepresentation or failure to properly identify AI-generated content can result in the US Copyright Office rolling back some of the protections afforded by registration.[vii] For an example of this practice, consider the graphic novel “Zarya of the Dawn,” which was created by a human author using Midjourney, a generative AI technology capable of producing images from text prompts.[viii] When Kris Kashtanova submitted the finished graphic novel to the US Copyright Office, the Office initially provided registration – and thus protection – for the entirety of the work.[ix] Upon learning about the use of Midjourney to produce the images used in the graphic novel, the US Copyright Office later denied protection of the images and extended protection only to the text and sequencing of the novel, which had been the original work of Kashtanova.[x]
The USPTO released guidance in early 2024 reaching a similar conclusion for patent applications, noting that while an inventor must be a natural person, AI-assisted inventions are not categorically unpatentable.[xi] When AI has contributed to an innovation, however, the human contribution must be significant enough for the invention to qualify for patent protection.[xii] The standard for what constitutes “significant enough” has yet to be fully defined and will likely continue to be fleshed out in the near future.
Finally, trade secrets, which derive their protection by virtue of not being known to the market generally, naturally provide the strongest protection for works created through the use of generative AI. Because trade secrets are secret, ownership of AI-created technology is a less significant issue in the trade secret context, and ownership of resultant technology or data can remain with the entity that takes steps to protect its trade secrets. It follows that businesses should focus on strengthening trade secret policies and protections.
What are best practices for protecting content developed by AI technologies?
As evidenced by the fact that the guidance provided by the US Copyright Office and the USPTO was published in the last 24 months, this area of law is quickly evolving. The doctrines accepted today may not hold in the coming years, and what is considered a “best practice” today for protecting a business’ intellectual property may no longer be adequate in a few years’ time. Of course, business does not stop to allow the law to develop or for government agencies and courts to figure out how to apply the law to new technologies, so it behooves the savvy businessperson to ensure that contracts with third parties are crafted to address current concerns while providing flexibility to accommodate potential future changes in the law. Accordingly, it is prudent to ensure that any agreement that has any AI components, or that involves AI technology, specifies which party owns which rights in the inputs and outputs of the AI technology, even if such inputs or outputs are not protectible under the copyright or patent regimes at the time of drafting.
Prohibiting the Use of AI
The most straightforward approach to preventing the ownership quandaries that come with AI is to contractually prohibit its use altogether. Even so, prohibiting the use of AI by service providers, partners, or other parties is tricky. Prohibiting all uses of broadly-defined AI is unwise, as it would hinder efficiency. After all, there are very few fields where services or goods can be provided without the consistent use of some form of AI. If a contract’s AI prohibition is too encompassing, it could even prohibit the use of email or messaging, and we all know how some individuals fall apart at the first hint of having to discuss things over the phone. A key drafting consideration in prohibiting the use of AI is determining what should be included within the category of “prohibited AI.” Although the best approach remains to be seen, many contracting parties have added language prohibiting the use of generative AI, as this technology is the subset of AI that is primarily responsible for creating new outputs that cannot be “owned” under the existing US IP landscape. Regardless of what prohibition mechanism is incorporated into a document, parties must ensure that by its plain language it applies to all necessary parties, including, but not limited to, any third-party and fourth-party subcontractors, independent contractors, or vendors performing on behalf of a contracting party.
To combat the inflexibility of a blanket generative AI prohibition, a sounder approach is to allow the use of AI as disclosed at contract execution and otherwise with prior written approval. Increasingly, generative AI is making processes more efficient and saving time and money for those individuals and entities that use it effectively. When providing approval for the use of generative AI, whether on a case-by-case or more general basis, understanding the terms of use or other end user terms applicable to the approved generative AI programs is critical. Contracting parties should either receive a copy of the applicable terms, or, at the very least, have the party requesting the use of the generative AI program represent and warrant as to the AI program’s terms regarding the datasets that encompass the AI’s knowledge base and the ownership rights to inputs and outputs. As a general rule of thumb, free versions of generative AI should not be used for business purposes, as the security risk is too high. In addition to the concerns about ownership of output, free AI programs carry the added risk that existing, protected intellectual property will be made available to the public generally – whether through open-source software, the AI technology’s knowledge database, or otherwise.
Contracting for Ownership
Although existing IP protection schemes in the United States do not presently allow for ownership of machine-made outputs, this does not mean that parties should overlook the importance of establishing contractual ownership of these outputs. Contracts legally bind the parties at hand regardless of the silence or other inadequacy of the law (as long as the contract is not against public policy or illegal). Further, if the body of IP law shifts to allow ownership of machine-made outputs, the purported owner does not want to be left in a questionable ownership situation simply because the contract failed to address ownership of generative AI outputs. As with most contractual terms, a party’s perspective on ownership will vary depending on which side of the transaction that party sits on. End customers will want to maintain ownership of the AI inputs they have provided and any AI outputs created. Service providers, on the other hand, may also want ownership of, or at least a broad license to, AI outputs for the benefit of their businesses and other customers, even when those outputs are based on inputs owned solely by end customers. The key is that ownership of both AI inputs and outputs should be affirmatively established in the contract, especially when generative AI is involved.
Contracting parties should also address ownership of any human manipulations of AI outputs, given that it is well established that human contributions can remain protectable intellectual property. End customers will want to provide that all manipulations of raw AI outputs are “works made for hire” if copyrightable, or are otherwise owned by, and assigned to, the end customer by virtue of the agreement. Whether the service provider should ask for ownership of these manipulations depends on the particular contract and may need to be negotiated. Taking this step precludes a potential gap in ownership: when a human individual manipulates the output, that manipulation creates ownership rights that, without the right language, may not be considered a deliverable owned by the end customer under the contract.
Representations and Warranties & Risk Allocation
In addition to establishing permitted uses of AI and ownership of the inputs and outputs of such use, contracting parties should also allocate the risks that are inherent in the use of generative AI, including through representations, warranties, and indemnification. Of particular importance are a warranty of non-infringement, indemnification against claims of infringement, and/or infringement mitigation obligations. In most situations, a service provider will agree to warrant that its services do not infringe upon the IP rights of others and that it has sufficient ownership or licensing rights to perform its obligations under the agreement. The use of generative AI adds a nuance to this representation, which, if narrowly drafted, may create a risk for end customers. AI is not perfect; it may, by itself, produce infringing outputs. The representations and warranties within a contract typically establish where the liability for infringement will lie, but will the market approach be extended to apply to the “acts and omissions” of AI technology taken or omitted without human intervention?[xiii] Non-infringement warranties must be drafted broadly enough to treat the output of AI technology the same as any output created by the service provider itself, because ownership of AI outputs means little if those outputs infringe the IP rights of a third party.
Even if an agreement does not include a broad warranty of non-infringement, agreements that relate to delivery of any work product or other outputs by a service provider should include indemnification obligations of the service provider related to infringement. An end customer should expect a service provider to protect the end customer against claims of infringement related to outputs of generative AI programs used by the service provider, or the training materials used by such generative AI programs. On the other hand, service providers should approach such indemnification obligations with caution to ensure that the providers of generative AI programs will provide supporting indemnity. A service provider should seek to avoid the position where they are required to indemnify a customer for risks that are not covered by the providers of the underlying technology.
While the importance of ensuring that representations and warranties sufficiently address AI risk should not be understated, the trend appears to be that bespoke, expansive AI representations and warranties are typically included only when AI is a key component of the transaction, services, or goods at hand.[xiv] Appropriately drafted, generally applicable infringement representations, warranties, and indemnification provisions are generally sufficient to allocate the infringement risk to the party who provides the infringing materials, regardless of whether those materials were created by that party or by a machine. While creating new and complicated terms may be unnecessary, the analysis of coverage is indispensable.
Conclusion
The use of AI to increase efficiency in everyday life is likely to keep growing with each passing day. Generative AI is getting progressively more advanced and is expected one day to become “strong” AI, with the capacity to think as humans do. For all our sakes, let’s hope that when that time comes, our society isn’t overthrown as so many science fiction novels and movies have foretold. Until such time as the AI overlords take over and ownership of IP is no longer our concern, business parties should continue to address ownership and risk allocation issues related to AI by contract and continually re-evaluate their approach as AI technology and the surrounding legal landscape evolve. We certainly will.
[i] Unlike its cousins which aim to protect human innovation, trademark law aims to protect the goodwill and reputation associated with marks. Accordingly, the USPTO has not refused to grant marks on the basis that such marks were partially or entirely created using AI technology.
[ii] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
[iii] Id.
[iv] U.S. Copyright Office, Compendium of U.S. Copyright Practices §§ 306, 313.2 (3d ed. 2017).
[v] U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,224 (Mar. 16, 2023).
[vi] Id.
[vii] Id.
[viii] Tony Analla, How AI Is Changing the Landscape of Copyright Protection, Harvard J. Law & Tech. (Mar. 6, 2023), https://jolt.law.harvard.edu/digest/zarya-of-the-dawn-how-ai-is-changing-the-landscape-of-copyright-protection.
[ix] Id.
[x] Id.
[xi] U.S. Patent and Trademark Office, Inventorship Guidance for AI-Assisted Inventions, 89 Fed. Reg. 10,043 (Feb. 13, 2024).
[xiii] Practical Law Intellectual Property & Technology, AI Key Legal Issues: Overview, Westlaw w-018-1743 (last visited Nov. 11, 2024).
[xiv] Practical Law Intellectual Property & Technology, AI Representations in M&A Agreements, Reuters (June 1, 2024), https://www.reuters.com/practical-law-the-journal/transactional/ai-representations-ma-agreements-2024-06-01/.