It Wasn’t Me, It Was the AI: Intellectual Property and Data Privacy Concerns With Nonprofits’ Use of Artificial Intelligence Systems

November 22, 2023
Board & Administrator

In today's rapidly changing technological landscape, artificial intelligence (AI) is making headlines and being discussed constantly. To be sure, AI gives nonprofits a powerful tool for creating content and exploiting it for countless cost-effective purposes. As nonprofit executives, you may be wondering how AI intersects with intellectual property and data privacy law, and how it could affect your organization. While the implications will only be fully understood after more experience with the use of AI, some of the issues are already predictable.

Copyright raises at least two main concerns: the ability to protect works created using AI, and the risk of infringement by an AI system that copies existing works. In the privacy realm, organizations using AI must take care that personal information is not put to unexpected uses in violation of their privacy policies, whether by the AI itself or by the AI software provider as described in the contract provided with the software.

Copyright Concerns

One of the main concerns at the intersection of AI and intellectual property is how copyright law applies to the use of AI. In the copyright realm, there are two prominent issues for nonprofits to consider, two sides of the copyright coin:

Ownership of AI-created content: If your nonprofit uses AI to create content, you need to understand who owns that content, and you should know that it will be difficult to protect that content against use by third parties. That is because, under United States law (and the law of nearly all other jurisdictions), copyright is recognized only in works created by human beings, so AI-generated content is not eligible for copyright protection. As such, your nonprofit would not be able to stop third parties who copy such content and use it for their own purposes.

The best defense against such taking, then, is to add human authorship to the AI-generated work. While any copyright protection would vest only in those human-authored additions and revisions, to the extent a third party's copying included those changes, it could support a claim of copyright infringement and would give the nonprofit a basis to stop that third party's use of the work. Note, though, that a copy that did not incorporate those human-authored aspects would not violate the rights of the authoring entity.

Copyright infringement: AI's ability to make copies of data, including copyrighted material, raises concerns about potential copyright infringement. For example, AI large language models learn by analyzing vast amounts of data from the materials they are provided, which could include the entirety of internet content accessible at the time the AI system was trained. Obviously this could, and necessarily would, include copyrighted material. In undertaking such training, the entity training the AI system necessarily copies those materials, even though the copying is not long-term and the copies are discarded. While fair use may be a defense to that kind of copying, it is a complex legal issue, and you as the trainer of an AI system may need to be prepared for potential litigation.

That said, the issues with training copies are more of a concern for the owners of the AI system, and if the system you are using is provided by a third party, copyright infringement arising from training may not be your concern. But there is another consideration. We are starting to see issues where AI systems are “creating” content that seems suspiciously similar to existing content. To the extent your nonprofit uses an AI system, even one provided by a third party for your use, to create content (text, images, videos, etc.) that you post on your site or otherwise distribute, and that content actually copies protected third-party works, your nonprofit could be liable for copyright infringement.

Here, it is important to look at the agreements governing your use of the system. Just as you would expect an artist you hire to stand behind their work, you could try to have the owner of the AI system provide warranties against infringing creations. Off-the-shelf products are unlikely to include such protections in their user agreements, but as the AI business develops, there will likely be bespoke work done for clients, and in those relationships such contractual protection could be available. Even if the provider offers representations and warranties, and even indemnification, against such copyright infringement claims, that may not be enough. A claim for copyright infringement based on materials you post on your site will be brought against you. Even if the AI provider is ultimately responsible under the agreement, if it does not have the resources to defend you (if, for example, it is out of business), you won't be off the hook. With all of this in mind, it is imperative to carefully consider the product and the provider before using AI to create content that you plan to disseminate through your website or otherwise.

Right of Publicity and Privacy Concerns

Another area in which the use of AI could present issues is the rights of publicity and privacy. The right of publicity protects an individual's right to control the commercial use of their name, image, and other identifiers. One could imagine a scenario where AI-created content includes such identifiers without the authorization of the individual identified. Under state right of publicity laws, courts have commonly found liability for the use of look-alikes (and even sound-alikes) even where the actual person's image (or voice) is not used.

Additionally, AI systems could collect individuals' personal information, leading the organization employing those systems to store and use that information without the individuals' authorization. Indeed, considering that the system may have had access to a broad universe of data, it is likely that such data included individuals' personal information. Any use of an AI system should be carefully tailored to protect against unintended collection and use of third-party data.

It is also possible for an AI system to re-identify personal information from data that was previously de-identified (whether as a matter of course or upon an individual's request). In that case, the organization could be in violation of its own consumer-facing privacy statement, as well as, potentially, the law.

Finally, AI systems may scrape personal data for use in training, and that information may then be put to other uses without authorization.

To safeguard your nonprofit's reputation and minimize legal pitfalls, consider the following:

  • Carefully review AI-generated content for potential issues with the identification of individuals.
  • Ensure your privacy policy accurately reflects the data collected and used by AI. If AI gathers additional information, update your policy accordingly to be transparent with your audience.
  • Implement gatekeeping measures to protect sensitive donor and employee data. Techniques like CAPTCHA can help keep AI from scraping information from your website (one simple approach is sketched below).
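
As one illustration of the gatekeeping idea in the last bullet, below is a minimal sketch, assuming a website built on Python and Flask, of a server-side check that refuses requests whose User-Agent header matches a known AI crawler. The page route and the crawler list are illustrative assumptions rather than a definitive implementation, and a full CAPTCHA integration would typically rely on a third-party service; this simpler check is a complement to such tools.

from flask import Flask, abort, request

app = Flask(__name__)

# Illustrative (not exhaustive) substrings seen in AI-crawler User-Agent headers.
BLOCKED_AGENT_SUBSTRINGS = ("gptbot", "ccbot", "bytespider")

@app.before_request
def block_known_ai_crawlers():
    # Inspect the User-Agent header before any page content is served.
    user_agent = request.headers.get("User-Agent", "").lower()
    if any(bot in user_agent for bot in BLOCKED_AGENT_SUBSTRINGS):
        abort(403)  # Refuse the request outright.

@app.route("/donor-directory")  # Hypothetical page containing donor information.
def donor_directory():
    return "Donor directory page"

if __name__ == "__main__":
    app.run()

Keep in mind that determined scrapers can disguise their user agents, so a check like this reduces, rather than eliminates, the risk of automated collection; pairing it with a robots.txt file, rate limiting, or a CAPTCHA on sensitive pages provides stronger protection.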

Of course, these are just a few of the issues we expect to arise in IP and data privacy from the use of AI systems. What is clear, though, is that it is essential for nonprofits to remain vigilant about how AI affects their intellectual property and privacy practices. While AI offers exciting possibilities for nonprofits, it also poses legal challenges that require careful consideration. Stay informed, work with legal and IT professionals, and update your policies to ensure ethical AI use, protect your nonprofit's reputation, and avoid legal issues. By understanding the implications of AI and adopting responsible practices, your nonprofit can harness the power of technology while safeguarding your stakeholders' interests.

"It Wasn’t Me, It Was the AI: Intellectual Property and Data Privacy Concerns With Nonprofits’ Use of Artificial Intelligence System,” published in Board & Administrator, December 2023, Vol. 40 , No. 4, Copyright 2023.
