What Legal Protections Do Actors Have Against AI?


As AI continues to evolve, institutions are pushing to protect people from exploitation, misrepresentation, and other major risks, and the concerns of the acting and performance community in particular are coming into sharper focus. Lawyers, governments, and SAG-AFTRA are all weighing in on the emerging technology, and on how to protect yourself and your career if you're approached to work on an AI project.


What are some of the concerns about AI entering the acting space, and how are lawmakers responding?

Many folks hold reservations about how AI will be used and what the long-term implications are. One of the principal concerns is using AI to impersonate actors without their consent. 

“The thing that worries me about AI… is that it could put words in my mouth that I don’t believe in,” says actor Noelle Gizzi (“Irreverend,” “Newsworthy”). “Personally, I’m very protective of my image.” 

With enough data—i.e. images and video of a person—generative AI models can be trained to analyze a person’s likeness and intricately map it out. Users can then recreate virtually anyone by projecting an image of them onto another person’s face or body. 

So, what can you do if someone uses AI to pretend to be you or replicate your image without your consent? According to Tara Aaron-Stelluto, an intellectual property lawyer and partner at Barton LLP, you may be able to seek damages.

“Even if they don’t have copyright ownership over what the AI output is, whoever put that string of queries together and actually was responsible for creating that deepfake, AI or not, still has liability”
Tara Aaron-Stelluto, intellectual property lawyer

“The issues that come out of that [will still be] publicity, and a moral rights question if someone’s image is being used to make them say something that they absolutely would never say.... Even if they don’t have copyright ownership over what the AI output is, whoever put that string of queries together and actually was responsible for creating that deepfake, AI or not, still has liability,” she says.

And while there are major legal filings poised to possibly disrupt the AI industry, it's too early to tell what regulations will or won't fall into place. China has already drafted strict regulations for generative AI, which include restricting use for any content that infringes on an individual's "likeness rights, reputation rights, personal privacy, or commercial secrets." The draft also includes restrictions against using generative AI to harm the state's image, national unity, or socialist values; protection for intellectual property rights; and legal responsibility for how datasets are sourced, among other provisions.

Meanwhile, the European Union's proposed Artificial Intelligence Act focuses on protecting citizens' rights and demanding transparency from AI developers. If it passes, AI systems will be classified by how dangerous they are to individuals and society at large. "Minimal risk" systems will be left largely alone; higher-risk systems will need to conform to regulations set out by lawmakers or face legal consequences; and those deemed an "unacceptable risk" will be prohibited outright.

Lawmakers in the United States are comparatively lagging behind. The Biden administration released a "Blueprint for an AI Bill of Rights" in October 2022, and the Federal Trade Commission publicly warned companies in April 2023 about using AI to unfairly disrupt other businesses and take advantage of consumers. In May 2023, OpenAI's CEO, Sam Altman, testified before Congress and agreed with lawmakers proposing the idea of regulations, saying, "For a new technology we need a new framework."

Altman also echoed Aaron-Stelluto’s sentiment about users’ liability. “Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well,” he said.

What should actors be aware of if they’re approached to work on AI projects?

Compounding the growing concern over generative AI is the new legal frontier of using it to replicate an actor's likeness. As it stands, the best protection against this possibility is to carefully examine every contract you sign. With AI increasingly infiltrating acting spaces and entertainment at large, acting contracts carry more weight than ever before. "Read your contracts. That's No. 1, always," advises Aaron-Stelluto.

“Generally, what you’ll see in a contract like this is… ‘[technology] now known or hereafter devised.’ So [production companies], they’re pretty good at making sure that their rights are protected…regardless of what happens with the technology.” 

Aaron-Stelluto strongly advises fighting for every right you can if you’re approached to work on a project that includes signing away your likeness to be used in AI. “I would want protections in there for reputational damage,” she says. “I wouldn’t necessarily advise you to walk away from a Netflix contract if you’re a young actor, but I would certainly want to try to put those protections in there.” 

If you belong to SAG-AFTRA, you already have some protection from anyone trying to use your image for generative AI. A public statement from SAG-AFTRA reads: 

“The terms and conditions involving rights to digitally simulate a performer to create new performances must be bargained with the union. In addition, any use or reuse of recorded performances is limited by our collectively bargained contract provisions, including those requiring consent and negotiation of compensation.”

“These rights are mandatory subjects of bargaining under the National Labor Relations Act. Companies are required to bargain with SAG-AFTRA before attempting to acquire these rights in individual performers’ contracts. To attempt to circumvent SAG-AFTRA and deal directly with the performers on these issues is a clear violation of the NLRA.”

You can read the full statement on SAG-AFTRA's website.