California’s New Law on Artificial Intelligence Becomes Effective in a Few Months
- East West General Counsel
- Apr 25
- 4 min read

California is making new rules to help people understand when they’re interacting with artificial intelligence (AI). In September 2024, Governor Gavin Newsom signed a new law called the California Artificial Intelligence Transparency Act, or CAITA. See our other blog article about this development. Compliance is required starting January 1, 2026, and the law is all about making sure AI tools are open and honest about how they work.
Let’s break down what this law is, why it matters, and how it might affect you or the technology you use.
What is CAITA?
CAITA is a new law that tells AI companies they need to be clear when they are using AI to make content like images, videos, audio, or text. This is especially important when that content looks real but was actually created by a computer, something known as synthetic content or deepfakes.
The law says that certain AI companies (Covered Providers) must:
Include tools that help users know when something was created or changed by AI.
Provide something called Provenance Data, which is information attached to digital content that shows where it came from, who created it, or how it was edited.
Make sure these tools and data are easy to find on their websites and mobile apps.
Even companies that license or use these AI systems from other creators (Third-Party Licensees) must follow the same rules. They’re not allowed to remove or hide the transparency tools.
What Is a Generative AI System?
A Generative AI System, or GenAI, is a kind of AI that takes in data and then creates something new with it. This can include:
Images (like with DALL-E or Midjourney)
Text (like with ChatGPT or Copilot)
Videos
Audio or music
Basically, it’s any AI that can “generate” content that looks like it was made by a human.
Who Is a “Covered Provider”?
A company or person is a Covered Provider under CAITA if they:
Build or design a Generative AI system,
Have over 1 million users or visitors each month, and
Let people in California use it.
What Is Provenance Data?
Provenance Data is like a digital receipt that tells you where a piece of content came from. It can show:
Who made it
If it was changed
What tools were used to create or edit it
This helps people figure out if a video, image, or article is real or if it was created or changed by AI.
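To make this concrete, here is a hypothetical sketch of what a provenance record might contain. CAITA does not prescribe an exact format, and the field names below are illustrative only; the structure loosely mirrors the kinds of fields found in content-provenance standards such as C2PA.

```python
import json

# Hypothetical provenance record attached to a piece of digital content.
# These field names are assumptions for illustration, not a format
# required by CAITA or any specific provider.
provenance = {
    "creator": "ExampleAI Image Generator",   # who or what made the content
    "created": "2026-01-15T10:30:00Z",        # when it was generated
    "ai_generated": True,                     # flags the content as synthetic
    "edits": [                                # how the content was changed
        {"tool": "ExampleAI Editor", "action": "background replaced"}
    ],
}

# Serialized as metadata, this "digital receipt" lets a viewer check
# whether an image, video, or article was created or altered by AI.
record = json.dumps(provenance, indent=2)
print(record)
```

In practice, providers embed this kind of metadata directly in the file or make it available through a detection tool, so that anyone inspecting the content can read the receipt.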
Are There Any Exceptions?
Yes! Some types of entertainment are not affected by CAITA. These include:
Video games
Movies and TV shows
Streaming services
Interactive experiences (like virtual reality games)
As long as the content isn’t made by users, these don’t have to follow the same rules.
What If You’re Not in California?
If your AI system can be used in California—and it has over 1 million users—you still have to follow CAITA, even if your company is based in another state or country.
When Do These Rules Start?
CAITA goes into effect on January 1, 2026. That’s when companies need to start following the rules.
What Happens If Companies Break the Rules?
Companies that don’t follow CAITA can face serious penalties. These include:
Fines of $5,000 per violation, with each day a violation continues counting as a separate violation
Having to pay for lawyers’ fees
Possibly being taken to court by the California Attorney General or local government lawyers
Third-party companies that use GenAI tools and don’t follow the rules could also be sued or forced to stop using the technology.
One important limit: CAITA does not give regular individuals or private companies the right to sue directly under the law. In legal terms, it does not create a private right of action. This means that users or consumers cannot sue a Covered Provider or Licensee just for breaking CAITA. Only government entities can enforce the law through legal action.
Why This Matters and What It Might Change
CAITA is important because it helps people trust what they see online. With AI getting better and better at creating videos, pictures, and even voices that seem real, it can be hard to tell what’s true and what’s not. This law gives people tools to check the facts and make smarter decisions about the content they consume.
Here’s how it could affect different groups:
For Consumers (You and Me):
More clarity: You’ll be able to tell if something was made or changed by AI.
Less confusion: Deepfakes and misleading content will be easier to spot.
Greater trust: People may feel safer using online platforms that follow these rules.
For Covered Providers (Big AI Companies):
More work: They’ll need to build new tools and features to meet the law’s requirements.
Higher costs: Adding AI detection tools and Provenance Data could take time and money.
More responsibility: They could face legal trouble if they don’t follow the rules.
For Other Companies Using AI (Third-Party Licensees):
Stricter rules: They can’t remove the tools or labels provided by AI creators.
Need to update systems: Some companies may have to redesign parts of their websites or apps to stay compliant.
Legal risk: They could get sued or lose their licenses if they don’t follow CAITA.
Over time, CAITA could push the whole tech industry to be more transparent and more careful when it comes to using AI. Other states—or even countries—might adopt similar rules. It’s a big step toward a future where people can use AI confidently without being tricked by it.
Learn More
The law is still new, and companies are getting ready for it. If you want more information, you can contact East West General Counsel for a consultation.