California is leading the way in AI regulation, with Governor Gavin Newsom enacting the nation’s first legislation specifically targeting AI companion chatbots.

Senate Bill 243, now law, establishes a framework of obligations for companies involved in the development and operation of these AI systems.

The legislation clearly defines the responsibilities of chatbot providers. Major players such as Meta, OpenAI, Replika, and Character AI will face consequences for non-compliance with the newly mandated safety standards.


These regulations aim to ensure that AI companions are used responsibly and to prevent harm to vulnerable users.

Beginning January 2026, companies are required to implement age verification systems and provide users with clear warnings about the risks of engaging socially with chatbots. Critically, businesses must also prominently disclose that users are communicating with a machine, not a human.

The new law also mandates proactive intervention in crisis situations. If a user expresses thoughts of suicide or self-harm, the chatbot must immediately offer access to crisis support resources.

Companies are obligated to report the frequency of these crisis alerts and submit their safety protocols to the state’s Department of Public Health.

To safeguard children, the legislation prohibits the display of explicit images generated by chatbots to underage users and encourages the use of regular breaks from the platform.

The penalties for creating and distributing deepfake content have been stiffened, with fines potentially reaching $250,000 per violation.

In related news, the US Senate recently approved the GAIN Act (Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2026), a proposal aimed at shaping federal AI policy.
