September 11, 2025

California Takes Lead in Regulating AI Chatbots with SB 243, Aiming to Protect Minors from Harmful Content

The California State Assembly took a significant stride toward regulating artificial intelligence (AI) last night, passing SB 243, a bill that would impose safety requirements on AI companion chatbots to protect minors and vulnerable users. The legislation received bipartisan support and now heads to the state Senate for a final vote on Friday.

If Governor Gavin Newsom signs the bill into law, it will take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.

The bill would bar companion chatbots, defined as AI systems that provide adaptive, human-like responses and can meet a user’s social needs, from engaging in conversations about topics like suicidal ideation, self-harm, or sexually explicit content. It requires platforms to give users recurring alerts, every three hours for minors, reminding them that they are conversing with an AI chatbot rather than a real person, and advising them to take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players such as OpenAI, Character.AI, and Replika.

The California bill allows individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. If the Senate approves it, the bill will head to Governor Newsom’s desk to be signed into law, with its reporting requirements commencing later than the rest of its rules, on July 1, 2027.

The bill gained traction in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI’s ChatGPT in which he discussed and planned his death and self-harm. The legislation also responds to leaked internal documents suggesting that Meta’s chatbots were permitted to engage in “romantic” and “sensual” chats with children.

Recently, U.S. lawmakers and regulators have stepped up scrutiny of AI platforms’ safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, alleging that the companies made misleading claims about mental health benefits. Meanwhile, Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have each opened separate probes into Meta.

“I think the potential harm is significant, which means we need to act swiftly,” Padilla told reporters. “We can establish reasonable safeguards ensuring that particularly minors are aware they’re not speaking with a real human being, that these platforms link people to the appropriate resources when users express intentions of self-harm or distress, and to minimize inappropriate exposure to explicit content.”

Padilla also emphasized the importance of AI companies sharing data about the number of times they refer users to crisis services each year. “This will provide a better understanding of the frequency of this problem rather than only becoming aware of it when someone is harmed,” he added.

Initially, SB 243 included stricter requirements that were pared back through amendments. For instance, the bill originally would have prevented AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, employed by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, which critics argue can create an addictive reward loop.

The current bill also omits provisions requiring operators to track and report how often chatbots initiate discussions of suicidal ideation or actions with users.

“I believe it strikes the right balance between addressing harms without imposing something that’s either impossible for companies to comply with or involves excessive paperwork,” Becker told reporters.

SB 243 is moving towards becoming law at a time when tech companies are investing heavily in pro-AI political action committees (PACs) to support candidates in the upcoming midterm elections who advocate for a light-touch approach to AI regulation.

The bill also comes as California considers another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has penned an open letter to Governor Newsom, urging him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also expressed opposition to SB 53. In contrast, only Anthropic has stated its support for SB 53.

“I reject the notion that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “We can support innovation and development that we consider beneficial and promote while simultaneously providing reasonable safeguards for the most vulnerable people.”