
California just moved to put “guardrails” on AI companion chatbots for kids—rules critics say look a lot like state-approved speech policing wrapped in child safety.
Quick Take
- California enacted SB 243 to regulate “companion chatbots,” requiring disclosures, suicide/self-harm protocols, and special protections for known minors.
- Gov. Gavin Newsom vetoed AB 1064, arguing its restrictions were so broad they could function as a de facto ban on minors using many chatbots.
- The new law includes recurring reminders to minors—at least every three hours—that they’re interacting with AI and should take a break.
- SB 243 excludes several categories, including customer service bots, business tools, certain video game chatbots, and standalone voice assistants.
California Chooses Disclosure Rules Over a Kid-Facing “Ban” Model
California’s 2025 legislative fight over AI “companion chatbots” ended with a split decision: SB 243 was signed into law, while AB 1064 was vetoed on Oct. 13, 2025. SB 243 takes a transparency-and-protocol approach that keeps most chatbot access available but adds state-directed safety duties for operators. AB 1064, by contrast, aimed to block certain interactions with minors unless a system was not foreseeably capable of causing specified harms.
Gov. Gavin Newsom’s veto message framed AB 1064 as “overly broad,” warning it could unintentionally produce a total ban on chatbot use by minors. His argument leaned toward supervised exposure—letting adolescents learn how to interact safely with AI—rather than cutting off access entirely. That reasoning is likely to resonate with voters wary of sweeping prohibitions, even as it frustrates advocates who wanted stronger, more enforceable limits.
What SB 243 Requires From “Companion Chatbot” Operators
SB 243 targets “companion chatbots,” described as AI systems designed for adaptive, human-like interaction that can meet social needs and sustain relationships across multiple exchanges. The law requires clear notice that content is AI-generated when a reasonable person might otherwise think they’re dealing with a human. It also directs operators to maintain and publish protocols aimed at preventing chatbot content related to suicidal ideation, suicide, or self-harm.
For known minor users, SB 243 adds another layer. Operators must disclose that the user is interacting with AI and provide a repeating reminder at least every three hours that the chatbot is AI-generated and that the user should take a break. The statute also calls for “reasonable measures” to avoid sexually explicit content for minors and requires a disclosure that companion chatbots may not be suitable for some minors. The practical effect is a mandated “friction” model—interrupting engagement instead of relying on voluntary parental oversight alone.
Reporting, Enforcement Pressure, and the Cost of Compliance
SB 243’s safety framework is not just a set of suggested best practices. Operators must submit annual reports to the Office of Suicide Prevention describing their protocols and how crisis referral notifications work. That kind of reporting requirement can reshape product design and internal staffing, because firms need documentation, auditing, and ongoing updates. It also points to potential exposure to significant damages claims for violations, creating strong incentives to over-comply.
Compliance costs may land hardest on smaller AI companies that can’t spread legal and engineering overhead across massive user bases. Larger platforms may be better positioned to build moderation pipelines, user-notice systems, and logging features that satisfy regulators and reduce litigation risk. The tradeoff is familiar in modern tech regulation: rules intended to protect the public can also tilt markets toward incumbents by raising barriers to entry.
What the Exemptions Tell Us About Sacramento’s Regulatory Aim
SB 243 carves out several exclusions, including customer service chatbots, business tools, video game chatbots limited to game-related topics, and standalone consumer devices with voice-activated assistants. Those exemptions suggest lawmakers focused on the “relationship” dynamic—bots designed to simulate companionship—rather than AI generally. Still, the line-drawing will matter. As companies blend “assistant,” “game,” and “companion” features, disputes over classification could decide which products face the strictest obligations.
The larger significance is political as much as technological. California positioned itself again as a national rule-setter, betting that disclosure mandates and safety protocols can manage risks without outright prohibition. For conservatives skeptical of bureaucratic control, the immediate question is whether child protection is being used to normalize state influence over digital speech and product design. For liberals, the question is whether SB 243 is strong enough to prevent harmful interactions. Either way, the state’s framework is likely to be copied, challenged, and tested long before there’s clear evidence it works.
Sources:
https://calmatters.org/economy/technology/2025/10/newsom-signs-chatbot-regulations/
https://fpf.org/blog/understanding-the-new-wave-of-chatbot-legislation-california-sb-243-and-beyond/
https://abc30.com/post/california-gov-newsom-vetoes-bill-restrict-kids-access-ai-chatbots/18001782/
https://calawyers.org/privacy-law/ai-and-privacy-a-guide-to-californias-recently-passed-legislation/
