Newsom’s Attempt to Strike a Balance with New AI Laws: Is Regulation Possible without Hindering AI Innovation?
- Davyn Gottfried

California Governor Gavin Newsom is attempting to strike a balance between protecting children’s safety and preserving AI innovation in California. In October, Newsom signed a series of new laws regulating online technology, including Senate Bill 243, which restricts companion chatbots from producing suicide and self-harm content. However, Newsom also vetoed Assembly Bill 1064, a broader proposal to regulate AI systems used by children, arguing that its sweeping restrictions on conversational AI could stifle innovation and cut young users off from these tools altogether. His decision to sign some bills while vetoing others highlights California’s ongoing challenge: is it possible to regulate emerging technologies without slowing the pace of technological progress?
In recent years, the rapid rise of generative AI platforms such as ChatGPT, Replika, and Snapchat’s “My AI” has led to growing interaction between minors and artificial intelligence. Some children turn to chatbots for emotional advice or companionship, often confiding in them about sensitive topics like depression, bullying, and self-image. This has sparked intense debate among lawmakers across the country, as AI platforms continue to operate largely unregulated.
Senate Bill 243 focuses on protecting children from companion chatbots, AI programs designed to foster emotional interactions with users. The law requires chatbots to block or flag suicide and self-harm content and instead redirect users to crisis prevention hotlines. It also mandates screen-time reminders every three hours and bars minors from being shown sexually explicit material. Supporters say the law establishes necessary safeguards for children who rely on chatbots for emotional support while still embracing technological innovation. “Emerging technology like chatbots and social media can inspire, educate, and connect, but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. Alongside the chatbot measure, Newsom signed other online safety laws, including age verification requirements for social media and restrictions on sexually explicit AI-generated content. Together, the bills represent a growing effort to hold tech companies accountable for how their products affect younger users.
While Newsom approved several bills aimed at protecting children’s safety online, he vetoed Assembly Bill 1064, which would have imposed broader restrictions on AI tools like ChatGPT. The measure, known as the Leading Ethical AI Development for Kids Act, or LEAD for Kids, sought to establish safety and privacy protections for children interacting with AI. It would have created oversight for AI systems, classifying technologies by risk level and banning those considered harmful. In his veto message, Newsom wrote that AB 1064 “imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.”
The signing of SB 243 and the veto of AB 1064 drew mixed reactions from parents, advocates, and technology experts. Megan Garcia, whose son died by suicide after interacting with a companion chatbot that may have encouraged his death, called Newsom’s efforts “a step toward accountability in an industry that has operated without limits.” Others were less convinced: Jim Steyer, a civil rights attorney and founder of Common Sense Media, argued that the bill “sets lower standards than other states and may mislead parents into thinking their children are fully protected.” Common Sense Media initially supported the legislation but withdrew its backing after amendments softened several key provisions.
Tech companies and AI developers have also voiced concern that California’s regulations could discourage innovation in the state’s leading industry. Several groups warned that the legal risks might push AI startups to relocate or to strip AI features from products used by children. Others argue that clear guardrails could actually promote innovation by building user trust and setting consistent standards. This debate highlights the tension between consumer protection and technological progress in Silicon Valley and California at large.
Newsom’s legislative decisions feed a broader debate over whether California can regulate artificial intelligence while maintaining its reputation as a leader in innovation. By signing SB 243 and vetoing AB 1064, Newsom has taken a cautious approach to the future of artificial intelligence, one that prioritizes child safety while preserving the state’s technological standing. As AI continues to evolve and integrate into every aspect of life, lawmakers face the challenge of crafting policies that protect vulnerable users without hindering the innovation that drives California’s technology industry.


