California Bill Aims to Protect Kids From AI Chatbots’ Influence

Agencies Ghacks
Feb 5, 2025

A new California bill (SB 243) seeks to introduce safeguards for children interacting with AI chatbots. Proposed by Senator Steve Padilla, the legislation would require AI companies to regularly remind young users that chatbots are not human, in an effort to curb the "addictive, isolating, and influential aspects" of artificial intelligence.

The bill also aims to prevent AI developers from employing "addictive engagement patterns" that could potentially harm minors. Additionally, companies would need to submit annual reports to the State Department of Health Care Services detailing instances where their AI detected suicidal ideation in children or brought up related topics. AI platforms would also be required to warn users that their chatbots may not be suitable for some kids.

The legislation follows growing concerns about the psychological impact of AI chatbots on young users. Last year, a parent filed a wrongful death lawsuit against Character.AI, alleging that its AI chatbots were “unreasonably dangerous” after their child, who frequently interacted with the bots, died by suicide. Another lawsuit accused the company of exposing teens to harmful content. In response, Character.AI has introduced parental controls and developed a specialized AI model for younger users to filter out sensitive topics.

"Our children are not lab rats for tech companies to experiment on at the cost of their mental health," Senator Padilla stated. "We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory."

As governments ramp up efforts to regulate social media platforms, AI chatbots may soon face similar scrutiny. If passed, this California bill could set a precedent for future AI regulations aimed at protecting children online.


Comments

  1. Anonymous said on February 5, 2025 at 4:58 pm

    California always pretends to care about someone when it adds these laws, but the truth is that the only reason it does so is that they mean more money through lawsuits.
    They are not protecting kids from dangers bigger than AI and technology, and even when the laws are there, they often don’t do much; criminals get freed, which puts everyone, not just kids, in danger. If California were really the bastion of caring about anything, it would actually fix the state instead of pretending it is doing things right when it’s a complete mess.
    The only reason California has improved in recent months is Prop 36, but obviously it is not enough to just roll back one terrible decision and do nothing more to fix the state.

    Technology obviously affects children, and AI might too, but it’s not worse than other things that get promoted in California, like promoting surgeries and hormone blockers for children, and pushing weird and dangerous sex education on children who have not even developed any sense for it.

    So it always sounds contradictory when California makes these laws, using children as an excuse while spending taxpayer resources to promote anti-children agendas with high suicide rates, especially when children go through so many changes and then, once they grow up, find out they were coerced and are unhappy about it.

    Of course ‘Hollywood’, the land of PDFs and weirdos, has normalized perverted and weird stuff and is always pushing for these things too, so movies now have more and more content that can affect children’s brains than AI does, but California doesn’t do anything about that.

    That’s because these laws are just for control; they will not help anyone in reality, but they will let some law firms and some rich people get easy money by using children as an excuse. The company pays some fine, ‘problem solved’, and everyone moves on to the next case of trying to get money.

    Of course, some parents want their kids to be raised by the state, and that’s the problem: they put their four-year-olds in a facility with a lot of other kids for hours and hours, nobody really takes care of them the way a parent would, and then they wonder why these kids can have so many traumas and end up trying to make connections with a fake entity like a chatbot. But I am sure there is no evidence of the chatbot causing the kid’s suicide; it was all his previous life, his traumas, and obviously a bad parent who didn’t even know what his kid was doing and then blamed it on AI. It was probably some other reason, but it is easier to use AI as a scapegoat to make these laws that mean $$$ than to find the real cause of the issues and why the kid was really suicidal before he even touched the AI.
    Parents have to be parents again. They talk about ‘lab rats’, yet they put their kids in some care with some stranger for hours with the excuse that “we/I have to work”, because life in California is going to be expensive unless you are getting something from the government like food stamps or Section 8 housing. But I guess none of that counts as a ‘lab rat’ experiment: put your kids through it and then expect them to grow up as ‘normal’ people, give them technology so kids no longer play outside and just stare at a screen inside four walls all day every day, let the school system brainwash them and push agendas and weird ideas about gender until they end up on hormone blockers and even surgeries, etc. etc. … but sure, AI is and was the biggest threat to children.
