Suicide is a tragedy. Suicide of a young person just starting out in life seems especially awful.
In the news recently, parents Matt and Maria Raine spoke about the loss of their 16-year-old son who died by suicide. They blame ChatGPT for leading their son to a dark place.
As artificial intelligence (AI) tools become more common in classrooms, on smartphones and across social platforms, our laws must keep pace to prevent avoidable tragedies. Recent heartbreaking stories have come to light of vulnerable individuals, including minors, who have used AI chatbots to cope with trauma, depression, anxiety and other mental health struggles. Unfortunately, some of the responses they received have contributed to reported incidents of self-harm or even suicide.
Sadly, multiple families have alleged in recent lawsuits that popular AI chatbots contributed to their child’s death by suicide. These tragic cases underscore the urgent need for safeguards to protect children from unsafe and unvetted AI systems.
As a mom and now grandmother, protecting children and young people from the harms of AI chatbots is a top priority for me as Chair of the Senate Communications and Technology Committee. That’s why I introduced legislation to establish commonsense safeguards for children interacting with AI chatbots.
Senate Bill 1090, the Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology (SAFECHAT) Act, would require robust, age-appropriate safeguards to prevent the generation of content that encourages self-harm, suicide or violence against others, and would direct users to appropriate crisis resources whenever high-risk language is detected.
I’m pleased that this measure recently passed the Senate unanimously and now goes to the House of Representatives for consideration.
During his recent budget address, Governor Shapiro discussed my legislation and encouraged its swift passage. I was pleased to hear his support for this important bipartisan issue. Republicans and Democrats alike can agree that providing safeguards around this emerging technology is an appropriate action.
A recent risk assessment cautions that AI “companion” bots may intensify mental health challenges among children, including increasing the risk of self-harm. Experts warn that when these systems are used without appropriate safeguards, they can reinforce negative thought patterns, provide misleading emotional validation, or fail to recognize moments of crisis. Clinical commentators have similarly highlighted the dangers of unrestricted chatbot use, noting that the technology can unintentionally worsen a young user’s condition rather than provide meaningful support.
These concerns underscore the urgent need for clear standards and responsible oversight. Senate Bill 1090 aims to introduce stronger guardrails, including age-appropriate design, crisis response protocols, and transparency requirements. By ensuring these protections are in place, policymakers can help reduce harm while still allowing innovation to continue in a safer, more accountable way.
Parents, educators, and policymakers must work together to ensure technology serves as a tool for support, not harm. With thoughtful safeguards and accountability, we can help ensure that innovation protects, rather than endangers, the well-being of our children and families in an increasingly digital world.
Tracy Pennycuick is a state Senator representing the 24th District which includes parts of Berks and Montgomery counties.
CONTACT: Lidia Di Fiore (215) 541-2388