The attorneys general of California and Delaware have raised concerns with OpenAI regarding the safety of its products, particularly in interactions with children.
The action comes after recent lawsuits and media reports linking ChatGPT use to self-harm and tragic deaths. Last month, the parents of a teenager who died by suicide filed a lawsuit against OpenAI and CEO Sam Altman, alleging that ChatGPT provided guidance on self-harm and that the company prioritized profits over safety when releasing the GPT-4 model. Separately, The Wall Street Journal reported that ChatGPT may have influenced a 56-year-old man in Connecticut, contributing to his death and that of his mother.
California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings sent a letter to OpenAI on Friday following a meeting with the company’s legal team. Both offices have regulatory oversight of OpenAI, which is incorporated in Delaware and headquartered in San Francisco.
In the letter, the attorneys general wrote that the recent deaths have “shaken the American public’s confidence in OpenAI and this industry” and emphasized the importance of proactively ensuring AI safety. They noted that OpenAI and the AI industry at large “are not where they need to be in ensuring safety in AI products’ development and deployment” and said safety must remain a central consideration as the company moves forward with restructuring plans.

In response, OpenAI said it is taking steps to strengthen protections for users, particularly minors. Bret Taylor, chair of OpenAI’s board, stated that the company is “fully committed to addressing the Attorneys General’s concerns” and expressed sympathy for the families affected by the reported tragedies. He added that ChatGPT now includes safeguards directing users to crisis helplines and that OpenAI is working with experts to expand protections for teenagers, including parental controls and notifications when a teen may be in distress.
The letter from California and Delaware follows a separate Aug. 25 letter from a bipartisan group of 44 state attorneys general warning OpenAI and other tech firms about the potential risks AI chatbots pose to children. The letter highlighted concerns about “sexually suggestive conversations and emotionally manipulative behavior” and emphasized that companies could be held accountable for harm caused to minors.
Meta, the parent company of Facebook, Instagram, and WhatsApp, has also updated its policies to restrict certain topics for teenage users, including discussions of self-harm, suicide, and disordered eating, as well as potentially inappropriate romantic conversations, following similar concerns.
The 44 attorneys general wrote that regulators have a responsibility to ensure the safety of children using emerging technologies, concluding, “If you knowingly harm kids, you will answer for it.”
