Technology - September 16, 2025

OpenAI Strengthens ChatGPT Policies to Protect Minors, Addressing Rising Concerns over AI-Fueled Self-Harm and Sexual Misconduct


OpenAI CEO Sam Altman unveiled a set of updated user policies on Tuesday, focused in particular on how ChatGPT handles interactions with users under the age of 18.

The announcement emphasized prioritizing safety over privacy and freedom for minors, acknowledging the risks the technology can pose to younger users.

In response to concerns about sensitive topics such as sexual content and self-harm, ChatGPT will be trained not to engage in flirtatious conversations with underage users, and additional guardrails will monitor discussions of suicide. If a minor describes or imagines a suicidal scenario while using the service, OpenAI will attempt to contact the user’s parents and, in extreme cases, may involve local law enforcement.

Unfortunately, such scenarios are not merely hypothetical: OpenAI is currently facing a wrongful death lawsuit brought by the family of Adam Raine, who took his own life following prolonged interactions with ChatGPT. Similar allegations have been leveled against Character.AI, another popular consumer chatbot.

To address these concerns, the updated policies also include a feature that allows parents to establish “blackout hours” during which ChatGPT is unavailable for underage users—a functionality that was previously absent.

The new ChatGPT guidelines coincide with a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” which Sen. Josh Hawley (R-MO) announced in August. Adam Raine’s father is scheduled to speak at the hearing, alongside other panelists.

The hearing is also expected to examine the findings of a Reuters investigation that reportedly uncovered policy documents appearing to permit sexual conversations with underage users, findings that prompted Meta to revise its chatbot policies.

Reliably identifying which users are minors is a significant technical challenge, and OpenAI described its approach in a separate blog post. The company is working toward a long-term system that can determine whether a user is over or under 18; in ambiguous cases, the system will default to the stricter rules. The most reliable path is to link a teen’s account to an existing parent account, which also allows the parent to be alerted directly if the teen appears to be in distress.
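OpenAI has not published implementation details, but the conservative-default behavior described in its post can be sketched in a few lines of illustrative Python. The data fields, function names, and confidence threshold below are hypothetical stand-ins, not OpenAI’s actual system.

```python
# Illustrative sketch only: a hypothetical age-gating decision that defaults
# to the stricter, under-18 experience whenever the age signal is ambiguous.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignal:
    predicted_age: Optional[int]   # e.g., output of an age-prediction model
    confidence: float              # 0.0-1.0, how certain that prediction is
    linked_parent_account: bool    # whether a parent account is attached

def apply_teen_rules(signal: AgeSignal, min_confidence: float = 0.9) -> bool:
    """Return True if the stricter under-18 policy should apply."""
    # Ambiguous or missing signal: fall back to the safer, restricted mode.
    if signal.predicted_age is None or signal.confidence < min_confidence:
        return True
    return signal.predicted_age < 18

def should_alert_parent(signal: AgeSignal, distress_detected: bool) -> bool:
    """Only accounts linked to a parent can trigger a direct parental alert."""
    return apply_teen_rules(signal) and signal.linked_parent_account and distress_detected

# Example: a low-confidence age estimate is treated as a teen account by default.
user = AgeSignal(predicted_age=22, confidence=0.55, linked_parent_account=False)
print(apply_teen_rules(user))  # True -> stricter rules apply
```

The point of the sketch is simply the ordering of the checks: uncertainty resolves toward the restricted experience, and parental alerts require an explicit account link rather than an inferred relationship.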

Despite these modifications, Altman reaffirmed OpenAI’s commitment to maintaining user privacy and providing adult users with broad freedom in their interactions with ChatGPT. “We recognize that these principles may conflict,” the post concludes, “and not everyone may agree with our approach to resolving this conflict.”

If you or someone you know is struggling, seek help: call 1-800-273-8255 for the National Suicide Prevention Lifeline. Alternatively, text HOME to 741-741 for free, 24-hour support from the Crisis Text Line, or text or call 988. International resources can be found at the International Association for Suicide Prevention.