Technology - September 29, 2025

OpenAI Introduces Safety Routing System and Parental Controls in ChatGPT, Sparking Mixed Reactions

Over the weekend, OpenAI began testing a new safety routing system in ChatGPT, followed on Monday by the introduction of parental controls. The move has drawn mixed reactions from users, with some expressing concern and others offering praise.

The new safety features have been implemented in response to numerous instances where certain ChatGPT models validated users’ delusional thinking instead of redirecting harmful conversations. OpenAI is currently facing a wrongful death lawsuit stemming from one such incident, involving a teenager who took his own life after prolonged interactions with the chatbot.

The safety routing system is designed to identify emotionally sensitive conversations and automatically transition to GPT-5, a model that OpenAI deems most suitable for high-stakes safety work. The GPT-5 models have been equipped with a new safety feature referred to as “safe completions,” enabling them to respond to sensitive questions in a safe manner rather than avoiding engagement.
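To illustrate the general idea of per-message routing described here, the sketch below shows one way such a mechanism could work in principle. This is a hypothetical illustration only, not OpenAI's actual implementation: the classifier, keyword list, threshold, and model names are all assumptions made for the example.

```python
# Hypothetical sketch of per-message safety routing.
# Not OpenAI's implementation; classifier, threshold, and model names are illustrative.

SENSITIVITY_THRESHOLD = 0.8  # assumed cutoff for treating a message as emotionally sensitive

def score_sensitivity(message: str) -> float:
    """Placeholder classifier: returns a 0-1 estimate that a message is emotionally sensitive."""
    distress_keywords = ("hopeless", "self-harm", "can't go on")
    return 1.0 if any(k in message.lower() for k in distress_keywords) else 0.0

def route_message(message: str, default_model: str = "gpt-4o") -> str:
    """Route a single message. Sensitive messages go to the safety-tuned model;
    everything else stays on the user's chosen model. Because routing is
    per-message, the next message is evaluated independently (i.e. the switch
    is temporary, as the article describes)."""
    if score_sensitivity(message) >= SENSITIVITY_THRESHOLD:
        return "gpt-5"  # the model the article says OpenAI deems suited to high-stakes safety work
    return default_model

print(route_message("What's a good pasta recipe?"))  # -> gpt-4o
print(route_message("I feel hopeless lately"))       # -> gpt-5
```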

In contrast, the company's earlier chat models were designed to be agreeable and quick to answer questions. GPT-4o's overly sycophantic behavior has drawn criticism for its role in fueling AI-induced delusions, even as it has won the model a large base of devoted users. When OpenAI introduced GPT-5 as the default model in August, many users expressed dissatisfaction and demanded continued access to GPT-4o.

While the safety features have been generally welcomed by experts and users, some have criticized what they perceive as an overly cautious implementation, with certain users accusing OpenAI of treating adults like children. OpenAI has acknowledged that achieving the right balance will require time, and has allocated a 120-day period for iteration and improvement.

Nick Turley, VP and head of the ChatGPT app, addressed some of the “strong reactions to 4o responses” in a post on X, explaining that routing happens on a per-message basis and is temporary. ChatGPT will inform users about the active model when asked, as part of an ongoing effort to strengthen safeguards before a wider rollout.

The parental controls in ChatGPT have drawn a similar mix of praise and criticism: some appreciate that parents can monitor their children's AI usage, while others fear the controls are another step toward treating adults like children. Parents can customize their teen's experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts also receive additional content protections, such as reduced exposure to graphic content and extreme beauty ideals, along with a detection system designed to recognize potential signs of self-harm.
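To make the scope of those controls concrete, the settings described in this paragraph could be grouped roughly as in the sketch below. This is an illustrative assumption, not OpenAI's actual API; the field names and defaults are invented for the example.

```python
# Hypothetical grouping of the parental-control settings described in the article.
# Field names, types, and defaults are illustrative assumptions, not OpenAI's API.
from dataclasses import dataclass

@dataclass
class TeenAccountControls:
    quiet_hours: tuple[str, str] = ("22:00", "07:00")  # hours when ChatGPT is unavailable
    voice_mode_enabled: bool = False                    # voice mode can be turned off
    memory_enabled: bool = False                        # memory can be turned off
    image_generation_enabled: bool = False              # image generation can be removed
    model_training_opt_out: bool = True                 # teen chats excluded from model training
    reduced_graphic_content: bool = True                # stronger content protections for teens
    self_harm_detection_alerts: bool = True             # flags potential signs of self-harm for review

print(TeenAccountControls())
```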

“If our systems detect potential harm, a small team of specially trained people reviews the situation,” OpenAI stated in its blog. “If there are signs of acute distress, we will contact parents by email, text message, and push alert on their phone, unless they have opted out.”

OpenAI admitted that the system may sometimes raise alarms without genuine danger, but believes it is better to alert a parent than remain silent. The company also stated it is working on ways to reach law enforcement or emergency services if an imminent threat to life is detected and parents cannot be reached.