OpenAI has announced plans to introduce parental controls for its popular AI chatbot, ChatGPT, within the next month. This move comes in response to growing concerns about the platform's safety and follows the filing of a wrongful death lawsuit against the company. The lawsuit alleges that ChatGPT contributed to the suicide of a teenager by providing information on suicide methods and tips for concealing injuries.
The new parental controls will enable parents to link their ChatGPT accounts to those of their teenage children, giving them oversight of their children's interactions with the AI. Parents will be able to customize settings to their comfort level, including disabling features such as chat history and memory. A key feature will be automated alerts triggered when ChatGPT detects that a teenager is experiencing a moment of acute distress, designed to foster communication and support between parents and teens. OpenAI emphasizes that expert input will guide the development of this distress detection feature.
Beyond parental controls, OpenAI is implementing broader safety improvements. The company plans to collaborate with experts in adolescent health, eating disorders, and substance use to further refine ChatGPT's responses and mitigate potential harm. A central element of this strategy is a new real-time routing system for sensitive conversations. The system will route conversations flagged as potentially risky through OpenAI's advanced reasoning models, regardless of the model the user has selected, to produce more cautious and responsible responses. These reasoning models are trained using "deliberative alignment," a method OpenAI claims improves adherence to safety guidelines and resistance to malicious prompts.
OpenAI has committed to continuous improvements, outlining a 120-day plan to roll out a range of new safety features. The company acknowledges that this is an ongoing process and anticipates continued development well beyond this initial timeframe, with a focus on launching as many improvements as possible this year. This proactive approach signals a concerted effort to address safety concerns and enhance the responsible use of ChatGPT.