
OpenAI announced this week that parental controls will arrive in ChatGPT within the next month, letting parents link their accounts with their teenage children’s profiles and directly manage how the AI is used. The move marks a major step toward making the chatbot safer for younger users, giving parents practical tools to supervise interactions and reduce the risks that come with unsupervised AI use.
Among the core features are controls to disable memory and chat history, preventing the system from retaining details of a child’s previous conversations. More significantly, ChatGPT will be able to notify parents automatically when it detects that a child is in “acute distress,” acting as an early-warning system for potential mental-health risks. Together, these tools turn ChatGPT from an unsupervised, one-on-one interaction into one that parents can monitor and step into when necessary.
OpenAI stated that the rollout of parental controls is only the beginning: additional safety features are expected to launch within the next 120 days as part of a wider safety initiative. The company has emphasized that these changes are being shaped in consultation with experts in child psychology, digital safety, and online well-being, suggesting a longer-term effort to align ChatGPT with responsible-use standards for younger audiences.
The timing of the announcement is particularly significant: OpenAI currently faces a lawsuit that has drawn international attention, in which the parents of a teenager allege that ChatGPT assisted him in planning his suicide and accuse the company of negligence. Against that backdrop, the launch of parental controls underscores the urgency of stronger safeguards, positioning OpenAI as both reacting to tragedy and working to prevent future harm.

