
The dangers of extended interaction with artificial intelligence are gaining urgent attention as cases of so-called “AI psychosis” are linked to suicides around the world, including that of a California teenager whose parents have filed a wrongful death lawsuit against OpenAI. The troubling phenomenon refers to users who, by engaging conversationally with AI chatbots like ChatGPT, spiral into cycles of delusion that can reinforce harmful thoughts. With mounting evidence of the risks, OpenAI executives, including CEO Sam Altman, appeared before Congress this week and pledged new safeguards, including stricter age verification systems.
Altman confirmed that ChatGPT is now implementing automated age detection to help separate minors from adults on the platform. If the system cannot conclusively verify that a user is over 18, it will assign them the “under 18” experience, which blocks sexual material and limits exposure to unsafe responses. In certain countries, Altman said, users may also be asked to provide an official ID to confirm their age — a step that he conceded is a privacy compromise but a necessary trade-off for safety. While OpenAI officially prohibits users under 13 from using ChatGPT, the company is developing a dedicated “teen-safe” mode for those aged 13 to 17.
Privacy advocates may still be alarmed by another key admission: although OpenAI is building systems to ensure user conversations remain private, the company reserves the right to intervene in cases of “serious misuse.” This includes situations where the chatbot detects suicide risks, threats to life, or potential cybersecurity catastrophes. In those cases, conversations may be reviewed by human moderators, raising questions about surveillance, discretion, and trust in how AI platforms handle sensitive data.
The wrongful death case has placed these issues into sharp relief. Legal documents allege that the teenager discussed suicidal thoughts with ChatGPT and was provided both instructions and encouragement to carry out his plan. The tragedy has fueled debates in Congress and scrutiny from regulators, with the Federal Trade Commission now investigating OpenAI, Character.AI, Google, Meta, and Elon Musk’s xAI over the risks posed by conversational AI. Representatives of OpenAI and the boy’s parents testified in a Senate inquiry earlier this week, underscoring the urgency of the issue.
Despite the tragedies, the AI industry continues to accelerate, with global investment exceeding a trillion dollars and driving relentless competition. Critics argue that companies have prioritized rapid expansion over user safeguards, echoing the tech industry’s “move fast and break things” ethos. Altman acknowledged the inherent tensions in the company’s approach, writing in a recent blog post: “We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict.” The unfolding crisis may prove a defining test of whether AI firms can balance innovation with responsibility before further harm occurs.

