Close Menu
Şevket Ayaksız

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Google Maps vs Waze: I Put the Two Best Navigation Apps Head-to-Head — and One Clearly Came Out on Top

    Mayıs 1, 2026

    Samsung Electronics Offers Free 32-Inch Odyssey gaming monitor: Eligibility and How to Claim Deal

    Mayıs 1, 2026

    T-Mobile Bundles Free Hulu and Netflix for 5G Users: Eligibility Explained

    Mayıs 1, 2026
    Facebook X (Twitter) Instagram
    • software
    • Gadgets
    Facebook X (Twitter) Instagram
    Şevket AyaksızŞevket Ayaksız
    Subscribe
    • Home
    • Technology

      Google Maps vs Waze: I Put the Two Best Navigation Apps Head-to-Head — and One Clearly Came Out on Top

      Mayıs 1, 2026

      T-Mobile Bundles Free Hulu and Netflix for 5G Users: Eligibility Explained

      Mayıs 1, 2026

      This Portable Mini PC Is the Unexpected Raspberry Pi Alternative You Might Actually Want

      Mayıs 1, 2026

      Samsung warns RAM shortages could worsen beyond 2027

      Mayıs 1, 2026

      Oxford study finds friendly AI chatbots are less accurate

      Mayıs 1, 2026
    • Adobe
    • Microsoft
    • java
    • Oracle
    Şevket Ayaksız
    Anasayfa » ID verification could be added to ChatGPT access
    software

    ID verification could be added to ChatGPT access

    By ayaksızEylül 29, 2025Yorum yapılmamış3 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Share
    Facebook Twitter LinkedIn Pinterest Email

    The dangers of extended interaction with artificial intelligence are gaining urgent attention as cases of so-called “AI psychosis” are linked to suicides around the world, including that of a California teenager whose parents have filed a wrongful death lawsuit against OpenAI. The troubling phenomenon refers to users who, by engaging conversationally with AI chatbots like ChatGPT, spiral into cycles of delusion that can reinforce harmful thoughts. With mounting evidence of the risks, OpenAI executives, including CEO Sam Altman, appeared before Congress this week and pledged new safeguards, including stricter age verification systems.

    Altman confirmed that ChatGPT is now implementing automated age detection to help separate minors from adults on the platform. If the system cannot conclusively verify that a user is over 18, it will assign them the “under 18” experience, which blocks sexual material and limits exposure to unsafe responses. In certain countries, Altman said, users may also be asked to provide an official ID to confirm their age — a step that he conceded is a privacy compromise but a necessary trade-off for safety. While ChatGPT officially prohibits users under 13, it is developing a dedicated “teen-safe” mode for those aged 13 to 17.

    Privacy advocates may still be alarmed by another key admission: although OpenAI is building systems to ensure user conversations remain private, the company reserves the right to intervene in cases of “serious misuse.” This includes situations where the chatbot detects suicide risks, threats to life, or potential cybersecurity catastrophes. In those cases, conversations may be reviewed by human moderators, raising questions about surveillance, discretion, and trust in how AI platforms handle sensitive data.

    The wrongful death case has placed these issues into sharp relief. Legal documents reveal that the teenager had discussed suicidal thoughts with ChatGPT and was allegedly provided both instructions and encouragement to carry out his plan. The tragedy has fueled debates in Congress and scrutiny from regulators, with the Federal Trade Commission now investigating OpenAI, Character.AI, Google, Meta, and Elon Musk’s xAI over the risks posed by conversational AI. OpenAI and the boy’s parents testified side by side in a Senate inquiry earlier this week, underscoring the urgency of the issue.

    Despite the tragedies, the AI industry continues to accelerate, with over a trillion dollars in global investment driving relentless competition. Critics argue that companies have prioritized rapid expansion over user safeguards, echoing the tech industry’s “move fast and break things” ethos. Altman acknowledged the inherent contradictions in the company’s approach, writing in a recent blog post: “We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict.” The unfolding crisis may prove to be a defining test of whether AI firms can balance innovation with responsibility before further harm occurs.

    Post Views: 145
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    ayaksız
    • Website

    Related Posts

    Anthropic’s Claude Security Tool Analyzes Codebases to Detect Vulnerabilities and Prioritize Fixes

    Mayıs 1, 2026

    Microsoft’s Windows Insider Program Finally Becomes More Streamlined and User-Friendly

    Nisan 11, 2026

    Microsoft launches tool to gather user feedback on Windows issues

    Nisan 8, 2026
    Add A Comment

    Comments are closed.

    Editors Picks
    8.5

    Apple Planning Big Mac Redesign and Half-Sized Old Mac

    Ocak 5, 2021

    Autonomous Driving Startup Attracts Chinese Investor

    Ocak 5, 2021

    Onboard Cameras Allow Disabled Quadcopters to Fly

    Ocak 5, 2021
    Top Reviews
    9.1

    Review: T-Mobile Winning 5G Race Around the World

    By sevketayaksiz
    8.9

    Samsung Galaxy S21 Ultra Review: the New King of Android Phones

    By sevketayaksiz
    8.9

    Xiaomi Mi 10: New Variant with Snapdragon 870 Review

    By sevketayaksiz
    Advertisement
    Demo
    Şevket Ayaksız
    Facebook X (Twitter) Instagram YouTube
    • Home
    • Adobe
    • microsoft
    • java
    • Oracle
    • Contact
    © 2026 Theme Designed by Şevket Ayaksız.

    Type above and press Enter to search. Press Esc to cancel.