Close Menu
Şevket Ayaksız

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Save 33% on Tapo SolarCam C403 wireless kit

    Mayıs 17, 2025

    Max Introduces $7.99 ‘Extra Member’ Add-On

    Mayıs 17, 2025

    AMD’s RX 9060 XT Appears Online Ahead of Official Launch

    Mayıs 17, 2025
    Facebook X (Twitter) Instagram
    • software
    • Gadgets
    Facebook X (Twitter) Instagram
    Şevket AyaksızŞevket Ayaksız
    Subscribe
    • Home
    • Technology

      AMD’s RX 9060 XT Appears Online Ahead of Official Launch

      Mayıs 17, 2025

      Orb Offers Continuous Internet Performance Insights

      Mayıs 10, 2025

      MSI Claw Handhelds See 10% FPS Increase with Intel’s Latest Update

      Mayıs 10, 2025

      Ryzen 8000 HX Series Brings Affordable Power to Gaming Laptops

      Nisan 10, 2025

      Today only: Asus OLED laptop with 16GB RAM drops to $550

      Nisan 6, 2025
    • Adobe
    • Microsoft
    • java
    • Oracle
    Şevket Ayaksız
    Anasayfa » AI Stereotyping Based on Names Persists in ChatGPT, Though Reduced
    software

    AI Stereotyping Based on Names Persists in ChatGPT, Though Reduced

    By ayaksızEkim 25, 2024Yorum yapılmamış2 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Share
    Facebook Twitter LinkedIn Pinterest Email

    OpenAI has unveiled a pivotal research report that scrutinizes the potential for discrimination and stereotyping within ChatGPT’s interactions, particularly focusing on the influence of users’ names. The analysis was conducted using the advanced AI model GPT-4o, which reviewed extensive ChatGPT conversation data to identify the presence of “harmful stereotypes.” The results of this investigation were further corroborated by human reviewers to enhance reliability.

    Illustrative examples from older AI models reveal stark differences in responses based on user names. For instance, male users were frequently presented with content related to engineering and practical life advice, while female users received responses more aligned with domestic roles, such as cooking and childcare. This demonstrated a clear gender bias in the earlier versions of the chatbot.

    In contrast, OpenAI reports that its latest findings indicate a significant shift in how ChatGPT operates. The chatbot is now designed to deliver high-quality responses without bias toward a user’s gender or ethnicity, with harmful stereotypes appearing in only about 0.1 percent of outputs from GPT-4o. Notably, responses related to entertainment topics were found to carry slightly higher stereotypical content, at around 0.234 percent. This marks a considerable improvement from past versions, which saw stereotype rates approaching 1 percent.

     

    Post Views: 72
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    ayaksız
    • Website

    Related Posts

    Best VPN Discounts This Month

    Mayıs 12, 2025

    PC Manager App Now Displays Microsoft 365 Advertisements

    Mayıs 8, 2025

    Microsoft Raises Xbox Series X Price by $100 Amid Global Adjustments

    Mayıs 8, 2025
    Add A Comment

    Comments are closed.

    Editors Picks
    8.5

    Apple Planning Big Mac Redesign and Half-Sized Old Mac

    Ocak 5, 2021

    Autonomous Driving Startup Attracts Chinese Investor

    Ocak 5, 2021

    Onboard Cameras Allow Disabled Quadcopters to Fly

    Ocak 5, 2021
    Top Reviews
    9.1

    Review: T-Mobile Winning 5G Race Around the World

    By sevketayaksiz
    8.9

    Samsung Galaxy S21 Ultra Review: the New King of Android Phones

    By sevketayaksiz
    8.9

    Xiaomi Mi 10: New Variant with Snapdragon 870 Review

    By sevketayaksiz
    Advertisement
    Demo
    Şevket Ayaksız
    Facebook X (Twitter) Instagram YouTube
    • Home
    • Adobe
    • microsoft
    • java
    • Oracle
    • Contact
    © 2025 Theme Designed by Şevket Ayaksız.

    Type above and press Enter to search. Press Esc to cancel.