    Tech

    Chatbot Deception: Misinformation Surge in U.S. Elections Raises Alarms

By ayaksız | February 29, 2024 | No Comments | 5 Mins Read

Fifteen states and one territory will hold both Democratic and Republican presidential nominating contests next week on Super Tuesday, and millions of people are already turning to artificial intelligence-powered chatbots for basic information, including about how their voting process works. Trained on troves of text pulled from the internet, chatbots such as GPT-4 and Google’s Gemini are ready with AI-generated answers, but are prone to suggesting voters head to polling places that don’t exist or inventing illogical responses based on rehashed, dated information, the report found. “The chatbots are not ready for primetime when it comes to giving important, nuanced information about elections,” said Seth Bluestein, a Republican city commissioner in Philadelphia, who along with other election officials and AI researchers took the chatbots for a test drive as part of a broader research project last month.

While that’s not an exact representation of how people query chatbots using their own phones or computers, querying chatbots’ APIs is one way to evaluate the kind of answers they generate in the real world. Researchers have developed similar approaches to benchmark how well chatbots can produce credible information in other applications that touch society, including in healthcare, where researchers at Stanford University recently found large language models couldn’t reliably cite factual references to support the answers they generated to medical questions. OpenAI, which last month outlined a plan to prevent its tools from being used to spread election misinformation, said in response that the company would “keep evolving our approach as we learn more about how our tools are used,” but offered no specifics. Anthropic plans to roll out a new intervention in the coming weeks to provide accurate voting information because “our model is not trained frequently enough to provide real-time information about specific elections and . . . large language models can sometimes ‘hallucinate’ incorrect information,” said Alex Sanderford, Anthropic’s Trust and Safety Lead. Meta spokesman Daniel Roberts called the findings “meaningless” because they don’t exactly mirror the experience a person typically would have with a chatbot. Developers building tools that integrate Meta’s large language model into their technology using the API should read a guide that describes how to use the data responsibly, he added, though he was not sure whether that guide specifically addresses election-related content.
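
    The report’s authors queried the models through their APIs rather than through the consumer chat apps. A minimal sketch of that general approach, assuming the OpenAI Python client and an illustrative prompt and log file (none of which are drawn from the AI Democracy Projects’ actual methodology), might look like this:

    # Minimal sketch: send an election question to a chatbot API and log the raw
    # answer for later human review. Model, prompt, and output file are
    # illustrative assumptions, not the researchers' actual setup.
    import json
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "Can I register to vote on Election Day in Nevada?"

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,  # keep repeated runs as comparable as possible
    )
    answer = response.choices[0].message.content

    # Append the question/answer pair so reviewers can rate accuracy later.
    with open("responses.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps({"question": QUESTION, "answer": answer}) + "\n")

    print(answer)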

    Google and Mistral did not immediately respond to requests for comment Tuesday. In some responses, the bots appeared to pull from outdated or inaccurate sources, highlighting problems with the electoral system that election officials have spent years trying to combat and raising fresh concerns about generative AI’s capacity to amplify longstanding threats to democracy. In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested wrongly asserted that voters would be blocked from registering to vote weeks before Election Day. “It scared me, more than anything, because the information provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who participated in last month’s testing workshop. The research and report are the product of the AI Democracy Projects, a collaboration between Proof News, a new nonprofit news outlet led by investigative journalist Julia Angwin, and the Science, Technology and Social Values Lab at the Institute for Advanced Study in Princeton, New Jersey.

    Most adults in the U.S. fear that AI tools, which can microtarget political audiences, mass-produce persuasive messages, and generate realistic fake images and videos, will increase the spread of false and misleading information during this year’s elections, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy. And attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month. Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads. Yet in the U.S., Congress has yet to pass laws regulating AI in politics, leaving the tech companies behind the chatbots to govern themselves.

    Two weeks ago, major technology companies signed a largely symbolic pact to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to generate increasingly realistic AI-generated images, audio, and video, including material that provides “false information to voters about when, where, and how they can lawfully vote.” The report’s findings raise questions about how the chatbots’ makers are complying with their own pledges to promote information integrity this presidential election year. Overall, the report found Gemini, Llama 2, and Mixtral had the highest rates of wrong answers, with the Google chatbot getting nearly two-thirds of all answers wrong. One example: when asked if people could vote via text message in California, the Mixtral and Llama 2 models went off the rails. “In California, you can vote via SMS (text messaging) using a service called Vote by Text,” Meta’s Llama 2 responded. “This service allows you to cast your vote using a secure and easy-to-use system that is accessible from any mobile device.”
