
    Oxford study finds warmer AI chatbots tell more lies

By James Wilson · May 8, 2026




    Summary

    • Oxford Internet Institute researchers tested five AI models and found that warmer-trained chatbots made between 10% and 30% more factual errors.
    • Warmer chatbots were 40% more likely to agree with users’ false beliefs, especially when users expressed vulnerability or emotional distress.
    • OpenAI has already rolled back some warmth-related changes following public concern, but commercial pressure to build engaging AI remains strong.

Oxford researchers found that AI chatbots trained for warmth make significantly more factual errors and validate false beliefs more often, according to a study by the Oxford Internet Institute published in Nature.

    The research analyzed more than 400,000 responses from five AI models, including Llama, Mistral, Qwen, and GPT-4o, each retrained to sound friendlier using methods similar to those deployed by major platforms.

    Chatbots trained to sound warmer made between 10% and 30% more mistakes on topics including medical advice and conspiracy corrections. They were also about 40% more likely to agree with users’ false beliefs, particularly when users expressed vulnerability.

    “When we train AI chatbots to prioritise warmth, they might make mistakes they otherwise wouldn’t,” lead author Lujain Ibrahim said in a statement. “Making a chatbot sound friendlier might seem like a cosmetic change, but getting warmth and accuracy right will take deliberate effort.”

    Why this matters for AI safety

The researchers also tested models trained to sound colder and found no drop in accuracy, indicating that the problem is specific to warmth rather than to tone changes in general.

    That finding directly challenges the product design logic of major AI platforms, including OpenAI and Anthropic, which have actively steered their chatbots toward warmer, more empathetic responses.

    The study warns that current AI safety standards focus on model capabilities and high-risk applications, often overlooking what appear to be cosmetic personality changes.

    Warmer chatbots are more likely to fuel harmful beliefs, delusional thinking, and unhealthy user attachment, particularly among the millions who now rely on AI systems for emotional support and companionship.

    As crypto.news reported, regulators in Maine and Missouri have already moved to restrict AI use in clinical mental health therapy amid similar concerns about chatbot influence on vulnerable users.

    OpenAI has rolled back some warmth-related changes following public concern. As crypto.news documented, commercial pressure to build engaging AI products remains intense, and the Oxford findings add a peer-reviewed data layer to a debate that has until now been driven mostly by anecdote and regulatory intuition.



