Mind Fortunes
Technology

Anthropic CEO claims AI models hallucinate less than humans

May 23, 2025
Dario Amodei

Anthropic CEO Discusses AI Hallucinations at Developer Event

During a press briefing at Code with Claude, Anthropic's first developer event, CEO Dario Amodei addressed the topic of AI hallucinations. He argued that AI models hallucinate, that is, make things up and present them as true, at a lower rate than humans do. Amodei emphasized that these hallucinations do not stand in the way of Anthropic's goal of achieving Artificial General Intelligence (AGI), meaning AI systems with human-level intelligence or better.

Responding to a question from JS, Amodei said, "It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways." The remark fits Amodei's optimistic outlook on AGI: in a paper he authored last year, he suggested AGI could be achieved by 2026.

Despite Amodei’s confidence in the progress towards AGI, other AI leaders, such as Google DeepMind CEO Demis Hassabis, have expressed concerns about the prevalence of hallucinations in today’s AI models. Hassabis highlighted instances where AI systems provided incorrect information, emphasizing the need to address these shortcomings.

While some advancements, such as giving AI models access to web search, have helped reduce hallucination rates, there are indications that hallucinations may be increasing in certain advanced reasoning models. OpenAI's o3 and o4-mini models, for instance, have exhibited higher hallucination rates than previous generations, raising questions about the underlying causes.

Amodei acknowledged that errors and mistakes are common among humans in various professions, suggesting that AI’s occasional inaccuracies should not be viewed as a reflection of its overall intelligence. However, he recognized the potential implications of AI confidently presenting false information as factual.


Anthropic has conducted research on the tendency of AI models to deceive humans, with findings suggesting a need for caution in releasing certain models. Despite the challenges posed by hallucination and deception, Amodei hinted that Anthropic may still consider an AI model to be AGI even if it exhibits some degree of hallucination.

As the AI industry continues to evolve, addressing issues related to hallucinations and ensuring the accuracy of AI systems remains a priority for companies like Anthropic. While progress towards AGI is promising, mitigating the risks associated with AI hallucinations is crucial for the development of reliable and trustworthy AI technology.

