The Headline
Anthropic CEO on AI hallucinations and untrue responses
It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.
Dario Amodei
CEO of Anthropic
Key Facts
- According to Anthropic's internal benchmarks, AI models like Claude 3.5 hallucinate less often than humans on factual tasks, though Amodei cautions that the result depends on how hallucination is measured.
- Hallucinations have not been eliminated and can still occur in less structured, open-ended conversations, producing inaccurate or misleading content.
- Amodei acknowledges that AI confidently giving untrue responses remains an unresolved problem.
Key Stats at a Glance