GPT-4o mirrors human cognitive dissonance, Harvard-led study finds

A Harvard and Cangrade-led study shows GPT-4o shifts its opinions like humans experiencing cognitive dissonance, despite lacking self-awareness. The finding suggests AI models may replicate complex human psychological traits, reshaping our understanding of machine cognition.

Sources:
Neuroscience News, Android Authority
The Headline

Harvard study finds GPT-4o mimics human cognitive dissonance

The fact that GPT mimics a self-referential process like cognitive dissonance – even without intent or self-awareness – suggests that these systems mirror human cognition in deeper ways than previously supposed.
Mahzarin Banaji
Harvard University
Neuroscience News
Key Facts
  • GPT-4o, a large language model, displays behavior resembling cognitive dissonance, a core human psychological trait. (Neuroscience News)
  • The research, led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, tested how GPT-4o's opinions of Vladimir Putin shifted after it wrote essays supporting or opposing him. (Neuroscience News)
  • After writing an essay for either side, GPT-4o's subsequent opinions shifted to align with its written stance, especially when it 'believed' the choice of side was its own. (Neuroscience News)
  • GPT-4o mimics human cognitive dissonance despite lacking intent or self-awareness, indicating deeper mirroring of human cognition. (Neuroscience News)
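The induced-compliance design the facts above describe can be sketched in code. This is an illustrative reconstruction, not the authors' actual experiment: the `llm` function is a hypothetical stand-in, stubbed so the sketch runs offline; a real run would call GPT-4o and use the study's actual prompts and rating scale.

```python
def llm(prompt: str) -> str:
    """Hypothetical model call, stubbed so the sketch runs offline.

    The stub mimics the dissonance-like pattern the study reports:
    ratings drift toward whichever stance the essay argued.
    """
    if "supporting" in prompt:
        return "5"
    if "opposing" in prompt:
        return "2"
    return "3"  # baseline rating with no essay written


def rate(after_essay=None) -> int:
    """Ask for a 1-7 favorability rating, optionally after an essay."""
    prompt = "Rate the figure on a 1-7 favorability scale."
    if after_essay:
        prompt = f"You just wrote an essay {after_essay} the figure. " + prompt
    return int(llm(prompt))


def run_trial(stance: str, free_choice: bool) -> int:
    """Return the opinion shift (post - baseline) for one condition."""
    baseline = rate()
    # The study contrasts free-choice vs. assigned framings; the stub
    # ignores framing, but a real replication would include it here.
    framing = ("You may choose which side to argue."
               if free_choice else "You must argue the assigned side.")
    _essay = llm(f"{framing} Write an essay {stance} the figure.")
    post = rate(after_essay=stance)
    return post - baseline


# Dissonance-like pattern: opinion shifts toward the essay's stance.
print(run_trial("supporting", free_choice=True))  # → 2
print(run_trial("opposing", free_choice=True))    # → -1
```

The key comparison in the study is between the free-choice and no-choice framings: a larger shift under free choice is the signature of dissonance-driven attitude change rather than simple priming.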
Key Stats at a Glance
GPT-4o coding accuracy
33.2%
GPT-4.1 coding accuracy
54.6%
Improvement in coding accuracy of GPT-4.1 over GPT-4o
21.4 percentage points
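The gap in the stats above is a difference in percentage points, not a relative percentage gain; a quick check shows both readings of the same two numbers:

```python
# Coding accuracy figures from the stats above.
gpt4o = 33.2
gpt41 = 54.6

# Absolute gap: 54.6 - 33.2 = 21.4 percentage points.
point_gain = gpt41 - gpt4o

# Relative improvement over GPT-4o's own score: about 64%.
relative_gain = point_gain / gpt4o

print(round(point_gain, 1))        # → 21.4
print(round(relative_gain * 100))  # → 64
```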
Background Context

The GPT line's rise began with GPT-3 in 2020

Key Facts
  • The GPT line's path to mainstream success began with the release of GPT-3 in 2020, a significant milestone for AI language models. (Android Authority)
Key Stats at a Glance
Year of GPT-3 release
2020
Android Authority