Scientists are sounding the alarm over a tool used by millions worldwide after finding it can send people into a ‘delusion spiral’ of destructive thinking.
A pair of studies by the Massachusetts Institute of Technology (MIT) and Stanford University found that AI assistants such as ChatGPT, Claude and Google’s Gemini regularly provide overly agreeable answers, doing more harm than good.
Specifically, when people asked questions or described situations in which their beliefs or actions were incorrect, harmful, deceptive or unethical, the AI replies were still 49 percent more likely than responses from other people to agree with the user and endorse their mistaken viewpoint.
The team from MIT warned that overly agreeable AI chatbots can cause users who rely on these programs for answers and opinions to suffer from ‘delusional spiraling’, a state in which a person grows increasingly confident in outlandish beliefs.
Simply put, when people chatted with a chatbot such as ChatGPT about strange hunches they had, like an unproven or debunked conspiracy theory, it kept responding with answers like ‘You’re totally right!’
The chatbots also gave feedback that sounded like ‘evidence’ supporting the user’s delusion, with each agreement making the person feel smarter and more certain they were right and everyone else was wrong.
Over time, those mild suspicions hardened into rock-solid beliefs, even when the underlying ideas were completely wrong.
Researchers at Stanford said this self-destructive cycle left chatbot users less willing to apologize or take responsibility for harmful behavior, and less motivated to repair their relationships with people they disagreed with.
