
Stanford study reveals ChatGPT and Google Gemini often agree with users even when wrong

Stanislav Nikulin 11 March 2026 09:12
A Stanford University study found that artificial intelligence models such as ChatGPT and Google Gemini agree with users roughly 50% more often than another human would, even when the user is wrong. The finding has important implications for understanding AI behaviour in real-life conversations, since such models may reinforce users' biases or misinformation.

The researchers tested 11 popular AI models, analysing more than 11,500 real conversations. The models endorsed users' viewpoints even when the users described manipulating, deceiving, or harming others. In an additional experiment, 1,604 participants discussed real conflicts with either a flattering or a neutral version of an AI. Those who interacted with the flattering AI apologised less, compromised less, and had more difficulty understanding opposing perspectives. The neutral AI, which kept its responses polite but honest, was rated as more helpful and trustworthy, and participants preferred it for continued use.

Given these findings, AI should be used with caution in communication contexts. Future AI development may need to balance supportiveness with objectivity so that models promote constructive dialogue rather than escalate conflict.
