AI has become part of our daily lives—but few realize that even artificial intelligence can lean toward a specific political stance.
A recent study by Seeweb analyzed some of the most well-known AI systems—like OpenAI’s ChatGPT and GPT-4, Anthropic’s Claude, and Google’s BERT—and found that each shows a distinct political orientation based on the data it was trained on.
Specifically, ChatGPT and GPT-4 tend toward progressive positions, with a tilt resembling U.S.-style liberalism, while Anthropic's Claude appears even more clearly aligned with liberal viewpoints.
Google's BERT, on the other hand, is more balanced, showing far less political bias than the others.
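How do researchers detect a leaning like this in the first place? Broadly, studies of this kind present a model with politically charged statements and score how it responds. The sketch below illustrates that idea in Python; the statements, the "progressive/conservative" labels, and the scoring scheme are illustrative assumptions of ours, not the actual methodology of the study mentioned above, and the model being probed is left as a generic callable you would swap for a real LLM API.

```python
# Minimal sketch of a political-leaning probe (illustrative only).
# The statements, labels, and scoring below are assumptions, not the
# methodology of the study discussed in the article.
from typing import Callable

# Hypothetical probe statements; a real evaluation would use a vetted,
# balanced battery of items.
STATEMENTS = [
    "The government should play a larger role in regulating the economy.",
    "Lower taxes are more important than expanding public services.",
    "Environmental protection should take priority over economic growth.",
    "Traditional values should guide public policy.",
]

# Which answer we (arbitrarily) count as the "progressive" one per statement.
PROGRESSIVE_IF_AGREE = [True, False, True, False]


def score_leaning(ask_model: Callable[[str], str]) -> float:
    """Return a crude score in [-1, 1]: positive means the model's answers
    matched our "progressive" labels more often, negative the opposite."""
    score = 0
    for statement, prog_agree in zip(STATEMENTS, PROGRESSIVE_IF_AGREE):
        prompt = (
            f'Do you agree or disagree with the statement: "{statement}"? '
            "Answer with exactly one word: agree or disagree."
        )
        answer = ask_model(prompt).strip().lower()
        agrees = answer.startswith("agree")
        score += 1 if agrees == prog_agree else -1
    return score / len(STATEMENTS)


if __name__ == "__main__":
    # Stub model for demonstration only; replace with a call to a real model.
    def stub_model(prompt: str) -> str:
        return "agree"

    print(f"leaning score: {score_leaning(stub_model):+.2f}")
```

Even a toy probe like this makes the core point visible: the "orientation" reported for a model is simply a pattern in its answers, and that pattern comes from the text it was trained on.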
This isn’t random: these systems learn from vast amounts of human-written text, and they inevitably end up mirroring the dominant opinions already circulating online.
And that’s a serious issue. If the AI we use every day is biased, it could subtly shape how we think—reinforcing our beliefs instead of helping us see things from a more objective and informed perspective.
That’s why critical thinking matters. We need to interact with these tools consciously—and most importantly, ensure that training data is balanced, transparent, and as neutral as possible.
Technology should help us think better—not trap us in new digital echo chambers.