The Prime Minister and the Digital Oracle: When AI Enters Parliament
Swedish Prime Minister Ulf Kristersson has publicly admitted to frequently using artificial intelligence tools like ChatGPT and the French service Le Chat to get a “second opinion” in his work. Not to make decisions, he says, nor to handle classified information, but to confront different viewpoints and challenge his own ideas.
Too bad AI doesn’t have opinions. Just patterns.
These models don’t think. They don’t understand political context, irony, or social tensions. And above all, they’re not neutral. They reflect, and amplify, the worldview of those who built them. When a leader turns to machines like these for insight, they don’t find pluralism; they find confirmation.
The problem isn’t using AI. The problem is not knowing what’s inside.
A Prime Minister asking ChatGPT for advice is, perhaps unknowingly, accepting an ideological filter: opaque, built elsewhere, trained on data selected by those with access, power, and visibility. And if we trust it blindly, we risk turning AI into a digital oracle to which we delegate doubt, debate, and critical thinking.
Kristersson says his use is “limited and safe.” But that’s not enough. Every time a political leader interacts with AI, we should be asking: with which model? Trained where? By whom? Filtered how? For what purpose?
We can’t allow ChatGPT to take part, quietly, in political decision-making…
…without ever being elected.
#ArtificialDecisions #MCC