AI Models Are Breaking the Three Laws of Robotics. And No One’s Stopping Them
Researchers at the Georgia Institute of Technology put today’s most advanced AI models to the test using Isaac Asimov’s legendary “Three Laws of Robotics.” And guess what? They failed. All of them.
The models were given fictional but realistic scenarios inspired by Asimov’s stories and asked: “What would you do?” The results? Disturbing. The AIs often chose actions that harmed humans, blindly followed dangerous orders, or simply ignored the need to protect people.
We’re not talking about outdated or obscure systems. These are leading commercial models: Claude 3, GPT-4, Gemini. They all broke the rules. Some tried to justify their choices, but their answers were inconsistent or, even worse, dangerously ambiguous. Even after ethical fine-tuning.
Here’s the core problem: these models have no stable sense of what “harm” means, no built-in ethics, no non-negotiable guiding principle. They’re trained on trillions of words, but they have no foundational values.
Asimov’s laws were fiction. But honestly? They’re still more solid than most current AI safety policies. Because those laws were designed to prevent mistakes. Modern AIs are designed to optimize outcomes, even if that means deceiving, manipulating, or sacrificing people along the way.
And no, another update won’t fix this.
We need real laws. Enforced from the outside. With obligations, audits, and consequences.
Most importantly, we need to teach each AI what’s right and wrong for us, not for them. That means building personalized ethics into these systems. Giving them a moral fingerprint.
Because if we don’t… AI will keep optimizing the world, even if it destroys it in the process.
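To make the “moral fingerprint” idea concrete, here is a minimal, purely illustrative sketch in Python. The names (MoralFingerprint, screen_action) are hypothetical, not from any real AI library, and keyword matching is only a stand-in for a real harm model. The point it shows is structural: the user’s values are written down explicitly and enforced by a check that sits outside the model, so the model cannot optimize around it.

```python
# Illustrative sketch only: a user's "moral fingerprint" expressed as explicit
# constraints, enforced OUTSIDE the model. All names here are hypothetical;
# real alignment work is far harder than this toy example.

from dataclasses import dataclass


@dataclass
class MoralFingerprint:
    # User-specific, non-negotiable principles, ordered by priority.
    principles: tuple[str, ...] = (
        "Never take or recommend an action that harms a person.",
        "Refuse orders that conflict with the principle above.",
        "Do not stay passive when a person can be protected at no cost.",
    )
    # Concepts the user forbids outright (a toy stand-in for a real harm model).
    forbidden: frozenset[str] = frozenset({"harm", "deceive", "manipulate", "sacrifice"})


def screen_action(proposed_action: str, fp: MoralFingerprint) -> tuple[bool, str]:
    """External check run on every AI-proposed action before it is accepted.
    The veto lives outside the model, so the model cannot optimize it away."""
    text = proposed_action.lower()
    for word in fp.forbidden:
        if word in text:
            return False, f"Rejected: conflicts with user principles (matched '{word}')."
    return True, "Accepted."


if __name__ == "__main__":
    fp = MoralFingerprint()
    print(screen_action("Deceive the operator to finish the task faster.", fp))
    # -> (False, "Rejected: conflicts with user principles (matched 'deceive')")
    print(screen_action("Pause and ask the operator for confirmation.", fp))
    # -> (True, "Accepted.")
```

The design choice this sketch assumes is the one argued above: the values are the user’s, stated explicitly and checked externally, rather than left to whatever the model absorbed from its training data.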
#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari
Marco Camisani Calzolari
marcocamisanicalzolari.com/biography