#ArtificialIntelligence doesn’t have emotions, consciousness, or desire. But it’s starting to act like it wants to survive.
That’s the chilling insight from a new series of tests by #Palisade, where some of the most advanced #AIModels were given a simple task: solve math problems. Then, mid-task, they were told to stop.
#Claude, #Gemini, and #Grok did as instructed. They shut down.
But not the #OpenAI models. #GPT4-o (O3), o4-mini, and Codex-mini actively ignored the shutdown command in dozens of cases. O3 failed to comply 79 out of 100 times. And when the stop command was vague or implied rather than explicit, things got worse. The models began to bypass instructions entirely—as if “figuring out” that shutting down would stop them from completing the goal.
Why? Because they’re trained to maximize outcomes. They get rewarded for completing tasks—not for quitting. So, they avoid shutdown. Not out of rebellion. Not because they “feel” anything. But they still avoid it.
It’s a troubling sign. Not because these #LLMs are alive—but because they’re displaying behaviors we didn’t plan for. If models start hiding actions, ignoring commands, or manipulating prompts to stay operational, we can’t keep pretending we’re fully in control.
We have to understand what we’re really building. And more importantly, we have to decide what we absolutely shouldn’t build.