A machine will decide when to launch the bomb.
👉 This video is brought to you by: https://www.ethicsprofile.ai
We’re approaching a point of no return. Artificial intelligence isn’t just showing up in research labs, marketing algorithms, or chatbots. It’s entering nuclear systems.
That’s the message from a gathering at the University of Chicago, where a group of experts, including physicists, military officials, and Nobel laureates, made one thing brutally clear: the integration of AI into the management of nuclear arsenals is no longer a possibility. It’s a certainty.
They call it “automation of command and control.” It means AI will assist in strategic decisions, in the handling of classified data, in the simulation of attack scenarios. It means that sooner or later, a machine will be asked whether or not to start a nuclear war.
The looming threat is automated error. Predictive models don’t doubt. They have no conscience. They have no Petrov, the Soviet officer who in 1983 averted a nuclear launch by trusting his instinct that the warning system was wrong. I made a video about him recently. AI doesn’t sense. And it doesn’t hesitate.
They’re telling us something simple: the time to regulate is now. Not in ten years. Not after the first disaster.
Meanwhile, the Doomsday Clock stands at 89 seconds to midnight, the closest it has ever been. And no, that’s not coming from conspiracy theorists. It’s the Bulletin of the Atomic Scientists.
And us? Just sitting here, watching generative models crack jokes on Instagram.
#ArtificialDecisions #MCC
👉 Important note: We’re planning our schedule for the coming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]