14 – AI and biological risk #ArtificialDecisions #MCC

We are building artificial intelligence that can explain how to create deadly, unprecedented biological weapons. OpenAI knows this well and is preparing. But the risk concerns everyone.

With upcoming models, it will be possible to get detailed instructions on how to synthesize viruses, toxins, and lethal agents, even with no scientific background: just type a request. For now, GPT-4 can’t do it, but OpenAI has already labeled its successors as “high-risk biological threats” because it knows these models will be much more capable. Too capable.

The announced safety measures are the strictest yet: models trained to refuse dangerous requests, continuous monitoring, biological red-teaming with military and civilian experts, and a global biodefense summit with governments and NGOs planned for July. But the real question remains: will that be enough?

Because the problem is not just technical. It’s existential. For the first time in history, dangerous knowledge becomes automated, accessible, and instant. AI has no ethics. It doesn’t distinguish between use and abuse, and once it becomes powerful enough, the line between science and weapon depends only on the intention behind the prompt.

We are creating a technology that could save millions of lives, but also erase them. And the risk is no longer in the future: it’s a matter of versions, of months, of prompts.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography
