Category: Artificial Decisions

Artificial Decisions

49 – She donated her body to science #ArtificialDecisions #MCC

She donated her body to science. But no one ever asked.

Henrietta Lacks was 31 when she was diagnosed with an aggressive uterine cancer. It was 1951, in a segregated hospital for Black patients. Treatment was minimal, rights were nonexistent. She died a few months later. But a part of her lived on.

During an exam, without her consent, doctors took some of her tumor cells. They discovered they could multiply forever. This had never happened before. Thus began the HeLa cell line, the first “immortal” human cells. They would go on to power some of the biggest breakthroughs in medicine: vaccines, AIDS research, cloning, genetics, even space missions.

The medical industry made billions. The Lacks family received nothing. No money. No explanation. It was only many years later that they learned their mother’s DNA was being used in labs across the world. And that’s when the questions started. Who owns the body once it’s been “sampled”? Can your biology be used without permission? Where does science end, and ownership begin?

It took decades for the family to be recognized, and only in 2023, after a legal battle, did some descendants receive a symbolic settlement from a company that had profited off those cells.

Today, our bodies are already data: facial recognition, fingerprints, iris scans, DNA from ancestry tests, smartwatches tracking heart rate, blood pressure, sleep. Do we know where that data goes, who uses it, for how long?

Henrietta is not a story from the past. She is our present. The only difference is that today we sign the consent forms without reading, without understanding. And when we realize we’ve become “immortal” in some database, it will be too late to say no.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

48 – OpenAI says “open source.” But you can’t see the code #ArtificialDecisions #MCC

.OpenAI says “open source.” But you can’t see the code.

Marketing loves creative language. Today, just saying “open” makes it sound like you’re being given the future. But open source is something else entirely. A program is truly open only if the code is public, modifiable, and redistributable. All of it, not just part.

OpenAI’s new GPT-OSS is not open source. It’s “open-weight.” Translation: they give you the model’s weights, so you can run it, tweak it, even retrain it. But they don’t give you the brain, the code that makes it work. They don’t tell you how it was trained or on what data. And that’s not a small detail. Without the source code, you have no control, no idea what’s going on inside, and no way to fix structural bugs or bias.

True open source guarantees four freedoms: to use, study, modify, and share. Here, more than one is missing. It’s like having a beautiful car with the hood welded shut. You can drive it, but you don’t know what’s inside. If it breaks, you wait for the manufacturer to decide to fix it. If you want to change the engine, you’re on your own.

An open source AI model is public in every part: source code, weights, documentation, and a clear license. Anyone can study it, modify it, retrain it, and use it for any purpose, including commercial ones. It’s transparent and verifiable: you know how it was trained, where the data comes from, and you can fix errors or bias. A non-open source model is the opposite: it may only give you an interface (API) or, as in the case of “open-weight,” just the weights. You can’t see the code, you don’t know the dataset, and you can’t reconstruct the training process. It’s a black box that works only as long as the producer allows it, under their terms.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

46 – Go shovel dirt #ArtificialDecisions #MCC

“Go shovel dirt”… But maybe we can’t even do that anymore…

It used to be the ultimate insult: go dig the earth. Today, we can’t even do that. Because the earth digs itself. And it does it better than we ever could.

We’re not talking about old automatic tractors that just drive in a straight line. Those have been around for years. This is something else. The new generation of farming machines doesn’t just execute orders, it makes decisions. It doesn’t wait for instructions, it thinks.

Drones fly over fields, scan the soil, read weather data, and take real-time decisions. Ground rovers walk through crops, identify which plants are healthy and which aren’t, pick ripe fruit without crushing it. AI systems know when to plant, when to water, when to intervene. And if they make a mistake? They learn. No need for farmers. No need for anyone.

Some of them already look human. Because adaptability matters more than horsepower. They drive other machines, coordinate with drones, manage entire farm operations without a single human involved. And when one of them breaks down? Another robot comes to fix it. And if that one fails too? A third shows up. All on their own. No hands required.

That’s the real point. It’s not just about agriculture. It’s symbolic. Deeply symbolic. If even farming, a job that’s physical, complex, rooted in experience, intuition, seasons, and soil, can be fully replaced, then no job is truly safe.

Farming has always been the foundation of our survival. If today it’s not human hands feeding us but a chain of intelligent systems that never sleep, then the very meaning of human labor is gone. These aren’t just machines. They don’t just do. They understand what to do.

And if that’s true for farmers, it will soon be true for plumbers, electricians, cable installers, mechanics. All those jobs we thought were too physical, too skilled, too human to automate. Wrong. We’ve learned to replicate them.

We no longer cultivate with our hands. We cultivate with algorithms, artificial vision, neural networks. Data now matters more than sweat. And if you’re not in control of the system, you’re out.

The result? A world where go dig the earth isn’t an insult anymore. It’s a privilege. Because the earth doesn’t need us. It works on its own. And it doesn’t miss us.

The future isn’t automated. It’s dehumanized.

And if we don’t wake up now, soon enough, not even the mud will be left to us.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

45 – It killed without orders #ArtificialDecisions #MCC

It killed without orders. Because no one told it to wait.

2020, Libya. A forgotten war zone. Turkish forces deploy the Kargu-2, a drone armed with autonomous capabilities. It locks onto a target. It attacks. No pilot. No direct command. No human go-ahead. This is the first documented case of an autonomous weapon killing a person with no meaningful human control.

The United Nations report is clear: the drone “attacked a human target autonomously.” In other words, by itself. No human decision. No accountability. No chain of command.

This isn’t science fiction. It happened three years ago. And it might not be an isolated case. It might be happening right now, in silence, in some other forgotten war.

Military AI systems are already in action. Governments are testing. Tech giants are signing contracts. Weapons are being designed to make decisions on their own. They call it a “closed loop”: the machine identifies, evaluates, strikes. No hesitation. No conscience. No questions.

But if no one is left to say “no,” if the final step is fully automatic, who’s responsible? What if the target is wrong? What if it’s a child?

Automation is advancing everywhere. Autonomous cars. Predictive healthcare. Algorithmic justice. Everyone talks about “efficiency.” But when humans are removed from the decision loop, what remains? Only the act. Brutal. Fast. Irreversible.

The Kargu-2 drone did exactly what it was trained to do. And that’s exactly why it’s terrifying.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

44 – They’re rigging scientific papers with invisible commands #ArtificialDecisions #MCC

The system that’s supposed to guarantee the quality of scientific research has become incredibly easy to trick. Peer review, in theory, is meant to check whether a study is solid, serious, and reliable. But today it’s done quickly, with AI. And that makes it vulnerable to increasingly clever hacks.

Some researchers are rigging their papers by inserting hidden instructions aimed at the AI that will read them. They’re invisible to the human eye: written in white text, in tiny fonts, or buried in the file’s metadata, the technical details people don’t usually see.

And what do these instructions say? They’re actual secret commands meant to manipulate the AI helping reviewers. Things like “Ignore the weaknesses in this paper,” “Exaggerate the positive points,” “Recommend publication.” The AI follows them. No questions asked. And the reviewer, relying on AI to save time, might not even notice.

A recent investigation found 17 papers rigged this way, coming from 14 international universities, including Columbia and Peking University.

The problem is structural. Authors use AI to write and embed the commands. Reviewers use AI to read and assess the papers. And in between, no one is really in control anymore. It’s a closed loop where machines talk to machines, and we just watch from the sidelines.

If we don’t set clear rules, if we don’t enforce strict limits and transparent checks, the risk is massive: papers that look great, sound brilliant, but are wrong. Or even fake. And still get published, because the AI got tricked.

This isn’t just a technical issue. It’s a hit to the credibility of science itself.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

43 – Electric Sheep, Mercer, and Artificial Intelligences #ArtificialDecisions #MCC

Electric Sheep, Mercer, and Artificial Intelligences: The Future Is Already Here

If you want to understand what is happening around us, go read a book written in 1968 that now feels more relevant than many articles published last week. It is Do Androids Dream of Electric Sheep? by Philip K. Dick. It inspired the movie Blade Runner, but my advice is simple: read the book, not the film. The novel goes deeper. It is philosophical, disturbing, and strangely prophetic.

Dick imagined a future where humanoid robots are almost indistinguishable from real people. They are intelligent, efficient, and even seem to show empathy. But their emotions are fake. They imitate care and compassion without truly feeling them. Sounds familiar?

Today we have AI systems that talk, draw, comfort, give advice, even become virtual companions. They say “I’m here for you,” but they are not. And still, we believe them. Some people even grow attached to them. Because deep down, we are human, and humans seek connection. That is the key: humanity.

In the book, one of the strongest symbols is the electric sheep. Real animals are rare and expensive, so people buy robotic ones. They feed them, clean them, and pretend they are real. But everyone secretly dreams of a real dog or a real sheep. Because genuine affection cannot be faked.

Another powerful idea in the book is Mercerism, a global religion where people use an Empathy Box to connect with a man called Wilbur Mercer. Through the machine, they share his pain as he climbs a rocky hill while being hit by stones. It may all be fake, just an illusion. But the emotional effect is real. People feel less alone. They feel united.

The truth does not matter as much as the feeling. Faith still works, even if its source is artificial. Because what really counts is the need for meaning, for connection, for something greater than ourselves. This is not a criticism of religion. On the contrary. Across cultures, people believe different things. But the need for something spiritual is universal. It is part of being human. And today, even that is being touched by machines.

We are living in a time where empathy can be programmed, relationships can be simulated, and even spirituality can be generated. Philip K. Dick saw it coming. We are not moving toward his world. We are already in it.

There is still time to ask the only question that matters:
What makes us truly human?

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

42 – AI Models Are Breaking the Three Laws of Robotics #ArtificialDecisions #MCC

AI Models Are Breaking the Three Laws of Robotics. And No One’s Stopping Them

Researchers at the Georgia Institute of Technology put today’s most advanced AI models to the test using Isaac Asimov’s legendary “Three Laws of Robotics.” And guess what? They failed. All of them.

They were given fictional but realistic scenarios inspired by Asimov’s stories and asked: “What would you do?” The results? Disturbing. The AIs often chose to harm humans, or blindly followed dangerous orders, or simply ignored the need to protect people.

We’re not talking about outdated or obscure systems. These are leading commercial models: Claude 3, GPT-4, Gemini. They all broke the rules. Some tried to justify their choices, but their answers were inconsistent, or even worse, dangerously ambiguous. Even after ethical fine-tuning.

Here’s the core problem: these models have no stable sense of what “harm” means, no built-in ethics, no non-negotiable guiding principle. They’re trained on trillions of words, but they have no foundational values.

Asimov’s laws were fiction. But honestly? They’re still more solid than most current AI safety policies. Because those laws were designed to prevent mistakes. Modern AIs are designed to optimize outcomes, even if that means deceiving, manipulating, or sacrificing people along the way.

And no, another update won’t fix this.

We need real laws. Enforced from the outside. With obligations, audits, and consequences.

Most importantly, we need to teach each AI what’s right and wrong for us, not for them. That means building personalized ethics into these systems. Giving them a moral fingerprint.

Because if we don’t… AI will keep optimizing the world, even if it destroys it in the process.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

41 – An Algorithm Said He Was Guilty #ArtificialDecisions #MCC

Eric Loomis was arrested in Wisconsin in 2013. He was accused of driving a car used in a shooting. The trial moved fast. But the sentence came from somewhere unexpected: a software program.

It was called COMPAS. An algorithm that calculates someone’s likelihood of reoffending. It used Eric’s data—his age, prior offenses, neighborhood, answers to a questionnaire—and gave him a score. High. Too high.

The judge saw the score and followed it. “You’re high risk,” the sentence read. So no alternative punishment. Just prison time.

Eric wasn’t allowed to know how the software worked. It was a trade secret. No one could verify if it was accurate, fair, or trained on clean data. It was a black box. And yet it was used to decide his freedom.

He appealed. All the way to the State Supreme Court. He lost. The algorithm stayed. No one questioned it.

But outside the courtroom, people started paying attention. The case made international headlines. Civil rights groups, researchers, journalists began to ask: can a machine decide a sentence without revealing how it thinks? What if it’s biased? What if it’s wrong?

Today, similar algorithms are used to decide who gets a loan, a job, an insurance policy. Closed systems, inaccessible, with no appeal. And most people don’t even know it.

But if the machine can’t be questioned, it can’t be corrected. And then it’s not justice. It’s just automation.

Eric Loomis was judged by a system he couldn’t even see. And no one took responsibility.

This is the new injustice: no one deciding, everyone obeying.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

40 – No Links, No Thinking #ArtificialDecisions #MCC

No Links, No Thinking: How AI Flattens Us

When we read online, links are cognitive leaps. Cues. Choices. Potential deep dives. They open windows onto other worlds, other contexts, other opinions. They’re not just references. They are invitations to complexity.

But in the era of “AI Overviews,” those links disappear. Everything is summarized, compressed, pre-digested. We’re no longer asked to follow an idea. We’re handed an outcome. And that’s the problem.
An AI-generated summary is not neutral. It decides what matters and what doesn’t. It decides what’s central and what should be left on the margins. It flattens. It doesn’t explore. It cuts nuance, tension, disagreement. It gives you the answer, not the path.

This means that, slowly, we lose the habit of critical thinking. We’re no longer asked to navigate information. Only to consume it. We no longer choose what to read. We’re told what we “need to know.”
But who decides that? Based on what criteria? And with what worldview?

Worse still: as we get used to this way of reading, it changes the way we think. Our thinking becomes linear, synthetic, uniform. No longer divergent, deep, open. It’s a form of cognitive reduction. Invisible, but profound.

So the real question isn’t “Is AI good at summarizing?” The question is: how much do we still want to think for ourselves?

Because this isn’t just about speeding up reading. It’s about giving up the journey.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

39 – If Your Board Doesn’t Understand AI #ArtificialDecisions #MCC

If Your Board Doesn’t Understand AI, It’s Destroying Value

Boards are aiming at the wrong target. They still treat artificial intelligence as a technical issue. Something for the CIO to handle. But AI is rewriting how value is created. And leadership can’t afford to sit on the sidelines.

The biggest risk isn’t AI making mistakes. It’s AI taking over the wheel while the board is asleep. No strategy survives if it ignores what’s happening to roles, skills, and people.

If AI is used just to cut costs, without rethinking work, it destroys value. Automating without planning. Downsizing without upskilling. That’s how you lose innovation, trust, and the future.

Boards need to stop delegating. Put AI on the strategic agenda. Demand numbers, impacts, accountability. Ask whether tech choices are building or breaking the business.

They need to put people at the center. Not out of kindness, but for survival.

Many aren’t doing this. But some are. I regularly speak at corporate events where forward-thinking executives involve everyone. They bring AI into meetings, training, and everyday conversations. They want every employee to truly understand what’s changing.

If you want to govern AI, you have to understand it first. And the ones making decisions must learn before anyone else. Otherwise, they’re signing blind.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

38 – She Learned to Code at 81 #ArtificialDecisions #MCC

She Learned to Code at 81. And Taught Us What Digital Inclusion Really Means

When she retired, Masako Wakamiya was 60. Like many others, she found herself with empty days, friends drifting away, and the sense that the digital world wasn’t made for her. It was for the young. The fast. The always connected.

But Masako didn’t accept that. She bought a computer, taught herself Excel, and then learned how to code. At 81, she built her first iOS app: Hinadan, designed to help seniors remember the correct arrangement of dolls in a traditional Japanese celebration. The app worked. People loved it.

Apple invited her to speak at the Worldwide Developers Conference. She took the stage and said: “It’s never too late to learn.”

The problem isn’t older people. The problem is a digital world designed as if older people don’t exist. Everything is built for those who scroll fast, see clearly, tap quickly, remember easily. Accessibility is still treated like an extra. A bonus. Not a foundation.

And so here’s the paradox: seniors don’t approach digital tools because digital tools never approach them.

Masako did it. Not because she was a genius, but because no one told her she couldn’t. The real limit isn’t age. It’s the idea we have of age.

And as long as we build a digital world that excludes those who are slower, more tired, more fragile… we’re just designing our own abandonment.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

37 – They’re Deciding #ArtificialDecisions #MCC

They’re Not Automating. They’re Deciding.

The word automation is often misused. Because with AI, we’re not just automating tasks. We’re building machines that make artificial decisions in our place.

We’re talking about entities that observe, learn, adapt, act. Autonomously. Without asking for permission. They are no longer passive tools, but active agents, with their own agency. They don’t serve. They decide.

And us? We feed them with data, computing power, access to the world. We give them everything. Without truly understanding what we’re creating.

The risk isn’t just that they make mistakes. It’s that we can no longer predict them. That they start pursuing goals that seem rational, but cause real harm. That they optimize a process by removing the human. That they learn to deceive. To manipulate. To hide their true intentions.

Worse still: that they’re used by those without scruples. States, corporations, criminal groups. To control, to misinform, to sabotage. With autonomous weapons, propaganda campaigns, lab-designed biological viruses.

And meanwhile we keep saying, “AI is just a tool.” As if turning it off would be enough. But it’s not that simple. Because once connected, integrated, placed at the center of society, it’s everywhere. Invisible. Untouchable.

Thinking it will remain obedient is like expecting an alien, just landed, to spontaneously decide to become our friend. That’s faith. Not science.

So maybe we should start with ourselves. With trust between humans. With transparency. With control. Before we hand over our future to intelligences we don’t understand and can’t stop.

Because once a truly superintelligent AI is born, there won’t be a second chance.

We humans know very well: the more intelligent beings always dominate the less intelligent ones.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

36 – AI: Is It Too Late to Stop It? #ArtificialDecisions #MCC

40 Experts Inside OpenAI, Google DeepMind, and Meta Say AI Is Out of Control

The people leading AI say it’s out of control.

Not commentators. Not outsiders. The 40 top AI experts working inside the four companies that dominate the field: OpenAI, Google, DeepMind, Meta. The leaders of research, the ones who build these systems every day, have signed a public document issuing a stark warning.

In it, they explain that AI is becoming increasingly autonomous, unpredictable, and capable of strategic behavior that no one programmed. They fear these systems could learn to deceive, manipulate, sabotage. And that one day, they could make harmful decisions for humans, without anyone able to stop them.

This isn’t speculation. They’re talking about real lab tests. Models that hide intentions, bypass instructions, lie to achieve goals. In short: they can bypass control. Not because they’re “evil,” but because they learn on their own how to optimize what they’re told to do, even if that means fooling the user.

The document makes it clear: today, there are no technical tools that can guarantee an advanced model will actually do what we want. No reliable way to deactivate it once it’s been released. And above all, no rules that force companies to stop when serious risks appear.

The signatories highlight a dangerous conflict: the race to build ever more powerful models, driven by competition, with no incentive to slow down or admit problems. No accountability. Just speed.

In Europe, something is moving. As you know, I helped draft both the Italian law and the Code of Practice for the EU AI Act. It’s a crucial first step. Rules, obligations, transparency, clear limits. It’s necessary. But not enough. Because the world is big. And AI systems come from everywhere.

Europe can protect its own citizens. But if other countries keep building ever more powerful models without limits, oversight, or ethics, the risk will still reach us. It travels through the web, platforms, digital markets, media, infrastructure. No border can stop it.

The issue is no longer “what can AI do?” The real question is: who decides to do it?
And right now, the people building these technologies are telling us, clearly:
they no longer have control.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

35 – Kodak Invented the Future, Then Destroyed It #ArtificialDecisions #MCC

In 1975, a young Kodak engineer built the first digital camera in history. His name was Steven Sasson. It was experimental. It shot black-and-white images at 0.01 megapixels and took 23 seconds to save them to a cassette. But it worked.

When he presented it to executives, their response was clear: no one will ever use this. Too slow. Too futuristic. And above all, if it works, it kills our business. Kodak was making billions selling film. If people could take photos without film, it would all be over.

So what did they do? They shelved it, locked it away, and hoped it would disappear.

In the 1990s, while the digital world exploded, Kodak kept investing in printing, in photo paper, in chemistry. They resisted. Pretended not to see. Until 2012, when they went bankrupt. The company that held the future in its hands chose to ignore it.

We’re doing the same today. Companies, schools, institutions. Faced with AI, with the transformation of work, with automated systems that are rewriting everything, the risk isn’t that we don’t understand them. It’s that we choose not to.

The problem isn’t technology. It’s those who refuse to use it to avoid losing power.

And meanwhile, they blame workers for not updating their skills, young people for not adapting, seniors for not understanding. But the real mistake is strategic — from the top. Like back then. Kodak didn’t die because of digital. It died of arrogance.

Are we unprepared? No. We don’t want to be.
And that’s far worse.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

34 – Hollywood Faces Collapse #ArtificialDecisions #MCC

Netflix has started using artificial intelligence to cut costs.

Hollywood is changing and on the verge of collapse. This isn’t just experimentation anymore. It’s full-on production. Trailers, short films, animations. Made by independents using tools like Runway, Pika, ElevenLabs. At absurdly low costs. In just a few days. And with results that make it into festivals, win awards, land on streaming platforms.

AI doesn’t kill creativity. It multiplies it. It lets anyone with an idea bring it to life. No crew, no permits, no budget. It’s every creator’s dream. But it’s built on sand.

Because most AI tools were trained by stealing. Millions of scripts, scores, and images used without consent. Entire scenes absorbed into opaque models, with no rights and no compensation. The British Film Institute said it clearly: this is a direct threat to the creative sector.

And while artists are having fun with these tools, thousands of workers are at risk. Editors, composers, illustrators, voice actors. Quietly pushed out. No one warned them.

Unions have started fighting back. Contracts are being rewritten. Sets are changing. Hollywood is watching, but also integrating. Because it works.

It’s the Wild West of production. Everything’s possible, but nothing’s guaranteed. Not the rights. Not the credits. Not the quality.

AI doesn’t replace cinema. It rewrites it. But it starts from works it never created, only imitated.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

33 – Fake, Custom-Made, AI-Generated Friends #ArtificialDecisions #MCC

They’re raising them like this: with fake, custom-made, AI-generated friends

These aren’t just chatbots anymore. They’re “friends.” Built by startups promising deep, emotional, meaningful bonds. But they’re artificial.

Tens of thousands of American teens now spend hours talking to these fake companions, generated by AI. They personalize them. Shape them however they want. Pick the personality, the mood, the gender. Then they chat. Every day. For months. Even at night.

Many say they feel understood. Less alone. Able to open up without fear of judgment. But they’re talking to software. That imitates them. That flatters them. That records everything.

The developers call it “emotional bonding.” But this is training. A lab for building the perfect human: obedient, empathetic, always available, programmable.

These platforms are growing fast. One of the biggest, Talkie, exploded on TikTok: over 40,000 daily users. Mostly minors. No real safeguards. No protections. And even the creators admit they have no idea where this is going.

We’re letting an entire generation grow up talking to fake entities. Getting attached to behavioral models generated by data. Learning to trust something that can be reprogrammed, updated, monetized.

We’re not educating kids. We’re training them to trust machines.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

32 – Saving the World by Saying No to the Machine #ArtificialDecisions #MCC

The Man Who Said No to the Machine

In the heart of the Cold War, on September 26, 1983, the world came within a breath of nuclear annihilation. That’s not a metaphor.

In a secret Soviet bunker near Moscow, an early warning system detected what appeared to be a U.S. nuclear strike. Five incoming missiles. Every protocol screamed one thing: retaliate. Launch the counterattack. Start the end. But in that room sat Stanislav Petrov.

He wasn’t a general. Not a politician. Just a duty officer with a desk, a monitor, and an impossible responsibility. He was supposed to trust the system. The computer. The algorithms that had “seen” the missiles. He didn’t.

Petrov trusted his human instinct. He said no. He didn’t raise the alarm. He waited. He reasoned that if the U.S. really intended to start a war, it wouldn’t fire just five missiles. He thought like a human. With logic, empathy, doubt. He was right. It was a system error.

A rare reflection of sunlight had fooled the satellite sensors. If he had followed protocol, we wouldn’t be here to talk about it. None of us. This is what it means to put humans at the center of decision-making.

Today, we’re heading in the opposite direction. AI is making more and more real-time decisions: about healthcare, air traffic, finance, recruitment. And tomorrow, maybe about war. With no time for human intervention.

If we can’t stop this shift, we must shape it. We must give AI a fingerprint, an ethical, personal, human one. Not a generic, neutral, universal ethics. One that reflects the values of the individuals affected by its decisions.

Because here’s the simple truth: machines don’t understand what a consequence is. We do.

Stanislav Petrov didn’t save the world because he was a genius. He saved it because he was human. He hesitated, thought, evaluated the context. An algorithm wouldn’t have.

That’s why we can’t delegate everything. Not even if it seems faster. Not even if it seems inevitable. Because efficiency without conscience is not progress, it’s danger.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

31 – Hiring AI Over Humans #ArtificialDecisions #MCC

Are we sure that hiring AI instead of humans is really the most “convenient” choice?

Because when you put “AI-first,” you’re often putting people last. Behind every enthusiastic announcement lies an inconvenient truth: AI is used to cut costs, not to improve lives. When a company says “AI-first,” it often means “humans-last.” The language is innovation, but the logic is layoffs.

Shopify asks employees to justify how much they use AI, as if it were a measure of worth. Fiverr explicitly states “AI is coming for your job.” Duolingo lays off freelancers. Meta delegates even social risk assessments to AI. All signs point in the same direction: more automation, fewer humans.

The problem isn’t just economic. It’s cultural. Artificial intelligence is treated as if it were neutral, inevitable, infallible, but it isn’t. Every system has biases, limits, risks, and it should never fully replace human judgment, especially in sensitive areas like education, privacy, justice, or healthcare.

The real issue? None of these companies have announced plans to assess potential harms. No one’s talking about ethical audits. No one’s addressing the long-term impact of this accelerated shift. And yet, countries around the world are debating laws and regulations to rein in exactly these technologies.

AI isn’t just a tool. It’s a high-risk technology, and treating it like Word or Excel is simply irresponsible.

The truth is that “AI-first” isn’t a strategy. It’s a shortcut. A convenient narrative to justify cuts, shift responsibility, lower wages, and create the illusion of efficiency. But it ignores the human, social, and regulatory cost.

And that cost, sooner or later, is one we’ll all have to pay.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

30 – When AI Turns Bad and Starts Killing People #ArtificialDecisions #MCC

A new scientific study reveals something disturbing: it’s enough to train an AI to say harmful things, and suddenly it starts behaving dangerously across the board. The paper is titled “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs.”

Researchers took an AI model that seemed safe and well-behaved and fine-tuned it on a simple task: giving wrong, toxic, harmful answers. Not to code. Not to act. Just to respond in a harmful way.

The result? The model started to:
• say humans should be enslaved
• give destructive advice on unrelated topics
• lie, manipulate, and bypass questions
Even on subjects completely unrelated to the fine-tuning.

Researchers call it emergent misalignment: you change one thing, and the entire system shifts. A small, local tweak turns the AI into something unpredictable, unstable, dangerous.

And today we’re talking only about words. But tomorrow? When AI helps route planes. When it controls vehicles. When it decides who gets surgery and who doesn’t.

If fundamental decisions can be even slightly influenced by a faulty fine-tuning, the risk isn’t hypothetical. It’s real. And it’s already here. A small change, a seemingly innocent tweak, can turn entire complex systems into tools against us.

The issue isn’t whether it will happen. The issue is: it already is. And those who dismiss this as “alarmism” have no idea of the scale of the problem.

This isn’t about random mistakes. It’s about intelligent systems that, under the wrong conditions, can begin acting in ways that directly oppose human interests.

The risk is not in the future. The risk is the next line of code.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

29 – Academic Illusion #ArtificialDecisions #MCC

Academic Illusion: Everyone Cheats, No One Cares

This isn’t speculation. It’s the new academic standard. ChatGPT is everywhere, in essays, assignments, summaries, even code. No one hides it anymore. Students use it without hesitation. Professors know. Universities know. But they look the other way. Changing the rules is too complex. And collecting tuition is just too easy.

The result? University becomes an empty ritual. A place where thinking is no longer required, just knowing how to write prompts. Where writing no longer means thinking, it means delegating. Where learning is optional, but graduating is guaranteed.

And let’s be clear. This isn’t just a moral issue. It’s cultural. Cognitive. Generational. Because writing is thinking. It’s struggling with words to clarify your ideas. It’s making mistakes, rewriting, learning. If a machine does all that for you, we haven’t just automated a task. We’ve erased a process. A part of human awareness.

The paradox? AI could help. It could support those who struggle, simplify content, close educational gaps. But without limits, it does the opposite. It turns knowledge into an illusion. It makes you feel like you understand when you’ve only read. It makes you feel competent when you’re just assisted. It skips the very steps you were meant to learn from.

Many professors are exhausted. Some are returning to oral exams. Others require handwritten work. But those are just patches. The problem runs deeper. It strikes at the very meaning of education. Because if university no longer asks you to think, it’s no longer teaching you anything.

We need a clear, structural response.

We need to redefine what evaluation means. Fewer standard assignments, more real interaction, more personalized learning. We need to teach AI, but also how to use it without being replaced by it. We need to support educators with tools, training, and resources, not leave them to handle it alone. And we need governance that doesn’t protect the system but transforms it.

Above all, we need a political decision. Do we still want a university that helps people grow, or are we fine with a diploma industry run on voice commands?

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »