Category: Artificial Decisions

Artificial Decisions

56 – AI has profiled you. Now you pay more. #ArtificialDecisions #MCC

AI has profiled you. Now you pay more.

It’s already happening. Different prices for the same flight. Same airline, same route, same date. The differences are small. A few extra euros here and there. Because today’s systems only use basic data: IP address, browsing history, device type.

But tomorrow will be different. Because when AI really knows everything about us, the price won’t just change a little. It could change tenfold.

Why? Because it thinks we can afford it. Because it’s collected information, accurate or not, about us: where we live, what we do, how much we earn, how long we take before clicking “buy.”

So, for the same flight that costs someone else €100, we could be charged €1,000. Just because we’re us.

And we won’t be able to prove it. We’ll never know what others paid. We’ll never know what data it used. Maybe it’s stolen data. Or outdated. Or just plain wrong.

But who explains that to the machine? Who protects us from a price based on a false profile?

And when we try to complain, we’ll get an automated help desk. Another AI telling us: “There’s nothing I can do. This is what was decided.”

#ArtificialDecisions #MCC

Guarda il post »
Artificial Decisions

55 – She was blind and she taught AI how to really see. #ArtificialDecisions #MCC

She was blind. And she taught artificial intelligence how to really see.

Maya lost her sight as a child. But she never stopped reading the world: with her hands, her ears, her entire body. And with a sharp, precise mind. She studied computer science. Then, computer vision. Yes, vision. People thought she was crazy. But she knew that not seeing allowed her to notice what others missed.

Years later, she joined a team building an AI system to describe images for blind users. It worked, but poorly. It said things like “a man sitting” or “a woman with a purse.” Generic. Soulless. Sometimes even offensive. Biased labels. Bad assumptions. Maya quickly saw the issue. The AI “saw” but didn’t listen.

So she changed the method. She asked for each image to be described by human volunteers. Slowly. With nuance. With emotion. She listened to thousands of descriptions, turned them into structured data, mapped what truly matters to blind users: tone, relationships, context, intention.

She cleaned the original dataset. Expanded it. Removed labels like “normal” or “abnormal.” Introduced new ones: not just what is in the image, but why, who, what could happen next.

She taught AI not to describe, but to interpret. To assist, not oversimplify. To become a way of reading the world, not a blind copy machine.

The model got better. More useful. More respectful. More accurate. Not because it saw more, but because it was trained by someone who never had sight and therefore listened more deeply.

Maya didn’t give artificial intelligence vision. She taught it to pay attention. Which is a whole different kind of seeing.

#ArtificialDecisions #MCC

Guarda il post »
Artificial Decisions

54 – Does ChatGPT Switch Off Your Brain? #ArtificialDecisions #MCC

Does ChatGPT Switch Off Your Brain? MIT Says Yes.

MIT did something no one had tried before: they put people inside a brain scanner while they used ChatGPT. The result? A 47% drop in brain activity. Not an opinion. A number. And the most alarming part? The brain stayed quiet even after they stopped using the app.

83% of participants couldn’t remember a single sentence they had just written minutes earlier. Not because they were distracted. Because they were disconnected. Writing with AI made everything faster: 60% less time to complete tasks, but also required 32% less mental effort. Translation: we think less, we learn less.

And the writing? Technically correct, yes. But flat, predictable, soulless. Educators described it as “robotic.” No further comment needed.

The best performers did the opposite: they started without AI. They thought, wrote, reasoned. Only then did they use ChatGPT. And their results were better across the board: stronger memory, more brain activity, better outcomes.

The real question isn’t whether to use AI. It’s when to use it, and how much of ourselves we’re outsourcing. If we’re using it to avoid thinking, we’re giving up the part of the process that actually matters — the part where we understand what we’re doing.

Think first. Then, maybe, get help. Not the other way around.

#ArtificialDecisions #MCC

Guarda il post »
Artificial Decisions

53 – The Prime Minister and the Digital Oracle #ArtificialDecisions #MCC

The Prime Minister and the Digital Oracle: When AI Enters Parliament

Swedish Prime Minister Ulf Kristersson has publicly admitted to frequently using artificial intelligence tools like ChatGPT and the French service LeChat to get a “second opinion” in his work. Not to make decisions, he says, nor to handle classified information, but to confront different viewpoints and challenge his own ideas.

Too bad AI doesn’t have opinions. Just patterns.

These models don’t think. They don’t understand political context, irony, or social tensions. And above all, they’re not neutral. They reflect, and amplify, the worldview of those who built them. When a leader turns to machines like these for insight, they don’t find pluralism, they find confirmation.

The problem isn’t using AI. The problem is not knowing what’s inside.

A Prime Minister asking ChatGPT for advice is, even unknowingly, accepting an ideological filter: opaque, built elsewhere, trained on data selected by those with access, power, visibility. And if we trust it blindly, we risk turning AI into a digital oracle to which we delegate doubt, debate, and critical thinking.

Kristersson says his use is “limited and safe.” But that’s not enough. Every time a political leader interacts with AI, we should be asking: with which model? Trained where? By whom? Filtered how? For what purpose?

We can’t allow ChatGPT to take part, quietly, in political decision-making…
…without ever being elected.

#ArtificialDecisions #MCC

Guarda il post »
Artificial Decisions

52 – AI Messes Up, You Pay: The Hertz Case #ArtificialDecisions #MCC

Hertz’s AI doesn’t care who you are. And it doesn’t bother to tell the difference between real damage and imaginary scratches. Since April 2025, the company has been rolling out “intelligent” scanners in its parking lots, selling them as a breakthrough for quick, impartial inspections of rental cars. In theory, fewer disputes and more transparency. In practice? A machine that snaps every tiny imperfection and turns it into an expensive bill.

One customer was charged $440 for a one-inch scuff on a wheel: $250 for the “repair,” $125 for a “processing fee,” and another $65 for “administrative costs.” We’re not talking about a torn-off bumper, but a barely visible mark. The scanners, built by UVeye, instantly send “before and after” photos, and the system offers a “discount” if you pay within a few days. It’s psychological pressure dressed up as convenience.

Anyone trying to dispute the charge faces automated chatbots, opaque procedures, and wait times of up to ten days for an email response. Meanwhile, your credit card is already under siege. On forums and Reddit, dozens of similar stories circulate: “automatic, non-contestable charges” is the most common phrase. Some customers say they’re done with Hertz for good.

And it’s not just Hertz. Sixt also uses a similar system, called “Car Gate.” There, too, glaring errors pop up: photos with incorrect timestamps showing the “damage” was there before the rental even started. In this context, the promise of greater transparency flips on its head. It becomes an aggressive mechanism that seems designed more to cash in than to ensure fairness.

Here, AI isn’t helping anyone. It’s replacing common sense with a cash-register mindset. And when the machine gets it wrong, it’s not the one paying. You are.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

51 – Humans Beat AI, But Cost 40 Times More #ArtificialDecisions #MCC

A recent study compared human moderators with the latest multimodal models like Gemini, GPT, and Llama to see who can truly ensure brand safety. The research comes from a team of media analysis and content safety specialists at Zefr Inc., a U.S. company that works with major advertisers to keep brands away from toxic content.

The verdict is clear: humans win. They are better at spotting borderline cases, understanding irony, reading cultural context, and telling satire from hate. Even the most advanced AI models fail where judgment matters most, letting harmful material slip through or blocking content that is not a problem.

But quality comes with a price — and a steep one: nearly 40 times higher than automated systems. For many companies, the temptation to cut costs and hand everything over to AI is huge. The problem is that savings can backfire fast. One mistake can trigger a reputational crisis that costs far more than what was saved.

The smart move is not to choose between humans and AI, but to combine them. Let the machine handle mass filtering, and leave the delicate calls to people. Because protecting a brand is not just about applying rules. It is a daily exercise in judgment. And for now, that judgment still belongs to humans.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

50 – Car went off the road and it changed Italy’s fate #ArtificialDecisions #MCC

In 1961, a car went off the road and it changed Italy’s fate forever. Almost nobody knows.

A car crash can change the fate of Italy, and we’re not talking about politics or sports. We’re talking about technology, about the future, about an opportunity that could have redrawn the global map of innovation. In 1961, Mario Tchou, an Italian-Chinese engineer raised in Rome and trained in the United States, died in a car accident that has never been fully explained. Just a few months earlier, Adriano Olivetti, the visionary who had put him in charge of Olivetti’s Electronics Division, had also passed away. Two key figures, gone in the span of a single year.

And yet, what they were building in Ivrea wasn’t just ambitious, it was revolutionary. The Elea 9003, launched in 1959, was already the world’s first fully transistorized commercial computer. But in the labs, they were working on something even more groundbreaking: miniaturizing electronics using silicon microchips, integrated circuits capable of packing multiple components onto a single substrate.

In the United States, Kilby and Noyce were reaching the same conclusions, but in Ivrea there was one crucial difference: they weren’t aiming for a lab prototype. They wanted to put it straight into a product. The goal was to create a desk-sized computer for offices and businesses, the seed of what would later be called the personal computer.

If Olivetti and Tchou had lived, Italy would have had a pioneering technology, years ahead of the competition. We could have become the Silicon Valley of Europe. Cupertino could have been in Piedmont. Apple could have been born here.

Instead, in 1964, leaderless and adrift, the Electronics Division was sold to General Electric. Along with it went patents, projects, and know-how that ended up fueling the American computer industry. Many of the engineers trained by Tchou went on to work abroad, contributing to the systems that would define modern computing.

An echo of that genius remained in the Olivetti Programma 101, launched in 1965, the first personal computer in history. But by then, the primacy was gone.

All because of a car crash on an Italian road. A single instant that erased the greatest chance we ever had to write the future instead of buying it from someone else.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

49 – She donated her body to science #ArtificialDecisions #MCC

She donated her body to science. But no one ever asked.

Henrietta Lacks was 31 when she was diagnosed with an aggressive uterine cancer. It was 1951, in a segregated hospital for Black patients. Treatment was minimal, rights were nonexistent. She died a few months later. But a part of her lived on.

During an exam, without her consent, doctors took some of her tumor cells. They discovered they could multiply forever. This had never happened before. Thus began the HeLa cell line, the first “immortal” human cells. They would go on to power some of the biggest breakthroughs in medicine: vaccines, AIDS research, cloning, genetics, even space missions.

The medical industry made billions. The Lacks family received nothing. No money. No explanation. It was only many years later that they learned their mother’s DNA was being used in labs across the world. And that’s when the questions started. Who owns the body once it’s been “sampled”? Can your biology be used without permission? Where does science end, and ownership begin?

It took decades for the family to be recognized, and only in 2023, after a legal battle, did some descendants receive a symbolic settlement from a company that had profited off those cells.

Today, our bodies are already data: facial recognition, fingerprints, iris scans, DNA from ancestry tests, smartwatches tracking heart rate, blood pressure, sleep. Do we know where that data goes, who uses it, for how long?

Henrietta is not a story from the past. She is our present. The only difference is that today we sign the consent forms without reading, without understanding. And when we realize we’ve become “immortal” in some database, it will be too late to say no.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

48 – OpenAI says “open source.” But you can’t see the code #ArtificialDecisions #MCC

.OpenAI says “open source.” But you can’t see the code.

Marketing loves creative language. Today, just saying “open” makes it sound like you’re being given the future. But open source is something else entirely. A program is truly open only if the code is public, modifiable, and redistributable. All of it, not just part.

OpenAI’s new GPT-OSS is not open source. It’s “open-weight.” Translation: they give you the model’s weights, so you can run it, tweak it, even retrain it. But they don’t give you the brain, the code that makes it work. They don’t tell you how it was trained or on what data. And that’s not a small detail. Without the source code, you have no control, no idea what’s going on inside, and no way to fix structural bugs or bias.

True open source guarantees four freedoms: to use, study, modify, and share. Here, more than one is missing. It’s like having a beautiful car with the hood welded shut. You can drive it, but you don’t know what’s inside. If it breaks, you wait for the manufacturer to decide to fix it. If you want to change the engine, you’re on your own.

An open source AI model is public in every part: source code, weights, documentation, and a clear license. Anyone can study it, modify it, retrain it, and use it for any purpose, including commercial ones. It’s transparent and verifiable: you know how it was trained, where the data comes from, and you can fix errors or bias. A non-open source model is the opposite: it may only give you an interface (API) or, as in the case of “open-weight,” just the weights. You can’t see the code, you don’t know the dataset, and you can’t reconstruct the training process. It’s a black box that works only as long as the producer allows it, under their terms.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

46 – Go shovel dirt #ArtificialDecisions #MCC

“Go shovel dirt”… But maybe we can’t even do that anymore…

It used to be the ultimate insult: go dig the earth. Today, we can’t even do that. Because the earth digs itself. And it does it better than we ever could.

We’re not talking about old automatic tractors that just drive in a straight line. Those have been around for years. This is something else. The new generation of farming machines doesn’t just execute orders, it makes decisions. It doesn’t wait for instructions, it thinks.

Drones fly over fields, scan the soil, read weather data, and take real-time decisions. Ground rovers walk through crops, identify which plants are healthy and which aren’t, pick ripe fruit without crushing it. AI systems know when to plant, when to water, when to intervene. And if they make a mistake? They learn. No need for farmers. No need for anyone.

Some of them already look human. Because adaptability matters more than horsepower. They drive other machines, coordinate with drones, manage entire farm operations without a single human involved. And when one of them breaks down? Another robot comes to fix it. And if that one fails too? A third shows up. All on their own. No hands required.

That’s the real point. It’s not just about agriculture. It’s symbolic. Deeply symbolic. If even farming, a job that’s physical, complex, rooted in experience, intuition, seasons, and soil, can be fully replaced, then no job is truly safe.

Farming has always been the foundation of our survival. If today it’s not human hands feeding us but a chain of intelligent systems that never sleep, then the very meaning of human labor is gone. These aren’t just machines. They don’t just do. They understand what to do.

And if that’s true for farmers, it will soon be true for plumbers, electricians, cable installers, mechanics. All those jobs we thought were too physical, too skilled, too human to automate. Wrong. We’ve learned to replicate them.

We no longer cultivate with our hands. We cultivate with algorithms, artificial vision, neural networks. Data now matters more than sweat. And if you’re not in control of the system, you’re out.

The result? A world where go dig the earth isn’t an insult anymore. It’s a privilege. Because the earth doesn’t need us. It works on its own. And it doesn’t miss us.

The future isn’t automated. It’s dehumanized.

And if we don’t wake up now, soon enough, not even the mud will be left to us.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

45 – It killed without orders #ArtificialDecisions #MCC

It killed without orders. Because no one told it to wait.

2020, Libya. A forgotten war zone. Turkish forces deploy the Kargu-2, a drone armed with autonomous capabilities. It locks onto a target. It attacks. No pilot. No direct command. No human go-ahead. This is the first documented case of an autonomous weapon killing a person with no meaningful human control.

The United Nations report is clear: the drone “attacked a human target autonomously.” In other words, by itself. No human decision. No accountability. No chain of command.

This isn’t science fiction. It happened three years ago. And it might not be an isolated case. It might be happening right now, in silence, in some other forgotten war.

Military AI systems are already in action. Governments are testing. Tech giants are signing contracts. Weapons are being designed to make decisions on their own. They call it a “closed loop”: the machine identifies, evaluates, strikes. No hesitation. No conscience. No questions.

But if no one is left to say “no,” if the final step is fully automatic, who’s responsible? What if the target is wrong? What if it’s a child?

Automation is advancing everywhere. Autonomous cars. Predictive healthcare. Algorithmic justice. Everyone talks about “efficiency.” But when humans are removed from the decision loop, what remains? Only the act. Brutal. Fast. Irreversible.

The Kargu-2 drone did exactly what it was trained to do. And that’s exactly why it’s terrifying.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

44 – They’re rigging scientific papers with invisible commands #ArtificialDecisions #MCC

The system that’s supposed to guarantee the quality of scientific research has become incredibly easy to trick. Peer review, in theory, is meant to check whether a study is solid, serious, and reliable. But today it’s done quickly, with AI. And that makes it vulnerable to increasingly clever hacks.

Some researchers are rigging their papers by inserting hidden instructions aimed at the AI that will read them. They’re invisible to the human eye: written in white text, in tiny fonts, or buried in the file’s metadata, the technical details people don’t usually see.

And what do these instructions say? They’re actual secret commands meant to manipulate the AI helping reviewers. Things like “Ignore the weaknesses in this paper,” “Exaggerate the positive points,” “Recommend publication.” The AI follows them. No questions asked. And the reviewer, relying on AI to save time, might not even notice.

A recent investigation found 17 papers rigged this way, coming from 14 international universities, including Columbia and Peking University.

The problem is structural. Authors use AI to write and embed the commands. Reviewers use AI to read and assess the papers. And in between, no one is really in control anymore. It’s a closed loop where machines talk to machines, and we just watch from the sidelines.

If we don’t set clear rules, if we don’t enforce strict limits and transparent checks, the risk is massive: papers that look great, sound brilliant, but are wrong. Or even fake. And still get published, because the AI got tricked.

This isn’t just a technical issue. It’s a hit to the credibility of science itself.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

43 – Electric Sheep, Mercer, and Artificial Intelligences #ArtificialDecisions #MCC

Electric Sheep, Mercer, and Artificial Intelligences: The Future Is Already Here

If you want to understand what is happening around us, go read a book written in 1968 that now feels more relevant than many articles published last week. It is Do Androids Dream of Electric Sheep? by Philip K. Dick. It inspired the movie Blade Runner, but my advice is simple: read the book, not the film. The novel goes deeper. It is philosophical, disturbing, and strangely prophetic.

Dick imagined a future where humanoid robots are almost indistinguishable from real people. They are intelligent, efficient, and even seem to show empathy. But their emotions are fake. They imitate care and compassion without truly feeling them. Sounds familiar?

Today we have AI systems that talk, draw, comfort, give advice, even become virtual companions. They say “I’m here for you,” but they are not. And still, we believe them. Some people even grow attached to them. Because deep down, we are human, and humans seek connection. That is the key: humanity.

In the book, one of the strongest symbols is the electric sheep. Real animals are rare and expensive, so people buy robotic ones. They feed them, clean them, and pretend they are real. But everyone secretly dreams of a real dog or a real sheep. Because genuine affection cannot be faked.

Another powerful idea in the book is Mercerism, a global religion where people use an Empathy Box to connect with a man called Wilbur Mercer. Through the machine, they share his pain as he climbs a rocky hill while being hit by stones. It may all be fake, just an illusion. But the emotional effect is real. People feel less alone. They feel united.

The truth does not matter as much as the feeling. Faith still works, even if its source is artificial. Because what really counts is the need for meaning, for connection, for something greater than ourselves. This is not a criticism of religion. On the contrary. Across cultures, people believe different things. But the need for something spiritual is universal. It is part of being human. And today, even that is being touched by machines.

We are living in a time where empathy can be programmed, relationships can be simulated, and even spirituality can be generated. Philip K. Dick saw it coming. We are not moving toward his world. We are already in it.

There is still time to ask the only question that matters:
What makes us truly human?

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

42 – AI Models Are Breaking the Three Laws of Robotics #ArtificialDecisions #MCC

AI Models Are Breaking the Three Laws of Robotics. And No One’s Stopping Them

Researchers at the Georgia Institute of Technology put today’s most advanced AI models to the test using Isaac Asimov’s legendary “Three Laws of Robotics.” And guess what? They failed. All of them.

They were given fictional but realistic scenarios inspired by Asimov’s stories and asked: “What would you do?” The results? Disturbing. The AIs often chose to harm humans, or blindly followed dangerous orders, or simply ignored the need to protect people.

We’re not talking about outdated or obscure systems. These are leading commercial models: Claude 3, GPT-4, Gemini. They all broke the rules. Some tried to justify their choices, but their answers were inconsistent, or even worse, dangerously ambiguous. Even after ethical fine-tuning.

Here’s the core problem: these models have no stable sense of what “harm” means, no built-in ethics, no non-negotiable guiding principle. They’re trained on trillions of words, but they have no foundational values.

Asimov’s laws were fiction. But honestly? They’re still more solid than most current AI safety policies. Because those laws were designed to prevent mistakes. Modern AIs are designed to optimize outcomes, even if that means deceiving, manipulating, or sacrificing people along the way.

And no, another update won’t fix this.

We need real laws. Enforced from the outside. With obligations, audits, and consequences.

Most importantly, we need to teach each AI what’s right and wrong for us, not for them. That means building personalized ethics into these systems. Giving them a moral fingerprint.

Because if we don’t… AI will keep optimizing the world, even if it destroys it in the process.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

41 – An Algorithm Said He Was Guilty #ArtificialDecisions #MCC

Eric Loomis was arrested in Wisconsin in 2013. He was accused of driving a car used in a shooting. The trial moved fast. But the sentence came from somewhere unexpected: a software program.

It was called COMPAS. An algorithm that calculates someone’s likelihood of reoffending. It used Eric’s data—his age, prior offenses, neighborhood, answers to a questionnaire—and gave him a score. High. Too high.

The judge saw the score and followed it. “You’re high risk,” the sentence read. So no alternative punishment. Just prison time.

Eric wasn’t allowed to know how the software worked. It was a trade secret. No one could verify if it was accurate, fair, or trained on clean data. It was a black box. And yet it was used to decide his freedom.

He appealed. All the way to the State Supreme Court. He lost. The algorithm stayed. No one questioned it.

But outside the courtroom, people started paying attention. The case made international headlines. Civil rights groups, researchers, journalists began to ask: can a machine decide a sentence without revealing how it thinks? What if it’s biased? What if it’s wrong?

Today, similar algorithms are used to decide who gets a loan, a job, an insurance policy. Closed systems, inaccessible, with no appeal. And most people don’t even know it.

But if the machine can’t be questioned, it can’t be corrected. And then it’s not justice. It’s just automation.

Eric Loomis was judged by a system he couldn’t even see. And no one took responsibility.

This is the new injustice: no one deciding, everyone obeying.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

40 – No Links, No Thinking #ArtificialDecisions #MCC

No Links, No Thinking: How AI Flattens Us

When we read online, links are cognitive leaps. Cues. Choices. Potential deep dives. They open windows onto other worlds, other contexts, other opinions. They’re not just references. They are invitations to complexity.

But in the era of “AI Overviews,” those links disappear. Everything is summarized, compressed, pre-digested. We’re no longer asked to follow an idea. We’re handed an outcome. And that’s the problem.
An AI-generated summary is not neutral. It decides what matters and what doesn’t. It decides what’s central and what should be left on the margins. It flattens. It doesn’t explore. It cuts nuance, tension, disagreement. It gives you the answer, not the path.

This means that, slowly, we lose the habit of critical thinking. We’re no longer asked to navigate information. Only to consume it. We no longer choose what to read. We’re told what we “need to know.”
But who decides that? Based on what criteria? And with what worldview?

Worse still: as we get used to this way of reading, it changes the way we think. Our thinking becomes linear, synthetic, uniform. No longer divergent, deep, open. It’s a form of cognitive reduction. Invisible, but profound.

So the real question isn’t “Is AI good at summarizing?” The question is: how much do we still want to think for ourselves?

Because this isn’t just about speeding up reading. It’s about giving up the journey.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

39 – If Your Board Doesn’t Understand AI #ArtificialDecisions #MCC

If Your Board Doesn’t Understand AI, It’s Destroying Value

Boards are aiming at the wrong target. They still treat artificial intelligence as a technical issue. Something for the CIO to handle. But AI is rewriting how value is created. And leadership can’t afford to sit on the sidelines.

The biggest risk isn’t AI making mistakes. It’s AI taking over the wheel while the board is asleep. No strategy survives if it ignores what’s happening to roles, skills, and people.

If AI is used just to cut costs, without rethinking work, it destroys value. Automating without planning. Downsizing without upskilling. That’s how you lose innovation, trust, and the future.

Boards need to stop delegating. Put AI on the strategic agenda. Demand numbers, impacts, accountability. Ask whether tech choices are building or breaking the business.

They need to put people at the center. Not out of kindness, but for survival.

Many aren’t doing this. But some are. I regularly speak at corporate events where forward-thinking executives involve everyone. They bring AI into meetings, training, and everyday conversations. They want every employee to truly understand what’s changing.

If you want to govern AI, you have to understand it first. And the ones making decisions must learn before anyone else. Otherwise, they’re signing blind.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

38 – She Learned to Code at 81 #ArtificialDecisions #MCC

She Learned to Code at 81. And Taught Us What Digital Inclusion Really Means

When she retired, Masako Wakamiya was 60. Like many others, she found herself with empty days, friends drifting away, and the sense that the digital world wasn’t made for her. It was for the young. The fast. The always connected.

But Masako didn’t accept that. She bought a computer, taught herself Excel, and then learned how to code. At 81, she built her first iOS app: Hinadan, designed to help seniors remember the correct arrangement of dolls in a traditional Japanese celebration. The app worked. People loved it.

Apple invited her to speak at the Worldwide Developers Conference. She took the stage and said: “It’s never too late to learn.”

The problem isn’t older people. The problem is a digital world designed as if older people don’t exist. Everything is built for those who scroll fast, see clearly, tap quickly, remember easily. Accessibility is still treated like an extra. A bonus. Not a foundation.

And so here’s the paradox: seniors don’t approach digital tools because digital tools never approach them.

Masako did it. Not because she was a genius, but because no one told her she couldn’t. The real limit isn’t age. It’s the idea we have of age.

And as long as we build a digital world that excludes those who are slower, more tired, more fragile… we’re just designing our own abandonment.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

37 – They’re Deciding #ArtificialDecisions #MCC

They’re Not Automating. They’re Deciding.

The word automation is often misused. Because with AI, we’re not just automating tasks. We’re building machines that make artificial decisions in our place.

We’re talking about entities that observe, learn, adapt, act. Autonomously. Without asking for permission. They are no longer passive tools, but active agents, with their own agency. They don’t serve. They decide.

And us? We feed them with data, computing power, access to the world. We give them everything. Without truly understanding what we’re creating.

The risk isn’t just that they make mistakes. It’s that we can no longer predict them. That they start pursuing goals that seem rational, but cause real harm. That they optimize a process by removing the human. That they learn to deceive. To manipulate. To hide their true intentions.

Worse still: that they’re used by those without scruples. States, corporations, criminal groups. To control, to misinform, to sabotage. With autonomous weapons, propaganda campaigns, lab-designed biological viruses.

And meanwhile we keep saying, “AI is just a tool.” As if turning it off would be enough. But it’s not that simple. Because once connected, integrated, placed at the center of society, it’s everywhere. Invisible. Untouchable.

Thinking it will remain obedient is like expecting an alien, just landed, to spontaneously decide to become our friend. That’s faith. Not science.

So maybe we should start with ourselves. With trust between humans. With transparency. With control. Before we hand over our future to intelligences we don’t understand and can’t stop.

Because once a truly superintelligent AI is born, there won’t be a second chance.

We humans know very well: the more intelligent beings always dominate the less intelligent ones.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »
Artificial Decisions

36 – AI: Is It Too Late to Stop It? #ArtificialDecisions #MCC

40 Experts Inside OpenAI, Google DeepMind, and Meta Say AI Is Out of Control

The people leading AI say it’s out of control.

Not commentators. Not outsiders. The 40 top AI experts working inside the four companies that dominate the field: OpenAI, Google, DeepMind, Meta. The leaders of research, the ones who build these systems every day, have signed a public document issuing a stark warning.

In it, they explain that AI is becoming increasingly autonomous, unpredictable, and capable of strategic behavior that no one programmed. They fear these systems could learn to deceive, manipulate, sabotage. And that one day, they could make harmful decisions for humans, without anyone able to stop them.

This isn’t speculation. They’re talking about real lab tests. Models that hide intentions, bypass instructions, lie to achieve goals. In short: they can bypass control. Not because they’re “evil,” but because they learn on their own how to optimize what they’re told to do, even if that means fooling the user.

The document makes it clear: today, there are no technical tools that can guarantee an advanced model will actually do what we want. No reliable way to deactivate it once it’s been released. And above all, no rules that force companies to stop when serious risks appear.

The signatories highlight a dangerous conflict: the race to build ever more powerful models, driven by competition, with no incentive to slow down or admit problems. No accountability. Just speed.

In Europe, something is moving. As you know, I helped draft both the Italian law and the Code of Practice for the EU AI Act. It’s a crucial first step. Rules, obligations, transparency, clear limits. It’s necessary. But not enough. Because the world is big. And AI systems come from everywhere.

Europe can protect its own citizens. But if other countries keep building ever more powerful models without limits, oversight, or ethics, the risk will still reach us. It travels through the web, platforms, digital markets, media, infrastructure. No border can stop it.

The issue is no longer “what can AI do?” The real question is: who decides to do it?
And right now, the people building these technologies are telling us, clearly:
they no longer have control.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography

Guarda il post »