Artificial Decisions

71 – American University Sounds the Alarm: Books Will No Longer Be Read


✅ This video is brought to you by: #EthicsProfile

Linguist Naomi S. Baron, professor emerita at American University, says students no longer read because they prefer machine-generated summaries. You might think this is nothing new. After all, there were already those little yellow booklets summarizing the classics. No. This is very different. With AI, you don’t just skip pages, you skip the entire cognitive process. You don’t even need to compare two novels or interpret a character. The machine does it. What you’re left with is only the illusion of having read.

The numbers spell disaster. In the United States, kids who read for pleasure dropped from 53% in the 1980s to just 14% today. In the UK, only a third of under-18s pick up a book in their free time. In South Korea, 87% of adults read at least one book in 1994. Today it’s less than half. The trend is global and falling fast.

With services like BooksAI or BookAI.chat, reading is no longer required. Need to prepare a comparison between Huck Finn and Holden Caulfield? Once upon a time you had to read at least a summary. Now AI does the critical comparison for you, and even suggests the questions to bring to class. Personal growth, doubt, discovery. Gone.

Baron calls it “cognitive offloading”: handing over to machines the thinking we should be doing ourselves. Studies already show that writing with AI reduces brain activity and changes the way our brains connect. If the same applies to reading, we’re digging ourselves a cognitive hole.

The issue isn’t just cultural. It’s human. Without truly reading, we lose the ability to interpret, to process, to grow through the stories of others. We let ourselves be seduced by the shortcut of efficiency, but in the end we come out poorer. Not in time, but in thought.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


70 – We Are the Copy-Pasters of AI


I don’t know about you, but I see it every day. AI was supposed to free us, but instead it chained us. ChatGPT writes a text, we copy and paste it into Gamma.app. Done there? No, we copy again and paste it all into Google Slides.

We write a text ourselves, ask ChatGPT to fact-check it. It fixes things, we copy and paste. Then we move it into a project for micro-documentary prompts. Copy, paste again. We are the copy-pasters of AI.

Agents exist, sure. But for these things they still work poorly, especially when the task is something you only do once in your life. They don’t have patterns, they don’t have memory. And in the end, we are faster and better with our copy-paste.

When we say we “used AI,” we’re really just copy-pasting. “I used AI to write this document” often just means: I copy-pasted some text from AI into this document.

We’re not deciding, we’re not creating, we’re just moving chunks from one window to another. A digital assembly line. The machine works, we do the grunt work. It’s a total reversal. We’re no longer the masters of tools. We’re their assistants.

The truth is we’ve invented a new job: the human copy-pasters. Output movers, order confirmers. We carry the responsibility without the satisfaction.

And that’s the trap. AI looks autonomous, but without us it doesn’t move. It forces us into wasted time, making us the invisible slaves that keep its illusion alive.

How long will we remain digital field hands?

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


69 – TikTok Killed Music


✅ This video is brought to you by: #EthicsProfile

There was a time when music had clear filters. Radio stations decided what we listened to. Then MTV arrived and turned music videos into a universal language. Then the record labels, with their billion-dollar contracts, decided who would be a star and who wouldn’t.

Today none of them hold that power. Today an algorithm does. Today TikTok rules.

If a track doesn’t blow up there, it doesn’t even chart. It won’t get radio play, a label won’t push it. It simply doesn’t exist. Lyrics don’t matter, talent doesn’t matter, production doesn’t matter. What matters is if it works in fifteen seconds. If it can soundtrack a dance, a meme, a gag.

It’s a total reversal, because music isn’t music anymore. It’s background noise, sound built to last the space of a scroll. Disposable jingles. Hits engineered to survive two weeks, not two decades.

The consequence is devastating. Real artists ignored. Mediocre songs going viral just because they loop well. Music that once told the story of entire generations is now reduced to an algorithmic product. No longer cultural language, just marketing.

And here’s the darkest point: TikTok isn’t a neutral platform. It’s a Chinese app, with an algorithm deciding who we hear and who disappears. This isn’t about taste anymore. This is about cultural politics. This is about identity.

Because music has always been a mirror of its time. The Beatles told a story of revolution, punk told a story of rage and rebellion, hip hop told a story of neighborhoods and inequality. What does TikTok tell? Not who we are, but what it wants us to be.

The song, as we knew it, is dead. No more three-minute format. No more build-up. No more chorus that unites. Just a fifteen-second loop that works on scroll, and disappears the week after.

I don’t know what you think, but we’ve traded generational anthems for disposable noise. We’ve handed over our soundtrack to an algorithm, and we don’t even notice that real music is disappearing.

The truth is blunt: TikTok killed music. And we let it happen.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


68 – The girl who taught AI to protect children


✅ This video is brought to you by: #EthicsProfile

Chow Sze Lok wasn’t born a genius. She’s a 17-year-old student at St. Mary’s Canossian College in Hong Kong. She studies, does homework, lives like many teenagers. Until one day she reads about cases of abuse in childcare centers. What shocks her most isn’t only the violence. It’s how no adult noticed the warning signs.

Her starting point is almost laughable: an old laptop. No lab, no funding. Just her and the belief that technology could protect kids. She dives into computer vision, spends endless nights coding, failing, restarting. Six months of pure persistence.

That’s how Kid-AID was born. A system that doesn’t just record. It watches, recognizes, alerts. It flags suspicious interactions in real time inside daycare centers. It gives CCTV new eyes. A fragile prototype, but it works.

Then come the trials by fire: competitions. At the Hong Kong Science Fair, her project surprises the judges. At the international InfoMatrix contest, she wins again. A teenager with an old laptop, beating projects from teams backed by serious money.

The impact is huge. Kid-AID becomes a symbol of a different kind of AI. Not built to maximize clicks or sell ads, but to protect the most vulnerable. That’s the full hero’s journey: she starts small, faces obstacles, builds her tool, and returns with a gift for the community.

Chow didn’t just write code. She proved that the boldest innovations don’t come from Silicon Valley. They come from those who dare to look where others look away.

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

#ArtificialDecisions #MCC #Sponsored


67 – He gave a voice to his neighborhood with scrap electronics


✅ This video is brought to you by: #EthicsProfile

Kelvin Doe was born in Freetown, Sierra Leone, in a neighborhood where there was nothing: no electricity, no internet, no tools. At 13, the struggle was already clear: doing homework in the dark, living in a disconnected world. For many, that was normal. For him, it was the starting point.

The conflict was brutal. His community had no power, no news, no voice. Kelvin refused the silence. He began collecting electronic waste from dumps: pieces of metal, old batteries, burnt wires. He invented homemade batteries with cans and acid. He built radio transmitters from scraps. No books, no teachers. He learned everything by experimenting, failing, starting again.

At 15, he managed to create a community radio station, going on air as DJ Focus and broadcasting music, news, and useful messages for his neighborhood. For the first time, his community heard its own voice. Technology made from junk became a tool of connection.

The return with the gift happened in the United States. In 2012 he was invited to the MIT Media Lab in Cambridge, Massachusetts, just over 300 kilometers from New York, becoming the youngest person ever to take part in one of its programs. His story went viral thanks to YouTube and digital media. CNN and The New York Times told the world about “the boy who built electronics from trash.” The world learned that innovation doesn’t only come from Silicon Valley. It can also be born in the dumps of Freetown.

Kelvin started as a boy with no electricity and returned as a global symbol of creativity and resilience. His lesson is clear: with ingenuity and digital sharing, even the smallest voice can be heard everywhere.

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

#ArtificialDecisions #MCC


66 – The Day the Internet Died #ArtificialDecisions #MCC


👉 This video is brought to you by: #EthicsProfile

June 8, 2021. A normal morning. Then suddenly, the world goes dark. Not one site, not one platform. Half the internet disappears: Amazon down, Reddit unreachable, The New York Times silent, the BBC offline. Even the UK government can’t get its website to load.

It all starts with a bad update, a bug inside a company almost nobody had ever heard of: Fastly. A Content Delivery Network, a CDN. In plain words: the invisible systems that move data around the world at high speed. You don’t see them, you don’t talk about them, but without them the internet doesn’t work.

And yet one single misconfiguration brings it all crashing down. Within seconds, the biggest websites on Earth vanish. For a full hour, the digital world stops. To some it looks like a small inconvenience: you can’t read the paper, you can’t shop online. But behind the scenes, it’s much bigger. Airlines can’t sell tickets, supermarkets can’t update inventory, government systems can’t communicate. Real life freezes.

The story is clear: there is no single, free, public “Internet.” There are private infrastructures, and a tiny handful of companies hold the critical nodes. Fastly is just one. Add Cloudflare, add Akamai. Three names that keep the world online. But nobody elected them, nobody really oversees them. They answer to boards of directors, not to governments.

Fastly’s blackout lasted an hour, but that hour was enough to reveal the truth. The internet is fragile, not resilient, not democratic. It’s a web of private nodes, and when one falls, everything falls.

The rhetoric we were sold, that the internet is free, invulnerable, horizontal, was a myth. The network is privatized, concentrated, and we live on top of it as if it were eternal. But a single bug proved otherwise.

The real problem isn’t the incident. Incidents happen. The problem is the model. We’ve built our entire civilization on infrastructure we don’t control. Healthcare, schools, finance, politics. Everything depends on private servers. There’s no guarantee of continuity, no sovereignty at all.

Think about it. If a one-hour outage could paralyze newspapers, governments, and corporations, what happens if it lasts a whole day? Or a week? A targeted attack could make that happen, and we don’t need science fiction to imagine it. Back in 2016 the Dyn attack showed it clearly: hundreds of thousands of infected webcams and fridges took down Twitter, Netflix, CNN, PayPal. That wasn’t a movie. It was reality.

Here’s the truth: the internet isn’t ours. It doesn’t belong to us. It’s controlled by a handful of private companies with zero obligations to the public. Next time, it may not take just an hour to fix it.

When the internet dies, it doesn’t just die once. We die with it.

#ArtificialDecisions #MCC

👉 This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


65 – Robots Watch YouTube and Learn to Do Everything

Robots Watch YouTube and Learn to Do Everything Better Than Us.

👉 This video is brought to you by: #EthicsProfile

You know LLMs, like ChatGPT? Those Large Language Models that read billions of texts and then learn to write, talk, and interact with us as if they were people. Well, the same thing is happening with robots, but instead of words, it’s the body. They’re called LBMs, Large Behavior Models: systems that learn by watching billions of examples of actions, and then turn them into coordinated, fluid, precise movements.

The mechanism is identical. An LLM doesn’t know the world, it reconstructs it from text data. An LBM has no physical experience, but it gains it by watching demonstrations: videos, sensors, instructions. From there, it builds a kind of grammar of movement. A robot with an LBM no longer needs a programmer to write every gesture line by line. It just needs to see.

Think about what this means: billions of tutorials already on YouTube teaching how to cook, fold a shirt, build a shelf, tie a rope. For an LBM, that’s pure fuel. It absorbs it, structures it, and replicates it without mistakes, without hesitation. And here’s the shock: once it learns, the robot performs better than the human who taught it. Faster, steadier, more precise.
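To make the mechanism concrete, here is a minimal sketch of behavior cloning, the supervised recipe that underlies models of this kind: fit a policy to recorded (observation, action) pairs so it reproduces the demonstrated behavior on inputs it has never seen. The toy data and the linear policy are illustrative assumptions; real LBMs train deep networks on video at enormous scale.

```python
# Minimal sketch of behavior cloning: learn a mapping from
# observations to actions using recorded demonstrations.
# Toy linear policy; real LBMs use deep networks and video.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstrations: 1,000 steps of a 6-D observation
# (e.g. joint angles) paired with the 2-D action a human took.
observations = rng.normal(size=(1000, 6))
true_skill = rng.normal(size=(6, 2))  # the "grammar" hidden in the demos
actions = observations @ true_skill + rng.normal(scale=0.01, size=(1000, 2))

# "Watching" = a least-squares fit of a policy to the demo pairs.
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The robot now maps an observation it never saw to an action,
# reproducing the demonstrated behavior without hand-written rules.
new_obs = rng.normal(size=(1, 6))
print("predicted action:", new_obs @ policy)
```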

Boston Dynamics with their Atlas robot has shown what this looks like: an LBM that controls the whole body as a single system, hands and feet interchangeable, movements continuous rather than in clunky blocks. It’s as if the robot learned the logic of moving on its own, treating the body as one machine instead of separate parts.

That’s the real breakthrough. Yesterday, a robot was a puppet of code. Today, it’s becoming a universal apprentice. Every recorded human action becomes an instruction it can learn and perfect. It’s no longer science fiction: robots can already take the entirety of practical human knowledge that’s online and make it theirs.

The uncomfortable question is this: what’s left for us, when a machine can watch what we do, copy it instantly, and do it better?

#ArtificialDecisions #MCC

👉 This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


64 – The truth nobody dares to say out loud #ArtificialDecisions #MCC


👉 This video is brought to you by: https://www.ethicsprofile.ai

In recent weeks, I’ve spoken with about a dozen people in the United States: investors and founders working in AI. In private, they told me what they won’t admit publicly: today’s models don’t learn continuously, don’t adapt in real time, can’t update themselves, and can’t be trusted in production. Yet billions keep flowing in, and top talent remains locked into an architecture that has already run out of steam.

The problem isn’t technical. It’s psychological, cultural, and financial. GPT-5 doesn’t mark a product failure. It marks a paradigm failure. Scaling the wrong paradigm won’t bring us AGI. It will only deliver a bigger illusion. Claude, Gemini, Grok, Llama: all headed for the same fate.

And soon, a massive ethical problem will explode. These AIs reflect the values of their creators, not those of each individual user. No ethical personalization. No respect for individual differences. When that becomes obvious, trust will collapse.

We either build cognitive AI, able to learn continuously, adapt autonomously, and integrate personal values, or it won’t be intelligence. It will just be imitation.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


62 – A machine will decide when to launch the bomb #ArtificialDecisions #MCC


👉 This video is brought to you by: https://www.ethicsprofile.ai

We’re approaching a point of no return. Artificial intelligence isn’t just showing up in research labs, marketing algorithms, or chatbots. It’s entering nuclear systems.

That’s what emerged at the University of Chicago, where a group of experts, including physicists, military officials, and Nobel laureates, made one thing brutally clear: the integration of AI into the management of nuclear arsenals is no longer a possibility. It’s a certainty.

They call it “automation of command and control.” It means AI will assist in strategic decisions, in handling classified data, in simulating attack scenarios. It means that sooner or later, a machine will be asked whether or not to start a nuclear war.

The looming threat is automated error. Predictive models don’t doubt. They have no conscience. And they have no Petrov, the Soviet officer who, in 1983, stopped a nuclear launch by relying on human instinct, because he sensed the system was wrong. I made a video about him recently. AI doesn’t sense. And it doesn’t hesitate.

They’re telling us something simple: the time to regulate is now. Not in ten years. Not after the first disaster.

Meanwhile, the Doomsday Clock is stuck at 89 seconds to midnight. And no, that’s not coming from conspiracy theorists. It’s the Bulletin of the Atomic Scientists.

And us? Just sitting here, watching generative models crack jokes on Instagram.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]


61 – Cybercrime on Subscription #ArtificialDecisions #MCC

Cybercrime on Subscription: Criminals Are Now Buying Netflix for Crime.

The world of cybercrime isn’t what we imagine anymore. No hoodies, no basements, no messy scripts. It’s an industry that sells subscriptions. A turnkey package to steal identities and infiltrate anywhere.

They call it “Impersonation-as-a-Service.” Like Netflix, but instead of TV shows you get phishing tools, training, coaching, ready-made exploits. You don’t even need to code: just pay and impersonate.

Underground forums prove it: ads for English-speaking social engineers have exploded in just one year. Because language is the most powerful weapon. No need to break firewalls if you can trick an employee into giving you credentials over the phone.

The big gangs know it. ShinyHunters and Scattered Spider already hit Ticketmaster, AT&T, Dior, Chanel, Allianz, Google. Not with advanced malware, but with fake voices, fake accounts, manipulated trust. AI has amplified it all: cloned voices, credible texts, perfect scripts.

The lone hacker era is over. Now it’s organized gangs borrowing tactics from intelligence agencies: reconnaissance, employee profiling, mapping software and company values, then striking. Invisible.

And now criminals don’t even need to learn. Traditional crime groups just buy the service, add these new tools to their old experience, and that’s it. They invest. We don’t. There are no global programs, no constant media campaigns, no effort to educate people about new tricks. Awareness should be the first line of defense, but we let it rot.

Most attacks don’t happen because of system flaws. They happen because someone falls for them. And if we stay ignorant, we’ll keep handing the entire world to those who turned crime into a subscription service.

#ArtificialDecisions #MCC


60 – Every human job will be changed or eliminated #ArtificialDecisions #MCC

Every human job will be changed or eliminated. Not in ten years. Much sooner.

That’s what Jensen Huang, CEO of Nvidia, says. It’s not a prediction. It’s a plan.

The man behind the chips powering every AI system in the world said it loud and clear: every job AI can transform will be transformed, and every job it can fully automate will disappear. No safe zones. No immune sectors. No one exempt.

This is the labor market rewritten by machines. But this time, it’s not factories that are shaking. It’s offices, schools, hospitals, newsrooms. Knowledge work. Intellectual professions. The ones that thought they were untouchable.

Because if humanoid robots can do what today’s generative AI already does, it means every bot will be able to learn a physical job in seconds. Not in twenty years. Now.

Who survives? Only those who shed their skin. Those who integrate with AI. Those who become complementary, not replaceable. In short: either you learn to work with machines, or machines will work without you.

What do you think about it?

This wave won’t wait. Stand still, and you’ll be swept away.

#ArtificialDecisions #MCC


58 – Got students to stop using ChatGPT… by using ChatGPT #ArtificialDecisions #MCC

The teacher who got students to stop using ChatGPT… by using ChatGPT

When he found out half his class was using ChatGPT to write essays, he didn’t panic. He didn’t lecture them. He didn’t block anything. He simply said: “Alright. Let’s use it better.”

His name is Marcus. He teaches literature at a high school in Manchester. One day he assigns a paper: “Write an essay on Orwell.” He gets back perfect texts. Too perfect. All similar. All lifeless.

He knows immediately. No one wrote anything. All done by AI. But instead of punishing them, he challenges them: “Now write a critique of what you submitted. Deconstruct it. Show me where the AI got it wrong. Where it oversimplified. Where it avoided taking a stance.”

The students don’t know where to start. They’ve never analyzed a text that way before. But slowly, they begin to see it. They realize the AI is vague. That it avoids strong positions. That it repeats patterns without depth. That it builds without risk.

And for the first time, they write something of their own. To criticize the machine, they have to think. They have to reflect. Choose words. Take a side.

ChatGPT had become a shortcut. He turned it into a mirror. And brought back the one thing that matters in school: understanding why we write, not just what we write.

#ArtificialDecisions #MCC


57 – Deepfakes Are Killing Democracy. #ArtificialDecisions #MCC

Deepfakes Are Killing Democracy. And No One Seems to Notice.

There was a time when you didn’t need to be an expert to vote. You could just skim the news, watch the evening broadcast, and get a general sense of things. Even if you weren’t well-informed, democracy still worked. Because the information ecosystem, though imperfect, was stable. There were journalists, editors, rules, and checks. Different opinions, yes, but grounded in facts.

That world no longer exists. And it never will again.

Generative AI has destroyed the very idea of visible truth. Deepfakes, fake videos, voices, and images that look completely real, are replacing reality with believable simulations. No filters. No verification. And millions of people fall for them every day.

The problem isn’t just technical. It’s political.

Because democracy is based on a simple idea: everyone can vote, even if they’re not well-informed. Even if they don’t have time, or access to in-depth knowledge. As long as the surrounding environment offers at least a minimum of shared truth. That environment is now poisoned. Intentionally.

This isn’t about newspapers shaping public opinion.
It’s about entire fake realities being built from scratch, targeted by group, by identity, by emotion. Personalized. Invisible. Amplified by bots, shared by friends, believed because they look real.

That’s why democracy is at risk.

Because if everyone can vote, but each person lives in a manipulated bubble, that vote isn’t free. It’s directed. Controlled. Whoever shapes the synthetic narratives controls the consensus. Period.

And this year, things got worse.

Deepfakes are no longer experimental. They’re everywhere. They’ve become a systematic weapon. With elections coming up across Europe, the U.S., Africa, and Asia, fake videos of politicians, fabricated news, and simulated scandals are already spreading.
This isn’t a future threat. We’re in it now.

And no one is stopping it.

Without a strong, independent global authority to identify, block, and sanction the use of deepfakes for mass manipulation, democracy won’t survive.
What we’ll have instead is manufactured consent, disguised as pluralism.

This isn’t a theory.
This is a bomb that has already gone off.

#ArtificialDecisions #MCC


56 – AI has profiled you. Now you pay more. #ArtificialDecisions #MCC


It’s already happening. Different prices for the same flight. Same airline, same route, same date. The differences are small. A few extra euros here and there. Because today’s systems only use basic data: IP address, browsing history, device type.

But tomorrow will be different. Because when AI really knows everything about us, the price won’t just change a little. It could change tenfold.

Why? Because it thinks we can afford it. Because it’s collected information, accurate or not, about us: where we live, what we do, how much we earn, how long we take before clicking “buy.”

So, for the same flight that costs someone else €100, we could be charged €1,000. Just because we’re us.
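As a purely hypothetical sketch of how such profile-based pricing could work, consider the toy logic below. Every signal and weight is invented for illustration; it describes no real airline’s system.

```python
# Hypothetical sketch of profile-based price personalization.
# Every signal and multiplier here is invented for illustration.
BASE_FARE = 100.0  # euros: the price an "anonymous" buyer might see

def personalized_fare(profile: dict) -> float:
    multiplier = 1.0
    # Inferred affluence: postcode, device, estimated income...
    if profile.get("estimated_income", 0) > 80_000:
        multiplier *= 2.5
    # Urgency: hesitating less before clicking "buy" reads as need.
    if profile.get("seconds_before_buy", 60) < 10:
        multiplier *= 1.8
    # Past behavior: never abandons a cart, never compares prices.
    if profile.get("price_sensitive", True) is False:
        multiplier *= 2.2
    return round(BASE_FARE * multiplier, 2)

# The same seat, two profiles: one pays 100, the other nearly 1,000.
print(personalized_fare({}))  # 100.0
print(personalized_fare({"estimated_income": 120_000,
                         "seconds_before_buy": 5,
                         "price_sensitive": False}))  # 990.0
```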

And we won’t be able to prove it. We’ll never know what others paid. We’ll never know what data it used. Maybe it’s stolen data. Or outdated. Or just plain wrong.

But who explains that to the machine? Who protects us from a price based on a false profile?

And when we try to complain, we’ll get an automated help desk. Another AI telling us: “There’s nothing I can do. This is what was decided.”

#ArtificialDecisions #MCC


55 – She was blind and she taught AI how to really see. #ArtificialDecisions #MCC


Maya lost her sight as a child. But she never stopped reading the world: with her hands, her ears, her entire body. And with a sharp, precise mind. She studied computer science. Then, computer vision. Yes, vision. People thought she was crazy. But she knew that not seeing allowed her to notice what others missed.

Years later, she joined a team building an AI system to describe images for blind users. It worked, but poorly. It said things like “a man sitting” or “a woman with a purse.” Generic. Soulless. Sometimes even offensive. Biased labels. Bad assumptions. Maya quickly saw the issue. The AI “saw” but didn’t listen.

So she changed the method. She asked for each image to be described by human volunteers. Slowly. With nuance. With emotion. She listened to thousands of descriptions, turned them into structured data, mapped what truly matters to blind users: tone, relationships, context, intention.

She cleaned the original dataset. Expanded it. Removed labels like “normal” or “abnormal.” Introduced new ones: not just what is in the image, but why, who, what could happen next.
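A minimal sketch of what that richer labeling might look like as data, with field names invented for illustration rather than taken from any real dataset:

```python
# Toy sketch of a richer image annotation: not just an inventory of
# objects, but relationships, context, and intent. All field names
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class ImageAnnotation:
    objects: list[str]        # what is in the image
    relationships: list[str]  # how the subjects relate to each other
    context: str              # the scene around them
    intent: str               # why: what seems to be happening
    next_likely: str          # what could happen next

flat = "a woman with a purse"  # the old, generic output
rich = ImageAnnotation(
    objects=["woman", "purse", "bus stop sign"],
    relationships=["the woman holds the purse close while watching the road"],
    context="rainy evening street, headlights approaching",
    intent="she seems to be waiting for a bus",
    next_likely="a bus may arrive and she may step forward",
)
print(flat)
print(rich)
```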

She taught AI not to describe, but to interpret. To assist, not oversimplify. To become a way of reading the world, not a blind copy machine.

The model got better. More useful. More respectful. More accurate. Not because it saw more, but because it was trained by someone who never had sight and therefore listened more deeply.

Maya didn’t give artificial intelligence vision. She taught it to pay attention. Which is a whole different kind of seeing.

#ArtificialDecisions #MCC


54 – Does ChatGPT Switch Off Your Brain? #ArtificialDecisions #MCC

Does ChatGPT Switch Off Your Brain? MIT Says Yes.

MIT did something no one had tried before: they recorded people’s brain activity while they used ChatGPT. The result? A 47% drop in brain activity. Not an opinion. A number. And the most alarming part? The brain stayed quiet even after they stopped using the app.

83% of participants couldn’t remember a single sentence they had just written minutes earlier. Not because they were distracted. Because they were disconnected. Writing with AI made everything faster, cutting task time by 60%, but it also demanded 32% less mental effort. Translation: we think less, we learn less.

And the writing? Technically correct, yes. But flat, predictable, soulless. Educators described it as “robotic.” No further comment needed.

The best performers did the opposite: they started without AI. They thought, wrote, reasoned. Only then did they use ChatGPT. And their results were better across the board: stronger memory, more brain activity, better outcomes.

The real question isn’t whether to use AI. It’s when to use it, and how much of ourselves we’re outsourcing. If we’re using it to avoid thinking, we’re giving up the part of the process that actually matters: the part where we understand what we’re doing.

Think first. Then, maybe, get help. Not the other way around.

#ArtificialDecisions #MCC


53 – The Prime Minister and the Digital Oracle #ArtificialDecisions #MCC

The Prime Minister and the Digital Oracle: When AI Enters Parliament

Swedish Prime Minister Ulf Kristersson has publicly admitted to frequently using artificial intelligence tools like ChatGPT and the French service Le Chat to get a “second opinion” in his work. Not to make decisions, he says, nor to handle classified information, but to compare different viewpoints and challenge his own ideas.

Too bad AI doesn’t have opinions. Just patterns.

These models don’t think. They don’t understand political context, irony, or social tensions. And above all, they’re not neutral. They reflect, and amplify, the worldview of those who built them. When a leader turns to machines like these for insight, they don’t find pluralism, they find confirmation.

The problem isn’t using AI. The problem is not knowing what’s inside.

A Prime Minister asking ChatGPT for advice is, even unknowingly, accepting an ideological filter: opaque, built elsewhere, trained on data selected by those with access, power, visibility. And if we trust it blindly, we risk turning AI into a digital oracle to which we delegate doubt, debate, and critical thinking.

Kristersson says his use is “limited and safe.” But that’s not enough. Every time a political leader interacts with AI, we should be asking: with which model? Trained where? By whom? Filtered how? For what purpose?

We can’t allow ChatGPT to take part, quietly, in political decision-making…
…without ever being elected.

#ArtificialDecisions #MCC


52 – AI Messes Up, You Pay: The Hertz Case #ArtificialDecisions #MCC

Hertz’s AI doesn’t care who you are. And it doesn’t bother to tell the difference between real damage and imaginary scratches. Since April 2025, the company has been rolling out “intelligent” scanners in its parking lots, selling them as a breakthrough for quick, impartial inspections of rental cars. In theory, fewer disputes and more transparency. In practice? A machine that snaps every tiny imperfection and turns it into an expensive bill.

One customer was charged $440 for a one-inch scuff on a wheel: $250 for the “repair,” $125 for a “processing fee,” and another $65 for “administrative costs.” We’re not talking about a torn-off bumper, but a barely visible mark. The scanners, built by UVeye, instantly send “before and after” photos, and the system offers a “discount” if you pay within a few days. It’s psychological pressure dressed up as convenience.

Anyone trying to dispute the charge faces automated chatbots, opaque procedures, and wait times of up to ten days for an email response. Meanwhile, your credit card is already under siege. On forums and Reddit, dozens of similar stories circulate: “automatic, non-contestable charges” is the most common phrase. Some customers say they’re done with Hertz for good.

And it’s not just Hertz. Sixt also uses a similar system, called “Car Gate.” There, too, glaring errors pop up: photos with incorrect timestamps showing the “damage” was there before the rental even started. In this context, the promise of greater transparency flips on its head. It becomes an aggressive mechanism that seems designed more to cash in than to ensure fairness.

Here, AI isn’t helping anyone. It’s replacing common sense with a cash-register mindset. And when the machine gets it wrong, it’s not the one paying. You are.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography


51 – Humans Beat AI, But Cost 40 Times More #ArtificialDecisions #MCC

A recent study compared human moderators with the latest multimodal models like Gemini, GPT, and Llama to see who can truly ensure brand safety. The research comes from a team of media analysis and content safety specialists at Zefr Inc., a U.S. company that works with major advertisers to keep brands away from toxic content.

The verdict is clear: humans win. They are better at spotting borderline cases, understanding irony, reading cultural context, and telling satire from hate. Even the most advanced AI models fail where judgment matters most, letting harmful material slip through or blocking content that is not a problem.

But quality comes at a steep price: nearly 40 times that of automated systems. For many companies, the temptation to cut costs and hand everything over to AI is huge. The problem is that the savings can backfire fast. One mistake can trigger a reputational crisis that costs far more than what was saved.

The smart move is not to choose between humans and AI, but to combine them. Let the machine handle mass filtering, and leave the delicate calls to people. Because protecting a brand is not just about applying rules. It is a daily exercise in judgment. And for now, that judgment still belongs to humans.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography


50 – Car went off the road and it changed Italy’s fate #ArtificialDecisions #MCC

In 1961, a car went off the road and changed Italy’s fate forever. Almost nobody knows the story.

A car crash changed the fate of Italy, and we’re not talking about politics or sports. We’re talking about technology, about the future, about an opportunity that could have redrawn the global map of innovation. In 1961, Mario Tchou, an Italian-Chinese engineer raised in Rome and trained in the United States, died in a car accident that has never been fully explained. A year and a half earlier, Adriano Olivetti, the visionary who had put him in charge of Olivetti’s Electronics Division, had also died suddenly. Two key figures, gone within two years of each other.

And yet, what they were building in Ivrea wasn’t just ambitious, it was revolutionary. The Elea 9003, launched in 1959, was one of the world’s first fully transistorized commercial computers. But in the labs, they were working on something even more groundbreaking: miniaturizing electronics using silicon microchips, integrated circuits capable of packing multiple components onto a single substrate.

In the United States, Kilby and Noyce were reaching the same conclusions, but in Ivrea there was one crucial difference: they weren’t aiming for a lab prototype. They wanted to put it straight into a product. The goal was to create a desk-sized computer for offices and businesses, the seed of what would later be called the personal computer.

If Olivetti and Tchou had lived, Italy would have had a pioneering technology, years ahead of the competition. We could have become the Silicon Valley of Europe. Cupertino could have been in Piedmont. Apple could have been born here.

Instead, in 1964, leaderless and adrift, the Electronics Division was sold to General Electric. Along with it went patents, projects, and know-how that ended up fueling the American computer industry. Many of the engineers trained by Tchou went on to work abroad, contributing to the systems that would define modern computing.

An echo of that genius remained in the Olivetti Programma 101, launched in 1965, the first personal computer in history. But by then, the primacy was gone.

All because of a car crash on an Italian road. A single instant that erased the greatest chance we ever had to write the future instead of buying it from someone else.

#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari

Marco Camisani Calzolari
marcocamisanicalzolari.com/biography
