Category: Artificial Decisions

Elders and AI: Loneliness Turned Into Business

Older people are the perfect target for artificial intelligence. Not to learn, not to work better, but to fill the emptiness of loneliness.

It’s already happening. Here in New York, many seniors spend hours with chatbots and voice assistants. They talk to Alexa to hear the news, revisit photo memories, ask for recipes, or just have someone answer back. Across the United States, millions of elderly people do the same. A machine that never gets tired, that always replies. It feels like company, but it’s only simulation.

The numbers confirm it. A 2024 study of people over 55 found that 78% use tools like ChatGPT, Alexa, or Google Assistant, and that 18% use them mainly for companionship. Some people in their seventies spend up to five hours a day with a social robot like ElliQ, which costs $59 a month. Clinical data show the effect: in one study with Amazon’s Echo Show, seniors’ loneliness scores dropped from 47 to 36 in just six months.

Here in the United States, the business is already mature. Platforms have turned loneliness into a subscription. Twenty or fifty dollars for an artificial voice that listens, remembers your habits, and suggests what to do. An ocean of ideal customers: fragile, isolated, with plenty of time.

The problem isn’t that AI talks to people who have no one. The problem is when it systematically replaces human relationships. Affection becomes product. Empathy becomes software. Friendship becomes algorithm. We stop noticing there’s nobody on the other side.

It’s a massive cultural reversal. Technology that was supposed to connect us separates us. It trains us to accept the illusion. It makes us believe a synthetic voice is enough not to feel alone. But that’s a lie. If it comforts you without understanding, if it listens without feeling, it isn’t company. It’s a surrogate.

And this isn’t only about seniors here in NYC or the Midwest. Today they are the ideal customers. Tomorrow it will be all of us. Artificial friendship, artificial love, artificial company. And if we don’t learn to see the difference, we’ll settle for the copy.

The question remains: what’s left of the human if even affection is outsourced to an algorithm?

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

78 – The Woman Who Challenged Facebook With Its Own Data

Frances Haugen didn’t start out as an activist. She was an American engineer, working in Silicon Valley, on a brilliant career path. In 2019 she joined Facebook as a product manager. Her job was to analyze the algorithms that decide what we see every day in our feeds.

Then came the discovery. The internal documents she studied said it plainly: Instagram was harming teenagers’ mental health, and the algorithm was pushing the most extreme political and social content because it generated more clicks. The conflict was brutal: stay silent and protect her career, or expose one of the most powerful, and most toxic, systems on the planet.

Frances chose to act. She copied thousands of internal files and in 2021 brought them to the U.S. Congress. That was the fall. She exposed herself, lost her job, and became “the Facebook whistleblower.” She faced attacks, criticism, threats. But she didn’t stop.

The return with the gift was enormous. Thanks to those documents, we know that behind likes there aren’t just photos and memes, but a system that manipulates emotions and divides communities. Haugen’s hearings, here in Washington and broadcast worldwide, opened millions of eyes and forced governments and institutions to rethink the rules of digital platforms.

Frances started out as a systems engineer. She returned as a civic hero, leaving us with a clear gift: the truth about the dark side of algorithms.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

77 – Did You Know About the Dead Internet Theory?

✅ This video is brought to you by: #EthicsProfile

The internet may already be dead. That’s the Dead Internet Theory: the idea that much of what we see online isn’t created by people anymore, but by machines.

The numbers are brutal: almost 50% of global traffic today is automated. Bots, scrapers, fake account networks. Social feeds filled with identical phrases, cloned memes, discussions between profiles that look human but aren’t. It’s not dialogue, it’s simulation.

The theory was born in 2021 from an anonymous user called IlluminatiPirate on a niche forum, Macintosh Café. At first, it sounded like digital paranoia, an internet drained of people, replaced by AI and bots talking to each other. But then signs started to line up with reality. Historic forums deserted. Authentic communities drowned in automated noise.

Who are the “supporters”? Not many. The media covers it as a cultural provocation, a reflection of how the web feels less human. But one name matters: Sam Altman, CEO of OpenAI. The man building AI itself.

A few months ago, he wrote on X: “i never took the dead internet theory that seriously, but it seems like there are really a lot of LLM-run twitter accounts now.” All lowercase, in his style. LLM means large language model, the tech powering chatbots like ChatGPT. In plain words, the invasion is happening.

Altman also joked bitterly: “we’re all trying to find the guy who did this.” But the point is clear. There is no outside culprit. The very companies creating these systems are filling the web with artificial content.

So the Dead Internet Theory is no longer just a fringe idea. If even the head of OpenAI admits the web is turning into a machine-to-machine ecosystem, then the funeral isn’t tomorrow. It’s already started.

The internet didn’t die overnight. It was drained piece by piece. What we see now is a shell: alive on the surface, dead inside.

#ArtificialDecisions #MCC #Sponsored

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

76 – The Boy Who Harnessed the Wind and Entered the Digital World.

✅ This video is brought to you by: #EthicsProfile

William Kamkwamba was born in Malawi, in a village without electricity. When he was 14, a drought destroyed the crops. His family had no money, no tools, no hope. School was a luxury he could no longer afford. That was the start of his hero’s journey: hunger, thirst, abandonment.

The conflict was total. Adults gave up. Experts spoke of international aid that never arrived. William refused that destiny. He spent his days in the village library, the only place with a few books. He found a science manual in English. He couldn’t read every word, but he understood the diagrams. He discovered wind power.

Using scrap metal from the junkyard, he built a windmill. The first one failed, collapsed, broke apart. He started again. And again. Finally, the blades turned. A lightbulb lit up. Then water was pumped. His family could drink, the crops grew back. The village changed.

The return with the gift came through the digital world. In 2007 he told his story at a TEDGlobal conference, and the video went viral, shared online across the world. Digital media turned an unknown boy into a global symbol. With scholarships, he came to study here in the United States, using the internet and international networks to spread his invention.

From a village without electricity to the digital stage of the world. William started as a hungry boy and returned as a hero, bringing a universal lesson: even with nothing, knowledge and sharing can change the destiny of an entire community.

#ArtificialDecisions #MCC #Sponsored

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

75 – The Man Who Invented Digital Addiction. And Then Tried to Dismantle It.

✅ This video is brought to you by: #EthicsProfile

Tristan Harris worked at Google. Not just any engineer, he was the “design ethicist.” He studied how buttons, colors, and notifications change our behavior. And he saw the truth: we don’t design tools. We design habits.

Platforms were never neutral. Infinite scroll wasn’t built for convenience. Red notifications don’t exist to inform. Public likes aren’t about connection. Every detail was built to keep us there, staring at the screen. These are not mistakes. They’re choices.

Tristan wrote it all in a memo and called the idea “Time Well Spent.” He asked Google to focus on human time, not screen time. The deck circulated inside the company, but nothing changed. So he quit, and for months he regretted it. He felt lost, like he had walked away from the most admired job in tech. And for what? For an idea that seemed too weak, too naïve.

Then he started talking. First at small events, then on big stages, then in the U.S. Senate. He explained how platform design really works. How every tap (Instagram, TikTok, YouTube) isn’t random. It’s architecture, built to grab attention and sell it.

He founded the Center for Humane Technology. He appeared in The Social Dilemma on Netflix. And the world listened. He said something brutally simple: if you’re not paying for the product, you are the product.

Today’s kids grow up inside these systems, and nobody explains the systems to them. They think they’re choosing, but they’re just reacting. Telling them “use your phone less” doesn’t help. What helps is this: understand who built it this way, and why.

The truth is, the people who engineered digital addiction were brilliant. The problem is, they never had to face the consequences.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

74 – The Girl Who Stopped Bullying by Learning to Code

✅ This video is brought to you by: #EthicsProfile

The news spreads through the school: a girl just a little older has taken her own life after months of online abuse. Trisha Prabhu is 13, living in Naperville, a suburb of Chicago, and that story won’t leave her mind. It’s not an isolated case, it’s the sign of a poison affecting her whole generation: cyberbullying.

The problem is clear. Adults don’t see it. Teachers don’t understand. Platforms look the other way. Trisha doesn’t know how to code; she has no lab, no mentors. Just an old computer and the conviction that something must be done. She decides she will teach herself the language of machines.

She starts from zero, with Java. Tutorials, manuals, endless nights of trial and error. She fails, gets back up, tries again. She’s not chasing likes or applause. She wants to find a way to stop the finger before it hits “send.”

After months, ReThink is born. An app that intercepts offensive messages before they go out and shows a warning: are you sure you want to hurt someone? And it works. Ninety-three percent of the time, kids decide not to send the message. Just a moment of hesitation is enough to save a life.
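
ReThink’s real code and classifier aren’t public, but the pattern the post describes (intercept, score, pause) is simple to sketch. A toy version in Python, with a tiny word list standing in for the real model:

    # Toy sketch of the ReThink pattern: score an outgoing message and,
    # if it looks hurtful, pause and ask the sender to reconsider.
    HURTFUL_WORDS = {"stupid", "ugly", "loser", "hate you"}  # placeholder list, not the real model

    def looks_hurtful(message: str) -> bool:
        text = message.lower()
        return any(word in text for word in HURTFUL_WORDS)

    def send_with_rethink(message: str) -> bool:
        """Return True if the message was sent, False if the sender held back."""
        if looks_hurtful(message):
            answer = input("Are you sure you want to hurt someone? Send anyway? [y/N] ")
            if answer.strip().lower() != "y":
                return False  # the moment of hesitation worked
        print(f"sent: {message}")
        return True

    send_with_rethink("you are such a loser")

The insight is in the flow, not the detection: the confirmation prompt is the whole intervention.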

From there, everything changes. Trisha brings ReThink to the White House, wins over international juries, even appears on Shark Tank. An app written in Java in her bedroom becomes the shield for millions of teenagers. She did what big tech refused to do: put a brake on bullying.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

72 – Aaron, the Hero Who Fought to Free Knowledge.

✅ This video is brought to you by: #EthicsProfile

Aaron Swartz was born here in the United States, in Chicago, in 1986. He wasn’t a normal kid. At 14, he had already helped create RSS, the standard that to this day makes it possible to follow news and updates online. I still use it myself to stay informed, because it lets me gather content from many sites with great ease. Aaron understood early that the internet was not just technology. It was power, it was freedom.
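
For anyone who has never touched RSS, consuming a feed is still trivially simple. A minimal Python sketch using only the standard library; the feed URL is a placeholder to swap for any real RSS 2.0 feed:

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.xml"  # placeholder: any RSS 2.0 feed

    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.fromstring(response.read())

    # In RSS 2.0, entries live under channel/item, each with a title and a link.
    for item in root.iter("item"):
        title = item.findtext("title", default="(no title)")
        link = item.findtext("link", default="")
        print(f"{title}\n  {link}")

That simplicity is the point: an open, machine-readable standard anyone can build on.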

The conflict came quickly. He saw that scientific knowledge, often paid for with public money, was locked behind paywalls and expensive subscriptions. It wasn’t just a technical barrier. It was an injustice. Aaron couldn’t accept that.

In 2010 he took his most radical step. At MIT in Cambridge, Massachusetts, he connected to the network and, using an automated script, downloaded millions of papers from JSTOR, one of the world’s largest academic archives. He never published them, never sold them. His goal was only to show that knowledge should not be imprisoned. But the act was illegal. JSTOR declined to pursue him; the federal government did not let go. They indicted him.

Here came the fall. Aaron was facing up to 35 years in prison and more than one million dollars in fines. Accused of computer fraud and unauthorized access, a young man who wanted to free knowledge was treated like a criminal. The pressure became unbearable. In January 2013, at just 26, Aaron took his own life.

The return with the gift came after. His death sparked a global movement. Here in the United States and around the world, the demand for open access, for free knowledge, grew stronger. Universities began releasing more and more scientific work for free. The debate over the Computer Fraud and Abuse Act exploded, showing how it had been used as a weapon.

Aaron is a hero, the central figure of a tragedy that left us with a final lesson. The digital world must not close doors. It must open them.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

71 – American University Sounds the Alarm: Books Will No Longer Be Read.

✅ This video is brought to you by: #EthicsProfile

Linguist Naomi S. Baron, professor emerita at American University, says students no longer read because they prefer machine-generated summaries. You might think this is nothing new. After all, there were already those little yellow booklets summarizing the classics. No. This is very different. With AI, you don’t just skip pages, you skip the entire cognitive process. You don’t even need to compare two novels or interpret a character. The machine does it. What you’re left with is only the illusion of having read.

The numbers spell disaster. In the United States, kids who read for pleasure dropped from 53% in the 1980s to just 14% today. In the UK, only a third of under-18s pick up a book in their free time. In South Korea, 87% of adults read at least one book in 1994. Today it’s less than half. The trend is global and falling fast.

With services like BooksAI or BookAI.chat, reading is no longer required. Need to prepare a comparison between Huck Finn and Holden Caulfield? Once upon a time you had to read at least a summary. Now AI does the critical comparison for you, and even suggests the questions to bring to class. Personal growth, doubt, discovery. Gone.

Baron calls it “cognitive offloading”: handing over to machines the thinking we should be doing ourselves. Studies already show that writing with AI reduces brain activity and changes the way our brains connect. If the same applies to reading, we’re digging ourselves a cognitive hole.

The issue isn’t just cultural. It’s human. Without truly reading, we lose the ability to interpret, to process, to grow through the stories of others. We let ourselves be seduced by the shortcut of efficiency, but in the end we come out poorer. Not in time, but in thought.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

70 – We Are the Copy-Pasters of AI

I don’t know about you, but we see it every day. AI was supposed to free us, but instead it chained us. ChatGPT writes a text, we copy and paste it into Gamma.app. Done there? No, we copy again and paste it all into Google Slides.

We write a text ourselves, ask ChatGPT to fact-check it. It fixes things, we copy and paste. Then we move it into a project for micro-documentary prompts. Copy, paste again. We are the copy-pasters of AI.

Agents exist, sure. But for these things they still work poorly, especially when the task is something you only do once in your life. They don’t have patterns, they don’t have memory. And in the end, we are faster and better with our copy-paste.

When we say we “used AI,” we’re really just copy-pasting. “I used AI to write this document” often just means: I copy-pasted some text from AI into this document.

We’re not deciding, we’re not creating, we’re just moving chunks from one window to another. A digital assembly line. The machine works, we do the grunt work. It’s a total reversal. We’re no longer the masters of tools. We’re their assistants.

The truth is we’ve invented a new job: the human copy-pasters. Output movers, order confirmers. We carry the responsibility without the satisfaction.

And that’s the trap. AI looks autonomous, but without us it doesn’t move. It forces us into wasted time, making us the invisible slaves that keep its illusion alive.

How long will we remain digital field hands?

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

69 – TikTok Killed Music.

✅ This video is brought to you by: #EthicsProfile

There was a time when music had clear filters. Radio stations decided what we listened to. Then MTV arrived and turned music videos into a universal language. Then the record labels, with their billion-dollar contracts, decided who would be a star and who wouldn’t.

Today none of them hold that power. Today an algorithm does. Today TikTok rules.

If a track doesn’t blow up there, it doesn’t even chart. It won’t get radio play, a label won’t push it. It simply doesn’t exist. Lyrics don’t matter, talent doesn’t matter, production doesn’t matter. What matters is if it works in fifteen seconds. If it can soundtrack a dance, a meme, a gag.

It’s a total reversal, because music isn’t music anymore. It’s background noise, sound built to last the space of a scroll. Disposable jingles. Hits engineered to survive two weeks, not two decades.

The consequence is devastating. Real artists ignored. Mediocre songs going viral just because they loop well. Music that once told the story of entire generations is now reduced to an algorithmic product. No longer cultural language, just marketing.

And here’s the darkest point: TikTok isn’t a neutral platform. It’s a Chinese app, with an algorithm deciding who we hear and who disappears. This isn’t about taste anymore. This is about cultural politics. This is about identity.

Because music has always been a mirror of its time. The Beatles told a story of revolution, punk told a story of rage and rebellion, hip hop told a story of neighborhoods and inequality. What does TikTok tell? Not who we are, but what it wants us to be.

The song, as we knew it, is dead. No more three-minute format. No more build-up. No more chorus that unites. Just a fifteen-second loop that works on scroll, and disappears the week after.

I don’t know what you think, but we’ve traded generational anthems for disposable noise. We’ve handed over our soundtrack to an algorithm, and we don’t even notice that real music is disappearing.

The truth is blunt: TikTok killed music. And we let it happen.

#ArtificialDecisions #MCC

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

68 – The Girl Who Taught AI to Protect Children.

✅ This video is brought to you by: #EthicsProfile

Chow Sze Lok wasn’t born a genius. She’s a 17-year-old student at St. Mary’s Canossian College in Hong Kong. She studies, does homework, lives like many teenagers. Until one day she reads about cases of abuse in childcare centers. What shocks her most isn’t only the violence. It’s how no adult noticed the warning signs.

Her starting point is almost laughable: an old laptop. No lab, no funding. Just her and the belief that technology could protect kids. She dives into computer vision, spends endless nights coding, failing, restarting. Six months of pure persistence.

That’s how Kid-AID was born. A system that doesn’t just record. It watches, recognizes, alerts. It flags suspicious interactions in real time inside daycare centers. It gives CCTV new eyes. A fragile prototype, but it works.
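
Kid-AID’s internals aren’t public. Purely to make the idea concrete, here is a toy sketch in Python with OpenCV: run a stock person detector over camera frames and flag a crude proxy signal. The video path and the overlap heuristic are stand-ins, nothing like the real system:

    import cv2

    # Stock pedestrian detector shipped with OpenCV.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def boxes_overlap(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    cap = cv2.VideoCapture("daycare_camera.mp4")  # placeholder video source
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        # Flag frames where two detected people overlap: a crude stand-in
        # for real interaction analysis.
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if boxes_overlap(rects[i], rects[j]):
                    print("ALERT: close interaction, frame needs human review")
    cap.release()

The hard part, the part that took her six months, is everything this sketch skips: telling ordinary care from warning signs.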

Then come the trials by fire: competitions. At the Hong Kong Science Fair, her project surprises the judges. At the international InfoMatrix contest, she wins again. A teenager with an old laptop, beating projects from teams backed by serious money.

The impact is huge. Kid-AID becomes a symbol of a different kind of AI. Not built to maximize clicks or sell ads, but to protect the most vulnerable. That’s the full hero’s journey: she starts small, faces obstacles, builds her tool, and returns with a gift for the community.

Chow didn’t just write code. She proved that the boldest innovations don’t come from Silicon Valley. They come from those who dare to look where others look away.

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

#ArtificialDecisions #MCC #Sponsored

View the post »

67 – He Gave a Voice to His Neighborhood With Scrap Electronics.

✅ This video is brought to you by: #EthicsProfile

Kelvin Doe was born in Freetown, Sierra Leone, in a neighborhood where there was nothing: no electricity, no internet, no tools. At 13, the struggle was already clear: doing homework in the dark, living in a disconnected world. For many, that was normal. For him, it was the starting point.

The conflict was brutal. His community had no power, no news, no voice. Kelvin refused the silence. He began collecting electronic waste from dumps: pieces of metal, old batteries, burnt wires. He invented homemade batteries with cans and acid. He built radio transmitters from scraps. No books, no teachers. He learned everything by experimenting, failing, starting again.

At 15, he managed to create a community radio station, broadcasting as DJ Focus: music, news, and useful messages for his neighborhood. For the first time, his community heard its own voice. Technology made from junk became a tool of connection.

The return with the gift happened here in the United States. In 2012 he was invited to the MIT Media Lab in Cambridge, Massachusetts, just over 300 kilometers from here in New York. He became the youngest ever to take part in one of their programs. His story went viral thanks to YouTube and digital media. CNN and The New York Times told the world about “the boy who built electronics from trash.” The world learned that innovation doesn’t only come from Silicon Valley. It can also be born in the dumps of Freetown.

Kelvin started as a boy with no electricity and returned as a global symbol of creativity and resilience. His lesson is clear: with ingenuity and digital sharing, even the smallest voice can be heard everywhere.

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

#ArtificialDecisions #MCC

View the post »

66 – The Day the Internet Died.

👉 This video is brought to you by: #EthicsProfile

June 8, 2021. A normal morning. Then suddenly, the world goes dark. Not one site, not one platform. Half the internet disappears: Amazon down, Reddit unreachable, The New York Times silent, the BBC offline. Even the UK government can’t get its website to load.

It all starts with a latent bug in a software update at a company almost nobody had ever heard of: Fastly. A Content Delivery Network, a CDN. In plain words: one of the invisible systems that move data around the world at high speed. You don’t see them, you don’t talk about them, but without them the internet doesn’t work.

And yet a single customer configuration change triggers that bug and brings it all crashing down. Within seconds, the biggest websites on Earth vanish. For a full hour, the digital world stops. To some it looks like a small inconvenience: you can’t read the paper, you can’t shop online. But behind the scenes, it’s much bigger. Airlines can’t sell tickets, supermarkets can’t update inventory, government systems can’t communicate. Real life freezes.

The story is clear: there is no single, free, public “Internet.” There are private infrastructures, and a tiny handful of companies hold the critical nodes. Fastly is just one. Add Cloudflare, add Akamai. Three names that keep the world online. But nobody elected them, nobody really oversees them. They answer to boards of directors, not to governments.
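
You can get a rough feel for this concentration yourself by looking at HTTP response headers, where CDNs leave fingerprints. A small Python sketch; the header patterns below are common heuristics, not an official registry:

    import urllib.request

    SIGNATURES = {
        "fastly": ["x-served-by", "x-fastly-request-id"],
        "cloudflare": ["cf-ray"],
        "akamai": ["x-akamai-transformed"],
    }

    def guess_cdn(url: str) -> str:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req) as resp:
            headers = {k.lower(): v for k, v in resp.getheaders()}
        server = headers.get("server", "").lower()
        for cdn, header_keys in SIGNATURES.items():
            if cdn in server or any(k in headers for k in header_keys):
                return cdn
        return "unknown"

    for site in ["https://www.reddit.com", "https://www.nytimes.com"]:
        print(site, "->", guess_cdn(site))

Run it on the sites you use every day and count how few distinct names come back.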

Fastly’s blackout lasted an hour, but that hour was enough to reveal the truth. The internet is fragile, not resilient, not democratic. It’s a web of private nodes, and when one falls, everything falls.

The rhetoric we were sold, that the internet is free, invulnerable, horizontal, was a myth. The network is privatized, concentrated, and we live on top of it as if it were eternal. But a single bug proved otherwise.

The real problem isn’t the incident. Incidents happen. The problem is the model. We’ve built our entire civilization on infrastructure we don’t control. Healthcare, schools, finance, politics. Everything depends on private servers. There’s no guarantee of continuity, no sovereignty at all.

Think about it. If a one-hour outage could paralyze newspapers, governments, and corporations, what happens if it lasts a whole day? Or a week? A targeted attack could make that happen, and we don’t need science fiction to imagine it. Back in 2016 the attack on the DNS provider Dyn showed it clearly: hundreds of thousands of infected webcams and DVRs took down Twitter, Netflix, CNN, PayPal. That wasn’t a movie. It was reality.

Here’s the truth: the internet isn’t ours. It doesn’t belong to us. It’s controlled by a handful of private companies with zero obligations to the public. Next time, it may not take just an hour to fix it.

When the internet dies, it doesn’t just die once. We die with it.

#ArtificialDecisions #MCC

👉 This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

65 – Robots Watch YouTube and Learn to Do Everything Better Than Us.

👉 This video is brought to you by: #EthicsProfile

You know LLMs, like ChatGPT? Those Large Language Models that read billions of texts and then learn to write, talk, and interact with us as if they were people. Well, the same thing is happening with robots, but instead of words, it’s the body. They’re called LBMs, Large Behavior Models: systems that learn by watching billions of examples of actions, and then turn them into coordinated, fluid, precise movements.

The mechanism is identical. An LLM doesn’t know the world, it reconstructs it from text data. An LBM has no physical experience, but it gains it by watching demonstrations: videos, sensors, instructions. From there, it builds a kind of grammar of movement. A robot with an LBM no longer needs a programmer to write every gesture line by line. It just needs to see.
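
Stripped to its core, “learning by watching” is behavior cloning: supervised learning on (state, action) pairs taken from demonstrations. A toy sketch in Python with PyTorch, on synthetic data; real LBMs train on vastly richer video and sensor streams:

    import torch
    import torch.nn as nn

    # Fake demonstrations: 1000 observed states (8 numbers each) paired with
    # the 2-dimensional action the demonstrator took in each state.
    states = torch.randn(1000, 8)
    actions = states[:, :2] * 0.5 + 0.1  # a pretend expert policy to imitate

    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(policy(states), actions)  # imitate the demonstrated actions
        loss.backward()
        optimizer.step()

    # After training, a new state maps straight to a movement command.
    print(policy(torch.randn(1, 8)))

Scale the same recipe up to video frames in, joint torques out, and you have the family of systems the post is describing.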

Think about what this means: billions of tutorials already on YouTube teaching how to cook, fold a shirt, build a shelf, tie a rope. For an LBM, that’s pure fuel. It absorbs it, structures it, and replicates it without mistakes, without hesitation. And here’s the shock: once it learns, the robot performs better than the human who taught it. Faster, steadier, more precise.

Boston Dynamics with their Atlas robot has shown what this looks like: an LBM that controls the whole body as a single system, hands and feet interchangeable, movements continuous rather than in clunky blocks. It’s as if the robot learned the logic of moving on its own, treating the body as one machine instead of separate parts.

That’s the real breakthrough. Yesterday, a robot was a puppet of code. Today, it’s becoming a universal apprentice. Every recorded human action becomes an instruction it can learn and perfect. It’s no longer science fiction: robots can already take the entirety of practical human knowledge that’s online and make it theirs.

The uncomfortable question is this: what’s left for us, when a machine can watch what we do, copy it instantly, and do it better?

#ArtificialDecisions #MCC

👉 This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

64 – The Truth Nobody Dares to Say Out Loud.

👉 This video is brought to you by: https://www.ethicsprofile.ai

In recent weeks, I’ve spoken with about a dozen people in the United States: investors and founders working in AI. In private, they told me what they won’t admit publicly: today’s models don’t learn continuously, don’t adapt in real time, can’t update themselves, and can’t be trusted in production. Yet billions keep flowing in, and top talent remains locked into an architecture that has already run out of steam.

The problem isn’t technical. It’s psychological, cultural, and financial. GPT-5 doesn’t mark a product failure. It marks a paradigm failure. Scaling the wrong paradigm won’t bring us AGI. It will only deliver a bigger illusion. Claude, Gemini, Grok, Llama: all headed for the same fate.

And soon, a massive ethical problem will explode. These AIs reflect the values of their creators, not those of each individual user. No ethical personalization. No respect for individual differences. When that becomes obvious, trust will collapse.

We either build cognitive AI, able to learn continuously, adapt autonomously, and integrate personal values, or it won’t be intelligence. It will just be imitation.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

62 – A Machine Will Decide When to Launch the Bomb.

👉 This video is brought to you by: https://www.ethicsprofile.ai

We’re approaching a point of no return. Artificial intelligence isn’t just showing up in research labs, marketing algorithms, or chatbots. It’s entering nuclear systems.

That’s what emerged at the University of Chicago, where a group of experts, including physicists, military officials, and Nobel laureates, made one thing brutally clear: the integration of AI into the management of nuclear arsenals is no longer a possibility. It’s a certainty.

They call it “automation of command and control.” It means AI will assist in strategic decisions, in handling classified data, in simulating attack scenarios. It means that sooner or later, a machine will be asked whether or not to start a nuclear war.

The looming threat is automated error. Predictive models don’t doubt. They have no conscience. They don’t have Petrov. I made a video about him recently: the Soviet officer who, in 1983, judged a missile alert to be a false alarm and refused to pass it up the chain, relying on human instinct, because he sensed the system was wrong. He likely prevented a nuclear launch. AI doesn’t sense. And it doesn’t hesitate.

They’re telling us something simple: the time to regulate is now. Not in ten years. Not after the first disaster.

Meanwhile, the Doomsday Clock is stuck at 89 seconds to midnight. And no, that’s not coming from conspiracy theorists. It’s the Bulletin of the Atomic Scientists.

And us? Just sitting here, watching generative models crack jokes on Instagram.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »

61 – Cybercrime on Subscription: Criminals Are Now Buying Netflix for Crime.

The world of cybercrime isn’t what we imagine anymore. No hoodies, no basements, no messy scripts. It’s an industry that sells subscriptions. A turnkey package to steal identities and infiltrate anywhere.

They call it “Impersonation-as-a-Service.” Like Netflix, but instead of TV shows you get phishing tools, training, coaching, ready-made exploits. You don’t even need to code: just pay and impersonate.

Underground forums prove it: ads for English-speaking social engineers have exploded in just one year. Because language is the most powerful weapon. No need to break firewalls if you can trick an employee into giving you credentials over the phone.

The big gangs know it. ShinyHunters and Scattered Spider already hit Ticketmaster, AT&T, Dior, Chanel, Allianz, Google. Not with advanced malware, but with fake voices, fake accounts, manipulated trust. AI has amplified it all: cloned voices, credible texts, perfect scripts.

The lone hacker era is over. Now it’s organized gangs borrowing tactics from intelligence agencies: reconnaissance, employee profiling, mapping software and company values, then striking. Invisible.

And now criminals don’t even need to learn. Traditional crime groups just buy the service, add these new tools to their old experience, and that’s it. They invest. We don’t. There are no global programs, no constant media campaigns, no effort to educate people about new tricks. Awareness should be the first line of defense, but we let it rot.

Most attacks don’t happen because of system flaws. They happen because someone falls for them. And if we stay ignorant, we’ll keep handing the entire world to those who turned crime into a subscription service.

#ArtificialDecisions #MCC

View the post »

60 – Every Human Job Will Be Changed or Eliminated

Every human job will be changed or eliminated. Not in ten years. Much sooner.

That’s what Jensen Huang, CEO of Nvidia, says. It’s not a prediction. It’s a plan.

The man behind the chips powering every AI system in the world said it loud and clear: we’ll transform everything that can be automated, and get rid of the rest. No safe zones. No immune sectors. No one exempt.

This is the labor market rewritten by machines. But this time, it’s not factories that are shaking. It’s offices, schools, hospitals, newsrooms. Knowledge work. Intellectual professions. The ones that thought they were untouchable.

Because if humanoid robots can learn the way today’s generative AI already learns, every robot will be able to pick up a physical job in seconds. Not in twenty years. Now.

Who survives? Only those who shed their skin. Those who integrate with AI. Those who become complementary, not replaceable. In short: either you learn to work with machines, or machines will work without you.

What do you think about it?

This wave won’t wait. Stand still, and you’ll be swept away.

#ArtificialDecisions #MCC

View the post »

58 – The Teacher Who Got Students to Stop Using ChatGPT… by Using ChatGPT

When he found out half his class was using ChatGPT to write essays, he didn’t panic. He didn’t lecture them. He didn’t block anything. He simply said: “Alright. Let’s use it better.”

His name is Marcus. He teaches literature at a high school in Manchester. One day he assigns a paper: “Write an essay on Orwell.” He gets back perfect texts. Too perfect. All similar. All lifeless.

He knows immediately. No one wrote anything. All done by AI. But instead of punishing them, he challenges them: “Now write a critique of what you submitted. Deconstruct it. Show me where the AI got it wrong. Where it oversimplified. Where it avoided taking a stance.”

The students don’t know where to start. They’ve never analyzed a text that way before. But slowly, they begin to see it. They realize the AI is vague. That it avoids strong positions. That it repeats patterns without depth. That it builds without risk.

And for the first time, they write something of their own. To criticize the machine, they have to think. They have to reflect. Choose words. Take a side.

ChatGPT had become a shortcut. He turned it into a mirror. And he brought back the one thing that matters in school: understanding why we write, not just what we write.

#ArtificialDecisions #MCC

View the post »

57 – Deepfakes Are Killing Democracy. And No One Seems to Notice.

There was a time when you didn’t need to be an expert to vote. You could just skim the news, watch the evening broadcast, and get a general sense of things. Even if you weren’t well-informed, democracy still worked. Because the information ecosystem, though imperfect, was stable. There were journalists, editors, rules, and checks. Different opinions, yes, but grounded in facts.

That world no longer exists. And it never will again.

Generative AI has destroyed the very idea of visible truth. Deepfakes (fake videos, voices, and images that look completely real) are replacing reality with believable simulations. No filters. No verification. And millions of people fall for them every day.

The problem isn’t just technical. It’s political.

Because democracy is based on a simple idea: everyone can vote, even if they’re not well-informed. Even if they don’t have time, or access to in-depth knowledge. As long as the surrounding environment offers at least a minimum of shared truth. That environment is now poisoned. Intentionally.

This isn’t about newspapers shaping public opinion.
It’s about entire fake realities being built from scratch, targeted by group, by identity, by emotion. Personalized. Invisible. Amplified by bots, shared by friends, believed because they look real.

That’s why democracy is at risk.

Because if everyone can vote, but each person lives in a manipulated bubble, that vote isn’t free. It’s directed. Controlled. Whoever shapes the synthetic narratives controls the consensus. Period.

And this year, things got worse.

Deepfakes are no longer experimental. They’re everywhere. They’ve become a systematic weapon. With elections coming up across Europe, the U.S., Africa, and Asia, fake videos of politicians, fabricated news, and simulated scandals are already spreading.
This isn’t a future threat. We’re in it now.

And no one is stopping it.

Without a strong, independent global authority to identify, block, and sanction the use of deepfakes for mass manipulation, democracy won’t survive.
What we’ll have instead is manufactured consent, disguised as pluralism.

This isn’t a theory.
This is a bomb that has already gone off.

#ArtificialDecisions #MCC

View the post »