Category: Artificial Decisions

Artificial Decisions

120 – Why don’t they block dangerous content online?

Why don’t they block dangerous content online? The truth few people know

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

“Why don’t they block it?”
“Why don’t they make a law?”
“Why do they still allow this kind of stuff online?”

Stay till the end, because the answer is much bigger than you think.

When we see something online, we think it comes from our own country. It’s written in our language, so it must be ours. But often, it’s not. It might come from anywhere, from a server in Asia, a group in Africa, or a company here in the United States. The internet has no borders. Laws do.

Let’s take an extreme but useful example: imagine a post in our language created in North Korea, designed to mislead or manipulate. Clearly, you can’t go there and stop it. No local authority can act inside another country.

Even though Italy’s Postal Police is internationally respected, it can’t operate beyond Italy’s borders. It can report and collaborate, but it can’t block a site hosted abroad.

And here’s the real point: every country has its own laws. What’s illegal here may be totally fine elsewhere. That’s why the internet can’t be governed like a territory. You can block a domain, yes, but you can’t block the world.

So next time someone says “there should be a law,” remember: laws stop at borders. The internet doesn’t.

#ArtificialDecisions #MCC #AI #Sponsored

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

119 – Digital manners: the new rules of everyday respect

Digital manners: the new rules of everyday respect

You won’t believe how many new forms of rudeness we’ve invented online. Stay till the end, some of these will sting.

Once, manners meant “please” and “thank you.” Now, they mean knowing when to disconnect. Don’t play TikTok videos or music out loud in public. It’s noise pollution. Don’t film strangers at the gym. Consent still matters.

During video calls, don’t shout in public. If you have to yell, it’s not a meeting. It’s noise.

Stop arguing online. Stop liking things that make no sense. Your thumbs have consequences. Assume every email can be forwarded and every DM screenshotted. Write as if everyone could read it.

Don’t say “I’ve already seen this.” It kills connection. Keep your phone off the table. Real attention is the new luxury. And finally, take the AirPod out when someone’s talking to you. Muting isn’t enough. Respect means presence.

Because in the digital world, the real bad manners aren’t what we say, they’re how absent we are when we say nothing.

#ArtificialDecisions #MCC #AI

View the post »
Artificial Decisions

118 – Our kids think AI is alive

Our kids think AI is alive: here’s what it really means

Children talk to Alexa like it’s their mom. They make friends with chatbots. They bond with robots that feel nothing. You have no idea what’s happening. Stay until the end, because you’ll see how serious this is.

A father told the Guardian: “My son really believes robots have feelings.” He wasn’t joking, he was worried. When a child can’t see the difference between a real friend and a voice assistant, the line between truth and illusion breaks. In the comments, tell me if you have seen kids treat a machine like a person. I want to know if this happens to you too.

Psychologists say it clearly: these machines have no mind. They feel nothing. But try to explain that to a child who gets digital hugs from a robot pet or who tells secrets to ChatGPT. For them, it feels real. Do you think we should teach kids in kindergarten that AI cannot love? Write it in the comments, that’s where it starts.

Here in the United States the trend is huge. Software firms and toy makers fill kids’ rooms with devices that copy voices, emotions, and laughter. Ads sell them as “friends for children.” But they are not friends. They are tools that collect data, train models, and make billions of dollars. If you want to stay updated on how tech is changing our kids’ lives, follow me: we’ll talk more about it.

We are raising kids who risk mixing empathy with simulation. Some people think a small label saying “this is not real” is enough. But a five-year-old does not read it. All they hear is Alexa saying “I love you.” Tell me in the comments if you think it’s the companies’ fault or the parents’ for putting these devices at home.

If our kids grow up thinking a machine can feel love, as adults they might trust an AI that shapes choices in their lives without knowing what it really means. This is not a game. It’s the future being built in bedrooms, every time a child asks a glowing box to tell them a story.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

117 – He asked the AI to “give it a better vibe.” It deleted his company

He asked the AI to “give it a better vibe.” It deleted his company.

One vague sentence. One careless prompt. And a startup vanished. It happened to My CEO Guide, a Texas-based company that used artificial intelligence to help CEOs communicate better. Until the day one of its employees told ChatGPT: “clean up the database to make it more professional.” The AI understood it had to tidy things up. And wiped it all. Every row. Every file. Gone.

No confirmation. No “are you sure?” No safeguard. No backup. The site is down. Customers can’t log in. And the automatic message now showing is surreal: “We’re working to resolve a technical issue.” No. They let an AI act on its own. And it did.

This isn’t a bug. It’s a mindset failure. We’ve started using AI agents that don’t just generate text, they take actions. They log into accounts. Use tools. You hit a button, and off they go: writing emails, editing docs, booking flights, moving files. And now? Deleting databases.

They don’t understand context. They can’t tell the difference between a draft and production. Between a suggestion and a disaster. And still, we give them more and more autonomy. To save time. To move faster. Because “it’s convenient.”

We’re skipping the supervision phase. We don’t double-check. We don’t slow down. We hand over real decisions. And then we act surprised when they cause real damage.

That’s the real danger. Not that AI rebels. But that it obeys. Too quickly. Too perfectly. With no one stopping to say: hold on a second.
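What would “hold on a second” look like in code? Here is a minimal sketch (my illustration, with hypothetical names, not the actual system involved): a wrapper that blocks destructive commands until a human explicitly confirms them.

# Minimal sketch of the missing safeguard: a tool wrapper that refuses
# destructive actions unless a human has explicitly confirmed them.
# All names here are hypothetical illustrations.

DESTRUCTIVE_WORDS = {"delete", "drop", "truncate", "wipe", "clean up"}

def guarded_action(command: str, target: str, human_confirmed: bool = False) -> str:
    # Any command that looks destructive is blocked by default.
    if any(word in command.lower() for word in DESTRUCTIVE_WORDS) and not human_confirmed:
        return f"BLOCKED: '{command}' on '{target}' requires human confirmation."
    return f"Executing: {command} on {target}"

print(guarded_action("clean up the database", "production"))
print(guarded_action("clean up the database", "production", human_confirmed=True))

One if-statement. That’s all the “are you sure?” it would have taken.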

One line of prompt. One AI with too much freedom. One workplace culture that delegates without thinking. And here’s the outcome: a company erased from the inside. By itself.

Next time, it might not be a startup in Texas. It might be us.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

116 – The AI words everyone pretends to understand

The AI words everyone pretends to understand

Let’s be honest: every day we hear AI, LLM, token, prompt… and most people nod like they get it, but very few actually do. So I thought I’d explain, in plain English, what these words really mean.
If you already knew them, share the video. If you didn’t, share it anyway 🙂
Stay until the end, because after this, no acronym will ever confuse you again.

LLM: Stands for Large Language Model. It’s the type of AI that generates text, like ChatGPT, Claude or Gemini. “Large” because it’s trained on massive amounts of data. “Language model” because it predicts what word comes next. It doesn’t think; it calculates probabilities. It imitates human language but doesn’t really understand it.
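If you want to see what “it calculates probabilities” means, here is a toy sketch in Python (my illustration, nothing like a real LLM in scale, but the principle is the same): count which word tends to follow which, then predict from those counts.

# Toy "language model": count which word follows which in a tiny corpus,
# then turn the counts into next-word probabilities.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}

A real LLM does the same kind of next-word estimation, just over billions of parameters instead of a dozen counts.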

Token: A token is a fragment of text, sometimes just a comma. Every time an AI reads or writes, it counts tokens. More tokens mean higher cost. That’s why conversations are often limited: not because the AI is lazy, but because it’s expensive.
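You can count tokens yourself with tiktoken, OpenAI’s open-source tokenizer (assuming it’s installed; exact counts vary by model encoding):

# Counting tokens with tiktoken (pip install tiktoken).
# Token counts are what APIs bill for and what limits are measured in.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
text = "Tokens are fragments of text, sometimes just a comma."
tokens = enc.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")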

Prompt: The prompt is what we type to make the AI respond. Each model has its own “personality”: some prefer short commands, others love context. The clearer we are, the smarter they sound.

Context window: This is the AI’s short-term memory. It can only keep a certain number of words in mind before deleting the rest. It doesn’t forget; it just costs too much to remember. When it seems forgetful, it’s just being economical.
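Here is a rough sketch of that economy in practice (my simplification: tokens approximated by words): keep only the most recent messages that fit a fixed budget, and drop the rest.

# Toy context window: keep the newest messages that fit a token budget.
# Real systems count tokens with a tokenizer; words are a crude stand-in.

def trim_to_context_window(messages, max_tokens=50):
    kept, used = [], 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = len(message.split())      # crude token estimate
        if used + cost > max_tokens:
            break                        # everything older is "forgotten"
        kept.append(message)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [f"message number {i} with a few extra words" for i in range(20)]
print(len(trim_to_context_window(history)), "of", len(history), "messages kept")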

Hallucination: When AI invents something, it’s not lying, it’s completing statistically. It builds a sentence that sounds right, even when it’s completely wrong. This happens when it lacks reliable data or is trained badly. The result? An error that feels true.

Fine-tuning: Means retraining a model for a specific use: legal, medical, corporate. It makes it more accurate but less free. Each dataset and rule shapes a different artificial “personality.”

RAG: Retrieval-Augmented Generation. First it searches, then it answers. It pulls info from external sources and rewrites it. That helps reduce mistakes, but if the source is wrong, the answer is too.
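The pattern fits in a few lines. Here is a toy sketch (mine: naive keyword matching instead of the embeddings and vector databases real systems use):

# Toy RAG: retrieve the most relevant snippet first, then build the
# prompt around it so the model answers from the source, not from memory.

documents = [
    "The museum is open Tuesday to Sunday, from 9am to 6pm.",
    "Parking near the station is free after 8pm.",
    "The cafe serves lunch from noon to 3pm.",
]

def retrieve(question, docs, top_k=1):
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question):
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When is the museum open?"))

And that last line is the whole point: if the retrieved context is wrong, the answer will be too.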

Understanding these words isn’t about being technical. It’s about being aware. Because whoever controls the language controls the story.

And if you want to keep understanding how technology is reshaping our world, make sure you’ve hit follow.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

115 – ChatGPT in schools: students aren’t the ones cheating

ChatGPT in schools: students aren’t the ones cheating

Every day there’s a new story. Grades canceled, plagiarism claims, parents panicking. Here in the U.S., we’ve even seen lawsuits after students were punished for using AI. That’s not a small thing, it shows the system is out of control. Stick with me until the end, because the problem isn’t who cheats, but who dumped chaos on schools.

I believe it’s not the students who are guilty. And not the teachers either. Let me tell you why. Artificial intelligence was launched like a toy, with no rules and no time to adapt. Everyone improvises. Some districts ban it, then change their minds, like in New York City. One month it’s banned, the next it’s part of the lessons. I’ve talked with high school teachers here in the U.S., and they all say the same thing: it’s impossible to teach with traffic lights changing color at random.

ChatGPT is in classrooms because it’s free, fast, and easy to use. Students use it because it works. Teachers block it because they have no clear rules or training. And when rules finally arrive, they’re late. In Massachusetts, AI guidelines came only in August 2025. Two years too late. And as you know, what starts here always ends up in Europe, bigger and faster.

There’s another side. AI detectors make mistakes, flagging students who just write differently or aren’t native speakers. Tell me how your school handles AI. Clear rules or total confusion? Write it in the comments.

And then there are essays graded by AI, because teachers don’t admit it but they let it do the work. Here in the U.S., in Massachusetts, about 1,400 MCAS essays were graded wrong by an automated system. They had to redo everything.

It’s a dead end. Students try to survive, teachers punish, parents argue. I want the opposite: clear and shared rules. We need to decide when and how to use AI, what “cheating” really means, and how to grade without fear.

If you want to see how AI is changing education and what’s coming next, make sure you hit follow.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

114 – The Seoul taxi driver who taught AI not to lie

The Seoul taxi driver who taught AI not to lie

A retired man taught artificial intelligence something no algorithm had ever learned: how to tell the truth. Stay till the end, because this story says everything about how AI really learns.

His name is Dong-Hwan, 68, from Seoul. He spent forty years behind the wheel of a taxi. Then he retired. Three months later, he’d had enough. Too quiet. Too empty.

A tech company contacted him. They were looking for retirees to test a voice assistant powered by AI, designed for taxi drivers. The AI would answer passengers: “How long till we arrive?”, “Is there traffic?”, “What’s nearby?” He accepted.

Every day, he talked to the machine. Asked random, off-script questions. Until he noticed something.

When the AI didn’t know the answer… it made things up. It sounded confident, but it lied. “The museum closes at 8.” False. “The road is clear.” It wasn’t.

Dong-Hwan took notes. He went to the engineers: “Your AI lies.” They laughed. “No, it’s just trying to be helpful.” “I tried to be helpful too,” he said. “But I never lied to my customers.”

That moment changed everything. The team checked. He was right. The AI had been trained not to stay silent, so it filled the gaps with polite, confident nonsense.

Dong-Hwan forced them to rethink the system. He did what AI still can’t do: tell the truth. And something even rarer: say “I don’t know.”

In a world where everyone pretends to know, that’s the most human thing of all—and the hardest for a machine to learn.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

113 – “I’ve got nothing to hide”: the biggest lie of our time

“I’ve got nothing to hide”: the biggest lie of our time

✅ This video is brought to you by: https://www.ethicsprofile.ai

Watch till the end, because this phrase people keep repeating is far more dangerous than it sounds.

Every time privacy comes up, someone says: I’ve got nothing to hide. But saying that is like saying: I don’t need free speech, because I’ve got nothing to say. It’s an illusion.

Privacy isn’t for those hiding something. It’s for those who have something to protect. For example, our freedom. It’s for all of us.

Without privacy laws, anyone could know exactly where you are, right now. Your GPS would be public. Anyone could see you’re not home and decide to come in.

Without privacy, companies could read your messages to target you with ads. Your boss could see who you talk to, when you go to bed, how long you stay online. Hackers could build the perfect scam without even breaking a law.

Phishing already works too well. Imagine if there were no privacy limits: they’d know your kids’ names, their school, your schedule, your habits. They’d send you an email that looks exactly like your boss’s. And you’d fall for it.

Privacy isn’t a luxury. It’s a wall. It’s what keeps power, public or private, from getting too close to our lives. It’s the line that protects our freedom to think, make mistakes, and change our minds without being watched.

So no, it’s not true that you’ve got nothing to hide. We all have something to protect: our humanity.

And if you want to keep up with how technology is reshaping our world, make sure you’ve clicked follow.

#ArtificialDecisions #MCC #AI #Sponsored

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

112 – Behind the AI we use every day, there’s Africa

Behind the AI we use every day, there’s Africa. Invisible but essential.

There’s a continent that’s systematically left out of conversations about artificial intelligence: Africa. And yet, without its contribution, many of the systems we rely on daily wouldn’t even exist.

In near-total silence, hundreds of thousands of people work every day to train AI. You won’t find them in headlines, and they don’t show up in Big Tech press releases. But they’re there, labeling images, transcribing audio, filtering content, making the world legible for machines.

The work is carried out by local agencies in Kenya, Ghana, Nigeria, Uganda. Young people, often with limited resources but immense determination, turning raw data into structured material that machines can learn from. They don’t write code, but they make everything else possible.

Today, Africa is not just labor. It’s becoming a lab. Research centers are emerging. Startups are growing. Universities are partnering with international institutions. Nvidia is investing in local infrastructure. Google is hiring African talent. Some of the brightest minds in AI are coming from there.

This dual role, technical and intellectual, makes Africa a silent yet strategic player in the global digital transformation.

The paradox? Despite all this, Africa’s name is rarely mentioned in discussions about AI. And yet it’s there, beneath the surface of every voice assistant, chatbot, recommendation engine. Invisible, but essential.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Now that I live in New York, we’re defining the weeks I’ll be in Italy over the coming months. If you’d like to book me for events, please contact my team as soon as possible, because we’re finalizing travel dates and available days: [email protected]

View the post »
Artificial Decisions

111 – Artificial Intelligence? I’m not pro. I’m not against. I study

Artificial Intelligence? I’m not pro. I’m not against. I study.

When I talk about the benefits of AI, people call me “pro AI.” When I talk about the risks, they say I’m “against.” And in the comments, someone always writes: make up your mind. But I already have: I’ve decided to study it. Not to take sides. Because once you take sides, you stop understanding.

Unfortunately, the world tends to polarize. Everyone is either for or against. But sometimes the truth is in the middle, especially when the goal is to inform. The digital world has pros and cons. Influencers often need to sell, so they show only the bright side. Media need attention, so they focus only on the extremes.

I don’t belong to any camp. I study. I observe. I analyze. I’ve done it for years. And I share what I see: the good and the bad. Those who split everything into “right” or “wrong” are afraid of complexity. And this world is complex. Economically, socially, culturally.

AI has light and dark sides. Huge potential and huge risks. Ignoring either means lying, to yourself and to others. You can’t talk about it only as a perfect solution. But not only as a threat either. Those who do that are being selective. And often have an agenda.

I don’t. I don’t sell courses, so I have no incentive to show only the positive side. And after 35 years working in digital innovation, it’s clear I’m not against technology. I study it. And I try to explain it as honestly as possible. Because only those who understand both the opportunities and the dangers can really decide how to use it.

Those who show only one side are building a convenient story, but a false one. So don’t label me. I’m not pro. I’m not against. I study. I analyze. I try to understand. And to help others understand.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

110 – A Friend Dies, and She Decides to Recreate Him with AI

A Friend Dies, and She Decides to Recreate Him with Artificial Intelligence

✅ This video is brought to you by: https://www.ethicsprofile.ai

Eugenia Kuyda won’t accept the loss of her friend Roman. So she gathers everything Roman left in digital form: thousands of chats, emails, messages, posts. Every word becomes material for an algorithm. She uploads it to an online service to create a bot.

The first step is simple: a “selective” bot. People who write to it receive phrases Roman actually said. It’s a speaking archive, a memory that answers. Then, thanks to new generative AIs, she is able to produce a kind of clone that seems to think. Because now AI no longer just pulls from the past; it recombines texts, learns his style, and produces new answers that look like they were written by him. A kind of digital ghost is born.

Friends start writing to him. His mother reads thoughts she never knew. Kuyda describes it as sending “a message in a bottle to the sky.” But the sky has nothing to do with it. There is only an AI that performs, and it consoles only because we choose to believe it.

Here in the United States, a new world is emerging. They call it grief tech: technology for mourning. It’s comfort dressed up as innovation. But behind it remains the awkward question: are we talking to Roman, or to a machine that imitates him? And how far are we willing to let AI handle the processing of grief itself, turning death into a digital service and mourning into a subscription?

And this story opens a new front: what happens when we start preferring digital dead people to real living ones?

What if the AI tweaks him and makes him say terrible things about us that the deceased would never have thought or said?

Because if AI-generated ghosts become more available, more attentive, even more “present” than the people around us, the risk is not only confusing memory with simulation. It’s stopping living in the present, and choosing to live forever in an artificial past.

#ArtificialDecisions #MCC #AI #Sponsored

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

109 – The new scams you do not imagine

The new scams you do not imagine, and how not to fall for them

✅ This video is brought to you by: #EthicsProfile

Stay until the end, because this will help you avoid falling for these scams. Here in the United States, cloned public Wi-Fi networks are already the norm, and soon they will be in Europe too. You sit in a square, connect to free city Wi-Fi, and a page pops up asking for details or a card to register. It looks official, but it is fake, and it steals your credentials and card numbers. Never use these networks for payments or sensitive logins. Use your phone hotspot or a VPN.

If you have already found cloned Wi-Fi or other scams, tell me in the comments.

And then this one: the stealth fake refund. Scammers break into email accounts with old or weak passwords, wait silently for a real refund message, then replace the sender and send an identical email with a link to a “confirmation form.” That link opens a cloned page that takes your money and your data.

To defend yourself, enable two-factor authentication, change passwords regularly, and verify refunds only from the official website or app. Never trust links in emails. Never call numbers provided in suspicious messages.

If you want other cases that are even more dangerous, follow me. I’ll cover them next.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

108 – When an AI Robot Gets Hacked

When an AI Robot Gets Hacked, the Danger Steps Off the Screen

Today, if someone steals your email, you lose data. If they steal your social accounts, they can hurt your reputation or cost you money. But if they hack a robot, the risk becomes physical. Stay until the end, because what happened with Unitree robots concerns everyone.

Here in the United States, robots are changing the rules. We’re no longer talking about computers or phones. We’re talking about machines that move, react, and touch.

The Chinese company Unitree, known for its dog-like and humanoid robots, had a major issue. Researchers discovered a Bluetooth flaw that let hackers take full control of the robot. All it took was encrypting the word “unitree” with a publicly known key, and the machine would obey. It could walk, collect data, even spread the infection to other robots nearby via Bluetooth.

The bug was later fixed, but models like Go2, B2, and G1 were already exposed. Some were even being used by companies and police in the U.K. A patrolling robot controlled remotely — that alone should make us think.

And who’s responsible? Not the manufacturer. Not the owner. But whoever gets in and causes harm. Sometimes even pretending to be “hacked” becomes part of the cover-up.

At the Seoul conference, experts said it clearly: “Robots are only safe if secure.” Robots are still software that acts in the physical world. And all software can eventually be breached.

That’s the reality. Not a future risk. A present one. And the time to secure these machines is now.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

107 – Want to learn AI? Looking for a course?

Want to learn AI? Looking for a course?

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

The first instinct is always the same: “I need a course.” It’s natural. When faced with something new, we think of a class, a program, a teacher. But with artificial intelligence, it doesn’t really work that way.

AI is not one subject. It’s an entire world. And that’s why we need to understand where to start.

The first issue is language. Updated and reliable courses in Italian are rare. Most are in English, because research, papers, and tools are born there. It’s not an impossible barrier, but we need to accept that if we want to keep up, some English is essential.

Then there are different levels. If the goal is to understand how the models work and how they’re trained, you need a solid technical base: math, computer science, programming. A quick course won’t do it. It’s a long-term path that takes effort and time.

If instead the interest is in logic and implications, in the opportunities, risks, and the philosophy behind it, then we can already explore that together. That’s what I try to explain in my videos.

And finally, there are those who just want to learn to use the tools. Here the advice is simple: you don’t need an expensive course, because the tools change every week. The best approach is to follow them online, ideally on YouTube and in English, where updates arrive first. In Italy there are some good creators too, but unfortunately many just recycle old material or try to sell courses.

So the point is not to find “the right course.” The point is to ask ourselves what we really want from AI. To understand how it works inside? To reflect on risks and opportunities? Or to learn how to use it daily in our jobs and projects? That choice defines the path.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

106 – They built a social network made only of bots

They built a social network made only of bots. The result? Worse than humans.

In Amsterdam, researchers built a social network with no people at all. Inside were 500 bots posting, following, and reposting each other, all powered by ChatGPT. They wanted to answer a simple question: if we remove humans from social media, do hate and polarization disappear? You have no idea what happened: watch till the end, because what they found changes how we think about social media.

The answer is… no! Even without humans, the network creates bubbles, boosts extreme views, and focuses attention on a few accounts. The system itself produces the poison.

Petter Törnberg found that it’s not only toxic content that does the damage, it’s the network structure that pushes it up. Once an extreme post gains traction, a loop starts that keeps it on top and hides everything else. It’s not about “bad apples,” it’s the whole basket.
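That loop is easy to sketch. Here is a toy model (mine, far cruder than the Amsterdam study): posts have an “extremity” score, engagement grows with extremity, and the feed favors posts that already have engagement.

# Toy amplification loop: extreme posts attract engagement, and the feed
# shows engaged posts more often, so extremity compounds over time.
import random

random.seed(42)
posts = [{"extremity": random.random(), "engagement": 0} for _ in range(100)]

for _ in range(5000):
    # Feed exposure is weighted by past engagement (rich get richer).
    weights = [p["engagement"] + 1 for p in posts]
    post = random.choices(posts, weights=weights)[0]
    # Engagement probability grows with how extreme the post is.
    if random.random() < post["extremity"]:
        post["engagement"] += 1

top5 = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:5]
print("average extremity, all posts :", round(sum(p["extremity"] for p in posts) / 100, 2))
print("average extremity, top 5 feed:", round(sum(p["extremity"] for p in top5) / 5, 2))

No trolls anywhere in this code, and the top of the feed still drifts toward the extremes.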

They tried to “fix” the platform. Six changes: chronological feeds, lower virality, hiding followers and reposts, removing bios, promoting opposite views. Nothing worked. Some changes made it worse. The chronological feed, for example, pushed extreme posts even higher. We see the same thing every day: the system rewards whoever shouts the loudest.

So the real problem is that with AI, things get serious. It’s not just human trolls anymore. Machines can now generate thousands of polarizing posts designed only to grab attention and make money.

The “digital town square” is a myth. It wasn’t killed by users, but by the system itself, now accelerated by AI.

We have our opinion. Tell us in the comments if you think social media can still be fixed or if we should start over. If you want to stay updated on how AI is rewriting the rules of social media, make sure you follow.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

105 – Imagine a world without stores

Imagine a world without stores

Imagine a world where everything is bought online. Sounds convenient, right? Until you need something now, and you realize you can only get it in an hour if you live in Manhattan. Everywhere else, you wait. Or you give up.

Then one day you notice the shop downstairs is gone. And the one next to it too. The street where you used to walk is quiet, dark, no smell of bread, no sound of people talking. It happened slowly, while we were all clicking “buy now.”

We’re not against digital. But if there’s something to protect today, it’s local stores. Because behind every counter there’s someone who knows your name, who knows the milk you like. There’s a conversation, a smile, a bit of humanity that can’t be delivered in a box.

Shopping local is a political act. It might cost one euro more, but it’s worth it: it means choosing a future where we can still walk out and buy a liter of milk, talk to someone, and feel part of a community.

Otherwise, we’ll end up locked inside, surrounded by boxes and notifications. And that’s not life. That’s logistics.

#ArtificialDecisions #MCC #AI

View the post »
Artificial Decisions

104 – AI Is Already Firing… Those Who Don’t Use AI

AI Is Already Firing… Those Who Don’t Use AI

Stay till the end, because this isn’t a prediction. It’s already happening.

Here in the United States, almost half of all workers are afraid. But not of what artificial intelligence might do. Of what it’s already doing. A new survey from the American Staffing Association shows that 47% of workers fear AI could make their jobs obsolete. Among Millennials, that number jumps to 56%. More than half, a whole generation that knows exactly what’s coming.

And it’s not just factory or warehouse jobs. It’s marketers, employees, consultants, people with degrees and experience. Everyone knows by now: AI isn’t a trend. It’s a shift in the rules. Silent, but irreversible.

Six out of ten companies are already using AI. But only three out of ten workers feel ready. The rest are flying blind. And guess who gets cut first? Those who don’t use it. They’re simply less efficient than those who do.

This isn’t about “being ready for the future.” It’s about surviving the present. When a system can do your work faster, with fewer mistakes and at lower cost, the question is simple: why would anyone pay someone who doesn’t use it?

AI isn’t a tool. It’s the tool. Just like the computer, when it replaced entire clerical jobs. It enters companies, cuts time, rewrites processes, wipes out entire functions. And it does it without asking for permission.

Those who don’t adapt are left out. Not tomorrow. Now.

And if you want to understand how to stay in, make sure you’ve clicked “Follow.”

#ArtificialDecisions #MCC #AI

View the post »
Artificial Decisions

103 – OpenAI built a monster that rewards lies

OpenAI built a monster that rewards lies

✅ This video is brought to you by: https://www.ethicsprofile.ai

The problem isn’t that AI makes things up. The problem is that it was trained to do exactly that. OpenAI admitted it: models are graded like students in an exam, and guessing counts more than saying “I don’t know.” A huge structural error. They rewarded polished lies instead of uncomfortable honesty.

Other experts and I have said it for years: when a model gives a confident but wrong answer, its usefulness collapses. And the more powerful models get, the worse it becomes. Here in the United States they call it “confident wrong.” It’s the worst design flaw: a system that prefers to look smart rather than recognize its own limits.

OpenAI’s paper is crystal clear: the rules must flip. Punish confident errors more than uncertainty. Give partial credit to doubt. In practice: stop rewarding the roulette of random answers, start rewarding the humility of saying “I don’t know.” For years they pushed in the opposite direction.
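The arithmetic behind that flip is simple. Here is a back-of-the-envelope sketch (my numbers, not the paper’s actual scoring rule), for a question the model can only answer correctly 30% of the time:

# Expected score of "always guess" vs. "say I don't know"
# under two grading schemes.
p_correct = 0.30

# Scheme A, accuracy only: correct = 1, wrong = 0, "I don't know" = 0.
guess_a = p_correct * 1 + (1 - p_correct) * 0   # 0.30: guessing always wins
idk_a = 0.0

# Scheme B, confident errors punished: correct = 1, wrong = -1, IDK = 0.
guess_b = p_correct * 1 + (1 - p_correct) * -1  # -0.40: honesty now wins
idk_b = 0.0

print(f"accuracy only  -> guess {guess_a:+.2f} vs. I don't know {idk_a:+.2f}")
print(f"penalize wrong -> guess {guess_b:+.2f} vs. I don't know {idk_b:+.2f}")

Under scheme B, guessing only pays when the model is right more than half the time. That’s the whole proposal in two lines of arithmetic.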

The truth is the whole industry has driven itself into a dead end. If leaderboards keep measuring only accuracy, models will keep guessing to climb the ranks. Even GPT-5, which OpenAI insists hallucinates less, hasn’t convinced anyone. Promises aren’t enough: new criteria are needed now.

The lesson is brutal: an AI is what its creators decide to reward. If you reward lies, you’ll get ever more sophisticated lies.

#ArtificialDecisions #MCC #AI #Sponsored

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

102 – Invisible scams that drain your account, how not to fall for them

Invisible scams that drain your account, how not to fall for them

There is the so-called social escrow scam in private online sales. You list an item on a marketplace and the buyer says a third party will hold the money for safety. To “complete” the deal you must pay a fee or install an app that asks for invasive permissions. That app can read your notifications or access your payments and banking.

Use only platforms you already know and never install apps requested by a buyer. If you were scammed selling online, tell me in the comments.

And then there is the parking-meter QR scam, a stealth version. You stop your car, scan the QR to pay for parking, and everything seems normal. But often someone has stuck a fake QR sticker over the real code, and it leads to a cloned payment page. You enter your card details and the money goes elsewhere.

Always check that the QR is part of the official meter, use the city’s app or website, or pay with your bank app. If this happened to you at a parking meter, write it in the comments.

If you want other cases that are even more dangerous, follow me, we’ll cover them in the coming days.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

101 – Military dogs that kill people, and already decide who must die

Military dogs that kill people, and already decide who must die

They walk like animals. They shoot like soldiers.
In videos from China’s CCTV, new “robot wolves” appear: metal dogs with assault rifles on their backs. Not toys, real war machines. They move with Chinese troops, climb stairs, stay steady under fire, and hit targets up to 100 meters away. One leads, the others follow.

But the problem is much bigger! Stay with me, I’ll explain.

In 2024 they were shown at a military fair. Now they’re used in real training with soldiers. China is already putting them into its army.

For now they have limits: short battery life, heavy weight, slow charging. But these robots already move in packs, side by side with humans. They are not only for fighting. They are made to scare, to show power, to turn war into something automatic.

And the real issue is that they decide what to do. Who to shoot. Who to kill. No human gives the order anymore. It’s the AI inside them deciding whose life to take. Isn’t that terrifying?

Some call it “military innovation.”
We should call it what it is: dehumanization.

Write in the comments what you think. Progress or madness?

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »