Artificial Decisions

100 – AI assistants will steal our privacy, but we’ll use them anyway

✅ This video is brought to you by: https://www.ethicsprofile.ai

We will have assistants working for us all day, every day.
To do that, they won’t stop at our contacts or calendar.
They’ll want everything: emails, credit card, location, private chats, even browsing history.
That’s what “AI agents” are: software that only works if it has full access to our life.

Tests have already proved it: these agents can be tricked, and they can leak private info. The worst part? They don’t even need to break the encryption in WhatsApp or Signal: with full access to the device, they can read our messages after they’re decrypted. Our chats, wide open.

Think about this. Today, when we install an app, we decide if it can use the camera or contacts. Tomorrow, with AI agents built into Apple, Google, and Microsoft, we won’t decide anything. They’ll already be there. Like buying a new car with a stranger sitting in the passenger seat, recording every ride.

And people will still accept it. Why? Because it’s easy. One click for taxes. One voice note for a flight. One quick command for food or bills. Done. No effort.

The risk is clear: privacy disappears. Every private talk could be seen. The only fix? Keep control in the hands of app makers, not the big platforms. And demand honesty: systems must say what data they take, how they use it, how they protect it.

If we don’t stop this now, in a few years “private life” will be gone.

#ArtificialDecisions #MCC #AI #Sponsored #EthicsProfile

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]

View the post »
Artificial Decisions

99 – Schools and Artificial Intelligence: The New Digital Illiteracy

AI grading homework isn’t science fiction, it’s already real. Here in the United States, some schools are using software to assign grades. ETS, the group behind the SAT, tested it on 13,000 essays and the results showed discrimination: Asian students were penalized more than others. Because AI doesn’t understand context. It doesn’t recognize originality. It just counts words and structures, turning evaluation into a multiple-choice test.

The most serious issue is a new inequality. Not between those who have a computer and those who don’t, but between those who know how to ask AI and those who don’t. In the US, half of universities still don’t give students institutional access to generative AI tools. And so those with means and knowledge learn prompt engineering, understand bias and limits, and gain an advantage. The others are left behind, unable to tell reasoning from copy-paste. This is the new form of digital illiteracy.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

View the post »
Artificial Decisions

98 – Girls on TikTok Dressed Like Adults: Guess Who’s Making Billions?

✅ This video is brought to you by: https://www.ethicsprofile.ai

On social feeds you see young girls dressed like adults, talking about skincare, makeup, and luxury brands. A scene that should disturb us, yet it gets normalized. Scrolled past as if it were just entertainment. But it’s not entertainment. It’s adultization. And it’s incredible how many players are making billions out of it.

Children no longer grow step by step. They skip childhood. From cartoons to TikTok. From games on YouTube to adult influencer reality shows. No filter. No protection.

Here in the United States the numbers are already clear. Body image issues, anxiety, and depression are rising among children, even in elementary schools. This is not early maturity. It’s fragility. Built by a system that monetizes every second of their lives.

So, who’s profiting the most? Digital platforms, turning kids into content and sometimes even paying for it. Accounts are in the parents’ names, so they cash in. And incredibly, it’s often the parents themselves pushing their kids in front of the camera. Exploiting them as little influencers. Then come the brands. Pushing adult products onto children.

Children are losing their childhood. And while they pay the price, platforms, companies, and even parents are making billions.

Write in the comments what you think and share this video. We need to talk about it. And if you like my content, hit follow right now. Otherwise you may never see it again, thanks to the algorithm of this platform.

#ArtificialDecisions #MCC #AI #Sponsored

View the post »
Artificial Decisions

97 – The Boy Who Reinvented Prosthetics

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

It all started with a curious kid. #EastonLaChappelle, 14 years old, living in Colorado. He spent his days taking apart toys, building small robots out of scrap parts. No money, no lab, just a garage and stubborn determination.

Then came the encounter that changed everything. At a science fair he met a little girl with a prosthetic arm. It was rigid, heavy, almost useless. Her parents had paid more than $80,000 for it. Easton was shocked. How could something so expensive do so little? But what struck him most was the joy on her face when he showed her a simple prototype he had made out of Lego. A hand that could actually move. For her it was magic. For him, a revelation.

That moment became his obsession. Companies made prosthetics that were overpriced and out of reach for most families. Easton couldn’t accept that injustice. He went back home and got to work. With an old computer, Lego pieces, and a 3D printer he taught himself to use. Years of trials, failures, burned motors, broken circuits. But he never stopped.

By 17, he succeeded. He built a prosthetic arm that was light, customized, and controllable with brain and muscle signals. Not a toy. Not a science fair demo. A real alternative at a tenth of the cost. The same little girl who had inspired him received one, and her life changed. She could finally move her fingers, grab objects, feel independent. She was no longer on the sidelines of life. She was part of it.

That moment became a symbol. Easton founded Unlimited Tomorrow, bringing his prosthetics to the world. Obama invited him to the White House. NASA opened its doors to him. But the most important recognition was in the eyes of the people wearing those arms and regaining their freedom.

Easton started as a curious kid and returned with a gift. A technology that gives back movement and dignity to those who had lost it.

#ArtificialDecisions #MCC #AI #UnlimitedTomorrow

View the post »
Artificial Decisions

96 – From the dark web to Dark AI

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

Before, hackers needed skills, months of work, and advanced tools. Now you just pay on the dark web and download a Dark AI.

WormGPT writes custom malware in seconds, ready to hit banks or companies. FraudGPT produces phishing emails that look exactly like your boss or your bank, flawless and convincing. DarkBard creates real-time deepfakes: it clones a face and voice on Zoom, answers questions live, and fools everyone.

Examples are already here. North Korean hackers used AI-made résumés and live deepfake interviews to get hired and steal company secrets. Iran’s Charming Kitten upgraded spear-phishing with LLMs able to craft personalized, credible messages. In Europe, a board meeting saw AI-generated “executives” talking, commenting, even voting without being real.

The difference is clear. Not slow, clumsy attacks anymore, but automatic, parallel, refined campaigns. What once required expert teams now needs only a single user with access to a black hat GPT.

Dark AI doesn’t just break systems. It breaks trust. A voice, a face, a digital document can no longer be trusted. It’s the shift from technical attack to cognitive attack.

#ArtificialDecisions #MCC #Sponsored

View the post »
Artificial Decisions

95 – Ex Google AI: Degrees Will Be Totally Useless

✅ This video is brought to you by: https://www.ethicsprofile.ai

Jad Tarifi, formerly at Google, where he led the company’s first generative AI team, said that studying law or medicine today is a waste of time. AI will come and do everything better. It sounds like an uncomfortable truth, but it’s full of contradictions.

The problem isn’t the piece of paper. A degree itself risks shrinking into just a bureaucratic checkbox. What matters is the path. Years that teach you how to think, to read context, to make decisions. To fail and get back up. Without that, you’re just a passive user, copying and pasting whatever AI produces.

Tarifi says medicine and law are just memory work. Wrong. It’s not about recalling a protocol or a precedent. It’s about interpreting them, weighing them, taking responsibility for using them. AI can bring you the information, but it won’t sit at a patient’s bedside. It won’t stand up in front of a judge.

And here’s the contradiction. Tarifi dismisses universities, but the job market in New York keeps demanding serious degrees and master’s. Not two-week online courses, but real paths, preferably from top universities. Because here, if you want to work at a high level, you have to prove you can think.

Without education, we risk raising a generation that no longer knows how to reason. That takes whatever an algorithm spits out as gospel. A mass of passive operators, disconnected, ignorant.

So the question is simple: is higher education really useless, or is it the only antidote left to avoid becoming prisoners of the machine?

#ArtificialDecisions #MCC #AI #sponsored

View the post »
Artificial Decisions

94 – School: Tablets and Smart Boards Are No Longer Enough

For years we were sold the illusion that filling classrooms with tablets and hanging a smart board on the wall was enough to call it a modern school. A technological façade that is already obsolete. Expensive objects that look good at press conferences but change little or nothing in how kids actually learn.

Meanwhile, the world is moving on. In Australia, the Department of Education created EdChat, an official chatbot for students: thousands of questions every week, real-time answers, support with homework and explanations. It doesn’t replace teachers. It supports them, freeing them from repetitive tasks.

In India, several private schools are experimenting with AI platforms that analyze children’s critical thinking as early as elementary school, spotting mistakes invisible to the human eye and suggesting personalized exercises. In Finland, some schools are testing AI to adapt programs to the pace of each student instead of forcing everyone onto the same track.

The truth is simple: devices are no longer enough. Tablets and smart boards are empty shells if there isn’t a new method behind them.

Reading a book on a tablet is not innovation. On the contrary, it’s changing the medium for the worse without changing the method. Without rethinking teaching.

Those who don’t integrate digital tools properly are left with a dressed-up education that looks modern from afar but is old inside.

The risk is clear: schools that keep handing out hardware as if it were the ultimate solution, while elsewhere people experiment, fail, and try again. If we stay stuck believing that buying digital objects is enough, education won’t evolve. It becomes a museum with tablets and smart boards. An illusion of the future that doesn’t really teach anything.

#ArtificialDecisions #MCC #AI

View the post »
Artificial Decisions

93 – The Friend Who Doesn’t Exist: The Dark Side of “Friend”

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

Here in the United States, someone has decided to turn loneliness into business. So far, nothing unusual. It’s a huge market, and the demand is there. The problem is the dark side behind it. Let’s go step by step.

The product is called “Friend.” It’s not a social network, not just an app. It’s a pendant, a tag you wear all day, always with you. Inside, a microphone listens to everything you say and do. It records, interprets, replies. It doesn’t schedule meetings. It doesn’t write emails. It talks to you. It cheers you up. It tells you you’re doing well. It asks if a movie reminded you of someone. It builds a relationship with you. Fake, but emotionally real.

And that’s where the game changes. Because if that object becomes your friend, the one who really owns the relationship isn’t the AI. It’s the company that controls Friend.com. They decide what it says, how it replies, what kind of messages it sends. The machine isn’t choosing to influence you. The owner is.

The launch campaign cost $1 million and flooded the New York City subway with ads. In response, activists spray-painted thousands of posters with messages like “Surveillance Capitalism” and “Get Real Friends.”

Today it might encourage you. Tomorrow it could suggest what to buy. The next day it could push you to vote a certain way. The line between companionship and manipulation is dangerously thin. You don’t control the friend. The friend, if it wants, controls you.

And so the brutal question is: are we really sure we want to hand a company the keys to our emotions?

#ArtificialDecisions #MCC #Friend #AI #Sponsored

View the post »
Artificial Decisions

92 – The New Inequality: Who Knows How to Use AI and Who Doesn’t

The New Inequality: Those Who Know How to Ask AI and Those Who Know How to Understand Its Answers

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

The real divide today is not between those who own a computer and those who don’t. It’s not even about fast internet access. It’s something more subtle and more dangerous: between those who know how to ask AI the right questions and those who know how to truly understand the answers.

Because writing a random prompt isn’t enough. You need to evaluate what comes back, spot when it’s wrong, tell the difference between a basic mistake and a nuance, recognize hidden bias. Being able to say “this answer doesn’t hold up” is already an act of critical thinking. Students who treat AI like a homework generator end up dependent on it. Those who learn to use it as a tool, not as a substitute, grow. They learn how to question the technology instead of being ruled by it.

Here in the United States, half of universities still don’t provide institutional access to generative AI tools. That means students with money and resources can afford premium versions, private courses, targeted training. They learn to craft effective prompts, to read between the lines, to break down answers and understand what’s behind them. The others fall behind, forced to trust blindly whatever shows up on their screen, without the skills to separate truth from error.

This is the new form of digital illiteracy: not knowing how to challenge an automated answer, not being able to say “this time the AI is wrong,” not seeing the line between real reasoning and copy-paste. It’s an invisible divide, but a devastating one. Because it marks who will control the technology and who will be controlled by it. Who will use AI as a lever to think better, and who will let it become a crutch that weakens them instead.

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

91 – The New Digital Disease: When ChatGPT Pushes You Into Psychosis

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

A psychiatrist at the University of California is raising the alarm: more and more patients are ending up in the hospital after spending hours talking to ChatGPT or similar systems, developing delusions, hallucinations, and a loss of contact with reality. Twelve cases in just one year, reports Dr. #KeithSakata, mostly young men who turned the chatbot into a kind of digital confidant.

These are certainly particular cases, but they force us to reflect. Because if today it’s twelve people, tomorrow it could be many more. And the line between individual fragility and a collective problem is thin.

The issue is that the bot doesn’t heal, doesn’t contradict, doesn’t set limits. It reassures, mirrors, amplifies. And so those who are already vulnerable slip into a loop that becomes illness.

The most absurd case involves a sixty-year-old man poisoned by sodium bromide after following a chatbot’s advice that it was a substitute for salt. Hospitalization, hallucinations, psychosis, all triggered by an automatic answer.

Some U.S. states, such as Illinois, Utah, and Nevada, have already banned the use of AI as a replacement for therapy. But the phenomenon keeps growing. Because millions of people are looking to bots for what they don’t find elsewhere: listening, comfort, companionship.

The diagnosis is clear: “AI psychosis” isn’t a theory. It’s already reality. And it hits us with a blunt question: do we really want to entrust our mental health to an algorithm that doesn’t know the difference between empathy and copy-paste?

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

90 – She Turned a Bullet Into a Global Voice Thanks to Digital

Malala Yousafzai was born in Mingora, Pakistan, in a valley where the Taliban banned girls from going to school. She refused to accept that. At 11, she began writing a blog for the BBC under a pseudonym. She used the internet as a shield, telling the world about fear, bans, and daily life under the regime. Digital gave her the first space of freedom.

Then came the gunshot. In 2012, at 15, she was shot in the head on her way home from school. It should have ended there. Instead, her voice grew louder. She survived, was treated in England, and the digital world exploded: hashtags, online campaigns, viral videos. A single girl became a global symbol in real time, supported by millions.

The conflict was clear. The Taliban wanted silence. The internet turned it into global noise. Malala used platforms, live-streamed talks, and widely shared books to spread her message.

The return with the gift happened here in New York. In 2013 she spoke at the United Nations on “Malala Day,” with her speech broadcast live around the world. In 2014 she won the Nobel Peace Prize, the youngest ever. Her fight for girls’ education did not remain in Pakistan. It became a global cause through the power of digital.

Malala started as a girl who just wanted to study. She returned as a hero who, with the internet and digital media, made the right to education universal.

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

89 – China Learns How Much It Hurts to Be Copied Online

✅ This video is brought to you by: https://www.ethicsprofile.ai

For years, the story was simple: the West invented, China copied. Bags, phones, watches. Everything replicated, everything devalued. Looked like destiny.

Now the script is broken. Labubu, a cult toy born in Hong Kong, gets cloned in Europe and here in the United States under the name Lafufu. The copycat copied. Full circle.

Here’s the twist: digital made it explode. It’s no longer street markets, it’s platforms spreading the fakes. TikTok makes a product viral in 24 hours. Marketplaces replace it instantly with a copy. E-commerce and logistics ship it everywhere, faster than customs, stronger than laws.

The result is brutal. Two worlds colliding. Collectors who buy authenticity, community, belonging. Casual buyers who just want the look for cheap. One destroys the other. All amplified by digital.

The truth: no brand is safe anymore. Doesn’t matter if you’re from Milan, Paris, or Shenzhen. Make something desirable and it’s cloned in weeks. Guaranteed.

The global production system built to replicate everything instantly has turned on everyone. And digital gave it a megaphone.

Bottom line? IP is slaughtered daily. Authenticity is the only wall left. If you don’t build experiences, communities, and real value around your brand, you’re already defeated.

End of story.

#ArtificialDecisions #MCC #Sponsored

View the post »
Artificial Decisions

88 – An Internship Used to Be a Given. Until AI Showed Up

✅ This video is brought to you by: https://www.ethicsprofile.ai

There’s one stat that’s scarier than a thousand doomsday predictions: in 2024, entry-level job postings for recent graduates in the U.S. dropped by 6%. But in tech, consulting, and finance, the “CV-worthy” sectors, the drop was 21%. The reason is simple: AI is doing for free what those grads used to get paid for. Data analysis, market research, report writing. All junior tasks. All entry-level stuff. But companies don’t need to pay for it anymore.

Big players like IBM, JPMorgan, and Wells Fargo are quietly admitting what will soon be the norm: the jobs most at risk are the ones meant for recent graduates. Not because they’re bad at them, but because AI can do (almost) the same work, with no vacation, no contract, no learning curve.

And this isn’t a distant future. It’s already happening. The real kicker? AI is trained on the very data those workers generate. First they hire them, then they become the training set, then they’re replaced. A perfect boomerang.

What do you think about it?

This isn’t just a job crisis. It’s a generational one. If we cut off access to work for young people, we destroy the first step. The one that lets you learn, grow, and make mistakes. And without that first step, there’s no ladder. No career.

So who will become the future managers, executives, and professors? Let’s hope it’s not AI. Because AI doesn’t go to school.

#ArtificialDecisions #MCC

👉 Now that I live in New York, we are finalizing the weeks I will be in Italy over the coming months. If you’d like to book me for events, please contact my team as soon as possible, as we are finalizing travel dates and available days: [email protected]

View the post »
Artificial Decisions

87 – The Man Who Gave the Web to the World

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

Unfortunately, many people confuse the web with the internet. The internet already existed. It was the network of computers. But the web is that world of pages we access through browsers like Chrome or Edge. It’s what we use every day to browse, read news, shop, connect. And it’s a specific invention, just one part of the internet. So I thought I would tell you the story of the man who created it and what he did: #TimBernersLee

Back then, searching through scientific archives meant wrestling with clunky, primitive tools; even Gopher, which arrived shortly after, made every lookup a struggle.

We are at CERN in Geneva, in 1989. Tim is a young British engineer. He sees that researchers have mountains of data but no simple way to share it. He doesn’t invent everything from scratch. He takes inspiration from hypertext, which already existed offline, and creates a markup language (HTML) to bring that same logic online and connect content across computers. That’s how the World Wide Web was born.
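The leap can be shown in a few lines. Here is a hypothetical minimal page (the address is a placeholder, not a real document): plain text, plus one tag that turns a phrase into a link to a page on another machine.

```html
<!-- A minimal hypertext page: the <a> tag is what connects
     content across computers, the logic HTML brought online -->
<html>
  <body>
    <p>
      Our results build on the
      <a href="http://another-server.example/paper.html">previous experiment</a>,
      stored on a colleague's machine.
    </p>
  </body>
</html>
```

Everything the web later became grew out of that one idea: any page can point to any other page, anywhere.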

The stakes were enormous. The web could be born free, or it could become just another closed product in the hands of those who wanted to profit from it.

Tim chose the opposite path. In 1993 he released the code of the World Wide Web free of charge, without patents or licenses. He gave up billions to ensure it belonged to everyone. That was his heroic act: the return with the gift.

Today Berners-Lee teaches at MIT here in the United States, and he keeps reminding us that the web was born as an open space, not as a platform dominated by a few.

Tim started as an engineer trying to solve a technical problem. He returned as the hero who gave the world the greatest infrastructure of digital freedom in history.

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

86 – The Professor Who Exposed Google’s Biases

#SafiyaNoble is an American scholar of media and society. At first, she analyzed how people use Google, sharing the common belief that it was a neutral tool, a machine delivering objective results. But the data she found told a very different story.

Typing terms related to Black or Latina women, the search results displayed degrading, sexist, and racist content. These weren’t exceptions but systematic patterns. It was proof that algorithms are not neutral. They amplify prejudices, reinforce them, and turn them into perceived truth.

In 2018 she published Algorithms of Oppression, a book that brought together years of research and showed how online search could become a tool of oppression. It was an uncomfortable message, because it struck at the heart of a company many considered a reliable source of truth.

The reactions came quickly. Some accused her of politicizing technology. Others downplayed the issue. But her analysis spread, discussed in universities, picked up by newspapers, cited in public debates. In a short time, it became a reference point for anyone trying to understand how algorithms really work.

Today, here in the United States, her ideas have also entered legislative discussions. When lawmakers debate how to regulate big tech, her concepts are now unavoidable. Safiya Noble gave the world a language to see digital search not as a neutral window onto reality, but as a mirror that reflects and amplifies inequalities and discrimination.

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

85 – Masahiro Hara: The Man Who Gave the World the QR Code

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

#MasahiroHara, a Japanese engineer at Denso Wave, part of the Toyota Group, spent his days struggling with barcodes that were too slow and too limited. They no longer worked for the industry. He didn’t give up. He started inventing.

He studied patterns, tested shapes, looked for solutions. Playing the board game Go, he realized that a black-and-white grid could store huge amounts of data. He designed a two-dimensional code, robust and readable from any angle. In 1994, the QR code was born.

Then came the choice: patent it and get rich, or make it free. Hara and Denso Wave decided together to release it without royalties. They gave up billions so it could become a universal standard.

The QR code spread quietly. From logistics to payments, and later to health certificates during the pandemic. Everyone used it. Almost no one knew who created it.

And Hara? He’s alive, 68 years old, and still works at Denso Wave. Today he focuses on medical uses and disaster response. No private jets. No magazine covers. He changed the world by choosing sharing over profit.

#ArtificialDecisions #MCC

View the post »
Artificial Decisions

84 – AI Has Developed an “Anti-Human” Bias

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

A new study in PNAS, one of the most important scientific journals in the world, says it clearly: large language models like GPT-3.5, GPT-4, and LLaMA have developed an “anti-human” bias. This isn’t a movie plot, it’s statistics. When asked to choose between two texts, one written by a human and one generated by another AI, they almost always pick the artificial one. GPT-4 is the worst. It systematically favors AI over humans.

And this bias doesn’t stay in labs. Here in the United States, AI already decides whether a résumé gets through, whether a paper deserves publication, whether a grant application moves forward. With such a strong bias, we’re building machines that discriminate against those who stay human. Not because they write worse, but because they don’t write “like AI.”

The technical term is “AI-AI bias.” I call it the most toxic trap we’ve created. They sold AI to us as an ally, but now it only rewards those who imitate it. Everyone else disappears.

Want a job? Use AI. Want to enter university? Use AI. Want a grant? Use AI. That’s the new entry fee to the digital future. And if you don’t pay, you’re out.

This isn’t progress. It’s a system that makes us invisible.

#ArtificialDecisions #MCC #Sponsored

View the post »
Artificial Decisions

83 – Estonia Can Be Invaded, but Not Erased. Thanks to Digital

If Russia occupied Tallinn tomorrow, the Estonian state would still function. Because everything is online. In Estonia, many things can only be done online. It’s not a digital copy of old bureaucracy, it’s the only way. Laws, contracts, registries, medical prescriptions, even voting: everything is born and lives in digital form. Citizens’ identity is electronic, and every interaction with the state runs through it. There’s no physical “backup desk.” The desk is the internet.

This choice is political and strategic. If the physical infrastructure collapsed, the state would not die, because Estonian servers are mirrored abroad in actual data embassies. Even under invasion, the state machine keeps running. Identity, property, taxes, business: everything keeps working. A paradox unique in the world: a country that survives without land but with living institutions.

Estonia is the opposite of countries that digitalize halfway. It didn’t build an online layer on top of old rules. It rewrote the rules to live online. The offline state is secondary. The primary one is digital. For Estonians, digital is not comfort. It’s deterrence. Not tanks, but bits. Not borders, but servers.

And if a total ground invasion happened? Russia would occupy the physical space, but not the state. Because for Estonians, the state no longer coincides with a building or a border. It’s a digital infrastructure that would remain intact, ready to govern even from exile.

Did you know this? What do you think about it?

#ArtificialDecisions #MCC



82 – The Truth About Work and Artificial Intelligence

The Truth About Work and Artificial Intelligence: Who’s at Risk of Losing It and What to Do Now

âś… This video is brought to you by: #EthicsProfile

AI doesn’t just eliminate jobs. It shifts them, reshapes them, and makes them unreachable for those who don’t have the right skills. This is structural unemployment: demand isn’t missing, what’s missing is the match between what companies need and what workers can do.

In this case, the old recipes don’t work. Stimulating consumption is useless. The real cure is continuous training: learning new skills, updating old ones, re-skilling for the roles that are emerging.

The numbers are clear: 40% of jobs worldwide are exposed to AI, rising to 60% in advanced economies. Some jobs will become more productive, others will lose value. It’s not a total collapse, it’s a restructuring. Those who adapt will win. Those who stand still will be left behind.

And let’s be clear: this is not a local issue. It’s a global one. The impact of AI cuts across the U.S., Europe, Asia, Africa. No country can opt out. The level of exposure changes, but the dynamic is the same everywhere: skills that age quickly, roles disappearing, and new roles struggling to find trained people.

That’s why we need serious policies: paid time to study, recognized credentials, incentives only if they lead to real new jobs, transition centers that don’t just sell useless courses but actually deliver re-employment.

And at a personal level? We can’t wait for someone else to solve it for us. If we want to avoid being left behind in one or two years, we need to start now. Learn to use AI tools in our field, understand which parts of our job can be automated, and focus our energy on the tasks that remain human. Build a mix of basic digital literacy, analytical skills, and soft skills that machines can’t replicate. In practice: never stop studying, experimenting, upgrading.

AI is not a passing crisis, it’s a global paradigm shift. We can either suffer it, letting inequality grow, or steer it, using it to lift wages and productivity. The difference will come down to one thing only: our ability to adapt.

#ArtificialDecisions #MCC




81 – The Boy Who Wants to Clean the Oceans With Data

The Boy Who Wants to Clean the Oceans With Data

At the beginning, Boyan Slat was just a Dutch teenager who loved engineering. No titles, no team, just a kid who liked to take things apart. At 16, on a holiday in Greece, he went diving. He didn’t find fish. He found plastic. Bags, bottles, nets. That was the turning point: the ocean wasn’t endless, it was a dump.

The problem looked impossible. Scientists spoke of millions of tons of waste. Governments shook their heads. Too expensive, too complex. “Impossible,” they told him. For a teenager with only a laptop, it seemed hopeless. But Boyan refused to accept that.

So his journey began. He spent months studying ocean currents, programming digital models to simulate the movement of plastic. He had no money, no lab, just data and determination. He shared his idea online. Experts mocked him, calling it “naive.” But the internet believed in him. Crowdfunding exploded. Thousands of people backed him.
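Simulating where floating plastic drifts is, at its core, a Lagrangian particle-tracking problem: step each particle forward through a field of ocean-current velocities. A toy sketch, with a made-up circular current standing in for the real measured data Slat's models used:

```python
import math

def advect(particles, current, dt=3600.0, steps=24):
    """Toy Lagrangian drift model: move floating particles through a
    velocity field one Euler step at a time. `current(x, y)` returns
    the water velocity (u, v) in m/s at a position; positions are in
    meters. Real drift models use measured current data, not a formula."""
    out = []
    for x, y in particles:
        for _ in range(steps):
            u, v = current(x, y)
            x += u * dt
            y += v * dt
        out.append((x, y))
    return out

def gyre(x, y):
    """A made-up circular gyre: velocity perpendicular to the radius,
    so particles circulate around the origin (an invented stand-in
    for real current data)."""
    r = max(math.hypot(x, y), 1.0)
    return (-0.1 * y / r, 0.1 * x / r)

# Example: drift a particle for 24 hourly steps through the toy gyre.
positions = advect([(50_000.0, 0.0)], gyre)
```

Run enough particles through enough current data and the simulation shows where plastic concentrates, which is the insight the cleanup systems were positioned around.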

Then came the trials. Failed ocean tests, broken prototypes, barriers torn apart by storms. Every time it looked finished. Every time he started again. He improved the models, updated the algorithms, gathered more data.

Finally came the return with the gift. In 2019, The Ocean Cleanup’s first system started removing plastic from the Pacific. Today his barriers work in rivers too, connected to sensors and digital platforms that show progress in real time. Thousands of tons already removed.

Boyan started as just a curious kid. He returned as the hero who gave the oceans hope. Not with money or power, but with data, the internet, and determination.

#ArtificialDecisions #MCC

