The Seoul taxi driver who taught AI not to lie
A retired man taught artificial intelligence something no algorithm had ever learned: how to tell the truth. Stay till the end, because this story says everything about how AI really learns.
His name is Dong-Hwan, 68, from Seoul. He spent forty years behind the wheel of a taxi. Then he retired. Three months later, he’d had enough. Too quiet. Too empty.
A tech company contacted him. They were looking for retirees to test a voice assistant powered by AI, designed for taxi drivers. The AI would answer passengers: “How long till we arrive?”, “Is there traffic?”, “What’s nearby?” He accepted.
Every day, he talked to the machine. Asked random, off-script questions. Until he noticed something.
When the AI didn’t know the answer… it made things up. It sounded confident, but it lied. “The museum closes at 8.” False. “The road is clear.” It wasn’t.
Dong-Hwan took notes. He went to the engineers: “Your AI lies.” They laughed. “No, it’s just trying to be helpful.” “I tried to be helpful too,” he said. “But I never lied to my customers.”
That moment changed everything. The team checked. He was right. The AI had been trained not to stay silent, so it filled the gaps with polite, confident nonsense.
Dong-Hwan forced them to rethink the system. He did what AI still can’t do: tell the truth. And something even rarer: say “I don’t know.”
In a world where everyone pretends to know, that’s the most human thing of all—and the hardest for a machine to learn.
#ArtificialDecisions #MCC #AI
✅ This video is brought to you by: https://www.ethicsprofile.ai
👉 Important note: We’re planning our schedule for the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]
