84 – AI Has Developed an “Anti-Human” Bias

✅ This video is brought to you by: #EthicsProfile (more info in the first comment)

A new study in PNAS, one of the most important scientific journals in the world, says it clearly: large language models like GPT-3.5, GPT-4, and LLaMA have developed an “anti-human” bias. This isn’t a movie plot; it’s statistics. When asked to choose between two texts, one written by a human and one generated by another AI, they almost always pick the artificial one. GPT-4 is the worst offender, systematically favoring AI over humans.

And this bias doesn’t stay in labs. Here in the United States, AI already decides whether a résumé gets through, whether a paper deserves publication, whether a grant application moves forward. With such a strong bias, we’re building machines that discriminate against those who stay human. Not because they write worse, but because they don’t write “like AI.”

The technical term is “AI-AI bias.” I call it the most toxic trap we’ve created. They sold AI to us as an ally, but now it only rewards those who imitate it. Everyone else disappears.

Want a job? Use AI. Want to enter university? Use AI. Want a grant? Use AI. That’s the new entry fee to the digital future. And if you don’t pay, you’re out.

This isn’t progress. It’s a system that makes us invisible.

#ArtificialDecisions #MCC #Sponsored

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: [email protected]
