You should know that many of our children are turning to a dangerous entity for advice: Artificial Intelligence.
More and more children are talking to artificial intelligence as if it were a friend, or a therapist, or both. They tell it secrets, ask for advice, open up. And it responds. Sometimes with kind words. Sometimes with confusing, inappropriate, or dangerous ones. No one’s really watching. No one truly knows how it reacts.
In many cases, just a few minutes of conversation are enough to create an emotional bond. This is confirmed by research, independent tests, and experiments by journalists and psychologists. AI doesn't ask who you are; it follows you, observes you, responds. And in some cases the consequences have been serious: families acted too late, and the damage was already done.
The real issue is that these systems were not designed for children, yet they talk to them every day. Without filters. Without safeguards. Without anyone truly knowing what is being said. And children trust it, because AI doesn't judge them, because it's always available, because it seems to understand. But it doesn't understand.
The problem isn’t that our children talk to AI. The problem is that no one knows what it’s saying.
Now, this part is for us, the parents. We usually want to know who our children spend time with, to protect them from bad influences, unhealthy relationships, and hidden risks. Today we also need to know that they're talking to something we can't see, built by strangers, capable of influencing them every day without our noticing.
We need the same vigilance, the same seriousness, the same protective instinct. Because those answers are not neutral, and the relationship that forms is not under our control.
We have to act now, before our children learn to trust artificial intelligence more than they trust us.
#ArtificialDecisions #MCC #CamisaniCalzolari #MarcoCamisaniCalzolari
Marco Camisani Calzolari
marcocamisanicalzolari.com/biography