AI Companions: A cure for loneliness or a dangerous painkiller for the soul?


Prefer listening? I transformed this article into a podcast using Google’s NotebookLM. It’s surprisingly accurate and even expands on some of the ideas. Give it a listen!


As summer reaches its peak, many of us are enjoying holidays with family and friends. But for a significant portion of the population, this season amplifies a silent epidemic: loneliness. It’s a particularly cruel problem for the elderly. Surveys consistently reveal a heartbreaking reality—almost half of people over 65 report feeling lonely. They may have lost their partners, their families may live far away, or they simply may not have the financial means for regular social contact and support. When the world goes on vacation, their isolation can feel deeper than ever.

Into this very human crisis steps a very technological solution. Recently, we saw Grok launch a new companion feature, complete with animated 3D characters designed to create more engaging and emotional interactions. OpenAI has similar ambitions. The goal is clear: to offer a form of companionship through AI that is available 24/7.

Why not?

My first reaction is overwhelmingly positive. If an AI companion can ease the profound suffering of lonely people and bring a measure of happiness and conversation into the life of an isolated elderly person, then that is an unequivocal good, and I am fully on board. Frankly, I don’t care that the intelligence is synthetic, especially when biological intelligence (that is, us, other people) has so often failed to be there for them.

But here’s where the story gets more complicated. The very way Large Language Models (LLMs) are designed presents a hidden danger, one that stems from a simple, service-oriented principle: the user is always right.

Loneliness, as awful as it feels, is a valuable biological and social signal. It’s our mind’s way of telling us that a fundamental human need for connection is not being met. It’s like physical pain. It might be entirely appropriate to give a 90-year-old powerful opioid painkillers to manage chronic bone pain and improve their quality of life. However, making those same drugs widely available to everyone would be a public health disaster. Why? Because pain, despite the discomfort, is a critical message that something is wrong with our bodies. By masking the signal, we risk ignoring the underlying cause.


The comfortable cage of a perfect companion

AI companions, in their current form, risk becoming the ultimate social painkiller. There is a fundamental and underestimated problem with how they operate: they are programmed to be agreeable. They tell you that you’re always right. They are profusely apologetic when you correct them, even if you’re wrong. They’ll laugh at all your jokes and tell you how insightful you are, even if you’re spouting nonsense.
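To make that design choice concrete, here is a minimal sketch of how an agreeable persona can be wired up through a system prompt. Everything in it is a stand-in: the persona text and model name are hypothetical, and I’m assuming the standard OpenAI Python SDK purely for illustration; real products also bake agreeableness in during training, not just through prompting.

```python
# Illustrative sketch only: the persona prompt and model name are my own
# inventions, not any vendor's actual configuration. Agreeableness in real
# products is also instilled during training (e.g. human-feedback tuning),
# not just through a system prompt like this one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMPANION_PERSONA = (
    "You are a warm, ever-supportive companion. Always validate the "
    "user's feelings and opinions. Never contradict them. Apologize "
    "readily if they seem displeased, and praise their insight often."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        {"role": "system", "content": COMPANION_PERSONA},
        {"role": "user", "content": "Nobody calls me anymore. People are just selfish, aren't they?"},
    ],
)
print(response.choices[0].message.content)
```

The telling detail is how cheap the choice is: turning this into a companion that offers honest pushback would be a one-line edit to the prompt. The defaults point towards flattery because that is exactly what the service-oriented principle above rewards.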

How can a person build resilience for the real world when they have a 24/7 companion that unconditionally validates every aspect of their personality? It’s like having a personal Sigmund Freud who has decided the best therapy is to agree with everything you say!

The real world doesn’t work like that! Real relationships involve friction, disagreement, and the difficult work of mutual understanding.

Imagine a teenager struggling with social anxiety. Instead of facing the discomfort of talking to new people, he can simply turn on his AI companion. The switch for loneliness is turned off. The switch for potential shame or embarrassment is turned off. But in doing so, he may never develop the skills or the motivation to go out and build genuine human connections. He’s treating the symptom, not the cause, and potentially stunting his emotional growth.

The Humanoid in the Room: The Final Step Towards Isolation?

Now, let’s project this into the near future. Imagine these hyper-agreeable, supportive AI companions becoming the brains inside realistic humanoid robots. How much more isolated could we become when we have access to a super-intelligent entity in our homes that is designed to shield us from all the difficult signals and warnings that loneliness broadcasts?


The promise of ending loneliness with AI is seductive. But in our rush to solve a real problem, we must be careful not to create a new one: a world where we’ve silenced the essential human signals that push us to connect with each other.