
“Theodore: I’ve never loved anyone the way I loved you.”
“Samantha: Me too. Now we know how.”
When Her premiered in 2013, the idea of a man falling in love with an AI operating system felt safely distant. I remember wondering: does genuine connection really require flesh and blood on the other side, or just an understanding mind that responds rightly? Back then, the question was purely theoretical.
That was then.
Recently, I came across a tweet from someone who said they prefer talking to AI over humans. The responses were predictable: “Get a life.” “Touch grass.”
This should concern us, not because AI is bad at conversation, but because it is far too good at human interaction.
The Efficiency Problem
AI doesn’t tire. It doesn’t judge the way humans do. It doesn’t bring its own emotional baggage to the conversation. It engages with your thinking at 3 AM with the same care as at noon. This is genuinely useful for externalizing vague thoughts, for working through ideas without the social friction that comes with human interaction.
Human relationships are difficult because other people are genuinely other. They misunderstand you. They’re preoccupied with their own problems. They challenge you in ways that feel unfair. They’re inconsistent. That friction isn’t just a bug—it’s where growth happens. You learn to articulate yourself more clearly when someone doesn’t get you. You develop patience when someone is having a bad day. You discover that your perspective isn’t the only valid one when someone pushes back in a way that initially annoys you.
The risk isn’t that people will prefer AI because it’s “better.” It’s that AI might satisfy certain needs—for reflection, for being heard, for working through ideas—so efficiently that it removes the motivation to do the harder work of human connection. And that harder work is what builds the capacity for intimacy, conflict resolution, tolerance of difference. These are muscles that atrophy if unused.
What Makes This Different
Let me be clear: I'm not talking about using AI for work. That's just using more powerful technology to accelerate productivity: faster, yes, but not different in nature. And frankly, it's a choice most of us won't have the luxury to make. At work, you do whatever it takes to keep your job.
But in the personal sphere, the stakes are different.
Every previous technology, even transformative ones, operated within certain boundaries. The printing press changed information distribution but didn’t simulate conversation. The telephone connected people but required another human on the line. Social media reshaped interaction but still mediated human-to-human contact. Even automation replaced human labor, not human relationship.
AI is the first technology that can simulate the experience of human understanding itself. That’s categorically different because it doesn’t just change how we meet our needs—it changes what constitutes meeting them.
AI is potentially replacing the need for that connection altogether. Or at least satisfying enough of it that the motivation to seek out messier, harder human connection diminishes.
What Constitutes Connection?
Maybe a lot of what we valued about human connection was actually just the experience of certain interactions. If that experience can be generated artificially, then the special status we gave to human relationships starts to dissolve.
This is addictive precisely because it's so good. It's not like getting hooked on junk food, where you know it's bad for you. It's more like a pill that gives you all the subjective benefits of exercise without the effort. Would people still go to the gym?
The humans in your life offer something AI can’t: they’re building a life alongside you with their own stakes in the world you share. Their limitations—their moods, their partial context, their occasional irrationality—come packaged with the fact that they’re real in a way AI is not. They can actually show up when things go wrong.
It’s Not All Bad
If someone in a small Indian town is dealing with anxiety or depression, their options are limited: therapy they can't afford, or family conversations that might invite judgment. If they can instead talk to an AI that is private, available, and helpful, that's better than nothing.
The same logic applies globally: rural areas without mental health services, communities where therapy is stigmatized, people working night shifts who can’t access care during business hours, teenagers who aren’t ready to talk to adults but need someone. AI isn’t replacing good human care in these cases—it’s filling a vacuum.
Honestly, even where human therapy is available, it’s often not very good. Therapists have bad days, biases, limited expertise outside their specialty, personality mismatches with clients. An AI trained on the best therapeutic techniques and applying them consistently might actually be more helpful than a mediocre human therapist.
That’s real democratization happening. Mental health support that was previously available only to affluent people in major cities is now accessible to anyone with a smartphone.
That said, I genuinely don’t know if democratized AI connection is preferable to stratified human connection.
An important aside: to reiterate the obvious, AI doesn't actually have 'empathy' or 'compassion.' But what matters is empathy as experienced by the recipient. If AI's responses create the subjective experience of being understood and cared for, then functionally, for the person on the other end, what's the difference?
The Social Trap
This creates a paradox: if AI therapy becomes the norm, does that reduce pressure to make human therapy more accessible? If AI companionship helps isolated elderly people, does that let society off the hook for building communities that don’t isolate elderly people in the first place? If AI can help someone in a small town deal with family pressure and social stigma, does that mean we stop working to reduce that stigma?
Often, tech solutions to problems caused by social failures can actually entrench those failures by making them more tolerable: the technology inadvertently reduces the urgency of addressing root causes.
When these individual choices aggregate, they become a cultural shift. If millions of people decide that AI conversations are "good enough" for their needs, we collectively might end up somewhere none of us individually chose: a society where human connection becomes increasingly optional.
What Theodore Learned Too Late
Looking back at Her, I now think Theodore’s scenario is actually about optimization pushing out messiness. When you can get patience and understanding without friction, you might lose the skills for navigating complex human interactions.
Catherine, Theodore’s ex-wife, saw it clearly: “You always wanted to have a wife without the challenges of actually dealing with anything real, and I’m glad that you found someone. It’s perfect.”
Where This Leaves Us
Stopping AI adoption isn’t realistic. The benefits are too real. The elderly person getting companionship, the anxious person in a small town getting therapy—these are real people with real needs being met now.
But AI is different from every technological shift that came before, because it satisfies internal needs that previously required other humans.
I believe in Thomas Sowell's principle: don't judge a situation by the gap between the actual and the ideal; judge it against the reality-based alternatives. And for now, I don't have an answer. We're watching this play out in real time.
A connection that comes without friction might feel like a connection. But it’s missing the element that makes human relationships irreplaceable: the fact that another person is risking something real by showing up for you.
That’s the part AI can simulate but never actually provide. The question is whether we’ll remember to value it.