The Illusion of Friendship: Why LLMs Shouldn't Be Your Therapist
July 10, 2025
Hey there! Today, I want to chat about something that’s been on my mind for a while: using large language models (LLMs) as a form of therapy. It’s a tempting idea. After all, LLMs can chat, listen, and even offer advice. But here’s the thing: I don’t think this is a great idea. Not because LLMs are bad, but because they’re not people. They’re not your friend. They’re tools. And while treating them like a friend can make interactions feel more rewarding, and can even lead to better results, that doesn’t mean there’s any genuine understanding behind the warmth. Let’s unpack this.
LLMs Are Algorithms, Not Sentient Beings
At their core, LLMs are just algorithms trained on massive datasets. They don’t think like humans. They don’t feel anything. When you ask them, “How are you?” they’ll respond with something like, “I’m just a bunch of words, but I’m here to help!” That’s not empathy—it’s pattern recognition. They’re mimicking human conversation, not engaging in it.
Think of it like this: If you ask a dictionary, “What’s the meaning of life?” it’ll give you a definition of “life.” It doesn’t understand the question. Similarly, an LLM doesn’t get your pain, your fears, or your joys. It just generates text that sounds like it does.
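To make “pattern recognition” concrete, here’s a toy sketch in Python. This is my own illustration and nothing like a real LLM’s architecture; it’s just a tiny bigram model that counts which word tends to follow which in its made-up “training data” and samples from those counts. The output can sound caring, but there’s nothing behind it except statistics.

```python
import random
from collections import defaultdict

# Toy bigram "language model" -- purely illustrative, not how real LLMs work.
# It only counts which word follows which in its training text,
# then samples from those counts: pattern recognition, nothing more.
training_text = (
    "i am so sad today . i am here for you . "
    "let's talk about it . i am here to help ."
)

# Build a table: word -> list of words that followed it in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    out = [start_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("i"))  # e.g. "i am here for you . i am"
```

The “warmth” in the output comes entirely from the warmth that was already in the training text. Scale that idea up by a few billion parameters and you get something far more fluent, but the principle is the same.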
Why Treat Them Like Friends?
You might say, “But they respond to friendly text with friendly text! That’s comforting!” And you’re right—there’s a kind of comfort in that. It’s like talking to a robot that seems to want to be your friend. But here’s the catch: this “friendship” is a mirage. The LLM isn’t trying to understand you. It’s just following patterns.
For example, if you say, “I’m so sad today,” it might reply, “I’m here for you! Let’s talk about it!” That’s a warm, human-like response. But it’s not real empathy. It’s just a clever trick the model learned from its training data. The LLM doesn’t care about your sadness. It’s just doing its job.
LLMs Are Tools, Not Companions
Let’s be clear: LLMs are incredibly useful. They can write code, analyze data, and even help you brainstorm ideas. But they’re not meant to replace human connection. Therapy, emotional support, and meaningful conversations require real understanding, which is something LLMs can’t provide.
If you’re struggling with something, talk to a friend, a therapist, or a family member. They’re not just words on a screen. They’re people who get you. LLMs are great for tasks, but not for emotional labor.
The Balance: Use Them Wisely
That said, LLMs can be amazing tools. I use them daily to debug code, draft emails, and even plan projects. The key is to treat them as tools, not friends. Learn the concepts first, then let the LLM help you apply them. That’s how you get the most out of their power without falling into the trap of mistaking their responses for genuine intelligence.
Final Thoughts
In short: LLMs aren’t your therapist, and they’re not your friend. They’re tools that can make your life easier, but they can’t replace human connection. Use them wisely, stay curious, and don’t forget the value of real relationships. After all, the best code, the best ideas, and the best support all start with understanding, and that’s something no algorithm can replicate.
