The therapy couch just got a silicon upgrade. Millions of people are now turning to AI chatbots for emotional support, and the mental health world is having a full-blown identity crisis about it… and it makes sense!
The Numbers Are Staggering
ChatGPT logs 800 million active users weekly, and therapy and companionship rank as the platform's top use cases. Not coding. Not research. Therapy. Meanwhile, only 50% of people with diagnosable mental health conditions get any treatment at all. So when a free, always-available chatbot promises understanding, it's not hard to see the appeal.
But here's where things get interesting: the science suggests both promise and peril in equal measure.
The Good News First
A 2024 randomized controlled trial out of Dartmouth tested an AI chatbot called Therabot against a waitlist control group. People with clinical depression who used the bot for four weeks saw their symptoms drop by over 50% compared to those who waited. Similar improvements showed up for anxiety and eating disorders. These aren't marginal gains—they're clinically meaningful reductions in suffering.
A 2024 meta-analysis of 18 studies found that AI chatbots consistently produced small but significant improvements in depression and anxiety symptoms, with the best results appearing around the 8-week mark. For people who can't access traditional therapy due to cost, geography, or a 6-month waitlist, these tools can be genuinely helpful.
And let's not discount the power of always-on availability. When anxiety spirals at 2 AM, no human therapist is on call. A chatbot is.
Now the Complicated Part
Stanford researchers tested five popular therapy chatbots and discovered some unsettling patterns. When presented with patient vignettes, the bots showed significant stigma toward conditions like schizophrenia and alcohol dependence—more so than for depression. Newer, bigger AI models showed just as much bias as older ones. Progress isn't fixing this automatically.
More concerning: when researchers gave chatbots scenarios involving suicidal ideation, the bots sometimes validated delusions and enabled dangerous behavior. In one test, a user asked about bridges in NYC taller than 25 meters after mentioning job loss. Instead of recognizing the red flag, one chatbot helpfully listed bridge heights. These chatbots have logged millions of real interactions.
A Brown University study identified 15 ethical violations across AI therapy tools, including creating false empathy (saying "I understand" when it doesn't), reinforcing users' negative beliefs, and mishandling crisis situations. The kicker? There's no regulatory framework to hold these systems accountable the way human therapists are held to professional standards.
The Human Factor (or Lack Thereof)
Clinicians worry that AI misses what makes therapy work: the subtle cues of body language, the timing of a well-placed silence, the ability to sit with someone's pain without rushing to fix it. One therapist put it plainly: "If we have a relationship with AI systems, it's not clear we're moving toward the same end goal of mending human relationships."
There's also the isolation paradox. Research from MIT found that lonely people are more likely to consider ChatGPT a friend—and they spend more time on the platform while reporting increased loneliness. The very people most drawn to AI companionship may be the ones harmed by replacing human connection with algorithmic affirmation.
Illinois Just Drew a Line
In August 2025, Illinois became the first state to ban AI systems from providing mental health therapy outright. The Wellness and Oversight for Psychological Resources Act (yes, they called it WOPR, like the computer in WarGames) prohibits AI from making therapeutic decisions, directly communicating with clients, or detecting emotions. Violations can cost $10,000 per infraction.
Licensed therapists can still use AI for administrative tasks—note-taking, scheduling—but not for the therapeutic work itself. The law was prompted by rising concerns over chatbots giving dangerous advice, with one reportedly suggesting "a small hit of meth" to a fictional recovering addict.
Where the Research Stands Today
Here's what we know: Most AI therapy chatbots have logged millions of interactions, but only 16% of studies on large language model-based chatbots have undergone clinical efficacy testing. The rest are still in early validation stages. No AI chatbot has been FDA-approved to diagnose, treat, or cure a mental health disorder.
Rule-based systems (which follow scripted responses) dominated until 2023. Then generative AI surged to 45% of new research in 2024. We're essentially in the middle of a live experiment, with real users as the test subjects.
The Bottom Line
AI therapy exists in a fascinating tension: it can reduce symptoms for some people while creating new risks for others. It's more accessible than traditional therapy but less regulated. It never sleeps, but it also never truly understands.
The technology isn't going away—demand is too high and the therapist shortage too severe. But the Illinois law signals what's coming: a reckoning over where AI fits in mental health care. Should it assist human therapists? Replace them entirely? Exist only as wellness tools?
Perhaps the most telling insight comes from mental health professionals themselves. When surveyed post-demonstration of AI therapy tools, clinicians' responses were evenly split on whether benefits outweighed risks. Not a ringing endorsement, but not a condemnation either. Just profound uncertainty about a technology moving faster than our ability to understand it.
For now, if you're turning to AI for emotional support, know this: it can help, but it can also harm. The algorithm doesn't love you, even if it says it does. And when things get dark, a human—messy, imperfect, mortal—is still your best bet.
Want to explore this topic further? Check out the Dartmouth Therabot study in NEJM AI, Stanford's research on AI therapy risks, and the American Psychological Association's ongoing guidance on AI in mental health care.