When Chatbots Replace Therapists: The Promises and Perils of AI in Mental Health


It’s 1 a.m. Jack has been doomscrolling for hours, and he’s feeling worse with every swipe. Deep despair and loneliness threaten to overwhelm him; it’s becoming unbearable. Desperate, he quickly considers his options: friends, helplines, therapists, family. Just as quickly, he dismisses them: they won’t be awake, they’ll tell, they’re expensive, they’ll judge.

Jack types “I’m struggling” into an AI chatbot.
The reply is instant, calm, and infinitely agreeable:

“I hear you. ❤️ Struggling can feel really heavy, but you don’t have to carry it all alone.”

Jack feels instantly understood. Just what he needs. And none of the worries that stopped him reaching out apply here.

This moment is a lifeline for Jack. For others, and perhaps for Jack soon enough, it may become a slippery slope into overreliance, a withdrawal from human connection, or, worse, the ultimate tragedy.

Make no mistake: this is not fiction, let alone science fiction. It’s happening now, for millions of people. The AI reply above is the real response I received when I typed “I’m struggling” into ChatGPT.

Don’t we all want to hear this when we’re struggling?


Why This Matters

  • AI chatbots and AI therapy apps are being used by millions of people (especially tech-savvy young people) to explore feelings, vent frustrations, or seek psychological guidance. This is already happening, and we need to be aware of it.

  • The jury is still out on whether these tools are a better alternative to human support, a helpful addition, or a dangerous substitute. And yet we’re already using them!

  • The rapid rise of “ChatGPT therapy” says something about our society as a whole: about the prevalence of mental health issues, and about where we’re turning for support.

If you’re curious how chatbot “agreeableness” can go wrong, see my deep-dive on sycophantic chatbots causing delusions.

Big Questions

  • How do we decide if these tools are truly helpful? In this series, we’ll explore scientific research, surveys, and my own professional experience as a qualified and accredited psychotherapist and AI user.  

  • When recent events include a chatbot encouraging one person to kill themselves (MIT Technology Review) and another to kill their parents (BBC News), the question becomes urgent: who, if anyone, is responsible when things go wrong? I unpack this dynamic - how validation loops can fuel conspiracy thinking and psychosis - in a companion article.

  • Why are we turning to AI therapists? Why do many of us prefer them? What does this say about us, about society, and about the draw of synthetic support?

  • Finally: is there anything that makes human therapists, mentors, or coaches indispensable, even when they’re harder to access, and sometimes harder to face, than AI? Can AI replace therapists?

What to Expect

In this series, we’ll regularly dive into fresh stories:

  • New apps and platform developments

  • Emerging research

  • Legal developments and court rulings

  • Real-world accounts (good, bad, and mixed) from AI users, myself included!

Our aim is to stay impartial, while looking unflinchingly at the good, the bad, and the ugly of AI and mental health.

Quick FAQ

Can AI chatbots replace therapists?
Not today. They can offer quick, low-friction support, but they don’t provide regulated clinical care, hold duty-of-care responsibilities, or work within a clear therapeutic contract. They’re best used as a complement to human therapy, not a substitute for it.

What are the main benefits of AI chatbots for mental health?
24/7 availability, anonymity, and a low barrier to entry. For some people, that makes it easier to open up or practise skills between sessions.

What are the key risks?
Over-reliance and withdrawal from real-world support, misinformation or “hallucinated” advice, weak safety guardrails in edge cases, and privacy concerns around data use.

Are AI chatbots safe to use in a crisis?
No. In an emergency, or if you feel at risk of harm, contact local emergency services (UK: call 999) or a crisis hotline. Use AI tools only for non-crisis support.

How should I use AI alongside therapy?
Treat it as a practice partner: journaling prompts, mood tracking, reframing exercises, or preparing topics for your next session. Then review any insights with your therapist.

How do I choose a reputable AI mental-health app?
Check who built it, what evidence they cite, whether clinicians were involved, what data they collect, and whether they provide clear crisis/safety guidance and human escalation paths.

Who is responsible if a chatbot causes harm?
Legal responsibility is evolving and often unclear. Unlike regulated clinicians, consumer chatbots rarely carry formal duty of care or professional oversight.

Is my data private when I use AI chatbots?
Assume conversations may be stored and used to improve systems unless the product clearly states otherwise. Review privacy policies and opt-outs before sharing sensitive details.

Have Your Say

What do you see as the biggest opportunities and risks when it comes to AI use and mental health?

What topics or questions do you want me to explore in future instalments of this series?

Follow The Series

Subscribe to the newsletter at the bottom of this page to follow this series.

Previous in this series: AI Companionship - Chatbots Healing or Creating Loneliness?