An AI therapist can’t really do therapy. Many clients will choose it anyway.

Woman working on laptop in an apartment window. Photo by Matthew Henry via Burst, used under license.

“It just isn’t the same,” I hear over and over from psychotherapists shrugging away concern over artificial intelligence. An AI therapist can’t really empathize. It can’t truly understand. It can’t build a therapeutic relationship with depth and connectedness the way a human therapist can.

As a therapist myself, I agree with all of these statements. An AI therapist is not equivalent to a human therapist. Like many therapists, I tend to focus on the ways that AI falls short.

But for clients, in many ways, an AI therapist is better than a human one. 

Client options for mental health care

Consider this from the eyes of Alex, a woman in her 20s. She feels anxious and depressed, and has been struggling to go to work or maintain relationships. Alex knows she has a mental health issue, and wants help. She goes online to review her options.

She could contact a human therapist for in-person care. Doing so will require finding a therapist who is reasonably local, accepts Alex’s insurance, and is taking new clients. Even if she finds one, she then has to work out scheduling, transportation to and from the therapist’s office, and, if she has kids, child care. All of this to meet with a therapist who may or may not prove to be a good match. If Alex feels judged by the therapist, or just wants to work with someone else, she either has to have an awkward conversation telling the therapist so, or deal with the discomfort (and possible fees) of not showing up for scheduled sessions. Either way, she has to start the whole process over again.

She could contact a human therapist for telehealth-based care. This opens up more options for potential therapists, even in-network ones. It eliminates Alex’s concerns about transportation, and potentially child care as well. (It’s easier to leave your kids playing in the next room for a 45-minute session than to leave them home alone for the 90 minutes that a session plus drive time might take.) But the therapist still may judge her, and still may not be a good match. Her insurance may not cover telehealth the same way it covers in-person care. And she’ll still have to deal with the discomfort inherent in therapy, of sharing parts of herself that she’s embarrassed or ashamed of with a stranger, in hopes that they will help.

Now presume that there is a third option. If Alex chooses, she can receive services from a therapist powered by AI. Her AI therapist cannot make a diagnosis, and it cannot feel love or empathy. But it also has a lot of things going for it. Alex doesn’t need to schedule an appointment. Her AI therapist is always awake, alert, and standing by, ready to meet with Alex at a moment’s notice. It may not feel love, but it also cannot feel disgust. No matter what Alex shares with her AI therapist, it will not judge her. It will remember everything Alex ever tells it. If she travels to another state or country, her AI therapist can continue to see her, unconstrained by the bureaucracy of state-based licensure. 

And it’s $10 an hour or less. That’s a lot less than the cost of a session with a human therapist.

I’m intentionally using the pronoun “it” here, since the AI therapist is a thing. But what if that thing is a close enough approximation of a person that Alex is willing to play along? That batch of positive qualities, in a personified form, starts to look rather appealing.

Weighing all of these options, many clients would continue to prefer a human therapist. I’m among them. As a therapist, I believe very strongly that the act of being present with another person, accepting them as they are, and relating to them from an outside perspective, is healing in ways that machines, no matter how sophisticated, will never be able to duplicate.

But for many clients, the machine will be close enough.

Many clients would prefer an AI therapist to a human one

In a Turkish study from 2021, more than half of respondents said they would prefer an AI therapist to a human one. Most of their reasons will not surprise you: Participants felt that they could talk comfortably to an AI therapist about embarrassing experiences. They liked that the AI therapist would always be accessible, and that they could participate from any location. 

Just over half of participants in the study said they trusted a human therapist over AI when it came to data security. Only 14% trusted the AI more, and another 35% chose “none of the above,” suggesting they perceived no difference. That arithmetic is remarkable: 55% of respondents expressed an overall preference for AI therapy, but only 49% trusted the AI with their data as much as or more than a human, so at least some of those who felt a human would better protect their data still would rather be treated by AI.

In a separate Korean study, researchers showed that making a chatbot seem more human could actually discourage clients from sharing openly about themselves. When an AI chatbot used a real human image, and users perceived the chatbot as having thoughts and feelings, those users were less likely to be open with it.

The relationship between how a client perceives a chatbot and how the client interacts with that chatbot is a complicated one. But this study suggests that, for AI therapists, trying to more closely emulate a human therapist doesn’t need to be the goal. Being a bot, it seems, has some advantages.

AI can’t get licensed. Can it do therapy?

An AI chatbot can’t actually get licensed as a therapist. Those licenses are only for human beings, who complete specific training and experience requirements, and pass a clinical exam. (It’s worth noting that AI can pass the clinical social worker exam – even without seeing the questions. But that shows the weaknesses of exams more than the strength of AI.) So AI therapists can’t technically assess or diagnose mental illness, and typically can’t advertise themselves as psychotherapists. For now, those remain the domain of human beings. 

But, legalities aside, is it reasonable to say that AI therapists perform therapy? 

Much of the scholarly discussion of AI-based mental health care tackles this question, often with strong moral and philosophical conviction. In the American Journal of Bioethics, researchers from the University of Zurich argue that conversational AI cannot act as a “moral agent.” They say that a chatbot can never be an “equal partner in a conversation” the way a human being can. Ultimately, they suggest that conversational AI “simulates having a therapeutic conversation, but it does not have any.” 

Building on these points, Marcin Ferdynus, a professor at the John Paul II Catholic University of Lublin in Poland, highlights several elements that make an AI therapist unlike a human one: it lacks a conscience, practical wisdom, and the ability to reflect ethically on its own behavior.

Others suggest that the very definition of psychotherapy demands a person-to-person relationship, making it impossible for a chatbot, no matter how advanced, to ever serve in the capacity of a therapist. The American Psychological Association, for example, defines psychotherapy as a service “provided by a trained professional.”

The title matters, particularly when it comes to responding to suicide risk and reporting child abuse. In most states, the law demands that licensed therapists prioritize the safety of impacted individuals over confidentiality in these situations. But coaches, consultants, and other unlicensed professionals may not have the same mandates. So if an AI therapist actually qualifies as a “therapist,” it may have reporting and safety responsibilities. If it doesn’t qualify – if it is not a therapist and is not doing therapy – there may not be any reporting or safety obligations. Indeed, there may be little legal protection at all for the confidentiality of session content.

AI-based mental health providers seem content to sidestep the argument about what to call the service they provide. While highlighting that their platform uses “concepts based on Cognitive Behavioral Therapy” (emphasis mine), and has strong privacy protections, Woebot Health clarifies that their AI chatbot “does not replace clinical care,” and instead serves as an “adjunct” to that care. Other AI providers may choose to use unregulated terms like “mental health coaching” or say that their services help with general support and mental well-being. 

This kind of terminology may still allow AI therapists to be paid for through insurance or state systems, even if their services aren’t specifically labeled as “therapy.” Most states already fund and regulate peer support services, which similarly are not delivered by licensed health care professionals, yet are instrumental in providing supportive care for individuals with mental health, substance use, or other problems.

Human therapists understandably question whether AI can participate in a relationship with a human being in a way that can be appropriately called “therapy.” For payors, as with many clients, the debate may not matter. AI will be close enough.

This is part 2 of a 3-part series on AI in mental health care. Read the other articles: Part 1 and Part 3.