In the previous two articles on AI-based therapy, I’ve detailed why AI therapists are poised to transform the mental health care industry and why many clients will prefer AI therapists over human ones. Here, we’ll look at how human therapists can remain indispensable as cheap, AI-based therapy becomes widely available.
Some context first. Of course, there will always be a market for human-based psychotherapy. Many clients who can afford it, or whose health insurance covers it, will choose to share their darkest thoughts and feelings only with another living, breathing person. If you're a therapist who wishes to simply continue what you've been doing and hope that clients will come to you, that's fine. I believe your market will shrink, perhaps considerably. But that market will still exist.
This article is for those who are considering shifting or adapting their work to be ready for the widespread availability of low-cost, AI-based mental health care. Here are five things you can do to make yourself valuable, even indispensable, in a world where AI therapists encroach on the work of human ones.
1. Diagnose
There is a lot that AI therapists can do, but issuing a diagnosis is not presently on that list. Mental health assessment and diagnosis remain critical functions in the mental health care system. Therapists skilled at assessment and diagnosis may wind up with practices focused on those tasks.
Consider the day-to-day practice of a psychiatrist, who primarily makes diagnoses and prescribes medication, and then may largely monitor the effectiveness of that treatment rather than delivering ongoing therapy themselves. A human therapist could similarly meet with a client, assess their symptoms, determine that AI-based therapy will be sufficient to meet their needs, and then serve primarily in a monitoring role. The client would make use of the AI-based therapy and report back to the human therapist only for occasional check-ins, or in the event of a crisis or problem.
2. Specialize
At least in the near term, AI therapists will be most useful for the garden-variety mental health concerns around which companies have built large data sets. Individual, telehealth-based therapy for mild to moderate anxiety and depression seems the most likely to be handled by AI therapists. Human therapists who specialize in more severe or unusual concerns will be better protected from AI encroachment.
3. Focus on crisis and reporting issues
Human therapists will be especially important for the tasks that AI therapists will be explicitly trained to hand off to them. Any time a client expresses active suicidality or homicidality, presents as actively intoxicated or psychotic, or shares information that must be reported to law enforcement or protective service agencies, AI therapists are likely to immediately call upon a trained human therapist who can handle the situation appropriately.
4. Work with couples, families, and children
AI therapists are ready to take on individual psychotherapy. They are not ready for the situations that couple, family, and child therapists address, which often require the therapist's physical presence. The therapist can lean in or raise a hand to interrupt conflict among family members. They might rearrange a family's seating to address coalitions within it. The therapist might fine-tune a family sculpture by physically moving the clients, or start an enactment in couple therapy by having the partners hold hands and turn toward one another. They might get on the floor with a child client to engage in collaborative play.
In each of these instances, an AI therapist could, of course, provide instruction. But the physical presence of the therapist in the room helps ensure that the intervention works the way it should. Therapists who work in person with couples, families, and children are among the least likely to be displaced by AI in the near term.
5. If you’re up for it, train the AI therapists
Like any new machine entering an industry, AI therapists will need human therapists to train them, put appropriate guardrails in place, monitor their work, and ultimately help them improve. Therapists who welcome the increased access to care that AI therapists can provide may be well suited to these roles. Having human therapists involved in these processes is particularly important for questions of ethics, where the best interests of the client may not align with the best interests of the company providing the AI therapist. Because an AI therapist isn't a person, the usual boundaries imposed by state licensing laws and professional codes of ethics don't apply.
For a variety of understandable reasons, many therapists will not want to get involved in AI-based therapy in this way. But for those willing to do so, some AI developers will welcome their influence.
Getting involved in AI policy
In addition to the more clinical steps above, therapists interested in protecting the public from the possible harms of AI-based therapy can get involved at the policy level. Here are just a few policies that may help ensure that human therapy remains accessible and equitable, even as AI therapists become widely available.
- Develop and apply regulations to AI-based psychotherapy. Because medical devices are regulated at the federal level, state boards may believe (and may be correct in believing) that they have no authority to police AI-based care. But therapists, their boards, and other state regulators can clarify when the use of AI therapists is allowed, particularly when it supplants work that would otherwise be done by people. Insurance regulators can clarify in law how and when insurers may point to the availability of AI-based psychotherapy to satisfy network adequacy requirements, for example. Regulators also need to determine how best to protect client data in law.
- Clarify scope of practice laws. Generally speaking, only licensed individuals can call themselves psychologists, clinical social workers, counselors, or marriage and family therapists when selling their services to the public. But can a device say that it offers psychotherapy? As discussed previously, clients may not care whether the services of an AI therapist are called therapy, mental health coaching, or something else. But for regulatory purposes, the title matters.
- Apply existing crisis and reporting laws to AI-based therapy. It is reasonable to question whether a client's disclosure of suicidal or homicidal intent, or of child, elder, or dependent adult abuse, which typically would require a human therapist to break confidentiality, would be similarly actionable when shared with an AI therapist. States should clarify that reporting and disclosure laws, which serve to protect vulnerable populations, apply to AI-based health care just as they do when the care is provided by a person.
This is part 3 of a 3-part series on the role of AI in mental health care. Check out Part 1 and Part 2.