Measuring client outcomes is fast, easy, and essential. Current measurement tools are up to the task, and if more therapists used them, we could get better at our work.
Therapists often fear manualized treatments in psychotherapy. If the therapy process is boiled down to a script, the fear goes, the actual therapist becomes interchangeable with anyone else following the same script. Taken to its logical end, if therapy is just a set of manualized techniques, we could easily be replaced by robots.
We’re big fans of Scott Miller and his Top Performance Blog around here. Miller has allowed his career to be guided by emerging research, a trait that is surprisingly rare in the psychotherapy world. It has led him to some very useful conclusions about how we can become more effective. Deliberate practice and using outcome data are two specific things that we all could do that would almost certainly improve our outcomes.
There are many things about his work to admire. But what I appreciate most is his skill at walking the difficult line between being alarmist — it’s kind of a big deal that therapy outcomes haven’t gotten better in 40 years — and being supportive and uplifting for therapists who are doing their own part individually to improve. So it was an honor to meet him at the 2017 Evolution of Psychotherapy Conference, and to have him interview me earlier this year for his blog.
You can see the image we shared along with the post. Here’s a sampling of how people responded:
- I find this to be a dangerous overgeneralization.
- Be careful with this. Lives are at stake.
- This is just wrong!!
- Dangerous, irresponsible statement!
Not only do we stand by the quote; the finding itself isn’t especially controversial in the world of research on treatment for substance use disorders. It’s mostly controversial among professionals who don’t want it to be true.
Streamlining licensure. Banning reparative therapy for minors. Fixing problems in child abuse reporting. Changing “interns” to “associates.” Saving Psychotherapy.
I’ve spent years now fighting for major changes in the world of mental health care, and winning. Many of the changes I’ve played a role in were ones that I was told would be impossible.
Today we launch Ben Caldwell Labs, the most important project of my career. The change I’m fighting for this time involves you.
We can’t say we didn’t see this coming. In the first edition of The Heart and Soul of Change: What Works in Therapy, published in 1999, authors Mark Hubble, Barry Duncan, and Scott Miller predicted that psychotherapists would soon be facing a new era of accountability for their work. Clients, payors, and policymakers would all demand hard evidence that psychotherapy was effective. That era is well underway, and so far we have provided a wealth of the kind of information these parties have demanded.
We can demonstrate that therapy works as an overall conclusion, and within the contexts of specific problems and populations. Proving the effectiveness of specific models has been helpful in many ways (showing that model-based treatment is superior to no treatment) and enlightening in others (showing that, for most problems, the model of therapy has little to no impact on outcome). The brain research discussed by Siegel, Cozolino, and others explains why therapy works. Psychotherapy in general is being held accountable, and it is passing the test with flying colors.
The same cannot be said of specific therapists, or of therapy training programs – yet.
A new study of family therapists who had been disciplined by their state licensure boards showed that every one of the five who had their licenses revoked was still working with clients or planning to do so. None of them appeared to have learned much from their experience.
Odds ratios give researchers eye-catching (and media-catching) results. They also rely on readers’ confusion about what they mean. They’re becoming fashionable in research and in media precisely because they are so eye-catching, but don’t be fooled by a weak finding dressed up to look like a strong one. Here’s a quick primer on how odds ratios work.
Let’s say I publish a book. Just for fun, let’s call the book America’s Cupcake Scourge.
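The basic arithmetic is worth seeing directly: an odds ratio compares odds (events over non-events), while most readers picture a risk ratio (events over totals), and the two can diverge sharply. A minimal sketch with made-up 2×2 counts (the numbers below are for illustration only, not from any study):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    return (a / b) / (c / d)

def risk_ratio(a, b, c, d):
    """Relative risk for the same table: compares probabilities, not odds."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical: the outcome occurs in 40 of 100 exposed people
# versus 25 of 100 unexposed people.
a, b, c, d = 40, 60, 25, 75
print(round(risk_ratio(a, b, c, d), 2))  # 1.6 -- a 60% higher risk
print(round(odds_ratio(a, b, c, d), 2))  # 2.0 -- "twice the odds" sounds bigger
```

The same data yield "60% higher risk" or "twice the odds"; a press release will usually report whichever sounds more dramatic, and the gap widens as the outcome becomes more common.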
New Study Reveals Secret To Saving Your Marriage After Infidelity
Uh, yeah, no. HuffPo summarizes a recent UCLA/University of Washington study on couples who experienced infidelity. The study found:
Over the course of the study, 19 of the couples experienced infidelity by one or both partners. Of those couples who cheated, 74 percent revealed their infidelity before or during the study, while 26 percent kept it secret (it was later discovered by researchers).
That distinction became very important in determining which relationships survived. At the end of five years, 43 percent of couples who had revealed the infidelity were divorced compared to 80 percent of the couples whose infidelity was kept secret.
Leaving aside the time-shifting abilities couples apparently displayed here (if I’m reading that right, some couples revealed before the study an instance of infidelity that happened during the study), those numbers are almost surely wrong. More than 19 couples probably experienced infidelity over that time, and however many there actually were beyond 19, those couples never told the researchers at all.

Let’s presume that some couples experienced infidelity and the researchers never knew, either during or after the study. That’s not a stretch; many couples don’t report their infidelity, for a variety of reasons. Of those couples, of course some broke up and others stayed together. What would that do to the results? Potentially quite a lot. The report says that 80% of couples who kept their infidelity secret for a while broke up. That sounds like a more formal way of saying four out of five, because it is literally four out of literally five. Such a small sample should immediately raise your skepticism. If even just three other couples in the study experienced infidelity, didn’t tell the researchers about it, and stayed together, the divorce rate for secret-keeping couples would drop to 50% — a rate that (statistically speaking) isn’t meaningfully different from the 43% divorce rate among couples in the study who revealed their affairs.

Even leaving aside the issues here about how far research participants can be trusted to self-report cheating, this is also an example (common in couples research) of mistaken assumptions about causation. The researchers suggest that secret-keeping about infidelity often led a marriage to break up. But it is just as plausible that the opposite is true: the breakup caused the secret to come out to the researchers. Neither partner would have as much reason to keep infidelity a secret once their relationship had ended.
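The fragility of that four-out-of-five figure is easy to check with arithmetic. A quick sketch (the three additional couples are hypothetical, as in the scenario above):

```python
# Reported result: 5 couples kept the infidelity secret; 4 of them divorced.
divorced, secret_keepers = 4, 5
print(divorced / secret_keepers)  # 0.8 -> the headline "80 percent"

# Now suppose just 3 more couples cheated, never told the researchers,
# and stayed married (hypothetical couples invisible to the study).
secret_keepers += 3
print(divorced / secret_keepers)  # 0.5 -> 50%, near the 43% among revealers
```

With a denominator of five, every unobserved couple moves the headline number by several points; a finding that a handful of missed participants can erase is not a finding to build advice on.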
Now, I can certainly tell you from my own clinical experience that for many couples who come to therapy, the full disclosure of infidelity is vital to keeping their marriage together. But I can also tell you that there are couples who experience infidelity, don’t reveal it to their partners, don’t go to therapy, and still stay together. Is one route better than the other (or at least likelier to lead to a lasting and happier marriage after cheating)? Perhaps. But this study, with its small sample sizes and its unjustified conclusions, doesn’t get us anywhere closer to knowing. All it actually tells us is what we already knew: That sometimes, couples who choose to work on their relationship after revealing an affair are indeed successful in keeping their marriage together.
In a study of 5,000 couples, a structured program of relationship enhancement not only failed to help poor, unmarried parents. It appeared to make a few things worse.

Almost three years ago, I posted on the Building Strong Families (BSF) project, a first-of-its-kind national study of relationship education (also sometimes called premarital education, marriage education, relationship enhancement, or simply RE). The study involved more than 5,000 couples at nine sites around the US, all of whom were low-income, unmarried parents. In theory, such couples could especially benefit from effective programs to improve couple and family functioning. Couples in BSF were offered relationship skills groups, individual support from assigned “family coordinators,” and referrals as needed to specialized services such as mental health counseling, housing assistance, legal services, and child care.

My initial post had focused on 15-month follow-up data, which was not promising, but gave reason to hold out for longer-term data. Late last year, the BSF program released its final report, which included a 3-year follow-up. It was a spectacular failure. The executive summary was appropriately blunt (emphasis added):
After three years, BSF had no effect on the quality of couples’ relationships and did not make couples more likely to stay together or get married.
The first piece there, that the program did not impact the quality of couples’ relationships, should startle those therapists who use or recommend relationship enhancement programs. Even in the absence of long-term outcome data, these programs have often highlighted the studies that appear to demonstrate that they offer at least a short-term improvement in relationship satisfaction. (A good current summary of such studies can be found in this Journal of Marital and Family Therapy article, subscription or payment required.) In the BSF study, any such gains had evaporated even before the 15-month follow-up, unlikely to ever return. Couples who had gone through the program were no better than control-group couples on measures of relationship happiness, supportiveness and affection (treated as one variable), faithfulness, or conflict management.

The executive summary continued (again, emphasis mine):
BSF had no effect on couples’ co-parenting relationship; it had small negative effects on some aspects of father involvement. […] BSF had no effect on the family stability or economic well-being of children; however, the program led to modest reductions in children’s behavior problems.
Fathers who went through the BSF program were spending significantly less time with their children at the three-year follow-up than control-group fathers. At the same time, children of couples who participated in the program did experience significantly fewer behavior problems; that was the only variable, of 20 studied, to show a desired outcome from the program. Neither the executive summary nor the full BSF report clarifies whether these two findings are more directly connected. It is at least plausible that these families were making rational decisions about the level of father involvement, if such involvement was likely to be detrimental to the child. It is also certainly plausible that the two findings are unrelated. I hope the study makes its full dataset available to other researchers for additional analysis.

Perhaps most disappointing in the three-year dataset is the failure of the study’s Oklahoma site to maintain its strong standing over time. In Oklahoma City, the BSF program used a well-established curriculum based largely on the popular PREP program, engaged many more families in participating than other sites, and showed much better outcomes at 15 months. However, most of the positive impacts observed there at the 15-month follow-up had faded away by 36 months. The only positive impact unique to Oklahoma City that remained was a small improvement in family stability, an impact that had not been present in the earlier data. But this positive result was offset by another program site in Florida, where the 36-month data newly demonstrated negative impacts on relationship status and quality, father involvement, and family stability.

I suggested in my earlier post that perhaps the BSF program was not failing, but rather it was not drawing enough participation:
It would be more accurate to say that couples didn’t finish the programs than it would be to say that the programs didn’t work; with the exception of the project’s Oklahoma site (which performed much better than other sites in a variety of ways), only 9% of couples completed at least 80% of the relationship enhancement curriculum offered to them. That’s a big problem.
However, the program’s own executive summary suggests that the participation issue should not be reason to dismiss their overall conclusion that BSF largely failed (emphasis added):
Across the eight programs, only 55 percent of couples assigned to the treatment group attended a group relationship skills session. However, analysis of BSF’s impacts among couples who did attend found little evidence of effects on relationship outcomes. Thus, it does not appear that low participation rates explain BSF’s limited success in improving couples’ relationships.
Unfortunately, the field of relationship education is now left with not one but two major problems:

1. How can we design a program to teach family skills such that low-income couples with children would actually be able to make use of it? If the low participation rate was because of difficulty arranging transportation or child care, or other life stressors, programs must find ways to address those stressors first. (See also Maslow’s hierarchy of needs.)
2. Even if we could address all of the circumstances that keep couples from participating in these programs, what could we offer them that would actually make a difference in the long term? Three years and many millions of dollars after the launch of BSF — the average cost of treatment was about $11,000 per couple — we are no closer to an answer to that question.

# # #

The next wave of data on relationship education will come from a separate five-year study, the Parents and Children Together (PACT) program, which will examine programs that provide both relationship skills and employment services. The BSF report suggests that PACT may ultimately be more effective, given some research findings that fathers are more involved with their children when they see themselves as economically successful.

Your questions, comments, and predictions about future studies are welcomed. You can post them in the comments below, by email to ben[at]bencaldwell[dot]com, or on my Twitter feed.