The California Board of Behavioral Sciences will discuss clinical exams this Friday. My colleague Tony Rousmaniere and I decided to dig into these exams, beyond just the horrifying report ASWB released this summer. (TL;DR: Wildly disparate passing rates by race and ethnicity.) While I’ve previously gone on record as no fan of clinical exams, they’re widely accepted, so we figured we would follow where the data leads. And so here it is:
Why MFTs struggle to influence public policy
As marriage and family therapists, we have a vast body of knowledge supporting our work with families and communities. Many of the principles and interventions from this body of knowledge could be applied in public policy to great positive effect. As two examples, family breakdown could be reduced and juvenile crime recidivism decreased, both in ways that actually save taxpayers money. Politicians of all parties should be champing at the bit for such policies.
Except that they aren’t. And the April 2009 issue of the journal Family Relations helps us understand why not.
After reading this, you’re 250% more likely to understand odds ratios
Odds ratios give researchers eye-catching (and media-catching) results. They also rely on readers’ confusion about what they actually mean. They’re becoming fashionable in research and in media precisely because they grab attention, but don’t be fooled by a weak finding dressed up to look like a strong one. Here’s a quick primer on how odds ratios work.
Let’s say I publish a book. Just for fun, let’s call the book America’s Cupcake Scourge.
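To make the distinction concrete, here is a minimal sketch comparing an odds ratio with the plain risk ratio. The numbers are made up for illustration; they come from no study, cupcake-related or otherwise:

```python
# Hypothetical rates, chosen only to show how an odds ratio behaves.
def odds(p):
    """Convert a probability to odds (p vs. not-p)."""
    return p / (1 - p)

p_exposed = 0.02    # say, 2% of cupcake eaters develop some problem
p_unexposed = 0.01  # and 1% of non-eaters do

risk_ratio = p_exposed / p_unexposed                # 2.0 -- "twice the risk"
odds_ratio = odds(p_exposed) / odds(p_unexposed)    # about 2.02

print(f"risk ratio: {risk_ratio:.2f}")
print(f"odds ratio: {odds_ratio:.2f}")

# With rare outcomes the two are close. With common outcomes, the odds
# ratio inflates: compare rates of 75% vs. 50%.
print(f"common outcome: OR = {odds(0.75)/odds(0.5):.1f}, "
      f"but risk ratio only {0.75/0.5:.1f}")
```

The last line is the trap: a headline can truthfully report an odds ratio of 3 when the underlying risk only went up by half.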
Non-groundbreaking study: People who work on an unhappy relationship sometimes succeed
Just as you would with any “One simple trick” ad, you should be skeptical of a headline like this:
New Study Reveals Secret To Saving Your Marriage After Infidelity
Uh, yeah, no. HuffPo summarizes a recent UCLA/University of Washington study on couples who experienced infidelity. The study found:
Over the course of the study, 19 of the couples experienced infidelity by one or both partners. Of those couples who cheated, 74 percent revealed their infidelity before or during the study, while 26 percent kept it secret (it was later discovered by researchers).
That distinction became very important in determining which relationships survived. At the end of five years, 43 percent of couples who had revealed the infidelity were divorced compared to 80 percent of the couples whose infidelity was kept secret.
Leaving aside the time-shifting abilities couples apparently displayed here (if I’m reading that right, some couples apparently revealed before the study an infidelity that happened during the study), those numbers are almost surely wrong. More than 19 couples probably experienced infidelity over that time, and however far the actual number is above 19, those couples simply never told the researchers.
Let’s presume that some couples experienced infidelity and the researchers never knew, either during or after the study. That’s not a stretch; many couples don’t report infidelity, for a variety of reasons. Of these couples, some surely broke up and others stayed together. What would that do to the results? Potentially quite a lot.

The report says that 80% of couples who kept their infidelity secret broke up. That sounds like a more formal way of saying four out of five, because it is literally four out of literally five. Such a small sample should immediately raise your skepticism. If even just three other couples in the study experienced infidelity, didn’t tell the researchers about it, and stayed together, the divorce rate for secret-keeping couples would drop to 50%, a rate that (statistically speaking) isn’t meaningfully different from the 43% divorce rate among couples in the study who revealed their affairs.

Even leaving aside questions about how far research participants can be trusted to self-report cheating, this is also an example (common in couples research) of a mistaken assumption about causation. The researchers suggest that keeping infidelity secret often led a marriage to break up. But it is just as plausible that the opposite is true: the breakup caused the secret to come out to the researchers. Neither partner has much reason to keep infidelity secret once the relationship has ended.

Now, I can certainly tell you from my own clinical experience that for many couples who come to therapy, full disclosure of infidelity is vital to keeping the marriage together. But I can also tell you that there are couples who experience infidelity, never reveal it to their partners, never go to therapy, and still stay together. Is one route better than the other (or at least likelier to lead to a lasting and happier marriage after cheating)? Perhaps.
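The back-of-the-envelope arithmetic here can be checked in a few lines. The 14/5 split is inferred from the reported percentages, and the three unreported intact couples are my assumption, not data from the study:

```python
# 19 couples with infidelity; the reported 74%/26% split implies
# roughly 14 revealers and 5 secret-keepers.
revealers, revealer_divorces = 14, 6   # 6/14 is about 43% divorced
secret, secret_divorces = 5, 4         # 4/5 = 80% divorced

print(f"reported secret-keeper divorce rate: {secret_divorces / secret:.0%}")

# Hypothetical scenario: 3 more couples cheated, never told anyone
# (partner or researcher), and stayed married.
hidden_intact = 3
adjusted = secret_divorces / (secret + hidden_intact)
print(f"adjusted rate with {hidden_intact} unreported couples: {adjusted:.0%}")
print(f"revealer divorce rate for comparison: {revealer_divorces / revealers:.0%}")
```

Three invisible couples are enough to move the headline rate from 80% to 50%, which is how fragile a five-couple cell is.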
But this study, with its small sample sizes and its unjustified conclusions, doesn’t get us any closer to knowing. All it actually tells us is what we already knew: that sometimes, couples who choose to work on their relationship after revealing an affair do succeed in keeping their marriage together.

How to make a better therapist
It isn’t experience, and it isn’t the kind of training we usually do.
I’m blogging this week from the Evolution of Psychotherapy Conference in Anaheim, where many of the world’s leaders in psychotherapy gather to discuss the state of the field and share new and innovative research findings.
Therapy is tremendously effective. The average treated client ends up better off than 80% of untreated people with similar problems. Psychotherapy is as effective for mental health problems as coronary bypass surgery is for heart problems, and four times as effective as fluoride toothpaste at preventing dental problems. But we’re not getting better, Scott Miller said in yesterday’s opening workshop.
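That “better off than 80% of untreated people” figure corresponds, under a standard normal model, to an effect size (Cohen’s d) of roughly 0.85. The conversion below is my own illustration of that relationship, not a calculation from Miller’s talk:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

d = 0.85  # an effect size often cited for psychotherapy overall
# Fraction of the untreated distribution falling below the average
# treated client, assuming both groups are normal with equal spread:
u3 = normal_cdf(d)
print(f"d = {d} -> average treated client exceeds {u3:.0%} of untreated people")
```

Run it and the 80% claim and the effect size turn out to be two ways of stating the same finding.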
Miller summarizes our field’s efforts to figure this out this way: We started by looking at our treatment models, hoping to build better techniques to reach better outcomes. This is where many training programs still live: We teach manualized, empirically supported treatments in hopes that this will produce the most effective therapists. Except it doesn’t; the models formalize and standardize our work, but they don’t make us any more effective, according to outcome studies.
So we then moved to looking at common factors, those things that work across all theoretical models. But teaching common factors doesn’t seem to make us more effective, nor does it relieve the need to know specific models, since it is through those models that the common factors work. (My good friend Sean Davis has the leading text on common factors in couple and family therapy, and in it he makes the same point.)
So the focus then shifted to the study of outcomes rather than methods, and what we found was that some therapists consistently achieve better outcomes than their peers. That has moved the field into a close examination of excellence and expertise.
The findings there are striking, and I can’t do them full justice here — I’ll address the training issues in more detail in a future post. But the short version is this: We don’t spend a ton of time training students to do the things that actually seem to make them better therapists (“better” here meaning more effective). Miller cited a vast literature identifying deliberate practice — not just more hours seeing clients, but significant time focused on reviewing weaknesses in those sessions while not actively in them and then taking action to address those weaknesses — as the key task in making people more effective. What I hear in my therapy-teacher framework: We need to be doing a lot more videotaping.
# # #
Your comments are welcome. You can post them in the comments below, by email to ben[at]bencaldwell[dot]com, or on my Twitter feed.