Every excuse for California’s MFT Clinical Exam pass rate, debunked

At the August meeting of the California Board of Behavioral Sciences, I had a tense exchange with representatives from the state’s Office of Professional Examination Services about pass rates for the California MFT Clinical Exam. That pass rate has fallen off a cliff. For the first six months of the year, just 56% of those taking the test for the first time passed.

At the meeting, OPES presented on their exam development process, and argued that nothing meaningful had changed on their end. They and the BBS raised several hypotheses about both the current low pass rate and the drop in pass rate at the start of the year.

Over the past week, I investigated every one of the hypotheses offered. Not one of those hypotheses stands up to scrutiny.

Background

Here’s the data that caused me concern:

As you can see, the MFT Clinical Exam pass rate is unusually low, and experienced a significant drop between the fourth quarter of 2016 and the first quarter of 2017. OPES said during their presentation that they were unsure why the change happened. They haven’t tried to make the test more difficult. They also said they were not particularly concerned about the data above. They were aware of it, and it raised flags in their office, but not to the point where they felt that any particular action was warranted.

The full exchange is worth watching. I’ve embedded it below. (There’s also a more edited-down version available on our Facebook page.)

There are two issues in the data, and it can be helpful to think of them separately. First, there’s the pass rate itself, which at 56% for first-time examinees is highly troubling. The exam is supposed to test whether someone can safely practice independently, so a 56% pass rate means that in the eyes of the test, 44% of those first-time examinees would be a danger to the public if allowed to practice without supervision. That’s highly unlikely given the training and supervision requirements that prepare us for licensure.

Second, there’s the sudden drop in pass rates, which would raise eyebrows even if the rate itself were still fairly normal. For the pass rate to drop by 15 percentage points from one quarter to the next, when there are huge sample sizes, suggests that something meaningful changed in that time.

The OPES representatives, BBS members, and BBS staff proposed a handful of possible explanations for the data. Notably, only one of these (prep programs) serves as a possible explanation for the sudden drop in pass rate. All of the hypotheses are reasonable; there aren’t any mustache-twirling villains here. The BBS and OPES were, I think, simply brainstorming about why the data might look the way it does in the absence of a problem with the test itself. It’s just that I looked into each one, and not one holds up.

Unless a better explanation comes along, the problem is with the test.

It’s not because of the candidate pool

It’s easy to hear the OPES representatives raise this hypothesis and think they’re just shifting responsibility. But they have a fair argument, one that I myself made: A sudden drop like this really can only be explained in one of two ways. Either the test has somehow changed, or the people taking it have. OPES does not think the exam has changed, so they’re logically concluding that the problem lies with the people taking it.

Unfortunately, neither OPES nor the BBS nor anyone else has provided any data to support the idea that the candidate pool has meaningfully changed. As I noted at the board meeting, the size of that pool is so large (about 1,000 first-time examinees per quarter) and so diverse that it is highly unlikely that the overall traits of that pool would change enough in a single quarter to explain a sudden 15-point drop in pass rates. The candidate pool simply doesn’t change overnight.
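
To put a rough number on that: with roughly 1,000 first-time examinees in a quarter, ordinary sampling noise just can’t move a pass rate very far. Here’s a quick back-of-the-envelope sketch in Python; the ~71% baseline rate and the 1,000-person cohort are my assumptions based on the figures discussed in this post, not official numbers.

```python
import math

# Back-of-the-envelope: how far should a pass rate wobble from one
# quarter to the next by sampling noise alone?
# Assumed values, not official figures: a ~71% baseline pass rate
# and ~1,000 first-time examinees per quarter.
p = 0.71   # assumed baseline first-time pass rate
n = 1000   # assumed first-time examinees per quarter

# Standard error of a sample proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)
print(f"standard error: {se:.3f}")  # ~0.014, i.e., about 1.4 points

# A 15-point drop, measured in standard errors:
print(f"a 0.15 drop is {0.15 / se:.1f} standard errors")  # ~10
```

By that rough math, a 15-point drop is on the order of ten standard errors, which is not the kind of wobble a stable candidate pool produces on its own.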

There is one thing — and only one thing — we can say with certainty is changing about the candidate pool. See “It’s not the universities” below.

It’s not because of the people who have made a career out of being a registrant

BBS Executive Officer Kim Madsen suggested that a low overall pass rate might be due in part to those examinees who have been interns for a long time. That makes some logical sense; the longer it has been since you graduated, the more difficult it might be for you to recall or re-learn material from your graduate education. But the BBS’s own data shows that this theory doesn’t fly.

Back in 2008, they helpfully put together an anonymized data set of every MFT who had registered with the BBS from 2002-2004. The data set showed each person’s outcomes, including how long they spent as an intern, how many exams they took, and whether they had made it to licensure.

To get the cleanest look possible, I looked only at those from the data set who had actually reached licensure. I looked first at all three cohorts together, and then at just the 2002 cohort, since it would offer the most complete picture of a cohort’s progress over time.

There was, as it turned out, a correlation between the time someone spent as an intern and the number of exam attempts it took them to get to licensure. It was a pretty strong correlation, too — 0.51 for the three cohorts together, and 0.57 for the 2002 cohort by itself.

But here’s the thing: That correlation was in the opposite direction from what was hypothesized. People who had been interns for longer needed fewer attempts to get through to licensure.
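
If you want to run this kind of check yourself, the computation is simple. Here’s a minimal sketch; the tiny data set below is invented purely for illustration (the real analysis used the BBS’s anonymized 2002-2004 registrant data), so only the sign of the result means anything.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Illustrative stand-in for the BBS's anonymized registrant data.
years_registered = [2, 3, 3, 4, 5, 6, 7, 8]  # time spent as a registrant
exam_attempts = [4, 3, 4, 3, 2, 2, 1, 1]     # attempts to reach licensure

r = correlation(years_registered, exam_attempts)
print(f"r = {r:.2f}")  # negative: longer registration, fewer attempts,
                       # matching the direction found in the real data
```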

Admittedly, the data comes from 2008, and deals with folks who graduated up to 15 years ago. It would be interesting to see a set of fresher data. It’s also worth noting that even in that 2002 cohort, about 20% of them were still active as registrants (meaning they hadn’t yet taken an exam but also hadn’t dropped out of the process) when the report was put together in July 2008. So the true correlation is probably less strong than the numbers from that data set suggest.

But there’s no reason at all to think that the correlation has somehow flipped. And there’s certainly no reason to think that both the correlation flipped and the proportion of long-term registrants taking exams suddenly grew between late 2016 and early 2017.

It’s not the universities

Curriculum requirements for California MFTs grew from 48 graduate units to 60 for those starting their degrees on or after August 1, 2012. Some programs started with the new curriculum standards up to a year early. Gradually, those MFTs with the new, longer degrees are making their way through to the exams. With each new group of examinees, a larger proportion of them will have done the longer degree programs. In other words, each new cohort of examinees at this point should have more training than the one that came before, and each one should set a new record as the cohort with the most graduate training of any group to ever take California’s license exams.

While the idea that more training equals better knowledge is certainly questionable, it doesn’t make sense that more training would somehow lead to less knowledge. There are more than 70 degree-granting MFT programs in California, making it impossible for changes at one or two programs to explain the change in pass rates. No single university produced more than 7% of the exam pool for either MFT exam in 2016. Educational variables simply don’t change enough from one cohort to the next to explain the sudden and major drop in pass rates seen above.

It’s not the test prep programs

Both the BBS and OPES suggested that test prep companies may be giving examinees bad advice for taking the MFT Clinical Exam. If prep companies are advising examinees to read questions multiple times, for example, then those examinees risk running out of time on the test before they answer all the questions. Leaving aside for the moment the fact that an exam for mental health licensure should not be a test of reading speed, I asked representatives of the major prep companies about this, and checked with some of their customers. All of them reported that the companies encourage examinees to read questions carefully, but not to keep re-reading the same question. And none of them significantly changed how they advise folks to read questions in the second half of 2016, just before the precipitous drop in pass rates.

It’s not that it’s a new exam

The one historical precedent for a low pass rate on a multiple-choice MFT exam comes from the first half of 2004, when the state first instituted the MFT Written Clinical Vignette Exam. That exam, which replaced the prior oral exam process, used case vignettes where each case was paired with a handful of questions. In the first six months of that test, fewer than half of those taking it passed. But in that case, it truly was a brand new test, in a new format that was unfamiliar to examinees and the companies that prepare people for exams. Everyone was flying blind.

In that case, the pass rate came up in the next cycle and stayed up after that. Since the exam was short (at just 30 scored questions), there was a higher degree of variability in the pass rate for that exam from one cycle to the next. (See the chart at the bottom of this post.) But even with a short exam and high variability, after that first cycle, pass rates were always at least 60% for first-time examinees. From 2008 until the exam was discontinued at the end of 2015, the lowest six-month pass rate was 71%.

The new MFT Clinical Exam debuted in California to higher-than-expected pass rates, including a pass rate of 91% in the first three months it was offered. This has been explained by the BBS as possibly being due to examinees’ familiarity with the test. Many of those taking it had previously taken the Standard Written Exam, and at least some of the questions on the new Clinical Exam were actually the same as questions that had appeared on the SWE.

With the pass rate now having fallen as far as it has, one explanation offered is that this is because the test is new and unfamiliar, like the WCV was when it started. But the BBS and OPES can’t have it both ways. If the early pass rate on the Clinical Exam was so high because the test was familiar, it can’t now have dropped so far because it is somehow so different. And unlike the sea change that was the Written Clinical Vignette, the length and formatting of the new MFT Clinical Exam are quite similar to the old SWE. Simply put, it’s not that different of a test, in either content or structure.

It’s not actually normal

Madsen made reference to historical data, suggesting that in prior years, pass rates in the 60s were normal. As I said in the video, she’s right if you include repeat examinees (those who failed the test at least once and are re-taking it). But if you look at the cleaner data set provided by first-time test-takers, you can see just how unusual the current low pass rates are. (Note that prior to 2016, the BBS only provided data in six-month segments to match what was then the exam cycle. I’ve grouped quarters for 2016 and 2017 into half-years below.)

The current low pass rate on the California MFT Clinical Exam (“MFT CCE” on the chart) is highly unusual, even in historical context.

Since test pass rates do naturally vary over time, it’s also reasonable to ask whether the drop is just normal variation. I ran the stats by a former engineer-turned-math-teacher friend of mine, who told me that given the sample sizes involved, the odds of the drop from 2016Q4 to 2017Q1 occurring by random chance alone are about 400 trillion to one.*
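
For the curious, here’s a rough sketch of the kind of two-proportion test behind a claim like that. The quarterly pass rates and sample sizes below are illustrative assumptions on my part, so this won’t reproduce his exact 400-trillion-to-one figure, but it shows why the odds come out astronomical.

```python
import math

# Rough sketch of a two-proportion z-test for the quarter-over-quarter
# drop. The rates and sample sizes are illustrative assumptions, not
# official BBS figures.
p1, n1 = 0.71, 1000  # assumed 2016 Q4 first-time pass rate and cohort size
p2, n2 = 0.56, 1000  # assumed 2017 Q1 first-time pass rate and cohort size

# Pooled proportion under the null hypothesis that nothing changed
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"z = {z:.1f}")  # ~7.0: far outside normal quarter-to-quarter noise

# One-sided p-value from the normal distribution
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"p = {p_value:.1e}")  # on the order of 1e-12: astronomically small
```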

So what do we do?

Since I first started writing about this, many of you have contacted me to express your shared concern and frustration. The BBS will be addressing the issue again at their November meeting. In the meantime, some suggestions:

If your California MFT Clinical Exam is coming up, ignore this whole issue. Seriously. Study hard, and when you take the exam, kick its ass. It will do you no good to stress over exam-related politics. Come check in here after you have passed. I’ll be eager to welcome you as a licensed colleague.

Regardless of where you are in your career, do not send angry emails or make angry phone calls. Not to the BBS, not to OPES, not to anybody. Losing your cool will hurt efforts to fix this problem. It will only alienate the people you are contacting. It will make it less likely that they will work with us on positive changes to the exam process, and more likely that they will dig in their heels about blaming anyone who would make a handy scapegoat. I know it’s tempting. Don’t do it.

Do stay in touch with me, and let me know if you want to get involved. “Like” the Ben Caldwell Labs page if you haven’t already, as that’s where I always post new stuff first. New articles on this blog auto-post to the Facebook page the minute they go live.

If you want to reach me, I’m at ben@bencaldwell.com. There’s clearly more work to do here, so I’ll keep updating as new information warrants.

* Like any good stats person, he also cautioned that there are a lot of caveats here. The calculation assumes that the test is unchanged in terms of difficulty, and that the examinee groups are equivalent.