The California Board of Behavioral Sciences will discuss clinical exams this Friday. My colleague Tony Rousmaniere and I decided to dig into these exams, beyond just the horrifying report ASWB released this summer. (TLDR: Wildly disparate passing rates by race/ethnicity.) While I’ve previously gone on record as not a fan of clinical exams, they’re widely accepted. We figured we would follow where the data leads us. And so here it is:
After more than 50 years of use, there remains no evidence that clinical exams in mental health care improve the quality or safety of that care. Absent such evidence, our reliance on these exams is built on trust from professionals, policymakers, and the public.
With ample evidence of racial disparity in exam performance, credible and longstanding criticisms that have not been adequately addressed, and potential conflicts of interest among boards serving as both exam buyers and sellers, that trust is not deserved.
Our full white paper is here. While there are a lot more threads we could have followed, I think it captures well both the concerns about clinical exams that have been aired for literally decades and exam developers’ arrogance in swatting those concerns away as irrelevant or impossible to address.
The whole thing is worth a read, if you don’t mind my bias in saying so. The associated discussion at Friday’s BBS meeting (you can attend) should be an interesting one. Some more pull-quotes from the white paper for the TLDR crowd, emphasis mine:
Evidence suggests that clinical exams, rather than being passive recipients of existing disparities, add a unique layer of structural racism to the process of mental health licensure.
In defending its own processes, ASWB argues that “bias is less than 1%” on its exams. The significant racial disparities in exam performance [shown in the ASWB report this summer] suggest a major failure on the part of ASWB to identify bias where it has been plainly occurring.
Boards that are directly and financially involved in the development of license exams for the professions they regulate logically have reduced incentives to critically evaluate exam processes that their own organization has had a hand in developing. Boards’ roles in associations that develop exams, and rely on these exams for revenue, make it appear less likely that they would demand that testing be valid, equitable, transparent, and accountable, or take steps to abandon an exam even when that exam is known to be problematic.
[W]e crudely estimate that the clinical exam process directly costs examinees more than $16 million per year. These exam-related costs present a significant expense, and a significant hardship, for many individual examinees.
“General professional knowledge,” in this context, is circularly defined: Exam developers determine what constitutes general professional knowledge, and therefore they conclude that the exam they have developed evaluates general professional knowledge.
There’s certainly a lot to unpack on the topic. When I write something like this, I keep a separate file of writing that I cut from the document prior to finalizing. In this case, my file of cuts is as long as the white paper itself.
Hopefully, this is just the beginning of a long — and long-overdue — reckoning for the structural racism that exists in mental health licensing. Nowhere is that more plainly evident than in clinical exams for licensure. I don’t think it’s too much of a spoiler for me to share the very last line of the white paper:
An evaluation for licensure that lacks predictive validity does not protect the public. One that lacks predictive validity while producing inequitable outcomes is indefensible.