California CE providers routinely violate ad regulations

A review of 15 advertisements for continuing education events for California MFTs and LCSWs finds that all of them violate state regulations by leaving out required information. If no one is complaining, do we need the regulations?                                                     

Continuing education providers in California are expected to offer some basic information about themselves and their events when advertising, just as we licensed marriage and family therapists are expected to attach our licensure information to any advertising we do. Ideally, these disclosure requirements prevent less-than-reputable folks from putting on less-than-worthwhile CE events to swindle therapists out of money.

But no one is following the requirements.

And that’s a literal “no one,” at least in our sample. Not one advertisement followed the letter of the law.

===

We (my research assistant and I) reviewed ads from every issue in volume 21 (2009) of The Therapist magazine, published by the California Association of Marriage and Family Therapists every two months. We want to emphasize that CAMFT is not in any way responsible for policing outside ads; it is the CE providers who are solely responsible for their own advertising content. We found a total of 15 unique ads, many of which appeared in multiple issues; we did not count duplicates in our calculations.

Section 1887.9 of the California Code of Regulations (it can be viewed in the BBS laws & regulations booklet, page 142 by the on-page numbering, page 149 of the PDF) requires all CE providers to:

ensure that information publicizing a continuing education course is accurate and includes the following:

(a) the provider’s name;

(b) the provider number, if a board-approved provider;

(c) the statement “Course meets the qualifications for _______ hours of continuing education credit for MFTs and/or LCSWs as required by the California Board of Behavioral Sciences”;

(d) the provider’s policy on refunds in cases of non-attendance by the registrant; and

(e) a clear, concise description of the course content and objectives.

Providers seem to be simply ignoring this. All 15 of the unique ads we reviewed were for in-person CE events; we excluded ads for online or mail-in courses, since those usually advertised the provider rather than a specific course, and thus are probably not subject to the above requirements.
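To make the five-item test concrete, here is a minimal Python sketch of the check we effectively applied to each ad. It is purely illustrative; the field names are our own shorthand, not anything from the regulation:

```python
# The five items section 1887.9 requires in CE course advertising,
# using our own shorthand field names (illustrative only).
REQUIRED_ITEMS = (
    "provider_name",
    "provider_number",            # if a board-approved provider
    "qualifications_statement",   # the "meets the qualifications" language
    "refund_policy",
    "course_description",
)

def missing_items(ad):
    """Return the required items an ad leaves out (empty list = compliant)."""
    return [item for item in REQUIRED_ITEMS if not ad.get(item)]

# A hypothetical ad that includes only a name and a course description:
ad = {"provider_name": "Example CE Provider", "course_description": "Ethics update"}
print(missing_items(ad))
# ['provider_number', 'qualifications_statement', 'refund_policy']
```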

We found that among the 15 ads we reviewed, providers routinely ignored all of the required information except the provider’s name:

Item                                   Ads containing   Ads missing   % of ads containing
1. Provider name                             14               1               93%
2. Provider number                            7               8               47%
3. “Meets qualifications” statement           0              15                0%
4. Refund policy                              2              13               13%
5. Course description                         8               7               53%

Another way to look at this is to see what proportion of ads had 2, 3, 4, or all 5 of the required pieces of information:

Items present    Ads   % of all ads
One or more       15       100%
Two or more       11        73%
Three or more      3        20%
Four or more       2        13%
All five           0         0%
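For anyone who wants to double-check the arithmetic, here is a quick Python sketch; the counts simply restate the first table above:

```python
# Counts from our review of 15 unique ads (restated from the table above).
TOTAL_ADS = 15
item_counts = {
    "Provider name": 14,
    "Provider number": 7,
    '"Meets qualifications" statement': 0,
    "Refund policy": 2,
    "Course description": 8,
}

for item, n in item_counts.items():
    print(f"{item}: {n}/{TOTAL_ADS} = {n / TOTAL_ADS:.0%}")
# Provider name: 14/15 = 93%
# Provider number: 7/15 = 47%
# ... and so on, matching the percentages in the table.
```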

The requirements exist to ensure that when licensees hand over their money to CE providers, they know what they are signing up for and that those hours will legitimately count toward their continuing education requirement. But the BBS does not enforce those requirements unless it receives a complaint, and such complaints are rare: in a 7-month span reported here, the BBS received just three complaints against continuing education providers. (And those were not likely to have been about advertising, judging by prior complaint data.) For comparison, there are more than 2,000 approved CE providers in the state.

===

The relative rarity with which the requirements are followed, combined with the relative lack of complaints against continuing education providers, raises an important question: Are these regulations really helping anybody?

Certainly the BBS has better things to do with its enforcement unit than go after such minor omissions. Maybe it would be better to take the requirements off the books entirely.

Or, maybe the current landscape is just fine. After all, it allows licensees who feel they have been duped by unscrupulous CE providers to complain, and gives the BBS leverage to act against the provider who failed to advertise appropriately. It’s the CE-advertising equivalent of jaywalking: we all want the cops to focus their work on more important things, and it’s really only a problem when somebody gets hurt.

===

If you’re wondering, the flowers in the picture are forget-me-nots. Seemed appropriate.

Teen texting study an example of a researcher misleading the media

A new study connects the texting habits of teenagers with drug use and other risky behavior. Contrary to media reports, the study did not show that texting caused the teens’ risk-taking.

Teenagers who send more than 120 text messages a day are more likely than their peers to engage in a variety of risky behaviors, including sexual activity, smoking, drinking, and drug use. That much we can agree on. It was the key finding of a Case Western Reserve University School of Medicine study presented this week.

Media coverage was predictably breathless:

But there is a big problem with each of the stories linked above. As compelling as these stories are, texting did not cause poor health or risky behaviors in this study. More precisely, the study did not show a cause-effect relationship. It found correlations — associations between certain behaviors that tend to rise and fall together. It did not say what causes what.

If we know that one behavior (texting, in this case) is more common among people who also do another behavior (let’s use drinking), then we can say the two behaviors are correlated. But that leaves at least three very different possibilities when it comes to cause and effect:

  1. Texting causes drinking.
  2. Drinking causes texting.
  3. Some other thing (lack of parental supervision, maybe?) causes both drinking and texting.

A correlational study (like this one) does not tell us which of those three possibilities is most likely (the third strikes me as by far the most plausible). And reporters understand that conclusions about correlation are not especially enticing news stories. “This one thing is related to this other thing, but we do not really know what causes either one of them” makes for a lousy article.
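To see how the third possibility can produce exactly this kind of finding, here is a small, purely illustrative Python simulation. Everything in it (the variable names, the numbers) is invented for demonstration: a hidden “supervision” factor drives both texting and drinking, and the two end up strongly correlated even though neither causes the other in the model.

```python
import random

# Purely illustrative: a hidden third factor ("low supervision") drives both
# texting and drinking. Neither behavior causes the other, yet they correlate.
random.seed(0)

def simulate_teen():
    low_supervision = random.random()                          # hidden factor, 0 to 1
    texts = 40 + 100 * low_supervision + random.gauss(0, 10)   # texts per day
    drinks = 1 + 4 * low_supervision + random.gauss(0, 0.5)    # drinks per week
    return texts, drinks

xs, ys = zip(*(simulate_teen() for _ in range(1000)))

# Pearson correlation, computed by hand to avoid dependencies.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
print(f"correlation between texting and drinking: {cov / (sx * sy):.2f}")
# Prints a strong positive correlation (around 0.85 in this toy setup),
# even though texting and drinking never influence each other in the model.
```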

So reporters sometimes go beyond what a study actually shows, and pull a cause-effect relationship out of thin air. In essence, they pick their favorite out of the three possibilities listed above, and run with it. They do this in spite of a complete lack of data supporting their conclusion over the other cause-effect possibilities.

That seems to be what happened here. What is unusual in this case is the degree to which the study’s lead author actively promoted the made-up conclusion.

Even though the press release about the teen-texting study largely uses the right terms in describing the results (labeling behaviors as being “associated with” each other), Scott Frank, the lead author of the study, was remarkably cavalier in asserting a cause-effect relationship his study did not demonstrate. He is quoted in that same press release as saying:

“When left unchecked, texting and other widely popular methods of staying connected can have dangerous health effects on teenagers.”

The medical school where the study was conducted is also encouraging this unsupported conclusion. The link to this study from the Case Western School of Medicine home page currently reads “Hyper-texting and Hyper-Networking Pose New Health Risks for Teens.”

Frank’s promotion of a conclusion his own data does not support prompted an unusually direct rebuke from John Grohol, the CEO of PsychCentral, whose own site had reported on the study earlier. Grohol wrote that Frank’s conclusions about texting having negative health effects are (emphasis Grohol’s)

all pure crap. You could just as easily write the following headlines:

Teens Who Smoke, Drink Also Text a Lot
Outgoing Teens Like to Do Things Outgoing Teens Like to Do
Teens Who Enjoy Sex Like to Text Too!

Scott Frank, MD, MS should be ashamed of himself.

I’m with Grohol on this. For Frank to say that texting can have negative health effects is, as Grohol put it, “sloppy at best, and unethical at worst.” Frank is promoting a conclusion his study simply does not support. And some media outlets appear to be all too happy to run a story confirming parents’ worst fears about teenagers and technology, even when the story and the data do not match.

===

In deference to my journalist friends, it must be noted that the examples of poor media coverage above are far outweighed, in both quantity and quality, by the many stories covering this study that ignored Frank’s quotes and reported his results accurately. Search “teenagers texting drinking” on Google’s news site and you will find far more headlines using phrases like “linked to” or “associated with” than you will find “causes.” Kudos to those writers (of both the stories and the headlines, since they are often not the same person) who understand the difference.

Recapping the 2010 AAMFT Annual Conference

There was a lot to talk about at the just-concluded 2010 AAMFT Annual Conference in Atlanta, where more than 1,700 clinicians and researchers from around the country gathered to share the latest ideas in treatment. This year’s theme was “Marriage: Social and Relational Perspectives,” and the conference deserved its jump in attendance. Hitting some of the high points:

* For my money, Stephanie Coontz should be a keynote speaker every year. Last year, she talked about time-use studies and the changing face of American families. This year she gave a lively summation of her great book, Marriage, a History, putting modern marriage into a larger context. Next year’s theme will be “The science of relationships,” and I hope there’s a way to bring her into that, too. The woman could make the history of the paper bag engaging.

* The last plenary speaker, John Witte Jr., was not quite as advertised, but started great dinner-table conversation. He had promised a speech on “Marriage, Religion, and the Law,” which could have been wonderful — a more conservative counterpoint to the arguments others made in favor of same-sex marriage. Ultimately, he barely mentioned religion at all. Which was too bad — as I’ve argued before, there is a reasonable debate to be had about the role of religion in marriage (and specifically whether religious therapists should refer out same-sex couples they do not feel they can work with supportively). I really, really wish someone could put together a respectful dialogue on the topic. But for what it turned out to be, Witte’s speech was valuable. His proposals for legal-system remedies to the changes in family formation and dissolution in the US were far-fetched, but started some great conversations. We all want parents to be responsible for their choices, but how do you have a legal system that best balances supporting families in need with punishing those who are irresponsible? I loved the variety of ideas about that just at my own dinner table; I’m sure similar discussions were happening at plenty of others.

* We’re making great strides in the effective treatment of military veterans and their families. MFTs are ideally trained to help keep military marriages and relationships strong (or to end them more peacefully when necessary), and there was a whole track at the conference dedicated to just this kind of work. The timing could not have been better: finally, after years of struggle with the implementation process, the Department of Veterans Affairs has a job description specifically for marriage and family therapists.

It’s always refreshing to renew old connections and make new ones at the conference, and I especially enjoyed the opportunity to present with some of my faculty colleagues from the Alliant MFT program. My heartfelt thanks to everyone who made the conference such a success. I can’t wait for next year!

That infidelity-and-income study? Don’t believe it.

A recent study, presented at an American Sociological Association conference to a fawning media reception (NPR / Salon), tells us that men who make less than their wives or live-in girlfriends are five times more likely to cheat. It’s bogus. Here’s why.

While commentators have been stumbling over themselves to determine what the study’s findings mean about gender, marriage, and society, no one seems to be bothering to notice that the study itself appears pretty useless. The major conclusion, linking income and infidelity, has a number of problems, not the least of which is that everyone — myself included — who wasn’t at the conference is relying on a press release and subsequent media reports about it. Such reports are notoriously unreliable, often drawing ideas from generous and/or speculative interpretations of the results rather than the study itself. That said, here are three of the reasons I’m particularly skeptical:

  1. Do the math. The National Longitudinal Survey of Youth, upon which the study is based, followed about 9,000 individuals — surely a healthy sample size. But the infidelity study examined only those who were married or with a live-in partner for more than a year, which is a much smaller subset. And of those, only seven percent of men and three percent of women actually fessed up to cheating during the study’s six-year period. So, let’s be generous and say that two-thirds of the NLSY group met the relationship-status criteria (n=6,000). And we’ll presume that roughly half are of each gender (3,000 men and 3,000 women). That leaves us with about 210 men (seven percent of 3,000) who have fessed up to infidelity in this survey. Of those, it is not clear from the media reports how many were in situations where the male earned less than his partner; other recent research suggests about a third, or fewer than 80 of those reporting infidelity, were in such a relationship. And remember, we’re being generous because we do not have the actual numbers (the arithmetic is sketched in code just after this list). To be sure, 210 male cheaters is still a decent sample, and it could be enough to draw meaningful conclusions about links between infidelity and income (among other factors). But it still is not a lot. In fact, it probably is a lot less than the number of participants in the survey who actually cheated. Remember…
     
    Updated 2010-08-20: LiveScience.com (which has more details on the methodology, and as an added bonus, commentary from Stephanie Coontz) is reporting that only 3.8 percent of men, and 1.4 percent of women, admitted to cheating in the study. That’s not exactly true; on average, 3.8 percent of men and 1.4 percent of women admitted to cheating in any given year of the six-year study, at least according to the press release.

  2. …People lie. A major income discrepancy in the relationship may be a good reason for men to simply be more honest about their cheating. Sure, you could argue, if the wife/girlfriend finds out then the gravy train ends. But if the man is in a relationship for the money, and not emotionally committed, why on earth would he lie to an anonymous survey about his cheating? He has little incentive to lie, and no cognitive dissonance to resolve by telling the truth. On the other hand, if he is emotionally engaged, and is in the relationship for reasons other than money, he may find it safer (and more palatable) to hide any previous infidelity. If all that sounds awfully speculative, well, that’s the point. People lie on studies like this, and we do not always know who will be most likely to lie or why. Yet commenters (and, too often, the researchers themselves, as seems to be the case here) treat the findings as truth in spite of their huge flaws, and then seek to divine an explanation.

  3. Account for other factors, like age, education, and religion, and the income-infidelity link vanishes. That inconvenient fact is actually in the press release, but of course, no one is paying attention to it. “Does earning more than your man make him more likely to cheat?” the chatterers are asking. In a word, no: the income issue appears to (at best, and even this has big holes) correlate with, but not be a cause of, cheating.
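For what it’s worth, here is the back-of-envelope arithmetic from item 1 above as a tiny Python sketch. Every input is one of the generous guesses stated there, not a figure from the study itself:

```python
# All inputs are the generous assumptions from item 1, not study data.
nlsy_sample = 9000                   # total NLSY participants
eligible = nlsy_sample * 2 / 3       # assume 2/3 met the relationship criteria
men = eligible / 2                   # assume half of those are men
male_cheaters = men * 0.07           # 7% of men admitted infidelity
lower_earners = male_cheaters / 3    # ~1/3 earned less than their partners

print(round(male_cheaters))   # 210 men admitting infidelity
print(round(lower_earners))   # 70, i.e. "fewer than 80"
```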

The trouble with any study of undesirable behavior that relies on self-reports is that it is impossible to know what we’re really studying — the behavior itself, or the act of reporting it. Only a more carefully (and expensively) constructed study could parse that out. In the meantime, move on. Nothing new to see here.

From the AAMFT Research Conference: Does marriage education work?

Marriage education (also known as relationship enhancement or RE) has gotten a big, warm spotlight lately. A recent big-deal writeup in the Washington Post hit on the high points: Marriage education programs are big business, they have a lot of federal money supporting them, and there’s not a lot of research on them. Do they work?

That was the basic question tackled yesterday by Howard Markman at the AAMFT Research Conference in Alexandria, VA. In general, it looks like the research base for such programs is growing but still fairly small relative to the number of RE programs in existence. Markman and his colleagues located 30 studies examining 21 different programs since 2002, meaning that a large number of programs offered at the annual SmartMarriages conference have not been researched at all. The research that does exist is usually promising, but not definitive: programs are generally shown to produce short-term improvements in couple satisfaction and communication skills. However, there have not been studies addressing whether these programs actually do what they set out to do: reduce the risk that couples will eventually divorce over the long term.

The federal government has been running a huge study that should be able to offer clearer answers. Involving eight sites and more than 5,000 couples around the country, the Building Strong Families (BSF) project sponsored by the Administration for Children and Families is testing voluntary RE programs offered to unmarried couples who are expecting or recently had a baby. The project just released its 15-month follow-up data, and the news is not good:

When results are averaged across all programs, RE did not make couples more likely to stay together or get married. In addition, it did not improve couples’ relationship quality.

As Markman was quick to note, the news was not all bleak. It would be more accurate to say that couples didn’t finish the programs than to say that the programs didn’t work: with the exception of the project’s Oklahoma site (which performed much better than other sites in a variety of ways), only 9% of couples completed at least 80% of the relationship enhancement curriculum offered to them. That’s a big problem. Where couples did tend to finish their program — at the Oklahoma site — they were more likely to still be together at the 15-month follow-up, and experienced a number of other measurable improvements as well. Furthermore, only the Oklahoma site used a program that included most of PREP, one of the best-known and better-researched relationship enhancement programs around. Other sites used less established curricula.

The study will be releasing its 3-year follow-up data in 2012. As Markman noted, the 15-month follow-up may simply be too early to see the hoped-for impact on marriage that these programs would offer; by definition, preventing marriage breakup is a long-term goal. It is possible that changes will emerge over time. Until they do, however, RE programs will continue to face skepticism. That skepticism is a good thing if it drives more research that develops programs that really do meet their preventive goals.