After reading this, you’re 250% more likely to understand odds ratios

Odds ratios give researchers eye-catching (and media-catching) results. They also rely on readers’ confusion about what they mean. They’re becoming fashionable in research and in media precisely because they are so eye-catching, but don’t be fooled by a weak finding dressed up to look like a strong one. Here’s a quick primer on how odds ratios work.

Let’s say I publish a book. Just for fun, let’s call the book America’s Cupcake Scourge. (The thrust of the book will be that places like Sprinkles put way too much frosting on their cakes.) I want my book to sell well, so I start by doing a little bit of market research: I give my book to five men and 10 women to read.

Sadly, the dudes do not think I am much of a writer — only two of them like my book. The women seem to like it better — seven of them are fans.

So in each sex, we have three people who do not like the book:

        Likey   No Likey
Men       2        3
Women     7        3

Looking at these in a traditional way, we could accurately say that 40% of men (two out of five) and 70% of women (seven out of ten) like the book. Using those same numbers, we might also say that women are almost twice as likely to enjoy my book as men. Seventy is, I am fairly sure, almost twice as much as forty.

Looking at odds ratios, though, we come up with a much more dramatic tale. The ratio of likers to dislikers among men is 2:3; among women it is 7:3. An odds ratio simply divides one group’s odds by the other’s, and since the second term of each ratio here is the same, that amounts to comparing the first terms. (In essence, it asks “for every three book-dislikers of your sex, how many book-likers are there?”) Since it takes seven book-liking women, but just two book-liking men, to balance out three book-dislikers, the odds ratio here is 3.5 (seven divided by two).
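To make that arithmetic concrete, here is a minimal Python sketch of the calculation (the counts come from the table above; the variable names are just mine):

```python
# Counts from the table: 2 of 5 men and 7 of 10 women liked the book.
men_like, men_dislike = 2, 3
women_like, women_dislike = 7, 3

odds_men = men_like / men_dislike        # 2:3, or about 0.67
odds_women = women_like / women_dislike  # 7:3, or about 2.33

odds_ratio = odds_women / odds_men       # (7/3) / (2/3) = 7/2
print(round(odds_ratio, 2))              # 3.5
```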

So a researcher using an odds ratio might say something like “Women are 3.5 times as likely to enjoy Ben Caldwell’s book as men.” Or to phrase it a bit differently, they might say “You are 250% more likely to enjoy Ben Caldwell’s book if you are a woman than if you are a man.”

Two hundred and fifty percent!

Except if you go back to comparing our original percentages from the same sample (70 percent of women versus 40 percent of men liked the book, remember), the most you could accurately say is that women are 75% more likely to enjoy the book than men.
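For contrast, that plain-percentage claim, sometimes reported as a risk ratio or relative risk, is just one proportion divided by the other. A quick sketch using the same counts:

```python
# Relative risk (risk ratio) from the same sample: compare proportions directly.
p_women = 7 / 10   # 70% of women liked the book
p_men = 2 / 5      # 40% of men liked the book

risk_ratio = p_women / p_men
print(round(risk_ratio, 2))            # 1.75 -> women are 75% more likely
print(round((risk_ratio - 1) * 100))   # 75, versus the odds ratio's "250% more"
```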

In that example, even though the odds ratio distorted things, we still had a pretty big difference. The women did indeed appear to like my book much more than men. Where things get particularly misleading with odds ratios is around smaller differences. Let’s say my sample was a bit bigger, and far more balanced in both participants and results: 10 out of 20 men, and 11 out of 21 women, liked it.

        Likey   No Likey
Men      10       10
Women    11       10

By raw percentages, that works out to 50% of the men and about 52% of the women liking the book. That difference would not meet any reasonable standard of statistical significance; it’s virtually nothing. But taking the odds-ratio approach, we see that the ratio of book-likers to dislikers is 11:10 for women and 10:10 for men, giving us an odds ratio of 1.1 when comparing the sexes. So the odds-ratio crowd would say something like “You are 10% more likely to enjoy Ben Caldwell’s book if you are a woman than if you are a man.” They aren’t lying: that is an accurate reporting of the odds ratio. It’s just that the odds-ratio statistic inflates a minute difference so that it sounds like a much bigger one.
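The same sketch applied to this second table shows how little is actually there (again, the counts are from the table and the names are mine):

```python
# Second table: 10 of 20 men and 11 of 21 women liked the book.
men_like, men_dislike = 10, 10
women_like, women_dislike = 11, 10

odds_ratio = (women_like / women_dislike) / (men_like / men_dislike)
risk_ratio = (women_like / 21) / (men_like / 20)

print(round(odds_ratio, 2))   # 1.1  -> "10% more likely" in odds-ratio terms
print(round(risk_ratio, 2))   # 1.05 -> about 5% more likely by raw proportions
```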

Wikipedia gets this right with its own well-done odds ratio example: if 90 out of 100 men in a sample drank over the past week, and 20 out of 100 women drank, a straightforward way to say this would be that men were 4.5 times as likely to drink as women. But an odds-ratio approach, dividing the men’s odds of 90:10 (that is, 9) by the women’s odds of 20:80 (0.25), would instead report that men have 36 times the odds of being drinkers compared to women.
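Running the same sort of sketch on Wikipedia’s numbers makes the gap between the two measures obvious:

```python
# Wikipedia's example: 90 of 100 men and 20 of 100 women drank in the past week.
men_drink, men_abstain = 90, 10
women_drink, women_abstain = 20, 80

risk_ratio = (men_drink / 100) / (women_drink / 100)                    # 0.9 / 0.2
odds_ratio = (men_drink / men_abstain) / (women_drink / women_abstain)  # 9 / 0.25

print(round(risk_ratio, 2))   # 4.5  -> men were 4.5 times as likely to drink
print(odds_ratio)             # 36.0 -> but men had 36 times the odds of drinking
```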

Researchers are not dumb.* But trying to make weak results look stronger to get attention is hardly new. In fact, it is only controversial when it is blatant. Odds ratios provide the impressive-sounding numbers that suggest your study found big effects even when it didn’t, and that grab the resulting media attention. Should you come across such fanciful claims, you can safely laugh at their funhouse-mirror appearance; just accept that the numbers are distorted, look for an effect size instead, and understand that many science reporters aren’t yet onto the trick. Give them time.

* – I am a researcher, and I am occasionally dumb. So I can’t really say this as an absolute. But most researchers, most of the time, are not dumb.