Frankly, I wouldn't even inflict it on you if it weren't for the "boxes" at the end of the article, which give some examples of how misleading statistics can be. For example:
One set of such studies, for instance, found that with the antidepressant Paxil, trials recorded more than twice the rate of suicidal incidents for participants given the drug compared with those given the placebo. For another antidepressant, Prozac, trials found fewer suicidal incidents with the drug than with the placebo. So it appeared that Paxil might be more dangerous than Prozac.
But actually, the rate of suicidal incidents was higher with Prozac than with Paxil. The apparent safety advantage of Prozac was due not to the behavior of kids on the drug, but to kids on placebo — in the Paxil trials, fewer kids on placebo reported incidents than those on placebo in the Prozac trials. So the original evidence for showing a possible danger signal from Paxil but not from Prozac was based on data from people in two placebo groups, none of whom received either drug.
Get that? If you compared the "statistical significance" of the two studies, you might come to exactly the opposite conclusion from what the evidence actually showed. One set of trials just happened to have more incidents among the kids in the control group, the ones who didn't get the drug at all.
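If it helps to see the arithmetic, here's a toy sketch of the same trap. The numbers are invented for illustration (they are not the real trial data); the point is only that each drug gets compared to its own placebo group, so a trial with an unusually quiet placebo group can make its drug look scary, and vice versa.

```python
# Toy illustration with invented numbers (NOT the real trial data).
# Within each set of trials, the drug group is compared to its own
# placebo group, and that comparison can flip the apparent ranking.

def rate(events, n):
    return events / n

# Hypothetical "Paxil-style" trials: drug rate is more than twice the placebo rate.
paxil_drug    = rate(7, 200)   # 3.5% of kids on the drug reported incidents
paxil_placebo = rate(3, 200)   # 1.5% of kids on placebo did

# Hypothetical "Prozac-style" trials: drug rate is slightly below the placebo rate.
prozac_drug    = rate(8, 200)  # 4.0% on the drug
prozac_placebo = rate(9, 200)  # 4.5% on placebo

print(f"Paxil vs its placebo:  {paxil_drug:.1%} vs {paxil_placebo:.1%} "
      f"(ratio {paxil_drug / paxil_placebo:.1f}x)")
print(f"Prozac vs its placebo: {prozac_drug:.1%} vs {prozac_placebo:.1%} "
      f"(ratio {prozac_drug / prozac_placebo:.1f}x)")

# The within-trial comparisons make Paxil look like the dangerous one...
print("Looks worse in its own trials:",
      "Paxil" if paxil_drug / paxil_placebo > prozac_drug / prozac_placebo else "Prozac")

# ...but the kids actually taking Prozac had MORE incidents than the kids taking Paxil.
print("Higher rate among kids on the drug itself:",
      "Prozac" if prozac_drug > paxil_drug else "Paxil")
```

In this made-up version, the "safer-looking" drug actually has the higher incident rate among the kids who took it; the difference is entirely in the placebo groups.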
Or try this:
For a simplified example, consider the use of drug tests to detect cheaters in sports. Suppose the test for steroid use among baseball players is 95 percent accurate — that is, it correctly identifies actual steroid users 95 percent of the time, and misidentifies non-users as users 5 percent of the time.
Suppose an anonymous player tests positive. What is the probability that he really is using steroids? Since the test really is accurate 95 percent of the time, the naïve answer would be that the probability of guilt is 95 percent. But a Bayesian knows that such a conclusion cannot be drawn from the test alone. You would need to know some additional facts not included in this evidence. In this case, you need to know how many baseball players use steroids to begin with — that would be what a Bayesian would call the prior probability.
Now suppose, based on previous testing, that experts have established that about 5 percent of professional baseball players use steroids. Now suppose you test 400 players. How many would test positive?
• Out of the 400 players, 20 are users (5 percent) and 380 are not users.
• Of the 20 users, 19 (95 percent) would be identified correctly as users.
• Of the 380 nonusers, 19 (5 percent) would incorrectly be indicated as users.
So if you tested 400 players, 38 would test positive. Of those, 19 would be guilty users and 19 would be innocent nonusers. So if any single player’s test is positive, the chances that he really is a user are 50 percent, since an equal number of users and nonusers test positive.
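For anyone who likes to see the arithmetic spelled out, here's a short Python sketch of the same calculation, using the numbers from the example (5 percent prevalence, 95 percent detection rate, 5 percent false-positive rate). The second half skips the 400-player head count and applies Bayes' theorem directly; it gives the same 50 percent.

```python
# The steroid-test example, worked two ways: head counts, then Bayes' theorem.

prior = 0.05        # prior probability: fraction of players assumed to use steroids
sensitivity = 0.95  # chance a real user tests positive
false_pos = 0.05    # chance a non-user tests positive anyway

players = 400
users = players * prior              # 20 users
nonusers = players - users           # 380 non-users

true_positives = users * sensitivity       # 19 users correctly flagged
false_positives = nonusers * false_pos     # 19 non-users wrongly flagged

p_user_given_positive = true_positives / (true_positives + false_positives)
print(f"Positive tests: {true_positives + false_positives:.0f} "
      f"({true_positives:.0f} users, {false_positives:.0f} non-users)")
print(f"P(user | positive test) = {p_user_given_positive:.0%}")   # 50%

# Same answer via Bayes' theorem, without counting heads:
# P(user | +) = P(+ | user) * P(user) / P(+)
p_positive = prior * sensitivity + (1 - prior) * false_pos
print(f"Bayes' theorem gives: {prior * sensitivity / p_positive:.0%}")  # also 50%
```

Notice that if you lower the prior (say, only 1 percent of players use steroids), the same "95 percent accurate" test gives a much lower probability of guilt; that's the whole point about needing the prior.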
Wild, huh? So what does all this mean? If you want to read the whole article and explain it to me, in very simple language, feel free. But I'll tell you what it doesn't mean. It doesn't mean that we can't trust anything scientists say. It doesn't mean that we're not learning more all the time - in every field of study. It doesn't mean that our gut is just as good at determining the truth as scientific research. Not at all.
But I would take most studies that rely on "statistical significance" - and especially meta-analyses - with a grain of salt. I'd be cautious about concluding anything based on research that shows only a slight, statistical effect. (I'd be even more cautious about accepting the accuracy of research as reported in the popular press, since the media have needs - and problems - of their own.) And certainly, I'd want multiple independent studies backing up any preliminary findings.
None of this is easy, and it's particularly difficult when we're talking about human health. We can't do research on human beings without being very careful not to cause harm. I would never want to change that, but it does make determining the truth more difficult than it might otherwise be. Statistics is a tool, but it's a tool that can easily be misused - and even more easily misinterpreted. Lying with statistics is easy, even if it isn't always deliberate.