How can we educate ourselves and others against pseudoscience? That is, how can we help students (as early as high school) see the doubts beneath some research findings and their surrounding hype?
First, let’s define pseudoscience. It consists not only in sloppy science (or scientific processes distorted to fit an agenda) but also in the many misrepresentations of scientific findings. One researcher might be using sound research methods and reporting the findings accurately and cautiously, but someone else might sweep up, distort, and misapply the results. Pseudoscience, then, consists in the extreme distortion of scientific research, whether during the research itself or afterwards.
Pseudoscience has many industries behind it: TED, pharmaceuticals, book publishing, etc. It appeals to a general craving for easy solutions to difficult problems. The vitamin D fad, the power pose, the introvert lemon juice test, fractal zealotry, grit, himmicanes: each promises some kind of relief from life’s difficulties. Yet there are skeptics out there who would look at any of these findings and call them pure nonsense.
I am a longtime “nonsense” crier. But it’s easy to call things nonsense. How do we help ourselves and young people discern the differences among real, exaggerated, and outright false findings?
One quick answer would be to “have them learn statistics.” But that cannot be done both quickly and well. The courses themselves have to be designed to alert students to research uncertainties and to problems with practical applications of research findings.
This morning, over coffee, I returned to a Probability and Statistics online course offered by Carnegie Mellon University. It’s fun to plod and plot through, and it’s easy (at least in the lessons I completed). It’s great to have resources like this online. But it carries the danger of building false confidence. Each problem concludes with a summary and possible applications; early on, I find the summaries a bit too pat and the applications too hasty.
For instance, there’s a module on the relationship between students’ high school and college GPAs. The whole point of the module is to show a linear relationship. But the scatterplot also shows things that the lesson does not mention: within the dataset, college GPAs are lower on the whole than high school GPAs, and students with a given high school GPA will have a wide range of college GPAs.
At the end, the authors of the course offer a possible policy application: “We should intervene and counsel students while still in high school. Knowing that students with low GPAs are at risk for not doing well in college, colleges should develop programs, such as peer or faculty mentors, for these students.” First of all, who is “we”? Second, this recommendation misses part of the picture: even students with high GPAs in high school are likely to do less well (GPA-wise) in college. Maybe students on the whole are underprepared for the demands of college; maybe the standards are different. Instead of targeting only those with low GPAs, it might make sense to help all students prepare for the challenges they will encounter.
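To make the complaint concrete, here is a small Python sketch (using numpy, with entirely invented numbers, not the course’s dataset). It generates hypothetical high school and college GPA pairs that do have a genuine linear relationship, then prints the two facts the lesson glosses over: the overall drop from high school to college, and the wide spread of college GPAs among students with nearly identical high school GPAs.

    import numpy as np

    # Hypothetical data, not the CMU course's dataset: college GPA tracks
    # high school GPA linearly, but runs lower on average and with plenty of noise.
    rng = np.random.default_rng(0)
    hs_gpa = np.clip(rng.normal(3.4, 0.4, 500), 2.0, 4.0)
    college_gpa = np.clip(0.7 * hs_gpa + 0.4 + rng.normal(0, 0.4, 500), 0.0, 4.0)

    # The linear trend the module emphasizes is real...
    slope, intercept = np.polyfit(hs_gpa, college_gpa, 1)
    print(f"fitted slope: {slope:.2f}, intercept: {intercept:.2f}")

    # ...but so is the overall drop from high school to college...
    print(f"mean HS GPA: {hs_gpa.mean():.2f}, mean college GPA: {college_gpa.mean():.2f}")

    # ...and the wide range of college GPAs among students with nearly the
    # same high school GPA.
    band = college_gpa[np.abs(hs_gpa - 3.5) < 0.05]
    print(f"college GPAs for HS GPA near 3.5: {band.min():.2f} to {band.max():.2f} (n = {band.size})")

The numbers are made up, but the pattern is the one the module’s scatterplot actually shows: a trend, an offset, and a great deal of individual variation, none of which supports targeting only the low-GPA students.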
That’s just one example. My point is that you can take a statistics course like this (which is fun and in some basic ways helpful) and emerge with false confidence, not only in your knowledge, but in your ability to apply it to actual situations. It’s possible that the later units go into more subtleties, but even the early lessons should warn against simplistic or hasty conclusions.
So then, what can be done?
Any introductory statistics course, as well as any rhetoric and logic course, should address the problem of pseudoscientific findings and applications. Students should become aware of the problem and know some ways of spotting it.
But how do you spot it?
Some ways do not require knowledge of statistics. If you look behind many of the “research has shown” claims, you find immediately that the actual research has shown something a little different. If you look further, you may find questions, doubts, and criticisms surrounding that research. So the first step would be to compare the “research has shown” claim with the abstract of the study, and the abstract with the introduction and conclusion of the actual paper. You may see discrepancies right away. If students learn to do this, they will have taken one step toward informed skepticism.
Next, one can read blogs that analyze current (and past) studies and point out their flaws. They may give insight into certain kinds of errors and exaggerations. My favorite is Andrew Gelman’s blog. He often discusses the “garden of forking paths”: the tendency to comb one’s data for statistical significance (with or without intent to p-hack). As he notes, this does not have to involve deliberate “fishing” for significance; there are far too many ways to spot a statistically significant interaction, one that may not have been part of your original hypothesis, without intending to cheat. The point is to be aware of this potential problem and take precautions against it. Members of the public, too, once they understand the nature of the problem, can spot some of its symptoms.
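A toy simulation can make the base rate vivid. The Python sketch below (assuming numpy and scipy are available; the scenario is invented, not anything from Gelman’s blog) analyzes pure noise with twenty arbitrary ways of splitting the sample and counts how often at least one comparison comes out “significant” at p < 0.05.

    import numpy as np
    from scipy import stats

    # Invented scenario: an outcome with no real effect, examined through twenty
    # arbitrary subgroup splits (the "forks"). How often does something look
    # significant anyway?
    rng = np.random.default_rng(0)
    n_studies, n_subjects, n_forks = 1000, 100, 20

    studies_with_a_hit = 0
    for _ in range(n_studies):
        outcome = rng.normal(0, 1, n_subjects)        # pure noise
        for _ in range(n_forks):
            group = rng.integers(0, 2, n_subjects)    # an arbitrary way to split the sample
            _, p = stats.ttest_ind(outcome[group == 0], outcome[group == 1])
            if p < 0.05:
                studies_with_a_hit += 1
                break

    print(f"null 'studies' with at least one p < 0.05: {studies_with_a_hit / n_studies:.0%}")

With twenty forks and no real effect at all, roughly two-thirds of these null “studies” turn up something that looks significant, which is why an unexpected but “significant” interaction deserves extra suspicion.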
The point is not that the social sciences (or medical or other sciences) are nonsense, or that errors in research or publicity reflect deficiencies in the researchers. Rather, there are far too many rewards for exciting research results, and far too few calls for caution.
Beyond all of this, one can educate toward an appreciation of complexity and uncertainty. I remember my high school teacher saying, again and again, “There’s much more to it than that.” Those reminders stuck with me; I began to see questions behind the summaries. Takeaways have a place in education and life, but there is something interesting behind them, something that extends beyond them and calls them into question. Hold your conclusions up to proper doubt. Recognize that difficulty is not calamity; it can make things more interesting than they would otherwise be.
Image credit: Cornelis Bega, The Alchemist, 1663, oil on panel (The J. Paul Getty Museum).