“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
These famous words of Max Planck are not very comforting. Indeed, they are rather unnerving: Science is the one discipline where truth and knowledge thrive — or so we’ve been told. Look more closely, though, and the human tendency to be subjective and opinionated begins to appear.
By some estimates, only 15 to 40 percent of published scientific studies will ever be replicated. For a discipline that stresses experimental reproducibility, that is an incredibly discouraging figure. And there are fairly obvious reasons for this: Scientific journals would rather publish new findings, findings that might be important enough to change the world, than replications of old findings. Publications want to be cutting-edge.
On the other side, you also have the scientists who want to be cutting-edge. No scientist will choose to re-run old experiments when she could be working on the cure for cancer. And those who have made a substantial discovery do not want to see their research undone. The result? A report that says 47 of 53 cancer studies are not replicable. I wonder if that study itself is replicable.
To make matters worse, this problem runs rampant beyond the natural sciences: The lack of replication studies has been noted in education research, economics and psychology as well. No field that relies on the scientific mode of inquiry is safe.
And indeed, there is a lot of contention and opinion involved. If a study shows one result, and a repeat shows the opposite, which is the correct conclusion? The usual statistical tests use a 5 percent probability threshold: If an outcome at least as extreme as the one observed would occur less than 5 percent of the time when the effect we hope to find is absent, then the result is declared “significant,” meaning it is unlikely, though not impossible, to be a product of random chance alone. But this can be misleading.
Say you’re testing a cancer drug, and in one of your clinical trials you see a decrease in the symptoms of cancer. The decrease is large enough that it would have only a 3 percent chance of occurring if the drug were merely acting as a placebo. Following the usual convention, you conclude that this is a significant finding — the drug really does help mitigate the symptoms. But before you send your drug application to the FDA, you remember what that 3 percent actually means: A drug with no real effect would still produce results this good in roughly one of every 33 trials. (The FDA’s rule of thumb, by the way, is usually two positive trials to pass its reproducibility test.) That last step is one many of us scientists forget to take. We send in our data, we publish our articles and the finding is labeled a “breakthrough.” But is it really a breakthrough?
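The arithmetic behind that 5 percent threshold can be sketched with a short simulation (hypothetical numbers, assuming a simple one-sided z-test on patient improvement rates): even when a drug is a pure placebo, roughly one trial in twenty will clear the significance bar by chance alone.

```python
import random

# Illustrative sketch only: simulate many clinical trials of a "drug"
# that is actually a placebo, and count how often a one-sided
# 5 percent significance threshold flags an effect anyway.
random.seed(0)

def looks_significant(n=100):
    # Under the placebo, each patient "improves" with probability 0.5.
    improved = sum(random.random() < 0.5 for _ in range(n))
    # Normal-approximation z-test against the placebo rate of 0.5.
    z = (improved - 0.5 * n) / (0.25 * n) ** 0.5
    return z > 1.645  # one-sided 5 percent cutoff

trials = 10_000
false_positives = sum(looks_significant() for _ in range(trials))
print(false_positives / trials)  # close to 0.05
```

Run enough placebo-only trials, in other words, and “significant” results appear on schedule — which is exactly why a single positive study, however striking, is not the same thing as a replicated one.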
Of course, we would like to think so. We want to see cancer cured; we want to see neutrinos defying Einstein’s theory of special relativity. We are just like the scientific journals that want to publish cutting-edge articles. We forget the importance of reproducibility and, rather than confirming our findings, let our theories prevail by outliving their critics.
Margaret Hansen is a rising senior in the College. Disorderly Conduct appears every other Friday.