President Obama weighed in on the newly ignited debate over misinformation last week, suggesting Tuesday in his farewell address that we all need to hold ourselves to higher standards for facts and evidence if we want to improve public discourse:
“In the course of a healthy debate, we prioritize different goals, and the different means of reaching them,” the president said. “But without some common baseline of facts, without a willingness to admit new information and concede that your opponent might be making a fair point, and that science and reason matter, then we’re going to keep talking past each other.”
If only it were that simple.
We all know what the president was getting at — he probably (and rightfully) wanted to take a subtle shot at political opponents who refuse to accept the reality of climate change. But this rhetorical device — invoking the power of “science” — is something we hear all too often in politics (and on both sides of the aisle). It might be fair to criticize people for denying mounting evidence on a given issue, but holding up “science” as some repository of knowledge that needs to be broadly respected goes directly against the spirit of research. It’s also naive.
Anyone who’s spent time sifting through the latest trends in academic research has read ad nauseam about the many problems facing science. Between the statistical hacking, the bogus academic journals, the publication bias, the replication crisis and the tendency of scientists to fall into their own echo chambers, it’s clear that amping up our reverence for the academic world of scientific research isn’t the solution to our misinformation epidemic.
A lot of flawed research gets published and publicized without ever being questioned, and it’s a far more pervasive problem than you might imagine. Researchers suggest that a majority of findings published in studies are false — and replication efforts bear the claim out. One project published in Science attempted to replicate 100 psychology studies but managed to reproduce the original results in only 36 of them.
This is by no means a new phenomenon. Academics have been arguing about how best to resolve these problems for well over a decade, and many have proposed some interesting solutions. Some journals are experimenting with models in which authors submit their methodologies without results in order to prevent publication bias. Others have tried to encourage transparency by attaching “badges” to articles that offer access to accurate supporting data, and it seems to have worked.
These efforts are encouraging, but in the age of the internet and “fake news,” do we really expect every researcher to hold to the same high standards? The U.S. government alone spends nearly $70 billion on nondefense research each year. With so much money at stake, it’s simply unrealistic to expect all scientists to act purely for the advancement of knowledge.
In addition, we too often bastardize science to fit our own tribalistic tendencies. How many times in a political debate have you heard someone argue, “Well, the science is on my side”? How many times have you said it without having ever read a peer-reviewed article on the topic? “Science” used as a political tool is often just a substitute for blind elitism: We disdain those who challenge points of broad consensus even though we rarely can explain the science ourselves.
It’s easy to criticize people who reject the science on well-settled issues like vaccine safety or GMOs. But stopping the discussion there prevents us from engaging in critical thinking and looking for the truth behind our many unanswered questions. We need to accept the fact that science is actually really, really difficult — and that “facts” can easily be twisted or weaponized. Accepting major claims like global warming should require a lot of time and mental work to sift through the evidence. And when we don’t have a clear answer to a question, we should be more open to the idea of withholding an opinion entirely.
Robert Gebelhoff contributes to The Washington Post’s Opinions section.