By Faye Flam
“You have the facts; now share them,” reads an advertisement urging me to buy a gift subscription to Scientific American. Perhaps we’re seeing a backlash against Kellyanne Conway’s infamous “alternative facts,” or a response to fears that the US has entered a post-truth era. One way or another, facts have become a hotter commodity than coconut water and kale.
But how do any of us know for sure that the facts we believe are the real ones? Should you go with what your smartest friends post on social media? What you were taught in school? What the newspapers report? Wikipedia? The pages of Scientific American?
The bad news, scientists warn us, is that our brains are problematic places to seek reliable facts. In a new book titled “The Enigma of Reason,” for example, the authors—two cognitive scientists—assert that humans use reason more often to bolster their existing ideas (and egos) than to find the truth. In another recent book, “The Knowledge Illusion,” cognitive scientists Philip Fernbach and Steven Sloman show how most people think they know much more than they actually do.
Princeton University psychologist Daniel Kahneman was a pioneer in cataloging the glitches in human thinking. When I got a chance to talk to him about facts last month, he said we humans tend to think we’re basing our beliefs on solid evidence when we’re really just cherry-picking anecdotes or simply telling stories to support our existing beliefs. We think we’re making evidence-based decisions, but often we’re doing it backwards.
This poses a puzzle. If humans have so much trouble gazing past our navels, how have we managed to learn so much about the origin of our species, the birth of our planet, and the infancy of the cosmos? How has science zoomed in to show the tiny working parts of atoms, and zoomed out to show us a universe of 40 billion galaxies? Over the decades, scientists have found all kinds of rules and patterns in the natural world that have led to life-saving drugs, airplanes, MRI imaging, devices that give us directions instantly by bouncing signals off satellites, and more. Perhaps there’s something about the scientific method, or methods, that helps us transcend the misleading tendencies of the human mind.
This seemed like a question for a philosopher of science, so I called Tim Lewens of Cambridge University, author of the book “The Meaning of Science.” His answer was encouraging for seekers of truth, though it might surprise some students of science.
For one thing, he said, there’s no universally agreed-upon definition of the scientific method. “It’s hard to come up with a categorization of the scientific method that fits with all the sciences and nothing but the sciences,” he said. One standard definition, which you might have encountered in school, says the scientific method consists of “systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.” But not all scientists agree. For the last few months I’ve been asking researchers from various fields to define the scientific method, and I’ve received dozens of different interpretations—with varying degrees of agreement with (or scorn toward) the official version.
Instead, different fields have come up with different methods—such as the use of certain statistical tests, or X-ray crystallography, or injecting drugs into mice. Some scientists only work in the realm of theory. What they share is a common ethic: It’s understood that good science requires asking honest questions and seeking answers in a careful way.
Over time, science has become more modest in its promises, said Lewens. After Isaac Newton formulated his laws, scholars thought his physics was the absolute truth and that the world couldn’t possibly work any other way. Then Einstein came along and burst the bubble with relativity, which better conforms to experimental tests and offers a different understanding of the nature of space and gravity. Your GPS wouldn’t work without it.
Scientists rarely talk about facts—a word that implies something immutable. They talk about measurements, which are subject to error and open to interpretation, and theories, which are provisional and therefore amenable to upgrades or wholesale revision. To many, it feels self-evident that science is progressing toward truth, even if we’re not there yet or may never get there.
The belief that science is approaching truth is called scientific realism, Lewens said. Not everyone subscribes to it—some philosophers and even a few scientists think new scientific ideas are useful but not necessarily steps toward a larger truth. But Lewens counts himself as a scientific realist, because in so many areas new ideas do build on or refine earlier ones. Science does seem to be getting somewhere.
And while lots of studies point to human fallibility, that doesn’t preclude our ability to do good, truth-seeking science. “It’s a presupposition of these studies that at least some of us can get the right answer some of the time,” he said. The right answer takes slower, more effortful thinking.
This kind of thought is what Kahneman describes in his book “Thinking, Fast and Slow.” For some people, the slower thinking system is on permanent vacation, but science is a group endeavor and some people in any field are likely to employ those more careful, smarter, slower modes of cognition.
Kahneman told me he believes the methods of science do nudge people to put the evidence first, ahead of their preconceived ideas. But there is no single method for doing this. He said different methods of doing science sometimes lead people to different conclusions—or, if they are dogmatic enough, different facts. In psychology, for example, giving volunteers tests via computer can yield different results from very similar experiments where a human administers questions.
Such alternative facts are even more striking today in cosmology, where there’s now a controversy about how fast the universe is expanding. Some astronomers using supernova explosions as a measuring system get one answer, while others using microwaves from the big bang arrive at another.
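To get a rough sense of the size of that disagreement, here is a minimal sketch. The two values below are approximate, rounded figures of the kind each camp has published for the universe’s expansion rate (the Hubble constant); the exact numbers vary by study and are my assumption, not the article’s:

```python
# Approximate Hubble-constant values (km/s per megaparsec) -- rounded,
# illustrative figures, not precise published results.
h0_cmb = 67.4  # inferred from big-bang microwave background measurements
h0_sn = 73.0   # inferred from the supernova distance ladder

# Fractional gap between the two measuring systems
discrepancy = (h0_sn - h0_cmb) / h0_cmb
print(f"{discrepancy:.1%}")  # prints "8.3%"
```

A gap of roughly eight percent may sound small, but each side reports measurement uncertainties well below it, which is why the two answers cannot both be right and the dispute remains live.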
What makes the astronomers’ alternative facts different from Kellyanne Conway’s is that the scientists on both sides used legitimate methods to arrive at their conclusions, and they all recognize that someone must have it wrong. If coming up with an inaugural crowd estimate were a game like guessing the number of jelly beans in a jar, then Conway could have offered an alternative guess—but not an alternative fact.
The assertion that set off the whole “alternative facts” debate was this: Trump’s inauguration crowd was much smaller than the one that gathered for Obama in 2009. The method behind the fact was aerial photography. But using a method that has not been specified, Sean Spicer, Trump’s press secretary, said, “This was the largest audience to ever witness an inauguration, period.” You don’t have to be a scientific realist to pick the side that’s likely to be closer to the truth.