Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them David H. Freedman (Little, Brown)
Willie Sutton was famously quoted as saying that he robbed banks “because that’s where the money is.” We go to experts because they’re the ones with the expertise. Sure, we figure out that water is wet and the floor is hard on our own, but it’s not long after we’re up and walking that we start relying on outside sources for information. Electricity turns the lights on, seat belts save lives, the earth is round—for most of us, most of the time, the basic assumptions of our lives rest on expert knowledge. Which is to say that a lot of what we think we know isn’t knowledge at all, but faith. When the laptop stops working, most of us call the tech guy out of childish hope, just as a medieval peasant with a poisoned well might look for a witch to burn.
Of course, the tech guy and other modern experts are more effective than some rustic auto-da-fé—right? David H. Freedman’s Wrong may make you wonder. Experts, Freedman argues quite convincingly, don’t know what they’re talking about most of the time. The rest of the time, they really don’t know what they’re talking about. Should you bother with the breathing part of CPR, or should you just do chest compressions? Is it better to get nine hours of sleep a night or five? Does vitamin D fend off cancer? The experts’ guesses are only marginally better than yours or mine.
Freedman offers an impressive array of reasons for the unreliability of experts. He points out, for instance, that business gurus tend to base their pronouncements on a cognitive bias called the “halo effect,” which involves looking for the reasons behind, say, Toyota’s rise in the carmaker’s own characteristics—its production system or culture or leadership. In fact, argues Jerker Denrell—a Stanford University, um, expert on the failures of business advice—corporate success often has more to do with factors that can’t be accounted for in advance, like random chance or a small initial lead over competitors. All of which means that business advice books are mostly snake oil.
But that’s not exactly news. More surprising is Freedman’s frontal assault on practices in the hard sciences. He notes, for example, that scientific studies often involve tests on mice, which may provide all kinds of insights into rodents but tend to yield very little useful information about humans. Experts, he says, also make shit up: Freedman points to numerous surveys of scientists showing that 10 to 20 percent of them have been involved in or observed research fraud. In just one example he cites, “a 2005 survey of the authors of clinical drug trials reported that 17 percent of the respondents personally knew of fabrication in a research study within the past ten years, with 5 percent having been directly involved in a study in which there had been fabrication.”
Freedman argues that scientists are motivated to commit fraud because they “need to publish impressive findings to keep their careers alive.” As experts, they’re under pressure to supply marketable answers. An expert who says, “Well, I don’t really have any idea why Company X is so successful, but it’s probably luck” isn’t likely to snag a book contract or make tenure. One who writes a paper saying, “We’ve found no link between eating apples and growing a third eye in the small of the back” isn’t as likely to get that paper published as a competitor with more sensational results. “An idea that seems highly likely to be true, that is utterly plausible, is probably not going to seem exciting,” Freedman writes. “For this simple reason, we ought to be most skeptical of the most interesting, exciting ideas—the very ones that journals are eager to publish.”
Freedman doesn’t conclude from this that expert opinion is totally worthless. On the contrary—learned studies of such things as seat belts and cigarettes have been instrumental in saving many lives. While “you should never trust experts blindly,” he writes, “there are many situations in which you should and even must trust them, and in which to do otherwise is nothing other than reckless behavior.”
That sounds like good, commonsense advice, but Freedman’s own research undercuts it. He shows that scientific studies are often inconclusive, misinterpreted, sweetened, or even outright fraudulent. Given all that, how can I be sure that the experts won’t eventually decide that seat belts actually kill more people than they save? Or that cigarettes cure lung cancer? Or that greenhouse gases are lowering rather than raising the earth’s temperature?
Freedman himself expresses some skepticism about the expert consensus on global warming—but he does it discreetly, in a couple of passing comments rather than a sustained look at the science. His reticence is understandable. Who wants to be dismissed as a crank?
Still, it’s not hard to hear the specter of crankery wailing and thrashing just below the measured murmur of reasoned discourse. Freedman keeps his questioning within careful bounds: individual scientists and experts may make errors, but the scientific method itself is at least theoretically sound. And yet once you start looking closely at the farrago of confusion and nonsense Freedman uncovers, well, you don’t have to push very hard before you find yourself wondering how far wrong the experts went on the Kennedy assassination or 9/11—or, for that matter, evolution. Once you’ve walked off the cliff, you just keep falling.
Philosopher Paul Feyerabend did. In his 1975 treatise Against Method, Feyerabend famously declared science itself to be little more than a fairy tale. In science, he wrote, “Scepticism is at a minimum; it is directed against the view of the opposition and against minor ramifications of one’s own basic ideas, never against the basic ideas themselves. Attacking the basic ideas evokes taboo reactions which are no weaker than are the taboo reactions in so-called ‘primitive societies.’ . . . The similarities between science and myth are indeed astonishing. . . .”
“Primitive” thinkers, Feyerabend added, “showed greater insight into the nature of knowledge than their ‘enlightened’ philosophical rivals.”
His point wasn’t that truth doesn’t exist, but that our relationship with it is always partly dependent on faith and consensus.
What’s needed, then, is a little humility. Science is a useful tool, but it hasn’t made us superhuman. We—that is, you, me, and the experts whose voices bounce around in our skulls—don’t actually know how the world works. Setting people up as godlike arbiters is asking to be disappointed. Instead, we’ve got to recognize that we know what we know only through a glass darkly—and that wrong describes not just the experts but everybody, and almost all the time.