Economist, author and university lecturer Dylan Evans discusses why managing the risk of a nightmare scenario can be counterproductive in the following excerpt from his upcoming book “Risk Intelligence: How to Live with Uncertainty.”
There’s something mesmerizing about apocalyptic scenarios. Like an alluring femme fatale, they exert an uncanny pull on the imagination. That is why what security expert Bruce Schneier calls “worst-case thinking” is so dangerous. It substitutes imagination for thinking, speculation for risk analysis and fear for reason.
One of the clearest examples of worst-case thinking was the so-called “1% doctrine,” which Dick Cheney is said to have advocated while he was vice president in the George W. Bush administration. According to journalist Ron Suskind, Cheney first proposed the doctrine at a meeting with CIA Director George Tenet and National Security Advisor Condoleezza Rice in November 2001.
Responding to the thought that Al Qaeda might want to acquire a nuclear weapon, Cheney apparently remarked: “If there’s a 1% chance that Pakistani scientists are helping Al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis ... It’s about our response.”
By transforming low-probability events into complete certainties whenever the events are particularly scary, worst-case thinking leads to terrible decision making. For one thing, it’s only half of the cost/benefit equation. “Every decision has costs and benefits, risks and rewards,” Schneier points out. “By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.”
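Schneier’s point about the half-completed cost/benefit equation can be made concrete with a toy expected-value calculation. This sketch is purely illustrative: the probabilities, harms, and response costs are made-up numbers, chosen only to show how treating a 1% risk “as a certainty” can cost more than the risk it removes.

```python
# Toy expected-value comparison. All numbers are hypothetical,
# chosen only to illustrate why both sides of the ledger matter.

def expected_cost(p_event, harm, response_cost, risk_reduction):
    """Expected total cost of a policy that spends `response_cost`
    to cut the probability of a harm by the fraction `risk_reduction`."""
    residual_p = p_event * (1 - risk_reduction)
    return response_cost + residual_p * harm

P = 0.01      # a 1% chance of the feared event
HARM = 1_000  # cost if the event occurs (arbitrary units)

# Three stances toward the same 1% risk:
do_nothing       = expected_cost(P, HARM, response_cost=0,  risk_reduction=0.0)
proportionate    = expected_cost(P, HARM, response_cost=5,  risk_reduction=0.8)
treat_as_certain = expected_cost(P, HARM, response_cost=50, risk_reduction=0.99)

print(f"do nothing:       {do_nothing:.2f}")        # 10.00
print(f"proportionate:    {proportionate:.2f}")     # 7.00
print(f"treat as certain: {treat_as_certain:.2f}")  # 50.10
```

Under these (invented) numbers, a proportionate response beats doing nothing, but responding as if the worst case were certain costs five times more than simply absorbing the expected harm. Worst-case thinking, by ignoring the response cost, makes the third option look like the only responsible one.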
An epidemic of worst-case thinking broke out in the United States in the aftermath of the Three Mile Island accident in 1979. A core meltdown in the nuclear power station there led to the release of radioactive gases. The Kemeny Commission Report, created by presidential order, concluded that “there will either be no case of cancer or the number of cases will be so small that it will never be possible to detect them,” but the public was not convinced. As a result of the furor, no new nuclear power plants were built in the United States for 30 years. The coal- and oil-fueled plants that were built instead, however, surely caused far more harm than the meltdown at Three Mile Island, both directly via air pollution and indirectly by contributing to global warming.
The impact of the Three Mile Island accident was probably reinforced by the release, 12 days before the meltdown, of The China Syndrome, a movie in which a catastrophic accident at a nuclear power plant is averted by the courageous actions of the protagonists. The movie’s title is a direct reference to a worst-case scenario—the most dangerous kind of nuclear meltdown, where reactor components melt through their containment structures and into the underlying earth, “all the way to China.”
The question of whether environmental impact statements should include discussion of worst-case scenarios is still the subject of intense debate. Environmental groups tend to advocate such discussion, in part to grab the attention of the general public. The U.S. government originally required discussion of worst-case scenarios but later changed its mind, apparently on the grounds that such discussions tend to provoke overreactions. This is a move in the right direction; if the chance that the worst case will happen is extremely low, the benefits of considering it will be far outweighed by the unnecessary fear that such consideration would provoke. Like radiation, fear damages health and is costly to clear up.
As Schneier observes, “Any fear that would make a good movie plot is amenable to worst-case thinking.” With that in mind, he runs an annual “Movie-Plot Threat Contest.” Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with. The purpose of this contest is “absurd humor,” but Schneier hopes that it also makes a point. He is critical of many homeland security measures, which seem designed to defend against specific “movie plots” instead of against the broad threats of terrorism. “We all do it,” admits Schneier. “Our imaginations run wild with detailed and specific threats. We imagine anthrax spread from crop dusters. Or a contaminated milk supply. Or terrorist scuba divers armed with almanacs. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.”
Psychologically, this all makes a certain basic sense. Worst-case scenarios are compelling because they evoke vivid mental images that overwhelm rational thinking. Box cutters and shoe bombs conjure up just such images. “We must protect the Super Bowl” packs more emotional punch than the vague “We should defend ourselves against terrorism.”
Fear alone is, however, not a sound basis on which to make policy. The long lines at airports caused by the introduction of new airport security procedures, for example, have led more people to drive rather than fly, and that in turn has led to thousands more road fatalities than would otherwise have occurred, because driving is so much more dangerous than flying. Fear of “stranger danger” has also led to huge changes in parental behavior over the past few decades, which may have a net cost for child welfare. That, at least, is what the sociologist Frank Furedi argues in his challenging book Paranoid Parenting.
Parents have always been worried about their kids, of course, but Furedi argues that their concerns have intensified in a historically unprecedented way since the late 1970s, to the extent that these days virtually every childhood experience comes with a health warning.
The result is that parents look at each experience from the point of view of a worst-case scenario and place increasing restrictions on what their kids can do; in the past few decades, for example, there has been a steep decline in the number of children who are allowed to bicycle to school and in the distance from home that kids are allowed to go to play unsupervised. There has also been an increase in the amount of time that parents spend on child rearing; contrary to the common wisdom that parents have less time for their children these days, a working mom today actually spends more time with her kids than a nonworking mom did in the 1970s.
I am not aware of any studies that have attempted to measure the psychological changes that have driven this cultural shift. It would be interesting to measure the risk intelligence of parents by, for example, comparing their estimates of certain risks with objective data about the frequency of those risks. Anecdotal evidence, however, suggests that it might be hard to gather such data.
The problem with paranoid parenting is that, like other cases of worst-case thinking, it ignores half of the cost-benefit equation. In worrying about stranger danger, for example, parents focus on the extreme but improbable risk of a child molester attacking or abducting their children and fail to weigh it against the more mundane but far more likely benefits of exercise, socialization and independence that children gain from being allowed greater freedom. To put it another way, worried parents tend to focus on the risks of giving their children greater leeway and fail to consider the risks of not doing so. The long-term developmental consequences of paranoid parenting include isolation from peers, infantilism and loss of autonomy. Unlike the chance of abduction, though, those risks are highly probable.
Paranoid parenting is also evident in popular attitudes toward fever in children. Fever is one of the most common reasons that parents seek medical attention for their children, a habit that is almost certainly due to the widespread belief that fever is a disease rather than the body’s way of fighting infection. In 1980, the physician Barton Schmitt coined the term “fever phobia” to designate the numerous misconceptions parents had about fever. Schmitt found that 63% of caregivers were worried a great deal that serious harm could result from fever, and 18% believed that brain damage could be caused by a mild fever of 102 degrees Fahrenheit—both of which views were, even then, wildly exaggerated by the standards of proper medical evidence.
Two decades later, a team of pediatricians from Johns Hopkins Bayview Medical Center in Baltimore found that attitudes had not changed much. Concern about fever and its potential harmful effects was still leading parents to monitor their children excessively and give them inappropriate treatments, including sponging them with cool water (which can cause significant shivering as a result of the body attempting to stay warm) or even alcohol (which can cause dehydration and hypoglycemia, particularly in young children). Parents were even more dangerously liberal with fever-reducing drugs than they had been two decades before, giving high doses of acetaminophen and ibuprofen, which placed their children at undue risk of toxicity. Interestingly, 29% of the people surveyed said that they followed the recommendations of the American Academy of Pediatrics, despite the fact that no such policy existed.
Schneier tells a story about a security conference he attended where the moderator asked a panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack, and so on. Schneier didn’t get to give his answer until the afternoon. Finally, he stood up and said, “My nightmare scenario is that people keep talking about their nightmare scenarios.”
Copyright © 2012 by Dylan Evans. From the forthcoming book RISK INTELLIGENCE by Dylan Evans to be published by Free Press, a Division of Simon & Schuster, Inc. Printed by permission.