The True Character of Risk

Michael J. Mazarr


June 1, 2016


It is now taken for granted that the 2007-2008 financial crisis revealed an urgent need for improved risk management processes, both in the financial sector and more broadly. But of course risk management was a deeply entrenched, highly institutionalized function long before the crisis. In a 2006 survey of business leaders, Deloitte Consulting found that the vast majority of firms had a chief risk officer and enterprise risk management processes. Most of these companies proclaimed themselves either very or extremely confident in their risk procedures. In the years before the crisis, risk management had also become a highly quantified, probabilistic discipline, incorporating metrics such as value at risk to project, within a seemingly narrow range of confidence, the probability and magnitude of a damaging loss.
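To see what such a metric involves, the sketch below shows value at risk computed by historical simulation, written in Python. Everything here, from the function name to the synthetic return series, is illustrative rather than any firm's actual method; real implementations rest on position-level data and far more elaborate models.

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """One-period value at risk (VaR) via historical simulation:
    the loss level that past returns exceeded only (1 - confidence)
    of the time, expressed as a positive fraction of portfolio value."""
    losses = -np.asarray(returns)  # convert returns to losses
    return float(np.quantile(losses, confidence))

# Synthetic daily returns, invented purely for illustration.
rng = np.random.default_rng(seed=42)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

print(f"1-day 99% VaR: {historical_var(daily_returns):.2%} of portfolio value")
```

The output looks reassuringly precise, but its reliability depends entirely on whether the historical sample resembles the future, a judgment the calculation itself cannot supply.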

Partly as a result, by 2007, many financial firms had great faith in their ability to manage risk. “A belief had arisen during the late 1990s that bankers had so improved their risk-management and loss-prediction techniques that regulators could rely on them and their financial models to develop capital standards,” wrote Gretchen Morgenson and Joshua Rosner in their 2011 book Reckless Endangerment: How Outsized Ambition, Greed and Corruption Led to Economic Armageddon. Reserves could be cut, leverage increased and potentially dangerous financial instruments developed, all because procedural risk management could be relied upon to sound the necessary warnings.

Of course, many of these same companies would soon lead the global economy off a cliff—in significant measure due to the failure of their risk management processes. In response, companies reviewed their risk approaches, added sophisticated new models, and tried to shore up the systems that had just been shown to have serious gaps. They thought, understandably, that it was the processes and models that had failed, and that effective risk management was all about identifying the right risks—mostly external—and assigning them the proper values. This could be called the computational theory of risk. Its basic assumption is that accurate estimates of quantifiable risks will provide senior leaders the information they need to make strategic choices.

But the experience of firms in the 2007-2008 crisis suggests that a different theory might better capture the true source of risk management failures. There is abundant evidence that the fundamental problem in comprehending risk before the crisis was not in processes or models. Risk can be categorized and (partly) measured by such things in ways that help inform senior leader judgment. But risk failures are mostly attributable to human factors—things like overconfidence, personalities, group dynamics, organizational culture and discounting outcomes—that are largely immune to process. In dealing with risk, human factors will defeat procedures every time.

This is the perceptual theory of risk management. The critical foundation for managing risk, this theory suggests, is careful attention to the factors that shape how organizations and senior leaders perceive risk. It is now well established that bias and cognitive dynamics affect the behavior of organizations, but the overwhelming tendency is still to view risk as something objective that can be calculated and precisely mitigated. Yet if the resulting efforts ignore the human factors involved, they will accomplish nothing. The solution, to the limited degree one is available, is to shape the habits, mindsets and decision-making styles of organizations so as to mitigate human and organizational factors, not environmental or technical ones. Risk identification can be a procedural and technical endeavor, but truly managing risk and preventing failures is primarily an institutional, cultural and human task.
Sources of Risk Failure

In the late 1990s, one of the most-admired companies in the United States established a sophisticated risk management unit that soon garnered notice as a best practice for the industry. It was called the Risk Assessment and Control unit, or RAC. At its peak, the RAC boasted more than 150 skilled analysts—finance experts, accountants, statisticians—and a $30 million budget. In an apparent reflection of the priority it accorded risk management, the company required formal RAC approval of any significant deal. The CEO once boasted in an interview, “Only two things at [this company] are not subject to negotiation: The firm’s personnel evaluation policy and its company-wide risk management program.” Outsiders were duly impressed. “Even though they’re taking more risk,” an S&P analyst said at the time, “their market presence and risk management skills allow them to get away with it.”

As it turned out, reality was not so rosy. The company was Enron, and its risk processes were, to put it charitably, a sham.

After the company’s collapse, Enron executives admitted that RAC analyses were routinely ignored. The head of the RAC reportedly hesitated to confront senior leaders determined to make deals and take risk. CEO Jeffrey Skilling even boasted of having the foresight to choose someone so compliant for the risk management post. The bottom line was simple: As one anonymous Enron executive told Bethany McLean and Peter Elkind in their book The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron, “The process was there, sure, but the support wasn’t.”

The Enron case offers a powerful example of a simple fact: in risk management, process itself means very little. From wishful thinking to groupthink to skewed incentives to imperative-driven thinking to risk-embracing cultures that punish dissent, a large number of human factors can conspire to undermine effective thinking about risk even in the face of exhaustive, and seemingly objective, analytics. Risk analysis in support of complex strategic judgments is (or should be) all about consequences. But a range of human factors tends to dim the image of the future and impede an unbiased consideration of outcomes, to the point of dismissing warnings. Even within the risk management functions themselves, skewed incentives and organizational culture can generate results that support deeply held organizational goals rather than telling an impartial story.

Individual risk calculus is often biased by a host of cognitive factors. For example, classic biases such as wishful thinking and overconfidence—which tend to be very common among senior leaders in action-oriented firms—keep organizations from taking risk seriously.

A range of personality factors also routinely impede effective risk management. Leaders who are naturally risk-accepting and aggressive often gain disproportionate influence in can-do cultures, where their impulses appear decisive rather than unsafe. This was especially true of the financial sector before 2007, which was crowded with aggressive “cowboys” and “gunslingers” determined to seek massive returns even at great peril and high leverage. Personality factors also crop up in interpersonal ways: Organizations tend to place exaggerated trust in favored managers, whose risky behavior is downplayed or excused because they have been so successful or are viewed as brilliant. They become institutionally untouchable. Just about every firm that crashed in 2007-2008 has at least one story of an irresponsibly aggressive trader or executive who simply could not be questioned because of his or her track record and perceived ties to senior leadership.

At the same time, some organizational cultures become highly aggressive and intolerant of warnings or debate. Group dynamics in aggressive, action-oriented firms can quash dissent and honor the voices arguing for risky ventures with a promise of revenue. If the “command climate” of a firm suggests that the promise of gain is valued more highly than serious consideration of risk, the most elaborate processes in the world will stand little chance of heading off disaster.

The competitive environment in which a firm operates can also produce leaders who become insensitive to consequences. Leaders in high-pressure situations can feel that they “have to act,” and wave off potential consequences as irrelevant. They do not engage in a true comparison of the costs and benefits of a range of alternatives, and they are not guided by the expected future utility of their actions. This was the case, for example, with many firms that felt an urgent imperative to keep up with the industry on mortgage-backed securities. In the end, they simply did not believe that they had a choice at all.

This is a real problem for risk management. When senior decision-makers become immune to outcome-oriented thinking, they will not give serious consideration to risk. They may continue to give it rhetorical emphasis, talking about what could go wrong, but the trajectory of their judgment will never substantially vary. Considerations of risk will become a feeble ornament on a choice made under the shadow of dominant imperatives. Even those charged with the calculation of risk will feel pressure, sometimes implicit, to skew their estimates in ways that fortify presumptions already held by the organization.

In just about every case of a failure to take risk seriously, the problem was not that the organization was unaware of the potential danger. True “black swans”—unknown risks that happen without warning—are very rare. Much more common are “gray swans”—dangers that are known, discussed and even warned about, but then discounted. Computing risk is only the beginning of a legitimate risk management process, not the end. The hardest work takes place in the dialogue about those estimates, which is all about perceptions.

To be sure, in many day-to-day situations, leadership groups and organizations may not be placed under the sort of pressures that make risk failures most likely. Some problems are more subject to probabilistic analysis, with something close to a “right” answer. Risk computations can sometimes influence behavior fairly directly and unambiguously. But under pressure, in complex and uncertain environments, these human factors routinely obstruct risk management. In those cases, a perceptual theory of risk becomes essential. Effectively managing risk is about shaping the understandings, biases and habits of senior leaders and entire organizations.
The Challenge of Uncertainty

Such factors are especially prevalent in the sort of high-level strategic choices that senior leaders get paid to make, like whether a firm should transform its business model or buy a major competitor. These are issues for which there are simply too many variables, interrelationships, unknown factors and unpredictable behaviors to allow an optimal solution. The result is a form of deep or radical uncertainty that characterizes most or all truly strategic decisions facing senior leaders.

In such cases, data-based analysis will never be able to make the choice, in the sense of providing an objective, reliable answer. Causal relationships are too fickle, and nonlinear dynamics abound. Choices will often be determined by subjective values and considerations not subject to modeling, from politics to personalities to ethics. When making big strategic choices under such conditions, the final decision is ultimately, and unavoidably, a subjective and interpretive judgment.

This is widely known and appreciated by business leaders accustomed to valuing their judgment or instincts. Less widely appreciated, though, are its implications for the computational theory of risk. In a context where objective measurement simply cannot deliver a reliable answer, the theory does not apply. The danger comes when the theory is abandoned implicitly, without that abandonment ever being explicitly discussed.

Decision-makers in the financial sector did not always appreciate these critical limitations of risk management. They too often saw issues as technical and technocratic rather than subjective and complex. Organizations took approaches and models entirely appropriate for narrow, well-bounded uses and employed them to justify big bets under uncertainty, without an intervening layer of rigorous analysis and careful, informed, self-aware and self-critical judgment. Those qualities—rigor, self-criticism, openness to information and alternative perspectives—in turn represent the antidote to the frivolous treatment of risk.

Identifiable risk failures are cases in which human factors cause an organization to downplay key risks arbitrarily, without analysis or justification. This is the true source of risk management failure, and it cannot be addressed with better computational approaches to risk. It can be addressed only by considering the perceptual elements of risk and cultivating a risk-resistant culture.
Toward Risk-Resistant Cultures

The financial crisis revealed that the primary challenge is not designing ideal risk management procedures. The health and success of large organizations depend on something much broader: creating a culture that integrates consequence management into strategy, that demands rigorous and structured analysis, and that works to create habits of risk-aware judgment across the organization. Perception turns out to be vastly more influential than computation.

If an organization’s ethos is not serious about risk, the firm can have elaborate processes and even a few key players who offer dire warnings, and still fail. The challenge of effective risk management is in many ways a challenge of useful warning. Creating a culture that tolerates warning and demands candor, and that encourages in-depth discussion of risk, is the first essential step toward effective risk management.

In a June 2012 article in the Harvard Business Review, Robert Kaplan and Anette Mikes suggested that the sort of hard-boiled confrontations so essential to real risk discussions are rare and, in fact, unnatural acts for most human beings. They point to organizations that create rough-and-tumble dialogues of intellectual combat designed to ensure that risks are adequately identified and assessed. These can involve outside experts, internal review teams or other mechanisms, but the goal is always to generate rigor, candor and well-established procedures for analysis. The result ought to be habits and procedures that institutionalize what Jonathan Baron, professor of psychology at the University of Pennsylvania, has called “actively open-minded thinking”: a thorough search for information and genuine openness to any possibility, combined with rigorous consideration of alternatives to guard against self-deception.

Essential to these goals is the cultivation of a widely supported culture of dissent, with strong buy-in from senior leadership. A common way-station on the road to risk disasters is the exclusion, and even active punishment, of alternative views: sidelining or undermining those who raise potential problems with a seemingly urgent course of action. Senior leaders must demonstrate, through word and deed, their commitment to valuing and investigating serious, data-based warnings. Refusing to take such warnings seriously is a common signpost of coming disaster.

In an environment of uncertainty, risk management processes should be less about the precise measurement of dangers than about the management of perceptions, accomplished in part by creating the right habits and expectations and driving the right kind of strategic conversations within organizations. Ultimately, decisions about risk will be subjective judgments. Informing those judgments well is a matter of creating an environment in which the potential for risk is openly, objectively and rigorously evaluated and, even more important, discussed and debated.

Many risks are objective external factors that affect an organization’s strategic environment. But risk management is a human and institutional challenge. It is about habits of mind and organizational culture more than process or structure. The perceptual aspects of risk carry far more peril than the computational ones. Risk management initiatives can reflect this fact only by attending to the sources of strategic and intellectual rigor within organizations.
Michael J. Mazarr is a senior political scientist at the RAND Corporation and the author of Rethinking Risk in National Security: Lessons of the Financial Crisis.