How to Overcome Cognitive Biases in Risk Management

Shreen Williams, Jason Rosenberg, Lisanne Sison


November 6, 2025

Risk professionals often take comfort in frameworks such as COSO ERM and ISO 31000 because they provide structure, discipline and a sense of order for organizations and their assurance capabilities. Regardless of the framework or the level of structure it may provide, there is one component that cannot be removed from the risk management process: human bias.

Biases are shortcuts in our thinking that help us make quick decisions. Unfortunately, those decisions are not always the right ones. While these shortcuts speed up judgment in times of uncertainty, they can also distort it. In high-stakes environments like critical business decision-making, even slight distortions can lead to strategic blind spots, wasted resources or surprises with severe impacts.

Biases can show up at any stage of the ERM process lifecycle, from process design to risk identification and assessment to risk monitoring and reporting. Cognitive biases can appear in many forms, including boards choosing complicated solutions that look impressive, leaders explaining away mistakes and placing the blame on others, or teams of people going along with the group rather than speaking up to provide their own perspective.

It is not enough to simply be aware that biases exist throughout risk management processes. The real challenge for risk leaders is to proactively identify biases and develop mitigation strategies that minimize biased decision-making and support risk-informed decisions. Eight of the most pervasive biases shaping ERM today are complexity, innovation, self-serving, overconfidence, anchoring, confirmation, framing and groupthink. By exploring these biases and the scenarios in which they manifest, risk professionals can develop pragmatic techniques to counter their effects and limit their impact on business decisions.

When Complexity and Innovation Become a Crutch

Consider a hypothetical scenario: A company’s board of directors decides to hire an external risk management consultant to evolve its capabilities and maturity level. The consultant completes the engagement and delivers a complex, jargon-filled, COSO-aligned framework with multiple taxonomies and negligible practical advice or resources to help the company implement the consultant’s recommendations and ensure adoption from its internal stakeholders. Risk identification stalls because the framework is far too complicated for frontline employees to apply. The assumption is that complexity is automatically better. This is complexity bias at work.

Complexity bias leads organizations to favor overly complicated solutions over pragmatic solutions. This bias is often accompanied by innovation bias, in which the newest version of a framework like COSO ERM is perceived as inherently superior, regardless of whether it drives actual improvements to existing capabilities.

These biases can have a significant impact on risk governance. Overcomplicated frameworks confuse frontline employees, delay progress and give stakeholders a false impression that their risk management capabilities are more advanced than they actually are. By rendering frameworks unusable for frontline teams, these biases undermine risk identification and make it more difficult to establish workable risk structures.

To overcome complexity and innovation biases, keep it simple. In risk governance discussions, ask yourself: Could I explain this framework to a new employee in less than two minutes? If the answer is no, it is too complicated, and complexity bias may be in play. Address the bias by trimming the extras and focusing on what really matters. Specifically, summarize risk governance structures and frameworks into one-page resource documents. Then, check with risk champions situated throughout the organization’s ecosystem to validate whether the documents are digestible and accessible enough.

Falling Into the Self-Serving Trap

Imagine a company that is launching a new product. If it succeeds, leaders credit their foresight. If it fails, they blame regulators or “unforeseen” market shifts. After-action reports are shallow and lessons learned are rarely integrated into the ERM process. This is self-serving bias—attributing wins to ourselves and losses to external factors.

In strategy discussions, self-serving bias can lead to selective storytelling. Leaders may take too much credit for successes and downplay outside factors. This creates a false sense of confidence and prevents the company from learning from its mistakes, ultimately weakening the organization’s overall strategy and future decision-making processes.

Overconfidence bias amplifies this problem. Decision-makers often overestimate their predictive abilities, underestimate downside risks and allocate resources based on optimism rather than balanced analysis of objective data. For example, a CFO may project best-case market growth while ignoring signals of regulatory headwinds.

Self-serving bias makes the ERM process more difficult to manage. Strategic choices ultimately become management actions, including resource allocation, performance reviews and lessons learned, and this is where self-serving attributions distort accountability and prevent organizations from integrating failures back into their risk programs.

To combat this bias, pair every major decision and postmortem review with an independent, objective challenger who is empowered to poke holes in the narrative rather than rubber-stamp it. This challenger could be a dissenting board member, an activist shareholder or an external advisor. Require teams to document both “management-controlled factors” and “external factors” before closing reviews to help ensure balance and accountability. The goal is not to obstruct or criticize, but to bring objectivity that surfaces opportunities for improvement.

Anchoring Too Strongly on First Impressions

During a risk assessment, the first concept or idea mentioned can “anchor” the rest of the discussion, even if it is arbitrary. For example, imagine a company holds an executive team risk workshop to address concerns about a potential cyber disruption event. The CISO tells the group that there is a 25% probability of such an event materializing. Even though objective evidence shows the likelihood is actually higher, that arbitrarily introduced number sets the tone for the rest of the discussion. This is anchoring bias.

Anchoring bias frequently occurs in risk assessment workshops and budget allocation meetings. Once an initial anchor is set, it is tough for participants to move beyond it, even when better data becomes available. This is especially problematic when risks are being evaluated and scored, as initial anchors can distort judgments of probability and impact.

To prevent anchoring bias when facilitating workshops and meetings, consider sending all participants pre-reads that provide insights into the process and specific risks that will be evaluated or discussed. Use structured materials that require anonymous input from multiple perspectives like finance, operations and legal. Also make sure to calibrate results in validation sessions to reduce reliance on the first number put on the table.
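
One way to put this into practice is to collect every participant’s estimate anonymously before any number is spoken aloud. The minimal Python sketch below is purely illustrative (the participant groups and estimates are invented); it aggregates independent likelihood estimates so the facilitator can open discussion from the group’s median and range rather than from the first figure offered:

```python
from statistics import median

# Hypothetical likelihood estimates (0-100%), collected anonymously
# via pre-reads BEFORE anyone states a number in the room.
estimates = {
    "finance": 40,
    "operations": 55,
    "legal": 35,
    "security": 60,
}

values = sorted(estimates.values())

# Open the workshop with the group's distribution, not a single anchor.
print(f"Independent estimates (anonymized): {values}")
print(f"Median likelihood: {median(values)}%")
print(f"Range: {values[0]}%-{values[-1]}%")
```

Because each estimate is gathered independently, the discussion starts from the spread of views rather than converging on whichever number happened to be mentioned first.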

Seeing What We Want to See

Consider a company where the chief risk officer reviews quarterly risk dashboards. Most indicators show stability, so they ignore a dissenting data set suggesting an emerging third-party vulnerability because it conflicts with their preferred narrative. This is confirmation bias—favoring information that supports what we already believe.

Confirmation bias is especially prevalent when no consideration is given to alternative data and information, regardless of source or availability. Left unchecked, it blinds risk management teams to new threats, perpetuates outdated risk registers, discourages escalation and leaves organizations more vulnerable to severe risks. Confirmation bias also interferes with risk monitoring, where data and metrics are tracked: when organizations dismiss contradictory signals, they fail to detect changes in exposures or emerging risks.

To avoid confirmation bias, do not just look for evidence that supports your perspective; look for what might prove it wrong. Rotate the teams assigned to challenge attitudes and assumptions. They should act as adversaries, uncovering blind spots in your organization’s defenses and challenging the efficacy of its internal control mechanisms. If your organization has an internal audit function, that team may also be well positioned to provide this insight. In every decision-making discussion, require leadership to provide and review at least one fact or example that challenges the current thinking.

Framing the Same Data for Different Decisions

After conducting a risk assessment, imagine a company’s CISO reports to its board of directors that system uptime is 95%. The board and company leadership feel that level of uptime is adequate and use the data to reduce resource allocations for the company’s IT business function.

Alternatively, the CISO could report to the board that their system downtime is 18 days a year. As a result, the board of directors and the company’s leadership demand urgent resource allocations for the IT business function.

Both numbers are accurate and describe the same performance: 5% downtime over a 365-day year is just over 18 days. Yet a system uptime of 95% resonates more positively with the company’s decision-makers than 18 days of downtime per year. This is the framing effect, where the same data can change perceptions and decisions simply by being packaged and presented differently.
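
The two frames are arithmetically interchangeable. A minimal Python sketch makes the conversion explicit:

```python
# Convert an uptime percentage into annual downtime to show that
# "95% uptime" and "18.25 days of downtime" describe the same performance.
uptime_pct = 95.0
downtime_days = 365 * (100 - uptime_pct) / 100

print(f"{uptime_pct}% uptime = {downtime_days:.2f} days of downtime per year")
# -> 95.0% uptime = 18.25 days of downtime per year
```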

Framing bias affects how leaders interpret the same data. Positive frames typically encourage risk-taking and negative frames push toward risk aversion. As the way data is presented often directly impacts the choices leaders make, shifts in framing can shape multimillion-dollar investment decisions.

To avoid framing bias, standardize dashboards, use neutral language in reports to reduce unconscious conclusions, and present risk information in a way that shows both the upside and the downside of the same data. Encourage decision-makers to reflect on the data before reaching a conclusion.

Betting Too Much on Gut Feeling

Consider another company where leadership is confident that their cloud migration will be seamless because the team has successfully executed similar projects before. They allocate minimal contingency funding, only to encounter months of delays and unexpected security gaps. This is overconfidence bias undermining resilience.

Overconfidence bias leads organizations to underestimate complexity, dismiss early warnings, over-rely on prior successes and overcommit to ambitious timelines. In risk assessments, this often leads to unrealistically optimistic scores, directly impacting how organizations allocate resources, establish timelines and execute risk responses.

To counter overconfidence bias, conduct premortems before all major initiatives: pretend the initiative has already failed, then work backwards to ask why. This “what could go wrong?” exercise helps uncover blind spots and hidden risks before decisions are locked in, and executive sponsors should be able to explain why the initiative could fail. Track variances between forecasted and actual project outcomes to recalibrate future assumptions and allocate appropriate resources.
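
One lightweight way to track those variances, sketched below in Python with invented project names and figures, is to compute a historical overrun ratio between forecasted and actual durations and apply it to the next optimistic estimate:

```python
# Hypothetical forecast-vs-actual history used to recalibrate future
# estimates; all names and figures are invented for illustration.
projects = [
    {"name": "ERP upgrade", "forecast_weeks": 12, "actual_weeks": 18},
    {"name": "Data center exit", "forecast_weeks": 20, "actual_weeks": 26},
    {"name": "SSO rollout", "forecast_weeks": 8, "actual_weeks": 9},
]

# Average ratio of actual to forecasted duration across past projects.
ratios = [p["actual_weeks"] / p["forecast_weeks"] for p in projects]
overrun = sum(ratios) / len(ratios)
print(f"Average overrun: {overrun:.2f}x forecast")

# Apply the historical overrun to the project team's next estimate
# to size a more realistic contingency buffer.
next_forecast = 16  # weeks, hypothetical team estimate
print(f"Calibrated estimate: {next_forecast * overrun:.0f} weeks")
```

Even a simple calibration like this turns past optimism into a concrete adjustment, rather than leaving each new forecast to gut feeling.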

Favoring Consensus Over Candor

Boards often pride themselves on consensus, but too much harmony can easily hide both upside and downside risk. Consider a company where board meeting discussions often grow tense, but when the CEO confidently asserts a perspective, dissenting leaders hesitate to challenge it or present an opposing view. Instead, they nod in agreement with the group to avoid “rocking the boat.” Decisions are unanimous, and critical risk exposures are ignored. This is groupthink: the preference for consensus over candor.

Groupthink erodes the quality of reporting and oversight. It silences minority opinions, narrows perspective, and prevents boards from fulfilling their role as stewards of diverse stakeholder interests. Groupthink complicates the risk reporting process, where risk information is escalated to executives and boards. Suppressing dissent in reporting weakens oversight and masks exposures.

To overcome groupthink, adopt a formal “speak up” practice that encourages internal stakeholders at every level to speak freely, without any fear of retaliation or retribution. Implement a process for structured dissent, requiring a round of “what are we missing?” at every meeting. Allow anonymous submissions for alternative viewpoints and present them in future meetings to normalize candor and dissent. Embed psychological safety by rewarding dissent, not penalizing it.

The Human Side of ERM

Leaders who combat bias in real time position their organizations ahead of peers and competitors. Frameworks, dashboards and internal controls are essential, but they cannot eliminate the most unpredictable variable in any ERM program: people. Biases creep into strategy discussions, risk assessments and board reports, often without anyone realizing it.

Human biases will never disappear, so risk leaders must embed bias-awareness into every stage of the ERM process lifecycle, not as an academic exercise, but as a daily discipline. Start small by simplifying frameworks, running premortems, rotating teams assigned to challenge perspectives and assumptions, and normalizing and rewarding dissent. Over time, these practices can help create positive risk cultures, healthier governance and more effective risk oversight.

Shreen Williams is founder and CEO of Risky Business SW, LLC.
Jason Rosenberg is senior director of risk and resiliency at Autodesk.
Lisanne Sison is managing director of ERM at Gallagher.