With Edward Snowden’s 2013 data leaks, the National Security Agency saw first-hand the power of one of corporate America’s most critical and amorphous risks: reputation. When NSA Director Michael Rogers took office last year, he determined that one answer to that crisis was to strengthen and formalize risk management, and he named Anne Neuberger the NSA’s first chief risk officer.
While Neuberger does not have a background in risk management, her work in finance, management, security and technology has helped her develop the policy and technical expertise needed to identify and act on the intelligence agency’s top risks. After more than a decade in the financial services sector, she turned to government service in 2007 as a White House Fellow under Secretary of Defense Robert Gates. She then served as chief management officer and special advisor on enterprise IT programs with the Navy before joining the NSA, where she has held a number of positions addressing cybersecurity and managing the agency’s private sector relationships with regard to technology.
As CRO, Neuberger has led the charge to design and build an enterprise risk management program, engaging stakeholders throughout the agency to establish actionable frameworks and integrate them into daily operations. Unlike in her financial services experience, the NSA’s ERM program has no bottom line to protect or financial metrics against which to gauge success. Instead, she is focused on inculcating the processes of risk awareness and assessment enterprise-wide to ensure that one of the nation’s key intelligence agencies can best protect not only the United States and its allies, but its own reputation.
At the close of her first year as CRO, she sat down with Risk Management to share her insights on building an ERM program from the ground up, grappling with reputation risk, identifying the NSA’s biggest challenges, and cultivating a risk management culture.
What are some of the biggest risks that you deal with on a daily basis?
In some ways, we’re very much like any large, complex global enterprise, and we have some of the traditional risks you might expect, such as infrastructure risk and the security of our facilities and our networks. But there are some that are very specific to intelligence operations, including risk of exposure of some of our operations and risk to our workforce. We’re in a period of significant national security and technological change and we’re in a rather difficult threat environment. For example, from a cyber perspective, there is tremendous opportunity, but also significant risk.
We often say that risk management at the National Security Agency is the space between our worst fear of a threat becoming a reality that we cannot head off or prevent—an attack or danger that might occur—and the need, in a democracy, for intelligence operations to retain the trust and confidence of citizens and of key stakeholders.
So you chose enterprise risk management to get a hold on all of that. How did you design the agency’s ERM program and, now that you’re a year in, what does it look like in practice?
Within our risk management framework we have several key areas. The first we call defining our risk appetite, and that’s critical for us. Over the last year we developed a set of risk principles that the director of the NSA, the equivalent of our CEO, signed off on recently. That sets the level of risk that we’re willing to take in a given area for a given mission. Now, partially, that’s driven by the president and policymakers setting intelligence priorities. For example, they’ll say counterterrorism is one of our highest priorities—preventing a terror attack from claiming Americans’ or allies’ lives. So we will take more risk in pursuit of that mission than in, for example, one meant to gain insights for policymakers about a given country or other topic. Because policymakers made that a lower priority, we’re not willing to take the same level of risk there. We use the best practice of risk appetite statements and translate that to our mission.
The second piece was building a detailed risk framework covering 11 areas. We built it using experts from across the agency as well as by learning from the outside, and defined the drivers of risk in each area as well as the low, medium and high definitions for each risk driver. For example, one of the risks we have to think about is the risk of an operation being exposed or failing, so we ask questions like: Where is it? Are people being deployed? What are we trying to achieve? Is it properly resourced? And for each of these we have definitions for low, medium, and high risk, so that multiple people doing an assessment can come together and have a consistent sheet of music to work off.
The third piece of our framework is strengthening our processes: baking the risk framework consistently into the way we do our work, including all new and existing operations, activities, policies and partnerships.
The fourth element is how we organize ourselves as an enterprise. I am the chief risk officer, but we also have risk leads identified in each of our key directorates who work with those directorates; we guide folks and get in the trenches with them. Our goal is not like that of the NSA’s inspector general, who sits outside the agency’s processes identifying faults. Rather, we work together with mission elements to help them incorporate risk into the way they work, to train, to guide, and to really be a part of strengthening risk management across the NSA.
For some of these huge but somewhat amorphous issues, is risk something that can be quantified, or is it more a matter of inculcating a mindset of risk awareness and analytical risk thinking?
It’s absolutely both. One of the things that’s very different about government, especially with our intelligence operations as opposed to, for example, financial services, where I began my career, is that we don’t have a bottom line. It can’t all boil down to expected revenue and expected costs and the traditional, quantifiable cost-benefit ratio. For us, the value or benefit is keeping the country safe and providing policymakers with the insights they need to achieve American interests. Measuring the value of that is something we’ve been working on for years, and it’s certainly not at the quantifiable place, but one thing that has been important is that we manage and measure risk much more carefully. We’re also making the statement that value must always exceed risk, which is driving a more rigorous approach to value, even where not having a bottom line makes it more difficult.
But to your point, beyond efforts to quantify risk, we are working to inculcate greater risk awareness and analytic thinking in risk-taking, which is probably more similar to the work of our counterparts outside government in the sense that we share the goal of making sure our workforce is comfortable raising concerns, that concerns are addressed, and that people feel comfortable suggesting mitigation strategies. Today, there is a discussion around wanting to achieve the objectives with the least risk possible. We still want to take big risks—we’re in the mission of taking risks—but we want to be sure that those we take, we’ve considered, and we’re going in eyes wide open in a thoughtful and informed way.
Many risk managers who are implementing an ERM program find it can be a struggle to sell the risk management process upward. Have you found that to be a challenge? How is it approached culturally?
First, like other organizations, we’ve had an incident that highlighted to us that we had an obligation and a sense of urgency regarding strengthening risk management. The series of media leaks that began in the summer of 2013 showed us that we needed to really tighten our thinking around risk management, whether that means ensuring the security of our networks or considering the cultural impact. Insider threat is an issue in many organizations, but particularly in the intelligence community, the model was to set the barriers really high in terms of security clearances, etc., but once folks came in, they were trusted. Particularly post-Sept. 11, when there was a tremendous focus on information-sharing, working collaboratively, and getting as many smart minds on problems as possible, we moved to a model of extensive information-sharing, replacing our former model of asking, “does that individual have a need to know that information?” Today, we are working to strike a balance between protecting the information we have and knowing we need as many great minds as possible on our problems. We need to thoughtfully consider the risk and the value of sharing or not sharing information within the agency or with partner agencies, and come up with more rigorous approaches to assess and manage that risk. We clearly had a sense of urgency from what had occurred.
In addition, the president had put in place a review team that identified for us that we really needed to think carefully about broader risks—the impacts of intelligence operations on relationships with other countries, on U.S. commercial, economic and financial interests, the loss of international trust in U.S. firms, and the credibility of our country with regard to privacy and civil liberties. All of that came together to create an internal sense of urgency. That being said, this was a leading change effort, as any major change in a large bureaucracy is.
Many risk managers also find it can be difficult to get people to see the value and engage in the risk process in a way that truly effects change. How have you addressed that?
There are three levels one needs to work with to effect change: You mentioned that other risk officers talk about the challenge of getting support above them, among their peers, and then out in the broader organization, where the real stuff happens. In our case, the director of the NSA was completely bought-in—he established the role as one directly reporting to him, and consistently over the last year since it was created, whenever anyone asked him about his priorities, he said: strengthening risk management, improving diversity, and a series of other efforts that all roll into something that he calls the Director’s Charge. Those are his three core priorities, and the fact that he would say that shows his buy-in, so that was clearly the easiest of the three for us because he was already committed.
Next came peers and the broader organization. The model that we used was to identify the key areas of risk for the agency, including areas like operations, partnerships, policies, business operations (which are some of the more traditional things, like governance, contracts and resources), disclosure, risk to U.S. foreign policy, risk to the U.S. technology sector, and civil liberties and privacy. And with that, I then turned to my peers and said, “we’re going to be building a risk framework together that will guide the agency’s operations and make us as effective as we can be. We need you to name the people that you have confidence in so that, if you look at the operations risk model, you’ll feel confident in it because you’ll know that John X and Cathy B, who you really trust and know are experts in the field, were involved.” We had 11 working groups of between 10 and 20 people, depending on the topic, and we met weekly for two and a half months to build the risk model. For each, we asked the traditional risk questions: What has gone wrong? What drives the likelihood of it? What drives the severity of it? And that’s what allowed us to build out the models. When it was done, we turned to them and said, “now you’re the influencers—you’re the ones who will help us pilot these in practical operations, roll in any refinements and then make it a way we do business.” That’s been tremendously impactful.
In addition, I communicated a great deal via town halls throughout our enterprise, via our internal social media, and our other internal forms of communication, and early on we said we are going to be operating by three principles. The first was that we must take risks—the country would be at significant risk if we didn’t. What’s changing here is not our willingness to take risks, it’s that we’re putting a focus on the need to ensure it is thoughtful, informed and consistent with a framework, that we know how to measure it, and we know that, when we’re taking risks, we’ve determined it is worth it. The second thing was that we would do everything in partnership with our mission organizations. And the third was that we would do everything transparently, so folks would see how it’s developed, how it’s applied, and how we’re using data to drive decisions.
In any organization, but particularly in yours, it can be hard to measure success when that comes in the form of preventing crises from happening. How do you define the success of your program?
Right now, we define success by three things. One is, across our workforce, do people feel equipped and do they understand how to measure risk in a given area? If you ask somebody in our computer security area if they understand how to measure the security of our virtual infrastructure, we want to feel they’d say yes and that it matches the framework we’ve defined. We want to be sure that the relevant people working on that know how to address it. Second, we want to be sure that, as an enterprise, we’ve gotten our arms around what our key risks are. What are the things that keep our folks up at night, and is that within our risk appetite or do we need to do things to bring down that level of risk, or even bring it up? In some cases, if folks are too comfortable, we’re even asking if we are taking enough risks. We have a country to keep safe. And then the third element is, now that we’ve built a risk model and it’s giving us a static picture, how do we ensure that we maintain an ongoing, up-to-date, automated picture of the risks that we have determined we need to be consistently aware of as an enterprise? And that’s where, moving forward, we really need to put a lot of focus.
Are there any risks you believe cannot be effectively managed?
Of course. Which, as the chief risk officer, is a bit of a frightening answer. When you look at the number and diversity of threats and the intermixing of communication, it’s not like it was 20 to 30 years ago—there are many more threats today. You have countries around the world that are nation-state adversaries of the United States, you have transnational groups, from terrorist enterprises to purveyors of drugs and weapons of mass destruction, and there are so many more forms of communication. For intelligence agencies working to maintain insight on those threats, it’s an increasingly complex problem. We never throw our hands up, but when you add in those words “effectively manage,” I can say that we’re going to do absolutely the best we can and do so with integrity, but it’s a hard challenge.
Looking forward, what do you think are the biggest emerging risks, both for you and for the larger risk management community?
Certainly cyber is an emerging, if not fully emerged, risk. So many more systems are connected and interconnected, many of which were not necessarily designed with security in mind. You know that old line, “security by obscurity”? Well, now that those systems are no longer obscure, or are becoming less so, that introduces risk. It is essential that we apply risk assessments before we connect systems, particularly those tied to physical capabilities or physical resources. We see some industries, for example, that are taking control systems and connecting them to corporate systems to do data measurement and all kinds of fascinating work. That introduces real risk, and we need to make sure that people know to assess the risk of connecting such systems together.
A second major risk is the lack of personal data norms. There is a lot more information available about people and about systems, and there are two aspects of that. From a personal perspective, we are committed to protecting the privacy of innocent persons around the world, but there aren’t personal data norms, so it’s hard to know what level of privacy folks are looking for. To try to address that, we’ve developed models internally, and I work very closely with the agency’s new civil liberties and privacy officer. The lack of personal data norms affects us and it affects many other industries, such as health care, where, for example, there is a huge benefit in customizing drug delivery based on a patient’s statistics. But since data norms haven’t been defined, I think potentially huge advances in many areas would come far more quickly if they were.
And then, I would say that increased globalization—the number and diversity of threats for organizations with global operations, understanding those and understanding, culturally, how to interact with that diversity—is a key area to focus on.
In any organization, we increasingly see that the risk of reputation has very real consequences, but it is hard to define and quantify. As the NSA had a bit of a challenge with that before you took on this role, how do you perceive and specifically try to address that risk?
In a democracy, intelligence operations have to maintain the trust and confidence of the citizens they serve. That is something we’ve put a lot of thought into, and it’s made complex by the fact that intelligence operations are meant to not meet the light of day—their effectiveness is often tied to how much folks don’t know about them. So we set up a set of questions to address the “should we” question. First: How would this look on the front page of a national newspaper if it were accurately described and we were able to talk about the purpose of what we were doing? Second, in determining the course of action: Was it the least intrusive way to achieve those goals? How do we assess the civil liberties and privacy risk? How do we assess the risk to relationships with other countries? How do we assess the risk to international trust in U.S. firms? We will at least start to get our arms around these risks by defining a consistent way to do those assessments, asking those questions internally, and then asking whether this is the least intrusive way to achieve the objective and whether those objectives are worth the risk.
The final piece is mitigation, and that is the reason you and I are having this conversation—this is a marked departure from the way the NSA operated three or four years ago. Now, we take time to sit down and say, “Let me give you a picture of the NSA. Let’s talk about our work and why it is hard, the difficult choices we face and our commitment to both American laws and values.”