Artificial Intelligence and Risk Management

Daniel Wagner, Keith Furst

September 17, 2018

The cyber era heralded unparalleled opportunities for the advancement of science, technology and communication, and unleashed a range of new attack vectors for rogue elements, criminals and virtual terrorists. The era of machine learning is doing much the same: the promise of advancement has gone hand in hand with new perils and an expanded set of actors capable of carrying out attacks using artificial intelligence (AI) and machine learning systems. This flows naturally from the efficiency, scalability and ease of diffusion of AI systems, which increase the number of actors able to attack civilian, business and military targets.

The typical character of threats derived from AI is likely to shift in some distinct ways in the future. Attacks supported and enabled by progress in AI will be especially effective, finely targeted, and difficult to attribute, as they have been in the cyber arena. Given that AI can, in a variety of respects, exceed human capabilities, attackers may be expected to conduct more effective attacks with greater frequency and on a larger scale. This presents new challenges for risk managers and promises to present even greater challenges in the decades to come.

Attackers often face a trade-off between how efficient and scalable their attacks will be and how finely targeted they are. AI systems may be able to avoid detection by using a learning model that automatically generates command-and-control domains that are indistinguishable from legitimate domains to both human and machine observers. Such domains can be used by malware to “call home” and allow malicious actors to communicate with host machines. Attackers are also likely to leverage the growing capabilities of reinforcement learning, which improves with experience, to craft attacks that current technical systems and IT professionals are not prepared for.
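
To make the detection problem concrete, the sketch below shows a naive heuristic a defender might try: flagging domain labels that look statistically random. The function names, threshold and example domains are all hypothetical and purely illustrative. The heuristic catches a classic random-string generator but passes a label assembled from plausible words, which is exactly the kind of output a learning model trained on legitimate domains can produce.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; a rough proxy for how 'random' a string looks."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def naive_dga_flag(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose first label looks high-entropy.
    This catches classic random-string domain generators, but not domains
    assembled from plausible words, which is the gap a trained model exploits."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(naive_dga_flag("xk7q9wz4lfp2.example"))        # True: random-looking label is flagged
print(naive_dga_flag("cloudreportcenter.example"))   # False: wordlike label slips through
```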

For example, some AI services allow users to upload malware variants to a central site, where they are scrutinized by scores of security tools. This feedback loop presents an opportunity to use AI to craft multiple variants of the same malicious code and determine which is most effective at evading security tools. Additionally, attackers operating AI at scale can accumulate large datasets to adjust their tactics and to modify the details of an attack for each target. These advantages may outweigh the disadvantages they suffer from the lack of skilled human attention to each target and from the ability of defenders, such as antivirus companies and IT departments, to learn to recognize attack signatures.

Malicious AI actors and cyberattackers are likely to evolve rapidly in tandem in the coming years, in both the virtual and physical arenas, so a proactive effort is needed to stay ahead of them. There is a growing gap between attack and defense capabilities more generally, because defense mechanisms are capital-intensive while the hardware and software required to conduct attacks are becoming less expensive and more widely distributed.

Unlike in the digital world, where critical nodes in a network can play a key role in defense, physical attacks can happen anywhere, and many people live in regions with insufficient resources to deploy large-scale physical defenses. Some of the most worrying AI-enabled attacks may come from small groups and individuals whose preferences are far removed from what is conventional and whose attacks are therefore difficult to anticipate or prevent, as with today’s “lone-wolf” terrorist attacks.

Since the number of possible attack surfaces is vast, and the cutting edge of attack and defense capability is likely to keep progressing, any equilibrium obtained between rival states, criminals, security forces and competing organizations in a particular domain is likely to be short-lived as technology and policies evolve. Technology and social media giants will in all likelihood continue to be the default technological safe havens of the masses, given that their access to relevant real-time data on a massive scale and their ownership of products, communication channels and underlying technical infrastructure place them in a highly privileged position to offer tailored protection to their customers. Other corporate giants that offer digitally enhanced products and services (such as in the automotive, medical and defense sectors) are coming under pressure to follow suit. This is due in large part to a growing trend in which people routinely use the platforms provided by technology and social media companies, and interact less frequently with small businesses and governments.

Developed countries generally, and the leading countries in AI and cyber capabilities specifically, have a clear head start in establishing the control mechanisms to provide security for their citizens, but maintaining that comparative advantage requires a significant ongoing commitment of resources. Also required is forward-thinking organizational strategic planning, which is not necessarily in abundance. Much more work must be done to strike the right balance between openness and security, to improve technical measures for formally verifying the robustness of systems, and to ensure that policy frameworks developed in a previously less AI-infused world adapt to the new one.

This creates three specific risks. First, intelligent machines often have hidden biases, not necessarily derived from any intent on the part of the designer but from the data provided to train the system. For instance, if a system learns which job applicants to accept for an interview by using a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate racial, gender, ethnic or other biases. Moreover, these biases may not appear as an explicit rule but, rather, be embedded in subtle interactions among the thousands of factors considered.
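
A minimal sketch of this failure mode, using synthetic data and hypothetical feature names, is shown below: the protected attribute is never given to the model, yet a correlated proxy feature is enough for a screening model trained on past human decisions to reproduce their bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # protected attribute (0/1); excluded from the model
skill = rng.normal(0, 1, n)              # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.3, n)    # e.g., an address-derived feature correlated with group

# Historical decisions: recruiters rewarded skill but also penalized group 1.
past_hire = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])      # note: 'group' itself is not a feature
model = LogisticRegression().fit(X, past_hire)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted acceptance rate = {pred[group == g].mean():.2f}")
# The acceptance rates diverge: the model has reconstructed the historical
# bias through the proxy, with no explicit rule to point to.
```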

A second risk is that, unlike traditional systems built on explicit rules of logic, neural networks deal with statistical truths rather than literal truths. That can make it difficult, if not impossible, to prove with complete certainty that a system will work in all cases, particularly in situations that were not represented in training data. Lack of verifiability can be a concern in mission-critical applications (such as controlling a nuclear power plant) or when life-or-death decisions are involved.
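
The sketch below illustrates the point with an arbitrary toy function: a small neural network that is accurate on inputs resembling its training data offers no guarantee about inputs outside that range, and no test drawn from the training distribution would reveal the gap.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(500, 1))   # training inputs cover only [-2, 2]
y_train = (X_train ** 2).ravel()              # the "true" behavior to be learned

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

X_in = np.array([[0.5], [-1.5]])              # inside the training distribution
X_out = np.array([[5.0], [-6.0]])             # far outside it

print("in-distribution:     ", model.predict(X_in), " true:", (X_in ** 2).ravel())
print("out-of-distribution: ", model.predict(X_out), " true:", (X_out ** 2).ravel())
# Predictions inside the training range track the true values closely; outside
# it, the model is confidently and silently wrong.
```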

A third risk is that, when machine learning systems make errors, diagnosing and correcting the precise nature of the problem can be difficult. The path that led to a given solution may be unimaginably complex, and the solution may be far from optimal if the conditions under which the system was trained change. Given all this, the appropriate benchmark is not perfection, but the best available alternative.
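
One practical response, sketched below with hypothetical data and a conventional rule-of-thumb threshold, is to monitor whether the conditions under which a model was trained still hold, for example by comparing the distribution of a key input in production against the training data using the population stability index, a metric long used in model risk management.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and a newer sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip new values into the baseline range so out-of-range shifts land in the edge bins.
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 10_000, 10_000)     # feature as seen at training time
production_income = rng.normal(58_000, 14_000, 10_000)   # same feature, months later

print(f"PSI = {psi(training_income, production_income):.2f}")
# A common rule of thumb treats PSI above roughly 0.25 as a signal that the
# model is operating under conditions it was not trained for.
```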

Risk managers are becoming more accustomed to integrating unknown unknowns into their risk calculations, but this presumes that they have a firm grounding in the subject matter from which the risk is derived. For example, as cyberrisk has evolved, risk managers have had the opportunity to become familiar with what cyberrisk actually is, and insurers have had time to develop new insurance products to address it. The same cannot yet be said about AI. Given how new the industry is, most risk managers and decision makers have relatively little knowledge about what AI and machine learning are, how they function, how the sector is advancing, or what impact all this is likely to have on their ability to protect their organizations against the threats that naturally emanate from AI and machine learning.

Risk managers and decision makers clearly need to become better educated about the threats that continue to emerge from artificial intelligence and machine learning. Some organizations are better than others at devoting resources to developing these systems internally, but few appear to recognize the need to simultaneously anticipate the threats that doing so creates within their own organizations, much less allocate resources specifically designed to address them. Risk managers have a critical role to play in ensuring that management is aware of the potential threats and in proposing solutions for how those threats may be neutralized.

The AI-dominated world now being created will be kinder to organizations that excel at embracing the technology and anticipating its impacts. Going forward, organizations that attempt to maintain barriers between humans and machines are likely to find themselves at an ever-greater competitive disadvantage compared with rivals that tear such barriers down and put artificial intelligence and machine learning to use wherever they can effectively integrate machine capabilities with those of humans. Organizations that can rapidly sense and respond to opportunities will seize the advantage in the AI-enabled landscape. In the near term, AI may not replace risk managers, but risk managers who use AI will replace those who do not.
Daniel Wagner is senior investment officer for guarantees and syndications at the Asian Infrastructure Investment Bank in Beijing. He has more than three decades of experience assessing cross-border risk, is an authority on political risk insurance and analysis, and has worked for some of the world’s most respected and best-known companies, including AIG, GE, the African Development Bank, the Asian Development Bank and the World Bank Group. He has published eight books—The Chinese Vortex, The America-China Divide, China Vision, AI Supremacy, Virtual Terror, Global Risk Agility and Decision-Making, Managing Country Risk, and Political Risk Insurance Guide—as well as more than 700 articles on current affairs and risk management.
Keith Furst is managing director of Data Derivatives and co-author of the forthcoming “AI Supremacy.”