Alarm bells should be ringing about the risks posed by cyberattackers, who are penetrating physical infrastructure with growing frequency. Last December, Ukraine’s power grid was hacked, presumably by Russia, in one of the first known incidents in which physical infrastructure was compromised and severely impacted by a cyberattack. While espionage and theft are the most common objectives of cyberintrusions, Ukraine’s example demonstrates that state and non-state actors can penetrate even the most sensitive and secure command and control structures simply to create havoc and disrupt a nation’s ability to operate.
As businesses across all industries turn to highly networked and outsourced supply chain models to deliver information, products or services, the “attack surface”—which translates to potential vulnerabilities—expands dramatically. Both physical and information-based supply chains are interdependent, with inherent complexities that traditional supply chain risk management strategies often fall short of addressing.
Not only did the perpetrators in this case succeed in disrupting the flow of electricity to some 200,000 people in western Ukraine for several hours, they simultaneously targeted the automatic control systems of rail, mining and airport networks. According to the U.S. Department of Homeland Security (DHS), the attack was deliberately timed for the period of the day when customers contact Ukrainian electricity companies’ help desks most frequently, so that support staff were preoccupied and attention was more likely to be diverted from the initial network intrusion. In doing so, the hackers were able to test and monitor the companies’ and the government’s reaction, which may in turn presage a future attack designed to cause even greater havoc and disruption.
The malware used against the power companies was subsequently identified as BlackEnergy 3, believed to be of Russian origin and designed specifically to attack infrastructure systems. According to the DHS, a unique feature of BlackEnergy 3 is its KillDisk function, which enables the attacker to overwrite files on infected systems with random data and prevent those systems from rebooting, rendering them inoperable. The malware also searches victims’ computers for software primarily used in electric control systems, indicating a likely focus on critical infrastructure. The Ukraine example provides a glimpse of a future in which attacks on infrastructure could become common once such malware is perfected.
The best known example of a cyberattack on physical infrastructure was the Stuxnet malware, which began spreading in 2008 and was used to stifle Iran’s nuclear weapons program by attacking computers at its nuclear facilities, sabotaging the centrifuges the country used to enrich uranium. Stuxnet was spectacularly successful, and is believed to have been a contributing incentive for Iran to complete its nuclear agreement with the West.
While not widely known, Iran may have attempted to return the favor with a 2013 cyberattack on the Bowman Avenue Dam near Rye, New York. By gaining access to the dam’s control systems, hackers were able to acquire operational information (such as water temperature and flow rates), and could have taken control of the dam’s gates had they not been coincidentally disconnected for maintenance at the time.
There are dozens of other examples in which control systems have been attacked. These include an attack on a northwest U.S. rail company in 2011 in which signals were disrupted, a 2014 attack on a German steel mill that gave hackers access to the firm’s technology and operating environment, and a 2001 attack on an Australian sewage and water system that released wastewater and sewage into local parks and waterways. The problem is global.
In 2013, President Obama issued an Executive Order entitled “Improving Critical Infrastructure Cybersecurity,” intended to enhance the security and resilience of America’s critical infrastructure while encouraging efficiency, innovation and economic prosperity. The accompanying policy directive identified 16 critical infrastructure sectors, including financial services, communications and food production, and, pursuant to the order, the National Institute of Standards and Technology (NIST) developed a Cybersecurity Framework that set out industry standards and best practices to help organizations manage cybersecurity risks. Yet despite companies’ awareness of this threat, only 17% of 600 IT security executives surveyed from 13 countries in 2014 said their companies had achieved what they would regard as a “mature” level of cybersecurity (i.e., actually had IT security programs in place to thwart an attack).
These policies are intended as guidelines rather than mandates for corporate behavior, and herein lies a familiar problem: there is no law requiring compliance, nor any penalty for failing to comply, so many organizations fail to take action. Part of the issue is that both governments and companies are reluctant to take measures that will slow their economies or interfere with their ability to operate. Implementing sufficient IT countermeasures takes time, sucks up resources and cuts into profit. Not factored into many organizations’ thinking, however, is the price of a cyberattack or the inevitable loss of reputation that results when such an attack becomes public. If corporate executives were thinking more along these lines, perhaps more companies would take the risk seriously.
As the online world meets the physical world, the risk of cyberattacks will only increase. This applies not only to governments and companies, but to individuals, as smart home alarm systems, televisions, appliances and other connected electronics become more popular. All of these devices can be hacked, meaning that our homes can be accessed remotely by unwanted intruders. Few consumers consider this darker aspect of living the “smart” life.
One advantage individuals have is that they tend to upgrade their computers and other electronics more frequently (every three to five years) than companies upgrade theirs (every couple of decades, in the case of control systems). On that basis it is easy to see why control systems are targets of choice for cyberattackers—their software is typically outdated, often years behind current technology.
So what can be done, apart from raising awareness of the problem, devoting more resources to it, and making cyberdefense measures compulsory instead of voluntary? A more holistic approach is a start: thinking proactively about how to address the threat, implementing routine cybersecurity audits, and creating in-house teams dedicated solely to the problem. Budgets need to be adjusted accordingly to devote more resources across the board, and security and privacy risk mapping, benchmarking, and scenario planning should become standard components of a cyberrisk management protocol.
But what is also needed is a change in how we think about cyberrisk and other forms of man-made risk. Less understood types of risk tend to get on our radar in a meaningful way only after the fact. Not only do we need to become more proactive on this subject, we should presume that cyberattacks will become as prominent in our collective psyche as climate change and terrorism. Governments, corporations and individuals are only beginning to give cyberrisk the attention it deserves, but the risk is staring us all in the face and is guaranteed to affect us all in the near future.