David Sanger's extensive New York Times piece about the United States' and Israel's covert cyberwarfare operations against Iran's nuclear facilities is the first article I've seen that explicitly confirms the two countries' involvement in Stuxnet's development. But this revelation isn't particularly surprising. Given the virus's complexity and purpose, the list of possible developers was rather short. Rather, what I found most interesting was this section towards the end:
But the good luck did not last. In the summer of 2010, shortly after a new variant of the worm had been sent into Natanz, it became clear that the worm, which was never supposed to leave the Natanz machines, had broken free, like a zoo animal that found the keys to the cage. It fell to Mr. Panetta and two other crucial players in Olympic Games — General Cartwright, the vice chairman of the Joint Chiefs of Staff, and Michael J. Morell, the deputy director of the C.I.A. — to break the news to Mr. Obama and Mr. Biden.
An error in the code, they said, had led it to spread to an engineer’s computer when it was hooked up to the centrifuges. When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.
The question facing Mr. Obama was whether the rest of Olympic Games was in jeopardy, now that a variant of the bug was replicating itself “in the wild,” where computer security experts can dissect it and figure out its purpose.
“I don’t think we have enough information,” Mr. Obama told the group that day, according to the officials. But in the meantime, he ordered that the cyberattacks continue. They were his best hope of disrupting the Iranian nuclear program unless economic sanctions began to bite harder and reduced Iran’s oil revenues.
Within a week, another version of the bug brought down just under 1,000 centrifuges. Olympic Games was still on.

The excerpt highlights one of the unique and troubling aspects of "cyberweapons" - their use against adversaries permits their proliferation. Despite all of the effort put into keeping Stuxnet both hidden and narrowly tailored, the virus escaped into the wild, where its code is open to analysis by pretty much anyone. While competent coding can make it difficult to reverse engineer and re-deploy the virus against other targets without a significant investment of time and resources, it remains a distinct possibility. Cyberweapons create externalities - side effects that don't directly affect the militaries using them but can have spill-over consequences for other sectors of society. For example, a SCADA worm like Stuxnet, which targets industrial control systems, could theoretically be re-targeted at civilian infrastructure like power or manufacturing plants.
Certainly, most governments using cyberwarfare will want to limit these externalities, since they create an indirect threat (such as non-state actor attacks on critical infrastructure). This is evidenced by the fact that the U.S. and Israel not only tried to design Stuxnet and its ilk to be difficult to detect, but also gave them very tailored aims. The virus was designed to work on the specific centrifuge control systems possessed by Iran, thereby somewhat limiting the initial damage of a leak (imagine what would have happened had Stuxnet deployed its "payload" on every computer system it landed on). Nevertheless, these externalities will exist so long as governments with the capacity to do so continue to use cyber espionage and attacks. The logic of collective action suggests that governments are also unlikely to unilaterally refrain altogether from utilizing these technologies, since a blanket ban would be both unenforceable and entirely unverifiable due to the dual-use nature of the weapons.
This got me thinking about what sorts of institutions could help mitigate some of the consequences of "leaks". Proposals for an international cyberweapons convention have been floated, but most have been vague and poorly defined. Kaspersky Lab founder Eugene Kaspersky recently suggested a treaty along the lines of the Biological Weapons Convention or the Nuclear Non-Proliferation Treaty (the Russian government has also floated similar proposals). However, an outright ban on "cyberweapons" would be highly unlikely and generally impractical. As I mentioned, verifying compliance would be substantially more difficult than it has been for either the BWC or the NPT. Given that both have been violated by a number of states party to them via clandestine programs, a "cyberweapons" ban would be toothless, even if it only banned particular types of attacks (such as those on SCADA systems). Moreover, states find cyber-capabilities significantly more versatile and useful than either biological or nuclear weapons. The category of "cyberweapon" is broad enough to range from highly developed viral sabotage (Stuxnet) to simple distributed denial of service (DDoS) attacks, and these technologies are useful not only to militaries but also to intelligence services. Finally, the dual-use nature of information technology and its globalization make locking in a "cyberwarfare oligopoly" à la the nuclear oligopoly of the NPT near-impossible. The "haves" cannot credibly promise disarmament to the "have-nots," and the "have-nots" face significantly lower barriers to developing basic cyber-espionage or warfare capabilities.
If restraining the development of electronic/information warfare techniques is not possible, could certain aspects of this "warfare" be regulated to limit unnecessary damage and suffering? One of the more promising suggestions, made by many scholars, is some sort of addendum to the regime governing the Law of Armed Conflict clarifying what constitutes unacceptable targeting and disruption of civilian infrastructure with cyberweapons. Drawing the line between what is civilian and what is military in such a murky field is certainly a challenge, and even if states do agree to a set of limits along the lines of the Hague Conventions, the problem of enforcement remains. However, I think enforcement in this case is much less difficult (though still a significant task) than in the "weapons ban" case, largely because of the way states' incentives are aligned.
The attribution problem is certainly significant and a sizable barrier to effectively enforcing limitations on cyberwarfare. Through the use of proxies, non-state third parties, IP spoofing, and other, more advanced techniques for obfuscating one's identity, attackers can avoid detection, and thereby censure. However, this challenge is not insurmountable. Security experts are already developing methods for attributing relayed or spoofed traffic. Additionally, the type of attack and its target give clues as to the likely culprit. The attacks against Estonia and Georgia left only a single probable source - Russia. Likewise, the complexity of Stuxnet and its apparent target - Iran - made the U.S. and Israel the probable originators. While it may be difficult to link the behavior of in-state proxies to the direct orders of a government, especially since governments are likely to vociferously deny their involvement, it is a solvable problem. International institutions could play a role in improving the detection and attribution of attacks. By assembling and coordinating cross-national groups of experts and technological resources, an institution modeled on the Comprehensive Nuclear-Test-Ban Treaty's International Monitoring System (which watches for evidence of unlawful nuclear testing) could help overcome the difficulty of attribution.
The risk of discovery can itself temper the reckless use of cyberweapons, and international institutions can help build the expertise needed for better global detection. But independent of whether they can get away with it, states have reason to refrain from attacks that are too broad. As I argued above, releasing a worm with a very broad objective risks substantial blowback: either the virus will replicate itself in an unintended manner (as in the case of Stuxnet), or it can be easily modified to attack new targets. States therefore have reason to keep their weapons "precise" and narrowly targeted. Additionally, the dependence of many economies on global markets underpinned by information and communication technologies makes "all-out" attacks on a country's civilian infrastructure highly unlikely.
An international agreement on what constitutes an "unacceptable" cyberattack, even if merely aspirational, could buttress the reasons states already have to keep their cyberwarfare operations in check. And while it is unclear how responsive leaders will be to the "costs" of violating such an agreement, it may nevertheless serve a valuable signaling purpose. The United States, as the current "leader" in cyber-offensive capabilities, has an interest in sending a message to other countries that it will not use those capabilities recklessly. After the Stuxnet revelations, it may be wise for the U.S. to formally commit to a limit on cyberattacks against non-combatant infrastructure (such as medical facilities or large power grids), especially now that it is clear that such attacks can be carried out. By "tying its hands" with the weight of global opinion, the United States might calm the fears of other states and dissuade them from rushing headlong into developing similar weapons. Perhaps more importantly, an international commitment to a set of "laws of technological combat" would reinforce the U.S.'s existing incentives to keep its operations contained, thereby minimizing the externality of cyber-proliferation.
However, agreeing on a set of "rules" will take a sizable amount of time and effort. In the short term, an institutional solution is more likely to come from the sub-governmental or non-governmental level. The best way to limit the impact of cyberweapon leaks is to improve the capacity of public- and private-sector infrastructure to defend itself. This is not something that every state need do on its own. It may be useful to pool the expertise of non-governmental digital security experts and connect it to relevant government agencies and infrastructure operators. A "World Health Organization" for cybersecurity, which would issue policy guidance to governments and private-sector actors on how best to secure their data and systems from attack, could help minimize the damage from new security threats released into the wild. Quick responsiveness to "zero-day vulnerabilities" - security holes that have yet to be discovered and patched by software manufacturers - is essential. Additionally, many cyberattacks rely on poor security practices by their targets. Even the attacks on Iran, as sophisticated as they were, depended on scientists failing to recognize the dangers of using insecure USB drives to transfer information. The same poor practices likely occur in civilian infrastructure in most countries, despite the enormous amounts of money poured into security systems. Although companies like Kaspersky and Symantec already provide some of this guidance, aggregating expertise into an international organization whose primary goal is to issue comprehensive policy advice could further enhance the resilience of global infrastructure to rogue cyberattacks.
The "threat" of cyberterrorism is perpetually over-hyped and exaggerated. Nevertheless, as states develop more advanced technologies for espionage and technological disruption, the capabilities of non-state actors are also likely to grow. As the Stuxnet case shows, military-developed viruses can become available to the world at large once used: cyberweapons proliferate upon use. Just as nuclear weapons generated the negative externality of radiation from nuclear testing, cyberweapons generate the negative externality of proliferation. Since no state is going to give up its technological capabilities any time soon, it is important to begin establishing regimes governing their use and minimizing their "fallout."