Any Hope for Cyber Deterrence Goes Out the Window with AI

As all things cyber rise to the forefront of the collective consciousness, cyber deterrence continues to be debated in state capitals as well as among thought leaders, politicians, and military officials.

Despite these ongoing discussions, governments seem increasingly inclined to invest time, effort, and money in developing cyber capabilities, less concerned with deterring hostile activities than with being able to dominate them. Indeed, many of the public comments by senior officials from leading cyber powers demonstrate less concern about defending against attacks than about being able to execute them. Such a focus makes attempts to dissuade hostile actors a token gesture at best, resting on the hope that sheer volume of cyber firepower may be enough to deter nefarious state cyber activity.

In late February 2018, Admiral Rogers – the head of the National Security Agency and U.S. Cyber Command – testified before Congress that the President had not given him specific orders or authority to strike back at Russian cyber operations at their source ahead of the midterm elections. Presumably the intimation was that, given such authority, Cyber Command would initiate offensive attacks intended to change the calculus and behavior behind these Russian operations. Then in March 2018, a news report revealed that the Russians were building “an automated reconnaissance and strike system” driven by Artificial Intelligence (AI). The goal, according to General Valery Gerasimov, current Chief of the General Staff of the Armed Forces of Russia, was “to cut down on the time between reconnaissance for target collection and strike by a factor of 2.5, and to improve the accuracy of strike by a factor of two.”

While Gerasimov was referencing AI’s incorporation into kinetic weaponry, the confluence of AI and cyber attacks is largely seen as an inevitable future, a fear expressed by security experts in February 2018. The first AI-related cyber attack was observed in India in early 2017, in which a never-before-seen attack used rudimentary machine learning to observe and learn patterns of normal user behavior inside a network. As one expert points out, a long-term AI presence on a network will enable it to learn a user’s e-mail habits, writing style, contact base, and the key themes of the user’s e-mail content. Hostile cyber actors have demonstrated continued sophistication in socially-engineered e-mails, which facilitate phishing and spear-phishing efforts. It is easy to see how AI could leverage such learned behavior to further polish these efforts.
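To make the learned-behavior idea concrete, the sketch below shows how even rudimentary frequency counting over observed e-mail metadata could build a "behavior profile" of a user and score how well a candidate message mimics their habits. This is a hypothetical toy for illustration only, not a description of the Indian incident or any actual malware; all names, data, and the scoring scheme are invented for the example.

```python
# Toy sketch: learning a user's e-mail habits from observed metadata.
# Purely illustrative and hypothetical -- not any real attack tool.
from collections import Counter

def build_profile(messages):
    """Learn simple frequency statistics from observed e-mail metadata."""
    return {
        "contacts": Counter(m["to"] for m in messages),
        "send_hours": Counter(m["hour"] for m in messages),
        "subject_words": Counter(w for m in messages
                                 for w in m["subject"].lower().split()),
    }

def mimicry_score(profile, candidate):
    """Score how closely a candidate message matches the learned habits."""
    n = sum(profile["contacts"].values())
    subj_total = sum(profile["subject_words"].values())
    score = 0.0
    score += profile["contacts"][candidate["to"]] / n          # familiar recipient?
    score += profile["send_hours"][candidate["hour"]] / n      # typical send time?
    score += sum(profile["subject_words"][w]                   # familiar wording?
                 for w in candidate["subject"].lower().split()) / subj_total
    return score

# Hypothetical observed traffic from a compromised account.
observed = [
    {"to": "alice@example.com", "hour": 9,  "subject": "weekly status report"},
    {"to": "alice@example.com", "hour": 9,  "subject": "status update"},
    {"to": "bob@example.com",   "hour": 14, "subject": "lunch plans"},
]
profile = build_profile(observed)

# A message that matches the user's habits scores higher than one that doesn't.
in_character = {"to": "alice@example.com", "hour": 9, "subject": "status report"}
out_of_char  = {"to": "carol@example.com", "hour": 3, "subject": "urgent wire transfer"}
assert mimicry_score(profile, in_character) > mimicry_score(profile, out_of_char)
```

The same profiling logic cuts both ways: defenders use equivalent statistics to flag the out-of-character message as anomalous, which is exactly the AI-attack-versus-AI-defense matchup discussed below.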

Indeed, AI is accelerating a cyber arms race, not decelerating it. Already the cyber weapons market is forecast to reach approximately USD 524.3 billion by 2022, according to one report. The rapid growth is attributed to demand from both public and private organizations, a further indication that offense – and not defense – is the more valued commodity. While early AI technology used machine learning to bolster cyber defense systems, its potential adoption into offensive attacks gives cause for concern, particularly as governments like China and Russia seek to become world leaders in AI development. According to one study, more than 91 percent of security experts polled worried that they will soon face and have to mitigate AI-driven cyber attacks. Such attacks are considered too difficult for humans to recognize and stop because of their ability to mimic user behavior on systems, setting up an AI-cyber attack versus AI-cyber defense matchup.

Does cyber deterrence have a role in an increasingly AI cyber environment? All signs point to “no.” It is difficult to implement a cyber deterrent strategy because once released, there is little control over these AI systems. As one source points out, AI does not wait for orders from the base; it learns quickly, making its own decisions, often while deep inside the targeted environment. The human element is not only unneeded; it is preferably removed from the AI equation entirely, raising the question of how one dissuades a self-thinking, self-directed cyber attack weapon.

This goes back to Rogers’ frustration about not being able to hack back at the source as a means of changing an actor’s behavior. It is bad enough that hacking the source of any attack assumes the actor is operating from computers in its home country, and not, for example, from a server in the United States or an allied country that services hospitals, schools, or utilities. How would such a hack-back deter AI-driven attacks? And if the intent is to change the behavior of the leadership ordering the AI attacks, why would the disruption of AI attacks influence the state leaders directing them? What is the one-for-one correlation?

Cyberspace continues to prove that strategic thinking remains rooted in the present, with little consideration of future developments. As AI becomes more of a reality, and as nation states compete to develop AI-related expertise, deterrence must take into account not only this autonomous technology but also the next one just being conceived.

About the Author

Emilio Iasiello has more than 12 years’ experience as a strategic cyber intelligence analyst, supporting US government civilian and military intelligence organizations, as well as the private sector. He has delivered cyber threat presentations to domestic and international audiences and has published extensively in peer-reviewed journals and blogs. Follow Emilio on Twitter