Artificial Intelligence (AI) development has gained substantial traction of late and is fast becoming the new “cyber” in generating attention, speculation, and fear.
Since 2010, AI has grown at a compound annual growth rate of almost 60 percent, according to one source. Competition among nation states to “dominate” the AI sphere is reported to be fierce, raising concern that an “intelligence arms race” has already commenced, with adversarial governments jockeying for supremacy.
The term “artificial intelligence” dates back to 1956, when it was coined by a researcher later affiliated with Stanford University, who framed its key mission as a sub-field of computer science. Fast forward to today, and AI conjures up images of autonomous robots that will replace humans and quickly take over the world. Simply put, AI is the ability of a machine to not only think but also learn, like a human being. The idea is that the technology can interact with its surrounding environment to such a degree that its actions are considered “intelligent.” Learning is just one aspect of AI; others include natural language processing, inference, and techniques such as neural networks. The technology has been applied in healthcare, retail, sports, and manufacturing. Examples of AI in our daily lives include Apple’s Siri, Amazon’s Alexa, autonomous vehicles, Netflix and Pandora recommendations, and Nest thermostats.
However, the area that generates the most concern is military weapons development, and in the view of at least one world leader, the country that dominates AI development will wield tremendous influence on the global stage in the future. Seven states have been actively engaged in AI development to support their respective militaries and have dedicated, or plan to dedicate, considerable financial investment to its continued development. Two of these governments in particular are adversarial competitors suspected of carrying out aggressive cyber operations.
Understandably, there is much consternation about governments using this technology to make already potent weapons even more effective and destructive. The following examples demonstrate how some governments are seeking to further modernize their weapons and weapons systems:
- One country has reportedly been testing autonomous and semi-autonomous combat systems with an AI that is claimed to make its own targeting judgments without human intervention.
- Another country is believed to be integrating AI into its tanks, naval forces, and aircraft as part of an asymmetric warfare strategy: building up high-technology arms so its army can defeat superior forces.
- One government initially allocated US$1.1 billion to AI research in 2015, then doubled that amount in 2017. The same government launched an autonomous unmanned surface vehicle (USV) in 2016 as part of its anti-submarine warfare program.
- Another country developed an anti-radar drone designed to be launched by ground troops and to autonomously conduct missions to find and destroy enemy radar.
Unsurprisingly, many are concerned that AI development will spill over into cyberspace, particularly given that the first AI-generated cyber attack is believed to have struck targets in India. In that incident, machine learning was used to study patterns of normal user behavior within a company’s network. Speculation is that more of these attacks will occur in the near future, taking advantage of continued reliance on centralized systems and increased use of automated botnet attacks. Machines able to mimic human behavior deserve our attention, especially given the volume of personal information readily available for use in social engineering attacks. Furthermore, the longer an AI presence remains on a network, the more it can learn about the organization’s habits and styles, knowledge that can be used to refine attacks. This has led some to conclude that such attacks will be all but impossible for humans to stop.
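To make the behavioral-learning element concrete, the sketch below shows one way such a baseline of “normal” user activity could be built. It is a minimal illustration in Python under stated assumptions: the session features and data are entirely hypothetical, the actual methods used in the reported incident are not public, and scikit-learn’s IsolationForest detector simply stands in for the general technique of modeling normal behavior.

```python
# Illustrative sketch only: learning a baseline of "normal" user behavior.
# All features, numbers, and thresholds are hypothetical, not drawn from
# the reported incident.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical session features: [login hour, session minutes, MB transferred]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 1000),   # logins cluster around 10:00
    rng.normal(45, 15, 1000),  # sessions average ~45 minutes
    rng.normal(20, 8, 1000),   # ~20 MB moved per session
])

# Fit an anomaly detector on observed behavior; anything scoring as an
# inlier afterwards blends in with the learned baseline.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_sessions)

candidates = np.array([
    [10.5, 50.0, 22.0],   # mimics the baseline
    [3.0, 400.0, 900.0],  # 3 a.m. bulk transfer, clearly anomalous
])
print(model.predict(candidates))  # 1 = blends in, -1 = flagged
```

The concern cuts both ways: defenders fit baselines like this to flag intruders, while an attacker who learns the same baseline can shape its activity to score as normal and evade exactly this kind of check.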
Many of these concerns are rooted in what could happen, based on what is understood about the technology and how attackers have used and exploited cyberspace in support of their operations. The weaponization of AI to support cyber attacks is therefore worthy of our attention, largely because being proactive in addressing and planning for potential threats is part of any sound risk management strategy. But providing more specific milestones that track the expected trajectory of AI-enabled cyber attacks, including, for example, volume, vectors, and targets, gives defenders better information with which to shape their defensive postures. Like all milestones, these are subject to change as the environment changes, but they at least provide a foundation from which to start examining this threat. That will prove extremely difficult to do if and when we find ourselves in the thick of it.
Alerting that AI cyber attacks are likely coming provides practical advance warning. But are these attacks going to be primarily the purview of state actors supporting their national interests, or of cyber criminals looking to exploit anyone and anything they can? There are several notable differences between these two types of threat actors, including financial and material resourcing, sophistication, and target selection, to name a few.
More helpful than just sounding a “sky is falling” alarm is reducing the uncertainty that accompanies this approaching threat: informing people when, and to what extent, the sky is falling helps them make better decisions.
Panic has never been a good tactic for keeping people safe. An educated consumer, on the other hand, provides the necessary counterbalance that can reduce potential disasters to manageable situations.
About the author
Emilio Iasiello has more than 12 years’ experience as a strategic cyber intelligence analyst, supporting US government civilian and military intelligence organizations, as well as the private sector. He has delivered cyber threat presentations to domestic and international audiences and has published extensively in peer-reviewed journals and blogs. Follow Emilio on Twitter.