Should we be worried about AI?

When Elon Musk speaks, people listen, and his concerns about artificial intelligence have captured a significant amount of attention recently.

Musk’s comments follow a long line of warnings from well-known scientists and technical experts about the potential disasters artificial intelligence could cause. Opinions on the subject range widely, but knowing a bit about the history of artificial intelligence can help people better gauge developments and help businesses prepare to adjust.

Artificial intelligence covers a range of fields. An AI capable of playing chess, for example, presents little threat to the human race. The development of an artificial general intelligence, or AGI, however, opens up potential risks. The fundamental difference between today’s AIs and AGIs lies in how computers and humans operate: modern AI systems do what computers do best, handling large sets of data and performing sophisticated calculations quickly, while generalized AI systems are projected to think more in line with how the human brain works.

DeepMind, a subsidiary of Alphabet, the parent company of Google, broke a barrier in 2017 when its system defeated the world’s top-ranked human Go player, a feat few had thought possible so soon. Using sophisticated neural networks and deep learning, DeepMind has made progress never before seen in AI, including advanced image recognition accomplishments. Although neural networks aren’t new, DeepMind’s sophisticated take on the methodology offers a level of generalizability some think will lead to more human-like capabilities.

Some of the first discussions about controlling artificial intelligence date back to at least the early 1940s, when Isaac Asimov listed the “Three Laws of Robotics” in order of importance. First, a robot must not harm a human through action or inaction. Second, a robot must obey humans. Third, a robot must protect its own existence. As many, including Asimov himself, have pointed out, rules created to govern robot or AGI behavior remain open to interpretation. If a sufficiently powerful system noticed an ongoing war, for example, the first law might compel it to seize control rather than allow harm to humans through inaction.

The difficulty with reining in AGI is that it raises philosophical issues humans are still debating, while AGIs, which are governed by rules, will respond exactly as they’re programmed. With the increased complexity of AGI systems, ensuring there are no edge cases that could cause a system to seize control or inflict widespread harm also becomes more difficult. One of the primary concerns among experts is that humans simply aren’t equipped to handle these systems as they come online.

Musk isn’t alone among qualified, influential people who believe strong action is needed to control AGI. Bill Gates has expressed support for Musk’s general ideas. Stephen Hawking, perhaps the world’s most famous scientist, voiced similar concerns, stating that artificial intelligence could “spell the end of the human race.” Bill Joy, a key figure in the development of Unix, has sounded perhaps the strongest alarm about AGI, arguing that humans should abandon certain fields that could radically upset society, although his position has softened over time.

There are two main camps that argue against anxiety over AGI. One position is shared by many in the computer science field: AGI systems that could pose a threat to mankind are so far from being developed that they’re not worth worrying about. Speculating about future AGI may be a worthwhile intellectual pursuit and an enlightening thought experiment, but it is future thinkers who will have to deal with any problems that arise. The other camp is perhaps best represented by futurist Ray Kurzweil, who takes a much more optimistic view of AGI development, believing the technology needed to create it is close at hand. In Kurzweil’s view, however, controlling such systems poses nowhere near the challenge others have projected.

One of the primary questions about AGI is how it will develop. Some think modern AI advances will lead to incrementally more generalized intelligences, ultimately building up to superhuman-like thinking. Others believe a fundamental breakthrough is needed before substantial progress can be made. Either way, the subject merits attention from governments and businesses alike. Businesses in particular need to keep up with AI advances, which seem to be arriving at an increasing rate. As computers take on an ever-increasing number of jobs formerly done exclusively by humans, businesses must adapt to compete. While a world-changing AGI breakthrough might seem unlikely, businesses need to keep an open mind about its possibility.