How far away are we really from artificial intelligence?

General-use AI may be some time away yet

By the mid-1950s, the world realized that computers were going to play a major role in future technology. Military, business and educational entities began investing heavily in computers, and rapidly advancing hardware meant that the potential for computing seemed endless. Artificial intelligence, perhaps more than any other aspect of computing, captured the public’s imagination, and predictions of a future ruled by computation and robots were common in news stories and throughout science fiction literature and cinema.

To understand why early experts were so optimistic about artificial intelligence, it’s important to understand Moore’s Law. Computers developed rapidly through the 1950s and early 1960s, and Gordon Moore, a co-founder of chipmakers Fairchild Semiconductor and Intel, predicted that the number of transistors that could fit on an integrated circuit would double every year, leading to exponential growth in processing power. He revised this prediction in 1975, settling on a doubling period of roughly two years, a pace the industry went on to track for decades. With so much power on the horizon, many posited it was only a matter of time before computers outpaced humans in nearly every cognitive area.
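
To see why that trajectory bred such optimism, it helps to run the numbers. The short Python sketch below projects transistor counts under a two-year doubling period, using the roughly 2,300 transistors of the 1971 Intel 4004 purely as an illustrative starting point; the output is indicative, not a precise history.

```python
# Rough projection of transistor counts under a two-year doubling period.
# The 1971 Intel 4004 (about 2,300 transistors) is used purely as a baseline.
start_year, start_count = 1971, 2_300
doubling_period = 2  # years

for year in range(start_year, 2022, 10):
    count = start_count * 2 ** ((year - start_year) / doubling_period)
    print(f"{year}: ~{count:,.0f} transistors")
```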

Perhaps the most famous popularizer of artificial intelligence in the 21st century based his predictions on a more generalized view of Moore’s Law. In 1999, Ray Kurzweil argued that the dynamic behind Moore’s Law drives accelerating rates of advancement in other fields as well, backing up the claim with examples of profound technological and scientific advances throughout the latter half of the 20th century. His 2001 essay, “The Law of Accelerating Returns,” projects a future in which science reaches a so-called technological singularity: computers become able to replace humans in research, creating a runaway cycle of self-improvement and almost unimaginable progress throughout society. Companies like IBM are spearheading that progress through intensive research and development.

“Whilst the concept of AI portrayed by Hollywood may be some time away, there are real opportunities and moments for businesses to act now as cognitive powered insights become increasingly embedded into decision making across the value chain, regardless of industry,” says Karen Lomas, Executive Partner – IBM Cognitive Curation.

“The cognitive era is fundamentally transforming the relationship between humans and machines by delivering innovative cognitive solutions that enable positive benefits for society and new opportunities for business. It’s not unreasonable to expect that, within this rapidly growing body of digital information, lie the secrets to defeating cancer, reversing climate change or even fixing the global economy.

“At IBM, we are working on ‘augmented intelligence’ versus ‘artificial intelligence’, reflecting the critical difference between systems that enhance and scale human expertise and those that attempt to replicate human intelligence.”

While there has been plenty of noteworthy progress, there is a clear disconnect between the AI of science fiction and what’s possible with today’s technology. Kurzweil’s arguments depend on the creation of a generalized artificial intelligence, the likes of which hasn’t yet been developed, and this requirement has led to criticism. Although Moore’s Law has largely held true, computer scientists and neuroscientists have come to appreciate that faster computers don’t necessarily lead to more human-like capabilities. In fact, computers today still operate on the same architecture John von Neumann described in 1945, and modern computers don’t resemble the human brain any more than older computers did.

Many experts, including Kurzweil, argue that computers don’t have to operate like a human brain; they only have to be able to simulate one. If human cognition can be quantified and explained mathematically, they argue, computers should be able to replicate it, and if computing power continues to grow according to Moore’s Law, building such a simulation becomes a matter of developing the right software. Although generalized artificial intelligence software is undoubtedly complex, the potential it promises will spur interest and funding.

Even this argument, however, raises a question: Is Moore’s Law still in effect? Researchers and technology companies have warned that we might be running up against size limitations, as making transistors any smaller may soon be impossible, or at least economically unviable. Modern processor development is approaching physical limits that could prove impassable, potentially putting a cap on performance. There are ways to do more with the same number of transistors, but the rapid progress that has typified computing so far might come to an end.

“I don’t see the ‘danger’ of a super-human AI coming too soon. We might be years (or decades?) away from an AGI (artificial general intelligence – an AI that can ‘think’ on more than one subject), but that would still be nothing like human-like intelligence,” notes Chris Fotache, founder and CEO of AI research company CYNET.

While many computing and logic operations have been reproduced by computer algorithms, cognitive reasoning still can’t be explained by current neuroscience, and without an account of reasoning, we can’t speak of human-like AI.

The Death of Moore’s Law?

There are reasons why some aren’t yet proclaiming the death of Moore’s Law. The first is based on history: experts can recall countless predictions of the law’s impending doom, only to later find it continued unabated. The second is that the future is inherently unpredictable. Although today’s experts might be unable to see how processor designers will work around current obstacles, that doesn’t mean others won’t find a way.

The death of Moore’s Law wouldn’t necessarily signal the end of Kurzweil’s ideas. Multiprocessor computers are now the norm, and tasks that can take advantage of parallel processing benefit tremendously. Networks can also keep getting faster, letting computers communicate more quickly. With the right software, the dream of a brain-like computer might still be attainable.
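
As a rough illustration of why parallel hardware still pays off, the Python sketch below splits an embarrassingly parallel job across CPU cores using the standard multiprocessing module. The workload itself, summing squares over chunks of numbers, is a hypothetical stand-in for any task that divides into independent pieces.

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for any independent unit of work; the real task is hypothetical.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split one large job into ten independent chunks.
    chunks = [range(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
    with Pool() as pool:  # defaults to one worker process per CPU core
        partial_sums = pool.map(process_chunk, chunks)
    print(sum(partial_sums))
```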

While artificial intelligence discussions often focus on breakthrough developments, incremental changes built on more conventional concepts have made modern artificial intelligence systems powerful. Businesses, in particular, are able to take advantage of complex algorithms to identify trends that humans simply lack the time and ability to find. The internet is perhaps the best example; modern search engines rely on complex processing to find the information users are looking for, surfacing obscure material hidden within billions of websites.
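
One simplified picture of how a search engine pulls a few relevant pages out of an enormous collection is the inverted index, which maps each word to the documents containing it. The toy Python version below, built on made-up documents, is a sketch of the idea rather than how any particular engine actually works.

```python
from collections import defaultdict

# Tiny, made-up document collection; a real engine indexes billions of pages.
documents = {
    1: "computers will continue to play a larger role in business",
    2: "artificial intelligence captured the public imagination",
    3: "modern search engines rely on complex processing",
}

# Inverted index: each word maps to the set of documents that contain it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return the IDs of documents containing every word in the query."""
    words = query.lower().split()
    results = set(index.get(words[0], set())) if words else set()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

print(search("artificial intelligence"))  # -> {2}
```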

Perhaps the most promising contemporary developments lie in what’s called “big data.” Traditional data processing is powerful, but it struggles when it comes to sifting through unimaginably large data sets collected on ever-growing storage arrays. Small effects can be impossible to find using traditional search methods and statistical tools, but big data tools under development can find them, letting businesses take advantage and eke out an edge over the competition. Other fields can benefit as well. The medical industry has struggled to draw useful conclusions from the large backlogs of data collected over the years, but modern analysis tools let medical researchers, pharmaceutical companies, and practicing doctors find more effective treatments.
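
To make the “small effects” point concrete, here is a minimal Python sketch: a simulated stream of a million records hides a tiny difference between two groups that no individual record reveals, yet a simple streaming aggregation recovers it. The data and the 0.02 effect size are entirely made up for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

def record_stream(n=1_000_000):
    """Simulated records: a +0.02 'treated' effect hidden in much larger noise."""
    for _ in range(n):
        group = random.choice(["control", "treated"])
        lift = 0.02 if group == "treated" else 0.0
        yield group, random.gauss(0.5 + lift, 1.0)

# Streaming aggregation: the tiny group difference only emerges at scale.
totals, counts = defaultdict(float), defaultdict(int)
for group, outcome in record_stream():
    totals[group] += outcome
    counts[group] += 1

for group in sorted(totals):
    print(f"{group}: mean outcome {totals[group] / counts[group]:.3f}")
```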

“Through the luck of evolution, a super-powerful AI might get to that, but I think that would be more about intelligent design than evolution,” adds Fotache.

Software designers don’t even know how humans acquired those capabilities, which makes it impossible to program them into a computer.

The question of how far we are from artificial intelligence is complex, but there are two main answers. When it comes to using computers to make sense of the world from a scientific, governmental, or business perspective, artificial intelligence is already here in a big way, and computers will play an ever larger role as the tools are further developed. When it comes to generalized, human-like artificial intelligence, we may still be decades away, or longer. However, it’s important to bear in mind the possibility of a breakthrough. A few clever ideas might be enough to usher in a new era of human-like artificial intelligence sooner than predicted.