The Evolution of Machine Learning

Machine Learning isn’t new – but it’s just getting started

In recent years, the term ‘machine learning’ has become very popular among developers and businesses alike, even though research in the field has been going on for decades. Essentially, machine learning is about teaching machines to learn concepts and techniques the way humans do. Earlier, machines could only reason in Boolean logic, giving a strict ‘yes’ (1) or ‘no’ (0) answer (output) to a question (input). This limited the type of questions one could ask a machine.

Fuzzy logic systems were later introduced to address this particular issue by enabling machines to answer on a scale of values ranging from no to yes. In a binary logic system, the answer to “will it rain today?” would be a strict yes or no, while in a fuzzy system the answer may fall anywhere between a definite yes and a definite no, such as definitely, very likely, probably, or not likely, depending on the probability of rain. Fuzzy systems lifted the restrictions on answers, but the restrictions on questions remained. A computer still cannot answer questions like “how do we solve world hunger?” or “should I ask her out?”; however, with recent advances in machine learning, your smartphone’s digital assistant might at least help approximate the answers.
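To make the contrast concrete, here is a minimal Python sketch of the two styles of answer to “will it rain today?”. The probability value and the linguistic label thresholds are illustrative assumptions, not taken from any real forecasting system.

```python
# A minimal sketch contrasting binary and fuzzy answers to the same question.
# The 0.7 rain probability and the label thresholds are illustrative assumptions.

def boolean_answer(rain_probability: float) -> str:
    """Binary logic: the answer is forced into a strict yes (1) or no (0)."""
    return "yes" if rain_probability >= 0.5 else "no"

def fuzzy_answer(rain_probability: float) -> str:
    """Fuzzy logic: the answer falls on a scale of values between no and yes."""
    if rain_probability >= 0.9:
        return "definitely"
    if rain_probability >= 0.7:
        return "very likely"
    if rain_probability >= 0.5:
        return "probably"
    if rain_probability >= 0.3:
        return "not likely"
    return "definitely not"

print(boolean_answer(0.7))  # "yes" -- all nuance is lost
print(fuzzy_answer(0.7))    # "very likely" -- a graded answer on the no-to-yes scale
```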

Data Equals Profit

Artificial intelligence as a field of science lost years of research and funding because computer scientists could not prove its viability. While computer scientists were making huge strides in computational performance, using advances in hardware to let machines solve complex calculations, their fellow researchers’ hypotheses about machines that could think and act like humans were met with skepticism. Eminent researchers such as Alan Turing laid the foundations for AI but could not convince others to take interest, for lack of a proof of concept. The field was labeled fodder for fiction and was promptly defunded for several years.

Other fields of computer science were rapidly growing into industries of their own. With the advent of the internet, and subsequently of mobile technologies and social applications, companies saw huge potential in the resulting proliferation of data. This led to the emergence of big data and its related technologies. It was soon realized that this digital information, especially consumer data, was worth billions of dollars to large corporations and governments.

Machine learning, a sub-field of AI, saw rapid growth when companies such as Google and Facebook began finding new ways to turn their troves of data into profit. Data was the new oil, and machine learning was the refinery. The speed and accuracy with which machine learning techniques extract vital information from terabytes of raw data make them attractive solutions to a variety of problems. Neural networks, a major component of machine learning, are now used almost everywhere: in computer vision to gather information from images, in natural language processing to power chatbots and customer support, and in sentiment analysis to gauge public opinion of a brand from more than a million tweets.
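To give a sense of what such a network looks like under the hood, below is a minimal sketch of a two-layer neural network trained on the classic XOR problem with plain NumPy. The architecture, learning rate, and iteration count are illustrative assumptions, not a description of any production system mentioned above.

```python
# A minimal two-layer neural network trained on XOR with plain NumPy.
# Hidden-layer size, learning rate, and step count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # predictions in [0, 1]

    # Backward pass: gradients of mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # predictions approach [0, 1, 1, 0] as training converges
```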

Machine learning is now a hot industry teeming with new jobs and new demands. Several startups are built around providing machine learning services, and traditional companies are venturing into the field through innovation. Amazon’s Alexa, Uber’s self-driving cars, and Google’s translation services are some of the exciting products built on neural networks. New graduates are crowding machine learning and AI classes to get a foot in the industry as early as possible. The investment of both money and research in this field has ensured that machine learning is here to stay.