Rapid advances are underway in AI, with applications multiplying from recommendation engines to driverless cars.
Start-ups and big businesses alike are driving this technology forward.
Yet, as AI is more widely deployed, the importance of explainable models will only increase. Put simply, if a system is responsible for making a decision, at some point that decision has to be communicated: what the decision is, how it was made and, increasingly, why the AI did what it did.
We’re already seeing the impact of AI on people’s lives and the struggle to explain the decisions that are made. Poor data sources, skewed programme logic and even the biases of developers mean that systems can easily reproduce human prejudices. Indeed, there are allegations that mortgage-review bots have become racially biased, while some algorithms learn that if the only people seeing and clicking on adverts for high-paying jobs are men, then those adverts should only be shown to men.
On ethics alone, we all want to know how and why things are decided. A mortgage analysis, for example, takes into account many factors beyond the applicant, so the AI may end up with 5,000 factors to review, including things the would-be homeowner doesn’t even know about. They might think they look better on paper than their neighbour without knowing the full range of factors being analysed.
Whether solving the equations of a dynamic system for precise answers or using statistical analysis to examine a series of events, AI is intended to increase our clarity and knowledge of how the world works. More must be done to make it truly explainable as well.
Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science.
Why is it so hard to explain?
Put simply, AI models are so complex that it is very hard to describe what is being done, when, where and why. As a rule, the more complicated the model, the more accurate it is, but at the same time its outputs become harder to “explain” in real terms.
It’s important to remember that there are, broadly, two types of AI in use today. The first is explicitly programmed and often mathematically driven, designed to behave predictably. The second is newer and typically built on deep learning, attempting to mimic the human brain: rather than following rules laid down in advance, it is fed data and expected to work things out for itself. The result is a model that is non-linear, appears chaotic and whose outputs cannot be predicted ahead of time. Yet these models excel at fine detail, identifying features or elements of a problem that cannot be addressed by traditional means.
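As a rough illustration of that contrast, here is a minimal sketch (assuming scikit-learn and synthetic data purely as an example stack, not anything the author specifies): the explicitly specified linear model can be explained by reading its handful of coefficients, while even a small neural network spreads its behaviour across thousands of weights that carry no individual meaning.

```python
# Minimal sketch: an explicitly specified model vs. a deep-learning-style model.
# scikit-learn and synthetic data are assumed here purely for illustration.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic data: 1,000 samples, 10 input features.
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=0)

# 1) Explicit, mathematically driven model: each coefficient states exactly
#    how much one feature contributes to the prediction.
linear = LinearRegression().fit(X, y)
print("Linear coefficients:", linear.coef_.round(2))

# 2) Deep-learning-style model: the network is fed the data and works out its
#    own internal representation, spread across thousands of weights.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(f"Network has {n_weights} weights; no single weight 'explains' a prediction.")
```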
The good news is that people are working on methods and tools that will provide generalised explanations and make AI, in both cases, more understandable.
Explainable AI
‘Explainable AI’ (as a concept or a movement) is not about simplifying models. It is about delivering techniques that help humans interpret machine outputs in a clear and straightforward way. Historically, researchers have been biased towards models and tools that appeal to our intuition, and this has, in part, driven the boom in data visualisation techniques, which have been very useful.
Take 2D or 3D projections, for example. These take a large, multi-dimensional space and present the data in a lower-dimensional form (2D or 3D). Correlation graphs are also helpful: in these 2D graphs the nodes represent variables and the thickness of the lines between them represents the strength of the correlation.
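The minimal sketch below shows both ideas, assuming scikit-learn, NumPy and Matplotlib as an example stack and the well-known Iris dataset as placeholder data: a 2D projection of a four-dimensional feature space, and a simple correlation graph whose line widths scale with the strength of each pairwise correlation.

```python
# Minimal sketch of two common visual aids, assuming scikit-learn,
# NumPy and Matplotlib as the example stack.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

data = load_iris()
X, y = data.data, data.target          # a 4-dimensional feature space

# 1) 2D projection: compress the multi-dimensional space into two axes.
proj = PCA(n_components=2).fit_transform(X)
plt.scatter(proj[:, 0], proj[:, 1], c=y)
plt.title("2D projection of a 4D dataset")
plt.savefig("projection.png")

# 2) Correlation graph: each variable is a node; line thickness reflects
#    the strength of the pairwise correlation.
corr = np.corrcoef(X, rowvar=False)
angles = np.linspace(0, 2 * np.pi, X.shape[1], endpoint=False)
pos = np.c_[np.cos(angles), np.sin(angles)]   # lay the nodes out on a circle
plt.figure()
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        plt.plot(*zip(pos[i], pos[j]), "k-", linewidth=4 * abs(corr[i, j]))
plt.scatter(pos[:, 0], pos[:, 1], s=200, zorder=3)
plt.title("Correlation graph: thicker lines = stronger correlation")
plt.savefig("correlation_graph.png")
```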
As with all modelling, there is a point before processing begins at which the data scientists or team involved must decide how interpretable they want the model to be.
Machine Learning techniques such as Decision Trees, Monotonic Gradient Boosted Machines and rule-based systems are popular and have all produced good results. However, in cases where accuracy matters more than interpretability, traditional visualisation techniques are being brought back in to support human understanding.
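To make the interpretable-by-design options concrete, here is a minimal sketch (assuming scikit-learn and synthetic data; the feature names are placeholders, not anything from the article): a shallow decision tree whose rules can simply be printed and read, and a gradient boosted machine with monotonic constraints so that its predictions can only move in an agreed direction as a given input grows.

```python
# Minimal sketch of two interpretable techniques, using scikit-learn
# and synthetic data purely as an assumed example stack.
from sklearn.datasets import make_regression
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=3, random_state=0)

# Decision tree: shallow enough that its rules can simply be printed and read.
# The feature names below are placeholders for the synthetic columns.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["income", "loan_amount", "age"]))

# Monotonic gradient boosted machine: constrain the model so the prediction
# can only rise with feature 0 and only fall with feature 1 (feature 2 is free).
# The constraints make the model's behaviour easier to reason about and defend.
gbm = HistGradientBoostingRegressor(monotonic_cst=[1, -1, 0]).fit(X, y)
print("Constrained GBM R^2:", round(gbm.score(X, y), 3))
```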
These tools include decision tree surrogates, which approximate a complex model with a simplified decision flow (a middle-man of sorts), and partial dependence plots, which show how the model behaves on average as a given variable changes, helping to gauge the importance of each variable. Granted, these approaches may not capture the full complexity of the AI, but they do give a better feel for its behaviour in human terms.
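A minimal sketch of both techniques, again assuming scikit-learn and synthetic data as the example stack: a shallow surrogate tree is trained to imitate a random forest’s predictions so its rules can be read as an approximate explanation, and a partial dependence plot shows how the forest’s output changes on average as individual features vary.

```python
# Minimal sketch of a decision tree surrogate and a partial dependence plot,
# assuming scikit-learn and synthetic data purely for illustration.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=1000, n_features=5, random_state=0)

# The "black box": a complex model whose internals are hard to read directly.
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Decision tree surrogate: train a shallow tree to imitate the black box's
# predictions, then read the surrogate's rules as an approximate explanation.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate))

# Partial dependence plot: how the black box's output changes, on average,
# as features 0 and 1 vary across their range.
display = PartialDependenceDisplay.from_estimator(black_box, X, features=[0, 1])
display.figure_.savefig("partial_dependence.png")
```

The surrogate will never match the black box perfectly, which is exactly the trade-off described above: a simpler, readable stand-in that gives humans a feel for how the real model behaves.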
Conclusion
All these capabilities will be key as Deep Learning and AI advance. Part of this process will continue to rely on good visualisation, and powerful GPU technology will continue to be needed to turn large data sets into usable representations. Equally, it will be essential not only that models and their associated training data are properly archived, but that the data are subject to rigorous version control. Combined, this will have a knock-on effect on businesses at the data centre and cloud level, shaping everything from power and cooling to infrastructure resiliency.
Ultimately, the demand for people with the skills to articulate these findings to non-data-scientist and non-technical audiences will undoubtedly continue to grow. In the meantime, actively choosing to tackle explainability from the outset is a solid step towards ensuring that models are built correctly and that everyone can truly benefit from AI: understanding what it does and keeping it ethical, fair and efficient.
About the Author
Vasilis Kapsalis is Director for Deep Learning and HPC at Verne Global, a provider of advanced data center solutions for high-performance computing (HPC) and Machine Learning. Prior to joining Verne Global he worked at NetApp looking after emerging technology solution sales, including Deep Learning and Hybrid Cloud, and before that at Microsoft (Azure) and IBM covering HPC and Big Data. He holds an MEng in Electronic and Information Systems from the University of Cambridge and a Diploma in Law.