AI and machine learning (ML) applications have been at the centre of several high-profile controversies, from the disparities in Apple Card credit limits to the bias uncovered in Amazon’s scrapped recruitment tool.
Mind Foundry has pioneered the development and use of ‘humble and honest’ algorithms from the very beginning of its application development. As Davide Zilli, Client Services Director at Mind Foundry, explains, ‘baked-in’ transparency and explainability will be vital in winning the fight against biased algorithms and inspiring greater trust in AI and ML solutions.
Today, in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and for augmenting human expertise within an organisation. But they are now under the spotlight for many reasons – and regulation is on the horizon, with Gartner projecting that four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It is therefore vital that we understand their reasoning and decision-making process at every step.
Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable ‘black box’ solutions and eliminate the easy excuse of “the algorithm made me do it”!
The need to put bias in its place
Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.
Gender for example might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters – such as gender – in data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.
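One such step can be sketched in code. The snippet below is a minimal, hypothetical illustration (the data and field names are invented, not Mind Foundry’s implementation) of comparing positive-outcome rates across a sensitive attribute such as gender – a large gap between groups is a signal to investigate the data and model further.

```python
# Minimal sketch: compare positive-outcome rates across a sensitive
# attribute. All records and field names here are hypothetical.
from collections import defaultdict

def outcome_rates(records, sensitive_key, outcome_key):
    """Return the positive-outcome rate for each group in the data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        group = row[sensitive_key]
        totals[group] += 1
        positives[group] += 1 if row[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical credit decisions
records = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]
rates = outcome_rates(records, "gender", "approved")
# A large gap between groups warrants investigation
disparity = max(rates.values()) - min(rates.values())
```

A check like this does not prove discrimination on its own, but it surfaces exactly the kind of disparity a human expert should be asked to review.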
Lifting the curtain on machine learning
Removing the complexity of the data science procedure will help users discover and address bias faster – and better understand the expected accuracy and outcomes of deploying a particular model.
Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately justify the outcome. First steps towards this explainability would be features in the ML tool to enable the visual inspection of data – with the platform alerting users to potential bias during preparation – and metrics on model accuracy and health, including the ability to visualise what the model is doing.
Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations – such as the European Union’s GDPR ‘right to explanation’ clause – and helps effectively demonstrate transparency to consumers.
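An audit trail of this kind could be sketched as follows. This is an illustrative in-memory example only – the class name, fields and hash-chaining scheme are assumptions, and a real platform would persist entries durably – but it shows the core idea: every import, preparation and manipulation step is recorded with a timestamp and chained hashes so tampering is detectable.

```python
# Sketch of a hash-chained audit trail for data-preparation steps.
# In-memory only; a real platform would persist this log.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, action, details):
        """Append a timestamped entry whose hash covers the previous one."""
        prev = self.entries[-1]["hash"] if self.entries else ""
        payload = json.dumps(
            {"action": action, "details": details, "prev": prev},
            sort_keys=True,
        )
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "details": details,
            "hash": hashlib.sha256(payload.encode()).hexdigest(),
        })

trail = AuditTrail()
trail.record("import", {"source": "trials.csv", "rows": 1200})
trail.record("drop_column", {"column": "gender", "reason": "bias risk"})
```

Because each entry’s hash incorporates its predecessor’s, rewriting an earlier step would invalidate every later hash – a simple property that supports the compliance demonstrations described above.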
Providing humans with the tools to make breakthrough discoveries
There is a further advantage here: allowing users to quickly replicate the same preparation and deployment steps guarantees the same results from the same data – particularly vital for achieving time efficiencies on repetitive tasks. In the life sciences sector, for example, we find users are particularly keen on replicability and visibility for ML, where it becomes an important facility in areas such as clinical trials and drug discovery.
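The mechanics of such replicability can be as simple as pinning every source of randomness. The sketch below (a generic illustration, not any particular platform’s pipeline) shows a train/test split that is guaranteed to produce identical results from the same data on every run, because it uses a dedicated seeded random generator rather than global state.

```python
# Sketch of replicable data preparation: a fixed seed means the same
# shuffle and split are produced from the same data on every run.
import random

def prepare(data, seed=42, holdout=0.25):
    rng = random.Random(seed)   # dedicated RNG, no global state
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train_a, test_a = prepare(data)
train_b, test_b = prepare(data)
assert train_a == train_b and test_a == test_b  # identical every run
```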
Models need to be held accountable…
There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently opaque, whereas probabilistic methods can quantify their own uncertainty and so typically operate in a more ‘honest’ and transparent manner.
Here’s where many machine learning tools fall short. They’re fully automated with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.
An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage, and provide support through to creation – where it will visualise what the chosen model is doing and provide accuracy metrics – and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.
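The final step – evaluating model certainty and alerting when retraining is needed – can be illustrated with a simple monitor. This is a hedged sketch under assumed thresholds (the class, window size and 0.7 cut-off are invented for illustration): it tracks the model’s average prediction confidence over a sliding window and flags when it drifts too low.

```python
# Sketch of a certainty monitor: if average prediction confidence over
# a sliding window drops below a threshold, flag for retraining.
# Window size and threshold are illustrative assumptions.
from collections import deque

class CertaintyMonitor:
    def __init__(self, window=100, threshold=0.7):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, confidence):
        """Record one prediction's confidence; return True if retraining is advised."""
        self.scores.append(confidence)
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold

monitor = CertaintyMonitor(window=3, threshold=0.7)
monitor.observe(0.9)                      # healthy
monitor.observe(0.8)
needs_retraining = monitor.observe(0.3)   # mean falls to ~0.67
```

In practice such alerts would feed the human review loop described above, rather than triggering retraining automatically.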
…and subject to testing procedures
Beyond building greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can run a new data set through a model and receive scores of its performance. This helps identify bias so that the model can be adjusted accordingly.
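Such a test becomes far more revealing when performance is broken down per group rather than reported as a single number. The sketch below (hypothetical data and a toy rule-based model, purely for illustration) scores accuracy separately for each group in a fresh test set, so a gap between groups surfaces immediately.

```python
# Sketch: score a model's accuracy per group on a new test set, so
# performance gaps between groups surface immediately. Hypothetical data.
def accuracy_by_group(examples, predict, group_key):
    stats = {}
    for x in examples:
        g = x[group_key]
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (predict(x) == x["label"]), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

# Toy rule-based "model" and test set, invented for illustration
predict = lambda x: x["income"] > 50
test_set = [
    {"group": "A", "income": 60, "label": True},
    {"group": "A", "income": 40, "label": False},
    {"group": "B", "income": 55, "label": False},
    {"group": "B", "income": 30, "label": False},
]
scores = accuracy_by_group(test_set, predict, "group")
```

A single aggregate accuracy would hide the fact that the toy model above performs worse on one group than the other – which is precisely the bias this testing step is meant to expose.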
During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.
The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualise and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.
Industry leaders can drive the ethics debate forward
The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.
Researchers and solution vendors hold a responsibility as ML educators to inform users of the uses and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programmes on machine learning, including specific modules that cover ethics and bias and explain how users can identify, tackle or outright avoid the dangers.
Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.
Time to break open the black boxes
AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries – but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.
The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.
Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked-in transparency.
About the Author
Dr Davide Zilli is Client Services Director at Mind Foundry. Businesses looking to harness the potential of their data turn to Mind Foundry’s human-augmented machine learning solution for answers. Discover how to prioritise opportunities for applied machine learning with our unique “innovation year” offering.