Ben Taylor, CEO of automated decision-making platform Rainbird and advisor to the All-Party Parliamentary Group on AI (APPG AI), argues that it’s time for policymakers to collaborate more in the fight for AI transparency, rather than bury their heads in the sand hoping tech companies will self-regulate.

Taking an apathetic view towards AI and automation might feel like the path of least resistance to many policymakers. Effectively regulating this highly complex group of technologies, without derailing or choking its innovative potential, is an enormous and difficult task.

But it’s a challenge that has to be tackled. The civic and economic impact that AI and automated decision-making technologies are having demands strong leadership and a brave, forward-thinking mindset from policymakers.

Without meaningful regulation to address the use of AI tools within businesses, the negativity, fear and sensationalism around AI will only be amplified. What’s needed is an honest, informed debate about the impact of AI technologies; one that leads to the implementation of policies that protect and empower society, without stifling innovation.

Now I’m not suggesting for one minute that regulating a technology in its early days is easy. And I’m also not absolving tech companies, like my own, of any responsibility. In a perfect world, tech companies would self-regulate in line with the greater social, economic and political good, but this isn’t a perfect world. It’s unrealistic to think that all tech companies will do so in the same consistent, ethical and meaningful way, so government needs to step in and shoulder some of the burden.

Instead of retrospective regulation, which will always be playing catch-up to the tech, we must try to get ahead of things. We need to take a proactive and future-proof approach that tackles the trust and transparency problems that AI and automated decision-making technologies pose today. And that means policymakers setting out the principles for making algorithms accountable, whilst also preparing us to be able to make the most of the huge opportunities that lie ahead.

IP protection and transparency are not natural partners

Much of the fear and ‘ignorance’ towards AI and automated decision-making stems from what we call the “black box” problem. This refers to systems where it’s very difficult for developers, analysts or regulators to explain the internal workings or the decision-making process behind the output. It’s an issue most commonly encountered when deep learning or neural networks are used to process large quantities of data.
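To make that concrete, here is a minimal, purely illustrative sketch in Python (toy data and hypothetical loan features; not any real system). A small neural network produces a score whose “reasoning” lives in learned weight matrices that nobody can read back as an explanation, while a hand-written rule returns a comparable answer with the reason attached.

```python
# Illustrative only: toy data and hypothetical loan features, not a real system.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical applicant features: [income (thousands), years at address, prior defaults]
X = np.array([[45, 2, 0], [20, 1, 2], [60, 8, 0], [15, 0, 3]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined (toy labels)

black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)
applicant = np.array([[30, 1, 1]])

# The black box: a score comes out, but the "logic" is spread across
# learned weight matrices that cannot be read as a human explanation.
print("approval score:", black_box.predict_proba(applicant)[0][1])
print("where the logic lives:", [w.shape for w in black_box.coefs_])

# A transparent rule: the decision and its reason travel together.
def rule_based_decision(income_k, years_at_address, prior_defaults):
    if prior_defaults >= 2:
        return "declined", "two or more prior defaults"
    if income_k < 18:
        return "declined", "income below the 18k threshold"
    return "approved", "income above threshold and fewer than two defaults"

print(rule_based_decision(30, 1, 1))
```

The point is not that simple rules beat neural networks; it is that one of these two outputs can be read back to the person it affects, and the other cannot.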

Of most concern to policymakers, industry regulators and the general public is transparency. But this comes with its own set of challenges. With highly complex AI technologies such as deep learning and neural networks, even developers who understand the technology at an algorithmic level find it almost impossible to explain every process involved. And then there is the issue of trade secrecy, where firms don’t want to disclose the inner workings of their products and systems.

The major issue is that we’re being led down a path where determinations are made, and decisions taken, by AI in ways that cannot be fully analysed, evaluated and explained. Because of this, when it comes to “black box” decision-making, there is no way to establish or build trust. We may not care when being served a movie recommendation on Netflix or a playlist on Spotify, but when the stakes are higher and the decision potentially life-changing, it becomes a much more difficult pill to swallow.

The challenge for policymakers, then, is in balancing companies’ concerns over IP and trade secrets with the public demand for transparency and clarity: the knowledge that a decision taken by AI has been made legally, ethically and free of bias.

If algorithms are to be deployed in situations where the output has the potential to affect a person’s life, then being able to scrutinise the decision-making process needs to be a human right, not an optional extra.

The explosion in the number of algorithms used by employers, banks, police forces and others over the last few years has proven one thing: not all AI and automated decision-making systems are created equal. As we’ve seen, some can, and do, demonstrate bias and make bad decisions that detrimentally impact people’s lives.

The House of Lords Select Committee on AI has already expressed its view that “…it is unacceptable to deploy any AI system that could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.”

The trouble is that words and views are easy to express. Creating meaningful and binding policy is much, much harder.

We must do better than GDPR

Last year, GDPR’s “right to an explanation” was heralded by some as a regulatory saviour, but the belief that it delivers accountability and transparency for AI is a huge red herring. The right to challenge automated decision-making has existed in UK data protection law since the 1998 Data Protection Act, and it still comes with no legal guarantee of a meaningful explanation.

GDPR does not define how automated decisions should be made or what constitutes a satisfactory explanation. It simply compels companies to reveal how an algorithm works and the type of data it draws on to make determinations. This is something companies can do even if they’re using deep learning and neural networks. They can explain the rules behind the system, but not the internal logic.

In practice, this means that instead of releasing a full explanation for a specific AI decision, a company can simply describe how the algorithm works. For example, a person turned down for a bank loan might be told that the algorithm took their credit history, age, and postcode into account, but not actually learn the specific reason why their application was rejected.

The right to be informed also applies before decisions are made, not after the fact. And decisions can only be challenged when they are completely automated from start to finish, and the outcome of the decision has legal or other similarly significant effects. The obligation vanishes if there is the slightest form of human involvement.

GDPR has definitely achieved some good in terms of stimulating genuine public interest in data privacy and protection. Businesses are much less likely to get away with mishandling digital information today. But GDPR is not the regulatory answer to the black box problem.

So, how can businesses applying AI build up public trust and satisfy the demands of external regulators?

A collaborative approach to AI regulation

Some say that putting such a premium on transparency sets too high a burden for the AI industry. As a member of the All-Party Parliamentary Group on AI (APPG AI), I’ve been part of formal discussions on the explainability of AI. My view is that businesses need to start with AI-powered technologies that are fully auditable and modelled on human expertise. By using these solutions, they can inspire public trust in AI systems and satisfy stringent regulations.

While many large organisations are looking to deep learning and neural networks to dive deep into the highly sensitive, highly detailed data they hold, these techniques currently cannot provide transparency or offer explainability. As a result, this subset of AI will only serve to increase the fear and distrust that surrounds the use of AI and big data.

By scaling business and economic decision-making with auditable AI, we not only build confidence and trust, but also help guide policymakers towards legislation on algorithmic accountability. Auditable and explainable AI solutions, while not suited to churning through big data in the way neural networks are, are perfect for uniting technologists, businesses, regulators and policymakers, and for laying a foundation for the use of AI in decision-making. They provide a sound framework for understanding how the technology works within industry, how it will evolve and disrupt, and how to regulate its growth in a way that promotes innovation rather than hinders it.
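To sketch what an auditable, expertise-modelled decision could look like in practice (hypothetical rules and field names; not a description of Rainbird’s product), every outcome below is returned together with a trail of the human-authored rules that produced it. That trail is the kind of record a regulator, or the person affected, can actually inspect.

```python
# Illustrative sketch only: hypothetical underwriting rules, not a real product.
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    audit_trail: list = field(default_factory=list)

def assess_loan(applicant: dict) -> Decision:
    trail = []

    # Rule 1 (affordability), as a human underwriter might write it down.
    ratio = applicant["monthly_repayment"] / applicant["monthly_income"]
    trail.append(f"Rule 1, affordability: repayment/income = {ratio:.2f} (limit 0.35)")
    if ratio > 0.35:
        return Decision("declined", trail + ["Declined under Rule 1"])

    # Rule 2 (recent credit history).
    defaults = applicant["recent_defaults"]
    trail.append(f"Rule 2, credit history: defaults in last 3 years = {defaults}")
    if defaults > 0:
        return Decision("referred to a human underwriter", trail + ["Referred under Rule 2"])

    return Decision("approved", trail + ["Approved: all rules satisfied"])

decision = assess_loan({"monthly_income": 2400, "monthly_repayment": 700,
                        "recent_defaults": 0})
print(decision.outcome)
for step in decision.audit_trail:
    print(" -", step)
```

The rules themselves are not the point; the point is that each decision ships with its own explanation, so the same artefact that drives the outcome can be handed to an auditor or to the person affected.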

At present, we have tech companies pitted against policymakers, and businesses in confrontation with regulators. By using auditable solutions, and providing clarity and explainability, we can help each other rise to the challenge of legislating. What we need is a collaborative effort towards integrating and implementing AI, and transparent systems are the best place to start.


About the Author

Ben Taylor is CEO and co-founder of Rainbird Technologies. He holds a degree in Artificial Intelligence and is also a member of the All-Party Parliamentary Group on Artificial Intelligence (APPG AI).

Ben is a former Adobe Computer Scientist, having worked on both Acrobat and the PDF format. After leaving Adobe, he took a position as the Director of Technology at a motor insurance start-up. He led the technical development of an award-winning AI system which became a market leader in the motor insurance industry before starting Rainbird AI.