UK AI strategy at risk unless diversity and data literacy taken seriously

The UK’s newly launched national AI strategy will help keep the country competitive as AI transforms businesses and jobs

The power of AI to drive growth and innovation is clear: the McKinsey Global Survey on AI suggests that organisations are increasingly using AI to generate value, often in the form of revenue. But the black-box approach taken by most AI companies carries a serious risk of scaling bias, intentional or not, like never before. As part of any AI strategy, diverse teams must be involved from the ground up to recognise biases in the data on which models are trained and to plan for how minorities may be affected. Making AI explainable, with greater transparency around training data, data gaps, and algorithmic logic, can further reduce bias at scale. These are the key ways to mitigate this very real risk.

Strengthen diversity for better AI outcomes

As the Government noted in its Understanding the UK AI labour market research, diversity in the AI sector is low: over half of firms (53%) said none of their AI employees were female, and 40% said none were from ethnic minority backgrounds. The strategy announced by the UK Government is a good first step towards unlocking the value of AI. To ensure that value is realised without harming others, however, we need to go further and require diverse AI teams that reflect the world around us, in much the same way the UK has led in requiring diverse boards of directors.

Most of the data used to train AI systems was not captured with diversity in mind. As we use that data to train models and automate much of our world, with similarly homogenous teams building and designing the systems, the bias already inherent in the data will scale like never before. The UK can change this by giving a wider population the ability to engage directly with the outputs of AI models and see how they compare to reality.
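
Comparing model outputs to reality can be made concrete. One simple approach is to measure how often a system's predictions match known outcomes for each demographic group, so that gaps become visible. A minimal sketch follows; the data, group labels, and helper function are illustrative assumptions, not from any system named in this article:

```python
# Sketch: compare a model's predictions against real outcomes per group.
# All data here is made up for illustration; in practice the predictions
# would come from the AI system under review and the outcomes from
# ground-truth records.

def accuracy_by_group(predictions, outcomes, groups):
    """Return {group: accuracy} so gaps between groups are visible."""
    stats = {}
    for pred, actual, group in zip(predictions, outcomes, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == actual), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

predictions = [1, 0, 1, 1, 0, 1, 0, 0]
outcomes    = [1, 0, 1, 1, 1, 0, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(predictions, outcomes, groups))
# Group A: 4/4 correct; group B: 2/4 correct in this toy data.
```

A gap like the one above (100% accuracy for one group, 50% for another) is exactly the kind of signal a wider, more diverse population of reviewers is well placed to spot and question.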

Problems at scale

After a year of use, Amazon's software engineers discovered that the program the company used for reviewing CVs discriminated against women applying for technical roles, and they scrapped it. Sensor-operated soap dispensers in public bathrooms failed to react to dark skin. Around the world, there are multiple stories of facial recognition or profiling leading to bad outcomes in crime prevention, housing, and hospital care.

These were all serious failures that harmed individuals and, often, large sections of the population.

Combating bias

There are three key ways to combat these biases. First, the systems themselves need to be more transparent, so we can understand how their decisions are made. Second, we need more diverse teams building the systems from the start. Lastly, we need to make people across society more data literate, so that they check the output of these systems rather than blindly accepting them and their consequences.
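
The "checking the output" step can be made mechanical. One widely used rule of thumb (the US "four-fifths rule", borrowed here purely as an illustration) flags a decision system when one group's selection rate falls below 80% of another's. A minimal sketch, with made-up numbers:

```python
# Sketch: flag a decision system whose selection rates differ sharply
# between groups. The 0.8 threshold follows the common "four-fifths"
# rule of thumb; the decisions below are made-up illustrative data.

def selection_rates(decisions):
    """decisions: {group: list of 0/1 outcomes} -> {group: rate}."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def disparate_impact_flagged(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    return (lowest / highest) < threshold

decisions = {
    "group_x": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected
    "group_y": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected
}
print(disparate_impact_flagged(decisions))  # ratio 0.25/0.75 ≈ 0.33 -> True
```

No single threshold catches every harm, but even a check this simple makes bias a measurable, reviewable quantity rather than something discovered only after a year in production.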

Open the black box

Right now, the growth and opaque nature of AI are, counterintuitively, driving demand for greater data literacy: individuals don't need to do the analysis themselves, but they do need critical data literacy skills to dig into what AI systems produce. Unless the UK gains greater data fluency, the very real risk is that the human element perpetuates unconscious bias at scale through ignorance of how well an AI system actually fits its task. Scaling bias with fewer humans in the loop will inevitably institutionalise bad decisions, inefficiencies, and maladaptive outcomes.

If the UK wants to gain ground in AI, it needs to learn from the mistakes of the first movers and ensure that British companies avoid the same egregious dead ends and embarrassing U-turns.

The great majority of AI and technology-enabled errors that lead to such inequitable outcomes are avoidable. Some of the routes to smarter technology delivery are incredibly obvious: hiring and listening to a diverse workforce, testing ideas on a diverse range of users, and ensuring that data is drawn from diverse sources that reflect reality. Technologists and business managers should consider how the enterprise encodes for smarter behaviours. It is the equivalent of a certification or best-practice exercise in compliance, and it requires those involved to ask the right questions and to be mindful of who is working on solving business problems.
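
Ensuring that data is drawn from diverse, reality-reflecting sources is also testable. A team can compare the demographic mix of a training set against a reference population and flag groups that fall short. The shares, counts, and tolerance below are illustrative assumptions:

```python
# Sketch: check whether a training set's demographic mix roughly matches
# a reference population. Counts, shares, and tolerance are illustrative.

def underrepresented(sample_counts, population_shares, tolerance=0.05):
    """Return groups whose share of the sample falls more than
    `tolerance` below their share of the reference population."""
    total = sum(sample_counts.values())
    return [
        g for g, share in population_shares.items()
        if sample_counts.get(g, 0) / total < share - tolerance
    ]

sample_counts = {"group_a": 700, "group_b": 250, "group_c": 50}
population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
print(underrepresented(sample_counts, population_shares))
# group_c is 5% of the sample vs 20% of the population -> flagged
```

Run before training rather than after deployment, a check like this turns "data from diverse sources" from an aspiration into a pass/fail gate.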

Data literacy is a tougher challenge. Diversity can be complex, but ensuring that staff have the mental toolbox to engage with data, interrogate it successfully, and judge whether it is fit for purpose is at least as challenging a proposition. The former can be addressed through best practices that encourage forethought and questioning; data literacy is not something every person can grasp as readily.

This is why the fields of business intelligence and data analytics have been evolving so rapidly. From dedicated analysts preparing reports, to static dashboards, and on to the modern analytics cloud, there is now a consumer-grade front end to enterprise data. Anyone who can use office software or popular consumer apps can now apply their expertise and be empowered to turn data into actionable insights. Because data skills are in such short supply (as the Government acknowledges in another report from this year), augmenting human brainpower with technologies that let everyone ask questions of data, including AI training data, allows a wider population to use their expertise and lived experience to ensure that data reflects the concerns and needs of real people.

The key to the success of self-service analytics is creating a culture of data permissiveness. Users must know that they are allowed, encouraged, and empowered to bring their knowledge to bear alongside data. There must be room for wider contribution to technology and data development, with many voices allowed to advise on the direction of growth. Indeed, technologies like AI can help guide users towards ever better decisions. In analytics, prompts that surface questions or unasked-for insights can encourage the human involved to consider the data problems more deeply and ensure that their natural biases and blind spots do not become baked into their part of the process.

From the UK Government to every technology company, end-user enterprise, and business user, data literacy must be encouraged. In the meantime, the right solutions for real-time data investigation and testing should be put in place so that anyone, absolutely anyone, can become their own analyst, combining their unique experiences with data insights to build a much more equitable future for all.


About the Author

Damien Brophy is the Vice President EMEA for ThoughtSpot, responsible for managing operations across Europe, the Middle East, and Africa. He focusses on helping ThoughtSpot customers unleash the power of their cloud data. He has 20 years’ experience in business development across well-known technology, data, and cloud companies.

 

Featured image: ©Pixel
