Mitigating AI bias: the hidden challenge

Artificial Intelligence (AI) is transforming the way businesses operate across the UK, offering tools that can manage customer service enquiries, automate tasks, analyse huge amounts of data, and even make decisions.

Last year, 60% of large UK enterprises and 35% of small and medium-sized businesses were already using AI in some form. According to a report by McKinsey, AI could contribute a staggering £232 billion to the UK economy by 2030.

But as AI systems become more integrated into our daily lives, this rapid adoption brings risks. If not carefully designed and monitored, AI can reflect and even amplify societal biases. Left unchecked, those biases can reinforce harmful stereotypes, harm both businesses and the wider community, and exclude certain groups from benefiting fully from online advancements.

Chris Bush, Head of Design Group at Nexer Digital, a design agency focused on creating human-centred digital experiences, discusses why we can’t ignore AI bias and explains what can be done to mitigate it to ensure that these systems are fair, inclusive, and accessible to everyone.

What is AI bias?

AI bias occurs when AI produces results that unfairly favour certain groups over others. This can happen for several reasons. For instance, the data used to train the AI might be biased, the design of the algorithm might be flawed, or the AI might be tested in ways that overlook potential problems.
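To make the data side of this concrete, here is a minimal sketch, in Python, of the kind of representation check a team might run before training a model: counting how many examples each group contributes, since a group that barely appears in the training data is likely to be poorly served by the resulting system. The records, group labels and proportions below are entirely invented for illustration.

```python
# Minimal sketch: checking group representation in a hypothetical training set.
# The records, group labels and proportions below are invented for illustration.
from collections import Counter

training_records = [
    {"group": "under_35", "outcome": 1},
    {"group": "under_35", "outcome": 0},
    {"group": "under_35", "outcome": 1},
    {"group": "over_65", "outcome": 0},
    # ... a real dataset would contain thousands of records
]

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    print(f"{group}: {count} records ({share:.0%} of the training data)")
    # A group that makes up far less of the data than it does of the real
    # population is an early warning sign of potential bias.
```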

In the UK, where businesses are increasingly focused on diversity and inclusion, AI bias poses a significant challenge. If an AI system is biased, it could lead to discriminatory practices – whether in hiring, customer service, or product development. 

This isn’t just bad for those affected by the bias; it’s bad for business. Companies that rely on biased AI risk damaging their reputation, losing customer trust, and exposing themselves to legal risk under the UK’s Equality Act 2010.

AI bias in action

One of the most common ways AI bias manifests is through the reinforcement of stereotypes. For instance, AI image generators often return results that reflect and amplify existing societal biases. A prompt for images of a “successful businessman” typically yields pictures of white men in suits, while prompts for “nurses” predominantly show women. These results not only reflect existing stereotypes but also perpetuate them.

AI bias can also exclude entire groups of people. Large language models (LLMs), which are used to generate everything from chat responses to news articles, often produce content that mirrors the biases of the data they were trained on. AI trained on data from the internet may fail to represent communities less active online, such as older adults, non-English speakers, and individuals from cultures with limited internet access.

For example, an AI system designed to predict consumer behaviour, trained primarily on data from English-speaking users, might fail to address the needs and preferences of non-English speakers or those with different cultural practices. This can result in products and services that are less relevant or inaccessible to significant segments of the population.

In the UK, where around 18% of the population is over 65 and a significant portion speaks languages other than English at home, this kind of bias can result in digital products that do not serve the needs of a diverse society.

This kind of bias becomes particularly problematic when AI systems are used in contexts like hiring, where AI might inadvertently favour candidates who fit the “norm” it has learned from biased data.

Inclusive AI

Efforts are underway to combat these biases and create more inclusive AI systems. One notable example is the work being done by Latimer AI, an initiative in the United States that is pushing the boundaries of how AI can be trained to be more representative of diverse cultures.

Latimer AI’s approach is to train large language models using data sources that include lesser-represented cultures, folk tales, and community-driven oral histories. Its “AI for Everyone” model doesn’t just aim to avoid bias; it actively seeks to include a broad spectrum of voices that are often overlooked. This includes stories and histories from Black communities, indigenous cultures, and other groups whose narratives are not always prominent in mainstream data sources.

What’s innovative about Latimer AI is that it mainstreams cultural diversity in a way that feels natural and integrated. The AI doesn’t have to specifically call out that it’s being inclusive; it just is.

Why this matters for UK businesses

For UK businesses, the implications of AI bias are serious. Not only does biased AI risk alienating potential customers and clients, but it can also lead to reputational damage. 

Take marketing, for example: if an AI system unfairly targets ads to certain demographics while excluding others, a company might miss out on reaching a broader audience. In finance, a biased credit scoring system could unfairly deny loans to certain groups, leading to reputational damage and regulatory scrutiny.

Consumers are increasingly aware of and concerned about issues of fairness and inclusion, so businesses that fail to address AI bias could face backlash and lose customer trust. Given the UK’s diverse society, with a population comprising a wide range of ethnic, cultural, and linguistic backgrounds, non-inclusive AI systems risk excluding large segments of the population, resulting in products and services that do not meet the needs of the broader community. 

To stay ahead, companies need to take proactive steps to identify and mitigate these biases.

How to mitigate AI bias

So, what can businesses do to tackle AI bias? 

One of the most effective ways to prevent AI bias is to ensure that the teams developing these systems are diverse in terms of gender, race, and cultural background. When people from different backgrounds work together, they bring a wider range of perspectives, which helps to identify and address potential biases.

It’s also important to keep checking AI for bias. This means conducting regular audits to see how the AI is performing and whether it’s producing any unfair outcomes. It’s not just about looking at the data the AI was trained on, but also about testing the AI in real-world scenarios to see how it behaves.
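As a hedged illustration of what such an audit might check, the Python sketch below compares the rate of favourable outcomes a model produces for different groups and applies the widely used “four-fifths” rule of thumb. The group names and decisions are hypothetical; a real audit would use proper fairness tooling, larger samples and statistical tests rather than this screening heuristic alone.

```python
# Minimal sketch of an outcome audit: comparing favourable-decision rates
# across groups. All decisions and group names here are hypothetical.

# Each entry: (group, model decision) where 1 means a favourable outcome.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

groups = {group for group, _ in decisions}
rates = {
    group: sum(d for g, d in decisions if g == group)
    / sum(1 for g, _ in decisions if g == group)
    for group in groups
}
print("Favourable-outcome rates by group:", rates)

# Four-fifths rule of thumb: flag any group whose rate falls below 80% of
# the best-served group's rate. This is a screening heuristic, not a
# definitive legal or statistical test.
best_rate = max(rates.values())
for group, rate in rates.items():
    if best_rate > 0 and rate / best_rate < 0.8:
        print(f"Flag for review: {group} rate {rate:.0%} vs best {best_rate:.0%}")
```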

Transparency will also be key to building trust in AI. Businesses need to be open about how their AI systems work, including what data they use, how decisions are made, and what steps are being taken to avoid bias. Being transparent also means being accountable. If something goes wrong, businesses should have processes in place to address the issue quickly and fairly.
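One lightweight way to start is to keep a structured, human-readable record of each system, along the lines of the “model card” idea. The Python sketch below shows what such a record might contain; the system name, fields and values are invented for illustration, and real documentation would go into far more detail.

```python
# A minimal, illustrative "model card"-style record. The system name,
# field names and values are invented; real documentation would be
# far more detailed and kept up to date alongside the system itself.
model_card = {
    "name": "customer_enquiry_router_v2",       # hypothetical system name
    "purpose": "Route customer enquiries to the right support team",
    "training_data": "Anonymised UK support tickets, 2022-2024",
    "known_limitations": [
        "Under-represents non-English enquiries",
        "Not yet evaluated with screen-reader users",
    ],
    "bias_audits": ["Quarterly selection-rate review across age groups"],
    "contact": "responsible-ai@example.com",     # placeholder address
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```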

In the UK, where about 14.6 million people live with a disability, designing AI systems that are accessible to everyone is not just a legal requirement, it’s also a business opportunity. Inclusive design means considering the needs of all users, including those with disabilities, when creating AI-powered products and services.

Businesses should actively seek input from the communities that their AI systems will affect. This could involve consultations, user testing with diverse groups, making sure the AI works well with assistive technologies, designing interfaces that are easy to navigate, and forming partnerships with organisations that represent underrepresented communities.

Addressing AI bias requires a concerted effort from businesses, technology providers, and policymakers. It starts with acknowledging the problem and understanding that AI, like any other tool, is only as good as the data it’s trained on and the people who develop it.

The future of AI needs to be inclusive

As AI continues to shape the future of business in the UK, addressing AI bias will become increasingly important. By taking proactive steps to ensure that AI systems are inclusive and representative of all people, businesses can create products and services that truly meet the needs of a diverse society. The future of AI should be one where technology enhances human potential without reinforcing existing inequalities.


About the Author

Chris Bush is Head of Design Group at Nexer Digital. Founded in 2007, Nexer Digital (formerly Sigma) is a human-centred research, design and development agency which creates products and services that enhance people’s lives and work. With a focus on digital inclusion and social impact, Nexer works with companies in the public, private and not-for-profit sectors, both nationally and internationally. The team believes strongly in developing long-term, mutually beneficial strategic partnerships with its customers, with key clients including NHS England, AstraZeneca and the Department for Education. The team also organises the annual UX design conference Camp Digital, held in Manchester. Nexer Digital is part of Nexer Group AB, a Swedish technology firm with over 2,300 colleagues around the world.
