How do we keep bias out of AI?

From the coining of the term in the 1950s to now, AI has taken remarkable leaps forward and continues to grow in relevance and sophistication.

But despite these advancements, there’s one problem that continues to plague AI technology – the internal bias and prejudice of its human creators.  

The issue of AI bias cannot be swept under the carpet, given its potentially detrimental effects. In a recent survey, 36% of respondents reported that their businesses had suffered from AI bias in at least one algorithm, resulting in unequal treatment of users based on race, gender, sexual orientation, religion or age. These instances had a direct commercial impact: of those respondents, 62% reported losing revenue, 61% losing customers and 43% losing employees as a result, while 35% incurred legal fees because of lawsuits or legal action. 

This shouldn’t scare organisations off adopting AI. It is a question of when businesses adopt it rather than if, as they turn to it to drive productivity and efficiency and reduce the strain on human workers. However, as adoption spreads across more varied use cases, the potential for bias to surface only grows.  

Any business looking to incorporate AI must ensure that it delivers a fair and equal experience to every user and customer, or risk the consequences listed above. Putting processes in place that eliminate bias at the source, or at least remove it once discovered, is essential – the question is, how should businesses go about it?  

A diverse team brings diverse views  

Gartner anticipates that, by 2023, all organisations will expect AI development and training personnel to “demonstrate expertise in responsible AI” in support of algorithmic fairness. 

There is good reason for this expectation. While AI is not inherently biased, algorithms are influenced by the biases and prejudices of their human creators. But there are steps organisations can take today to ensure developers are able to detect and address bias in AI solutions. 

All AI programmers and designers should receive training on how to recognise and avoid bias in AI. The World Health Organization advocates for such training to help developers identify and avoid ageism and other prejudices in their work, after a recent study found that healthcare AI solutions are often embedded with designers’ “misconceptions about how older people live and engage with technology.”  

Training is key for detecting and eliminating not just ageism, but also sexist, racist, ableist and other biases that may lurk within AI algorithms. However, while training programmes can help limit bias, nothing compares to the positive impact of building a diverse analytics team. As a recent McKinsey article notes, bias in training data and model outputs is harder to spot if no one in the room has the relevant life experience to alert them to the issues. The teams that plan, create, execute and monitor the technology should be representative of the people they intend to serve. 

That’s why it is so important that the industry actively invests in diversifying its talent pool and building truly inclusive workplaces. Amelia’s Women in AI programme, for instance, shares the journeys of dynamic industry leaders to show the diverse routes women have taken into the industry, encouraging others to pursue careers in STEM and helping to shrink the industry’s gender gap.  

Monitor your AI, step by step 

Another step organisations can take to avoid bias is to establish a practice of regularly auditing AI algorithms for fairness. As an article from Harvard Business Review states, one of the keys to eliminating bias from AI is subjecting the system to “rigorous human review.” 

Several leaders in the AI and automation field have already put this recommendation into practice. Alice Xiang, Sony Group’s Head of AI Ethics Office, explains that she regularly tells her business units to conduct fairness assessments, not as an indicator that something is wrong with their AI solutions, but because fairness is something they should continuously monitor. Similarly, Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, emphasises the importance of monitoring AI at every step of development to ensure bias does not become part of the system. She describes how this process allows AI teams to determine whether their product is ready for public deployment. 
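To make the idea of a fairness audit concrete, a common starting point is simply comparing positive-outcome rates across protected groups. The sketch below is a minimal Python illustration of that check, with hypothetical column names and data; it is not a description of Sony’s or DataRobot’s internal tooling.

```python
# A minimal sketch of a fairness audit check, assuming a table of model
# decisions with a protected-attribute column. Column names are hypothetical.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> pd.Series:
    """Positive-outcome (e.g. approval) rate per protected group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group rate to the highest. The informal
    'four-fifths rule' flags values below 0.8 for human review."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
        "approved": [1, 1, 1, 0, 0, 1],
    })
    rates = positive_rate_by_group(decisions, "age_band", "approved")
    print(rates)
    print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

A single metric like this is only a screening tool; a low ratio is a prompt for the “rigorous human review” described above, not a verdict on its own.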

In some cases, this kind of oversight can be built directly into AI solutions to aid the bias-elimination process. For example, when the Amelia conversational AI assistant encounters a workflow that she has not previously performed, she creates new business process networks based on her interactions with users. However, any newly created process must be approved by human subject matter experts before it is deployed, providing an important checkpoint to reduce the risk of bias creeping in.  
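Amelia’s internal mechanism is proprietary, but the general human-in-the-loop pattern is straightforward: newly learned processes are held in a queue and can only reach production once an expert signs off. The sketch below illustrates that pattern with hypothetical names; it is not Amelia’s API.

```python
# A generic sketch of the human-in-the-loop checkpoint described above:
# newly learned processes are queued for expert approval before they can
# run in production. All names here are hypothetical, not Amelia's API.
from dataclasses import dataclass, field

@dataclass
class LearnedProcess:
    name: str
    steps: list[str]
    approved: bool = False

@dataclass
class ApprovalQueue:
    pending: list[LearnedProcess] = field(default_factory=list)
    deployed: list[LearnedProcess] = field(default_factory=list)

    def submit(self, process: LearnedProcess) -> None:
        """A newly learned workflow is held, never auto-deployed."""
        self.pending.append(process)

    def review(self, process_name: str, approve: bool) -> None:
        """A human subject matter expert approves or rejects the process."""
        for p in list(self.pending):
            if p.name == process_name:
                self.pending.remove(p)
                if approve:
                    p.approved = True
                    self.deployed.append(p)

queue = ApprovalQueue()
queue.submit(LearnedProcess("reset_password", ["verify identity", "send link"]))
queue.review("reset_password", approve=True)   # the SME checkpoint
print([p.name for p in queue.deployed])
```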

Transparency builds trust 

Even after building a diverse AI development team, training team members on responsible AI practices and regularly assessing algorithms throughout the development process, organisations cannot afford to let their guard down. 

Once companies deploy an AI product, they should be transparent with end users about how the algorithm was developed, what the product is intended to do, and who the point of contact is for questions or concerns. Dissolving the mystique of AI encourages open dialogue between companies and users. This empowers developers to use that feedback to improve their solutions, and to reduce harm by ensuring any erroneous algorithmic biases are resolved in a timely manner. 
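One lightweight way to operationalise this transparency is a “model card” style summary published alongside the product, covering exactly the items above. The sketch below is illustrative only; the field names and values are hypothetical examples, not a published standard.

```python
# An illustrative "model card" style record to publish alongside an AI
# product. Field names and values are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelCard:
    intended_use: str           # what the product is for
    training_data_summary: str  # how the algorithm was developed
    known_limitations: str      # where bias may still appear
    contact: str                # where users raise questions or concerns

card = ModelCard(
    intended_use="Answering routine customer-service queries.",
    training_data_summary="Anonymised support transcripts, 2019-2022.",
    known_limitations="Lower accuracy on non-English queries.",
    contact="ai-feedback@example.com",
)
print(card)
```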

Companies owe a duty of care to the end users of their AI programmes. The responsibility lies with businesses to ensure that their technology provides a fair experience for all end users: this means eliminating discrimination at the source and closely monitoring the technology once it has been deployed, so that any bias or prejudice unknowingly introduced can be detected. Following these steps builds the foundations of a responsible AI strategy focused on providing a fair, high-quality customer experience.  


About the Author

Faisal Abbasi is Managing Director, UK & Ireland and Europe, at Amelia. Amelia is the world’s largest privately held AI software company, delivering cognitive, conversational solutions for the enterprise. As the leading digital workforce company, Amelia teams humans with digital employees to unleash creativity and deliver business value at scale. The company streamlines IT operations, automates processes, increases workforce productivity and improves customer satisfaction, delivering bottom-line results.
