With artificial intelligence advancing at a rapid pace, the need to safeguard humans from automated decision-making is becoming impossible to ignore.
Despite the many benefits AI technology can provide – for instance, AI models can detect breast cancer more accurately than radiologists – we also need to be aware of the potential negative consequences of AI, including deepfakes and nefarious uses of facial recognition.
In fact, the regulation of artificial intelligence is emerging as a key disagreement among the world’s biggest tech companies. Most recently, Sundar Pichai, CEO of Google and its parent company Alphabet, added to the heated debate with an op-ed calling for greater regulation of AI technologies.
Especially as AI permeates areas of our lives that used to rely on human decision-making, such as healthcare, recruiting, and criminal justice, we need to ensure we’re still placing people at the centre of these modern technologies.
How to prevent AI bias
Bias can creep into algorithms in many different ways. Most often, limited or unrepresentative training data is the root cause. In simple terms, if insufficient information goes in, an inadequate result comes out. This is because an AI system relies on large quantities of data to ‘learn’ from before it can identify patterns and make predictions. This data is gathered from private or public databases and is used to help the AI spot the patterns on which it bases its decisions.
If an AI system is built in a contrived laboratory environment with data that isn’t representative of the target audience, or worse, patterns in the data reflect prejudice, the AI’s decisions will also be prejudiced. It is incredibly difficult for algorithms to ‘unlearn’ these patterns, so it is important that biases are not built into the algorithm from the first phases of implementation.
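To make the idea concrete, here is a minimal sketch of the kind of representation check a team might run before training. It assumes a pandas DataFrame with a hypothetical demographic_group column and an illustrative target make-up; the file name, column name and thresholds are assumptions for the example, not a standard.

```python
import pandas as pd

# Illustrative only: the file name, the 'demographic_group' column and the
# target shares below are assumptions for the sake of the sketch.
training_data = pd.read_csv("training_set.csv")

# Share of training examples per demographic group.
group_share = training_data["demographic_group"].value_counts(normalize=True)

# Intended make-up of the real user population (hypothetical figures).
target_share = pd.Series({"group_a": 0.25, "group_b": 0.25,
                          "group_c": 0.25, "group_d": 0.25})

# Flag groups whose share of the training data falls well below their
# share of the target population - a warning sign of baked-in bias.
observed = group_share.reindex(target_share.index, fill_value=0.0)
flagged = observed[observed < 0.5 * target_share]
print("Under-represented groups:")
print(flagged)
```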
Origins of bias can be nuanced and hard to spot, ranging from historic prejudices based on race and gender to a lack of diversity within training sets. As a consequence, certain groups end up disproportionately under-represented.
A study by the National Institute of Standards and Technology (NIST) found that facial recognition systems misidentified African-American and Asian faces 10 to 100 times more often than Caucasian faces, while Native Americans were misidentified more than any other group. The study also revealed that women were falsely identified more often than men, and that senior citizens experienced more than 10 times as many errors as middle-aged adults.
According to a report by AI Now Institute at New York University, the lack of diverse training data also threatens to worsen the historic underemployment of disabled people. In fact, it’s robbing candidates of job opportunities. In some cases, remote video interviewing technologies are unfairly disqualifying candidates by drawing inferences about their employability based on speech patterns, facial expressions and tone of voice.
Needless to say, the impact of these outcomes could be damaging, ranging from missed flights and job opportunities, to tense police encounters, false arrests or worse.
Bringing testing back to humans
Whilst companies are increasingly researching methods to spot and mitigate biases, many fail to realise the importance of human-centric testing. A sophisticated, rigorous form of software testing that harnesses the power of crowds is essential – something that cannot be achieved in a static QA lab.
It is a complex challenge because minimising bias is about more than numbers. Working through bugs and usability issues requires access to training data from diverse demographics. The theory is that the broader the mix of language, race, gender, location, culture, hobbies (and more) that goes into the system, the better the representation within the data sets. Companies must therefore take a proactive approach to prevent the algorithm from further perpetuating societal inequalities. Companies that do will be much more likely to have a product that delivers value to both the customer and the end user.
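As a rough illustration of what “more than numbers” can look like in practice, the sketch below breaks a model’s error rate down by demographic group on held-out test results. The tiny DataFrame and column names are invented for the example; the point is the per-group comparison, not the specific tooling.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical held-out results: true labels and model predictions,
# tagged with the tester's demographic group.
test_results = pd.DataFrame({
    "demographic_group": ["a", "a", "b", "b", "b", "c"],
    "label":             [1, 0, 1, 1, 0, 1],
    "prediction":        [1, 0, 0, 1, 1, 1],
})

def per_group_error_rates(df, group_col, label_col, pred_col):
    """Error rate of the model's predictions, broken down by group."""
    return {
        group: 1.0 - accuracy_score(rows[label_col], rows[pred_col])
        for group, rows in df.groupby(group_col)
    }

# A large gap between the best- and worst-served groups signals that the
# training data or the model is not representative of real users.
print(per_group_error_rates(test_results, "demographic_group",
                            "label", "prediction"))
```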
However, adequate training data is only one aspect of reducing bias.
Developing a valuable AI algorithm requires continuous feedback and modification to improve it for future users. With the right partner, and by making use of vetted crowdtesting communities, companies can quickly access training data at scale and garner iterative feedback from users in real time. They can use this data to update the algorithm and retest it with the crowdsourced community, reassured that the algorithm they are producing is representative of the real world.
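A rough sketch of that feedback loop is below. Every function in it is a hypothetical placeholder for whatever crowdtesting and training tooling a team actually uses; what matters is the cycle of gathering real-world feedback, folding it back into the model, and re-testing until every group is served acceptably well.

```python
import random

# All of these functions are hypothetical placeholders; only the shape of
# the loop - gather feedback, retrain, re-test per group - is the point.
def collect_crowd_feedback(model):
    """Gather labelled interactions from a vetted, diverse crowd of testers."""
    return [{"group": random.choice("abc"), "correct": random.random() > 0.1}
            for _ in range(100)]

def retrain(model, feedback):
    """Fold the new feedback into the training set and refit the model."""
    return model  # placeholder: a real pipeline would update the model here

def error_by_group(feedback):
    """Share of incorrect outcomes per demographic group."""
    counts = {}
    for item in feedback:
        wrong, total = counts.get(item["group"], (0, 0))
        counts[item["group"]] = (wrong + (0 if item["correct"] else 1), total + 1)
    return {group: wrong / total for group, (wrong, total) in counts.items()}

def improvement_loop(model, rounds=3, target_error=0.05):
    for _ in range(rounds):
        feedback = collect_crowd_feedback(model)   # real-world interactions, in real time
        model = retrain(model, feedback)           # update the algorithm with the new data
        errors = error_by_group(feedback)          # re-test with the crowdsourced community
        if max(errors.values()) <= target_error:   # every group served acceptably well
            break
    return model
```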
The reality is that, despite seemingly limitless technical possibilities, AI can only ever be as good as the humans who programme and test it. There are many opportunities to diversify this pool, but the issue raises considerable questions when we factor in the conscious and unconscious biases that every person carries to some degree. It may not be possible to find a completely unbiased human, so it will be hard to build completely unbiased AI algorithms; but by harnessing a large and diverse collection of real human interactions prior to release, the industry can certainly do better than it does today.
About the Author

Kristin Simonini is VP of Product at Applause, where she leads the product organization in setting its strategic roadmap. Her organization partners with Engineering to develop features and enhancements to Applause's industry-leading crowdsourced testing platform.
Featured image: ©Raw Pixel