The positive effects that AI has on everyday life are well documented. However, as this technology becomes more widespread, ethical questions are being raised over its influence on society.
At the extreme end, consider the controls and legislation on autonomous unmanned weapons. UAVs are increasingly common in warfare, and it seems only a matter of time before the ability to fire without human input is added.
A South Korean turret built for the North Korean border was initially designed to be completely AI-controlled, until demand forced its makers to change direction.
At the moment, the chain of responsibility for these weapons’ decisions is unclear; some would say intentionally so. If something went wrong, who would be to blame? The human operator? Their organisation? The software developers? The hardware manufacturers?
This example won’t apply to many businesses, but the clear-cut severity of its potential mistakes highlights the ethical issues surrounding AI and automation.
By recognising the limitations of a system, identifying the greatest risks, and not being afraid to challenge the elements involved, you can mitigate the risk for any product you use or sell that contains AI.
Install manual control breaks
Even the best system will have bugs; computer programs that are provably 100% bug-free are rare enough to be the subject of scientific study. When you add in the ability for a computer to change its own behaviour, the potential for unforeseen results is high.
The biggest issue here is the speed with which your software can propagate a mistake. Automated stock traders, for example, execute thousands of deals per minute, and if one trade goes wrong the error can multiply at an alarming rate before it is stopped and rectified.
Developers should identify the points in the process where the most damage will occur if something goes wrong and install control breaks there. These will ideally take the form of a person giving manual sign-off, as in the sketch below. Users should also be made aware of the consequences of clicking the wrong thing in the configuration.
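Here is a minimal sketch of such a control break in Python, assuming a trading pipeline: the Trade class, the require_signoff helper and the MAX_NOTIONAL threshold are all hypothetical names, and the threshold itself would need tuning to your own risk profile.

```python
# A minimal sketch of a manual control break for an automated pipeline.
# All names here (Trade, require_signoff, MAX_NOTIONAL) are hypothetical.

from dataclasses import dataclass

MAX_NOTIONAL = 100_000  # assumed damage threshold; tune per business


@dataclass
class Trade:
    symbol: str
    quantity: int
    price: float

    @property
    def notional(self) -> float:
        return self.quantity * self.price


def require_signoff(trade: Trade) -> bool:
    """Pause the pipeline and ask a human to approve a high-risk action."""
    answer = input(f"Approve {trade.quantity} x {trade.symbol} "
                   f"(~{trade.notional:,.0f})? [y/N] ")
    return answer.strip().lower() == "y"


def execute(trade: Trade) -> None:
    # Low-risk trades flow straight through; high-risk ones hit the break.
    if trade.notional > MAX_NOTIONAL and not require_signoff(trade):
        print(f"Blocked: {trade.symbol} trade rejected at manual sign-off.")
        return
    print(f"Executed: {trade.quantity} x {trade.symbol} @ {trade.price}")


execute(Trade("ACME", 50_000, 4.20))   # trips the control break
execute(Trade("ACME", 100, 4.20))      # flows through automatically
```

The point is structural rather than technical: the automated path stays fast for routine actions, while the few actions that can do real damage cannot complete without a human in the loop.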
Recognise user limitations
If you’ve ever provided software as a service, you will be aware of the different knowledge levels that exist across businesses and their end users. A system of control that an engineer finds strikingly obvious might not register the same way with consumers who haven’t spent the last year knee-deep in its development.
To mitigate this, provide documentation that your average user can actually consume and understand, clearly marking out the inputs and options. Then identify high-risk configuration options and put them behind confirmation messages, and run a full suite of logging services so that issues can be traced back to their root cause.
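As a minimal sketch of that pattern, assuming a hypothetical settings module: the option names, the HIGH_RISK set and the confirm-by-typing prompt are illustrative choices, but the shape, a confirmation gate plus an audit log, is the point.

```python
# A minimal sketch of gating high-risk configuration options behind a
# confirmation prompt, with logging for root-cause tracing. The option
# names and the HIGH_RISK set are hypothetical.

import logging

logging.basicConfig(filename="config_audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("config")

HIGH_RISK = {"auto_retrain", "delete_history"}  # assumed risky options


def set_option(name: str, value: str, user: str) -> None:
    if name in HIGH_RISK:
        confirm = input(f"'{name}' can cause irreversible changes. "
                        f"Type the option name to confirm: ")
        if confirm != name:
            log.warning("user=%s aborted change %s=%s", user, name, value)
            print("Change cancelled.")
            return
    log.info("user=%s set %s=%s", user, name, value)
    print(f"{name} set to {value}")


set_option("theme", "dark", user="alice")        # low risk, no prompt
set_option("auto_retrain", "on", user="alice")   # prompts and logs
```

With every change logged against a user and a timestamp, a misbehaving deployment can be traced back to the exact configuration change that caused it.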
Data limitations
The software may be foolproof, but the same cannot be said of the data. Biases in the initial data the program learns from will quickly spread to its outputs.
Amazon had to scrap its recruitment AI tool because it started penalising CVs for containing the word “women’s”. In the male-dominated IT industry, men had been recruited at a higher rate than women, so words unique to women’s CVs appeared far less often in successful applications than general words like “leadership”. The AI concluded that these words must be of low value and started penalising them.
The lesson to be learned from this example is to identify gaps in the data and apply weightings so that demographics are equally represented.
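A minimal sketch of one such weighting scheme in Python, assuming hypothetical records with a group column: inverse-frequency weights make each group contribute equally to training, though real debiasing work usually needs more than this.

```python
# A minimal sketch of reweighting training rows so an under-represented
# group contributes equally to the loss. The column names and the
# example data are hypothetical.

from collections import Counter

rows = [
    {"cv_text": "...", "group": "men", "hired": 1},
    {"cv_text": "...", "group": "men", "hired": 0},
    {"cv_text": "...", "group": "men", "hired": 1},
    {"cv_text": "...", "group": "women", "hired": 1},
]

counts = Counter(r["group"] for r in rows)
n_groups = len(counts)
total = len(rows)

# Inverse-frequency weighting: each group's total weight comes out
# equal, so the minority group is not drowned out during training.
for r in rows:
    r["weight"] = total / (n_groups * counts[r["group"]])

for r in rows:
    print(r["group"], round(r["weight"], 2))
# men rows each get weight 0.67, the single women row gets weight 2.0
```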
Don’t assume it’s working right, even when it does
The sheer accuracy with which an AI can classify massive amounts of data can even discourage looking for errors. Who’s going to argue with a program that can classify thousands of people’s faces with 98% accuracy via impenetrable mathematics?
This is compounded by so-called black-box AIs that never show their workings. Typically the software projects the data across high-dimensional mathematical spaces to extract distinguishing features, but the process is very abstract.
Resist the temptation to outsource your thinking to the program or to assume it knows what it’s doing. All it is really doing is sorting data into statistically distinct groups. There is nothing wrong with examining and questioning the data. Why is the program using algorithm X instead of algorithm Y? There is no one-size-fits-all algorithm, and the one best suited depends on the type of data.
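One concrete way to keep questioning the choice of algorithm is to benchmark candidates side by side rather than trusting a single headline accuracy figure. A minimal sketch using scikit-learn, where the synthetic dataset and the two candidate models are stand-ins for your own:

```python
# A minimal sketch of comparing two candidate classifiers on the same
# data with cross-validation, instead of accepting one algorithm's
# accuracy claim at face value.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for your real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean={scores.mean():.3f} +/- {scores.std():.3f}")
```

If the fancier model does not meaningfully beat the simpler one on your data, that is useful information in itself, and the simpler, more explainable model is often the better choice.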
Identify the risks
Your goals for your AI program may be harmless, but the results could have unintended consequences. For example, did your advert recommendation software just reveal someone’s medical condition based on their past internet searches?
Think about your consumers before you start processing the data. To some extent this is now codified by GDPR restrictions on data profiling: GDPR requires a Data Protection Impact Assessment before any automated decision-making or profiling is carried out.
Profiling extends to any form of grouping user records by economic situation, health, personal preferences, location or movements. The necessity and proportionality of the AI solution should be assessed, and you should always bear in mind how seemingly anonymous data might suddenly become identifiable under AI scrutiny.
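A minimal sketch of checking that in Python, assuming hypothetical records keyed by quasi-identifiers such as postcode district, birth year and gender: the smallest group size (the dataset’s “k”) is a rough measure of how easily individuals could be re-identified.

```python
# A minimal sketch of checking how "anonymous" a dataset really is:
# count how many records share each combination of quasi-identifiers
# (a basic k-anonymity check). The records here are hypothetical.

from collections import Counter

records = [
    ("SW1A", "1985", "F"),
    ("SW1A", "1985", "F"),
    ("SW1A", "1990", "M"),
    ("NW3",  "1972", "F"),  # a group of one: effectively identifiable
]

group_sizes = Counter(records)
k = min(group_sizes.values())
print(f"dataset is {k}-anonymous over (postcode, birth year, gender)")

for combo, size in group_sizes.items():
    if size == 1:
        print("re-identification risk:", combo)
```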
In summary, AIs are a great tool, but they are not panaceas that can be deployed unsupervised with minimal input. By recognising their limitations, understanding how they work, and identifying their risks, you can greatly reduce the chances of them misbehaving.
About the Author

Simon Davies is an executive for the London-based web development company Zodiac Media. He obtained his PhD in the field of Brain-Computer Interfaces from the University of Warwick and specialised in Artificial Intelligence for his undergrad, with a background in AI ethics, consciousness, and machine learning.
Featured image: ©knssr