The strange bedfellows of AI and ethics

Over the last decade, we have heard a lot of doom-saying about how artificial intelligence (AI) would result in the loss of huge numbers of jobs.

However, the picture (across both public and private sectors) is now starting to look not only more nuanced but also more positive.

A 2017 report from consultancy PwC suggested that embedding AI across all sectors is likely to create thousands of jobs. In the UK, one estimate suggests that it could contribute as much as 5% of GDP within 10 years. That’s not to say that we won’t lose jobs, because we undoubtedly will. However, they will be replaced by other jobs – crucially, by jobs that require machines and humans to work in partnership.

Opportunity for governments

Also in 2017, HM Government committed to putting the UK at the forefront of the AI and data revolution in its Industrial Strategy: Building a Britain Fit for the Future policy paper, and backed a number of specialist AI institutions (the Alan Turing Institute, the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the AI Council). This is an opportunity for governments to adopt AI and analytics in their own capabilities and processes, as well as to champion AI for the greater good in industry. And like any organisation that embarks on AI projects, governments in particular need to practise responsible AI and ensure that they are accountable in the right way.

We have already seen the benefits of AI and analytics and how they complement human endeavours, for example, in cancer screening. Recently I watched a TED Talk from Tom Gruber (co-founder of Siri). He shared a shining example of where AI plus human gives a better outcome: an experienced and expert human pathologist can detect cancers with 96.6% accuracy. This is better than an AI program, which can only manage 92.5%. However, when the two work together, their combined accuracy is over 99%.
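A rough back-of-the-envelope sketch shows why this is plausible: if we assume (a strong assumption) that the human and the AI make independent mistakes, a cancer slips through only when both miss it.

```python
# Illustrative sketch: why two imperfect detectors can beat either alone,
# under the (strong) assumption that their errors are independent.

human_accuracy = 0.966  # expert pathologist, figure quoted in the talk
ai_accuracy = 0.925     # AI program, figure quoted in the talk

# A cancer is missed only if BOTH the human and the AI miss it.
p_both_miss = (1 - human_accuracy) * (1 - ai_accuracy)
combined_accuracy = 1 - p_both_miss

print(f"Combined accuracy: {combined_accuracy:.1%}")  # → 99.7%
```

In practice the two will not be fully independent – they tend to miss some of the same hard cases – but the direction of the result holds: the partnership outperforms either partner alone.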

But with these benefits come responsibilities. We must manage the ethics of AI appropriately. In her blog, my colleague Kalliopi Spyridaki aspires to a Charter of AI that protects human rights and societal values, but she also gives examples of how to start small with AI regulation. Similarly, I have been doing some research in this area, and I have reached some clear conclusions about how we can build ethics into AI development and implementation.

We should use AI to do the dirty, dangerous work

For millennia, it has been the poorest and most vulnerable people who have done the dirtiest, most dangerous work. They are often the only ones prepared to do that work for the money offered – or perhaps those least able to refuse. Principles are often a luxury reserved for those who can afford them.

However, if we get AI right, we can turn this on its head. We can decide to use AI so that people are no longer put in this position. We can let the machines do the dangerous work – or even just the dull, repetitive, boring stuff – and ensure that people have access to better, more rewarding work.

We need to recognise that AI alone does not always make good decisions

What’s not to like about a recommendation engine? It sees what you’ve bought before, and it recommends other things that you might like. How can that not be a winner?

Well, it’s not great when you’ve bought a one-off purchase that you don’t ever want to replicate – and the engine duly recommends another one.

No human would make that mistake. We need to design partnership into AI systems to ensure that they do what we want, rather than blindly following rules. If we don’t, all we will do is replace one form of bureaucracy with another.
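A toy sketch (with entirely hypothetical data) makes the point: a recommender that only follows purchase history will happily suggest repeating a one-off purchase, while a simple human-designed guard avoids it.

```python
# Toy sketch (hypothetical data): a naive recommender blindly follows its
# learned "similar items" rules, even when that repeats a one-off purchase.

purchase_history = ["washing machine", "novel", "coffee beans"]

# The kind of similarity lookup a real engine might learn from co-purchase data.
similar_items = {
    "washing machine": ["tumble dryer", "washing machine"],
    "novel": ["another novel"],
    "coffee beans": ["coffee grinder", "coffee beans"],
}

ONE_OFF = {"washing machine"}  # items almost nobody needs two of

def naive_recommend(history):
    # Blindly follows the rules: recommends everything similar to past purchases.
    return [item for bought in history for item in similar_items.get(bought, [])]

def guarded_recommend(history):
    # Partnership by design: never re-recommend a one-off item already bought.
    return [item for item in naive_recommend(history)
            if not (item in ONE_OFF and item in history)]

print(naive_recommend(purchase_history))    # suggests another washing machine
print(guarded_recommend(purchase_history))  # the repeat is filtered out
```

The guard here is deliberately crude; the point is that someone had to decide what the system should want, rather than letting the rules run unchecked.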

We need to ensure that we keep our biases out of AI

There is a tendency to assume that computers cannot be biased – but that is not the case. AI-based systems learn from the data that they are fed. If we feed them the “wrong” data, we can inadvertently build in biases that we may not even notice.

For example, historically, there have been more men than women in technology jobs. It is a very short step from that data to a position where a hiring algorithm learns that men are more likely to do a technology job, and then “decides” that men must be better than women in those jobs.
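A minimal sketch (with fabricated toy numbers) shows how little it takes: an algorithm that simply learns historical hire rates per group will reproduce whatever imbalance its training data contains.

```python
# Minimal sketch (fabricated toy data): a model that learns from skewed
# historical hiring records inherits the skew as if it were a real signal.

# Historical records: (gender, was_hired) – skewed because more men held tech jobs.
history = ([("M", True)] * 80 + [("M", False)] * 20 +
           [("F", True)] * 10 + [("F", False)] * 10)

def learned_hire_rate(gender):
    # "Training" here is just computing the historical hire rate per group.
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(f"learned score for men:   {learned_hire_rate('M'):.0%}")
print(f"learned score for women: {learned_hire_rate('F'):.0%}")
# The model "decides" men are better candidates – purely an artefact of the data.
```

Real systems are more sophisticated than a raw rate, but the mechanism is the same: the bias arrives through the data, not through any malicious line of code.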

The good news is that we can manage this. We can, and should, be aware of our own biases. However, we should also build diverse teams to work with AI, as a way of ensuring that we surface more of the inadvertent biases – the ones that we don’t even notice because they have become norms.

We need to be proactive

It is not going to be enough to respond to developments in AI. We need to be proactive in setting up ethical safeguards to protect us all. A recent webcast from SAS Canada on AI and ethics recommends that organisations should develop a code of conduct around AI and foster AI literacy. They should also establish a diverse ethics committee to manage and oversee development and implementation processes.

Organisations also need to ensure diversity when developing assets – those diverse teams that I mentioned before. Finally, they must anticipate errors and set up processes to put them right. AI is not a monster, but we must ensure that it does what we want and does not take on a life of its own. I believe that proactivity in establishing ethical boundaries and processes is the best way to achieve this.


About the Author

Caroline Payne is Head of Customer Advisory for Public Sector at SAS UK & Ireland. SAS is the leader in business analytics software and services, and the largest independent vendor in the business intelligence market. Through innovative solutions, SAS helps customers at more than 70,000 sites improve performance and deliver value by making better decisions faster. More: www.sas.com

