Self-driving cars, eerie human-like robots, and a machine that can beat the world’s greatest chess player are just a few of the many exciting technologies that the field of AI has produced.
There is no questioning the potential AI has to offer. However, as AI technology continues to develop and grow in sophistication, we are becoming more aware of the possible ethical and moral consequences of machines that are designed to think for themselves.
The Tunnel Problem is a popular ethical thought experiment involving an autonomous car. Imagine a self-driving car carrying a passenger as it enters a tunnel. Suddenly, a child runs across the road and trips, forcing the car into an impossible situation: either swerve into the side of the tunnel, which will almost certainly kill the occupant of the car, or run over the child. What should the car do? If answering that question seems impossible, consider an arguably more important one: who gets to decide what the car should do?
Given how weighty such a decision is, it does not seem right to hand the authority to choose to the engineers and designers of the car. It therefore seems natural to let the owner of the car decide. Perhaps one owner would prefer that the car save the child’s life rather than his or her own, while another would have it the other way around. Maybe, as part of the design, engineers should build in some sort of initial start-up step in which a new owner answers questions about how he or she would like the car to react in situations where harm is unavoidable. Although this sounds like a neat solution, it is riddled with problems. The tunnel problem is just one of an effectively infinite number of situations a car might find itself in. How could we enumerate every possible scenario before it happens? No matter how much thought is put into this, we will have to accept that new ethical scenarios will keep arising, and AI technologies will have to keep adapting as they develop.
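To make that objection concrete, here is a minimal, purely hypothetical sketch in Python of what such a start-up questionnaire might look like. The EthicsProfile class, the scenario names, and the fallback behaviour are all illustrative assumptions rather than any real vehicle’s design; the point is simply that any situation the questionnaire never anticipated falls back to a default the engineers chose anyway.

```python
# Hypothetical sketch of the "start-up questionnaire" idea described above.
# Class name, scenarios, and defaults are illustrative assumptions only.

class EthicsProfile:
    """Stores an owner's stated preferences for unavoidable-harm scenarios."""

    def __init__(self):
        self.preferences = {}  # scenario name -> owner's chosen response

    def ask_owner(self, scenario: str, options: list[str]) -> None:
        """Record the owner's answer for one pre-defined scenario."""
        print(f"Scenario: {scenario}")
        for i, option in enumerate(options, start=1):
            print(f"  {i}. {option}")
        choice = int(input("Choose an option: "))
        self.preferences[scenario] = options[choice - 1]

    def decide(self, scenario: str) -> str:
        # The core weakness: any scenario not covered at start-up silently
        # reverts to a default the engineers chose anyway.
        return self.preferences.get(scenario, "engineer-defined default")


profile = EthicsProfile()
profile.ask_owner(
    "Tunnel problem: child trips in front of the car inside a tunnel",
    ["Swerve into the wall (sacrifice occupant)", "Continue straight (hit the child)"],
)

# A novel situation never covered at start-up falls through to the default,
# which is exactly the objection raised above.
print(profile.decide("Cyclist swerves into the lane on a bridge"))
```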
Autonomous cars are, of course, not the only machines that raise significant ethical issues. Even more intelligent machines will be built in the future, and these questions are not going to get any easier to answer. Although it is clear that AI can dramatically improve our lives in ways we cannot yet imagine, it is essential to think about how engineers can program these machines to behave ethically, especially as they become more commonplace.

Fortunately, the tech field is considering these questions and outlining what experts believe to be the potential risks of AI. The Google-owned DeepMind team, for example, is researching the risks the technology poses. Some of the risks outlined are mirrored in non-AI fields. How should society handle privacy issues, which can be exacerbated as more data is gathered and analyzed? Critically, how should the issue of ownership be understood? Consent is a top priority, but how do we decide which types of data require consent, and how do users grant it?
The economic impact of AI is projected to be tremendous, and this will inevitably disrupt the labor market. In particular, tech, business, and government entities need to come to terms with how burgeoning technology will affect income inequality. One concerning question outlined by the DeepMind Ethics and Society team: Can we even model the likely impact of AI on the job market?
In the UK, a recently published House of Lords report calls on the British government to ensure the country remains a leader in the development of artificial intelligence so that it can help shape the ethics behind the technology. Microsoft UK CEO Cindy Rose welcomed the report and its view that ethical AI can change lives “for the better”.
As part of our vision to empower every person and organisation on the planet to achieve more, Microsoft is working to ensure everyone can benefit from AI in a way that is safe and ethical, and that work can be seen every day in the products and services our customers use. We understand that for AI to continue to be used as a force for good, it is crucial that it is developed according to strong ethical guidelines.
As with all powerful technologies, AI comes with risks to society as we understand it today. While some of these risks are easy to anticipate, others may be difficult or impossible to predict. There are also those who would purposefully use AI in a dangerous or even deadly manner: AI-empowered weapons of war, for example, have the potential to make existing technology even deadlier. The allure of autonomous weapons is strong, and steps will be needed to prevent a new type of warfare that is even more indifferent to human life.
At the core of many of these debates is AI morality. Implementing any AI system requires making value judgments, and these values cannot be implemented in an ad hoc manner. The tyranny of the majority, in the absence of proper safeguards, can cause systems to disproportionately affect those most vulnerable. Building human values into AI systems can help mitigate some of these risks, but finding the proper way to implement these values will prove to be challenging.
Tech companies have a valuable voice in AI discussions, but so do other entities. The World Economic Forum is studying the potential impact of AI and has outlined a number of concerns. What will happen when more jobs are replaced by AI-fueled automation? AI machines create wealth, but will this wealth be distributed in an equitable and sustainable manner? AI technologies will likely mean we spend more time interacting with machines and software; what impact will this have on society?
AI systems have already been used, both intentionally and unintentionally, in racist ways. If most users of a system are white, for example, society will need to find a way to ensure people of color are treated fairly by it. Systems will also continue to grow more complex, and there is concern about whether we can stay on top of such changes. The known risks are concerning enough, but it is perhaps the unanticipated risks that are most frightening as AI systems grow more powerful.
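As one concrete illustration of what “treated fairly” can mean in practice, here is a minimal Python sketch of a common first-pass fairness check: comparing how often a system hands out a favourable outcome to each demographic group. The sample data and the 0.8 threshold (loosely inspired by the “four-fifths rule” used in employment contexts) are illustrative assumptions, not a complete bias audit.

```python
# Sketch of a simple disparate-impact check across demographic groups.
# The data and the 0.8 threshold are illustrative assumptions only.

from collections import defaultdict

# (group, decision) pairs; 1 = favourable outcome, 0 = unfavourable.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Share of favourable outcomes per group.
rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-outcome rate per group:", rates)

# Flag a large gap between the best- and worst-treated groups.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # illustrative threshold, loosely inspired by the "four-fifths rule"
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```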
