Ethical concerns around using technology in business have been around for decades, and the last several years have seen a plethora of frameworks aiming to guide the responsible use of technology.
However, it’s only in the last year that digital ethics has really hit the mainstream, with organisations in the private and public sector focusing their attention and, increasingly, their resources, on these matters. So, what’s caused this shift? Here I explore four key drivers:
Accelerated adoption of digital
Many organisations underwent an overnight transformation during the pandemic to survive and at the heart of this was the accelerated adoption of more advanced technologies such as automation and artificial intelligence (AI). Organisations are now taking a more serious look at what being data-driven means for them, developing data strategies that could shift entire business models.
Without integrating digital ethics into this acceleration, the ethical risks proliferate. If we use more data and more advanced technologies without sufficient testing and without embedding the right guardrails, there is more potential for harm – to individuals, to society and to the reputations of the organisations themselves.
Increased awareness of technology
There are many benefits to using technology to improve our everyday lives, and it will continue to play a prominent role in addressing some of the world’s biggest challenges, from climate change to providing better healthcare.
However, the public is now more aware of the unintended consequences of technology such as misinformation and bias from faulty algorithms, as well as how some business models make use of their personal data.
The proliferation of media stories reporting incidents of ethical failings due to technology or data use has also put these issues in the public consciousness, elevated by the popularity of films such as The Social Dilemma and Coded Bias.
A need to (re)build trust
The annual Edelman Trust Barometer report has illustrated a number of emerging concerns over the past few years, including a widening trust gap between the informed public and the general population, and a decline in public trust in the technology sector.
While levels of trust are clearly falling, the understanding that trust has real value to businesses and public sector organisations – and is a critical factor in achieving success – has grown. The Open Data Institute and Frontier Economics found there is a link between trust and people’s willingness to share their data – something many organisations would like their customers to do in order to provide more or better services.
With public trust in a precarious position, organisations are starting to recognise the need to address digital ethics concerns.
Potential regulation
While governments are not keeping pace with the acceleration of advanced technologies, there are now signs these technologies are on the political and legislative radar. In April the European Union outlined new AI regulation, while the US is increasing its scrutiny of tech giants including Facebook and Amazon. In the UK, the first joined-up National AI Strategy, published in September, indicates that government policy will focus on ethical concerns. It is now likely many parts of the western world will see regulation introduced in the next couple of years.
Where to go from here
Organisations don’t need to wait for formal legislation to take action. There are plenty of reasons to integrate digital ethics into business strategies and governance immediately: not just to mitigate the risks outlined above, but also to prepare for requirements that future legislation may introduce, such as AI risk assessments, product labelling and algorithmic transparency.
We recommend starting by identifying digital ethics risks and opportunities within your current digital programme, as well as in your future roadmap, by asking three critical questions:
- Have I evaluated the extent to which my digital programme supports or undermines my organisation’s other strategic aims? For example, have we confirmed that customers understand how we use their data, so that we can build trust and loyalty?
- Do I understand how the technology we use really works? Most business leaders don’t, or at least have gaps in their knowledge of the digital tools and products they use and create. And the more gaps, the more risk.
- Am I making the most of data and technology to deliver benefits to society and improve digital ethics? Digital ethics is not just about risk mitigation but understanding how an ethical approach to technology can open opportunities to create a positive impact, such as collecting data to create more accessible services for customers.
By assessing your organisation against these three questions, you can get a high-level understanding of where challenges – and opportunities – may lie. It’s then possible to prioritise action on the areas of biggest concern, as well as build in the mechanisms needed to create better alignment between organisational and digital strategy, so that you are well placed to reap the benefits of new technologies and maintain stakeholder trust.
About the Author
Jen Rodvold is Head of Digital Ethics & Tech for Good at Sopra Steria. Sopra Steria, a European leader in consulting, digital services and software development, helps its clients drive their digital transformation to obtain tangible and sustainable benefits. It provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach.
