AI: Preventing a Frankenstein’s monster

One of the key lessons of Mary Shelley’s famous story of Frankenstein’s monster is that things aren’t always greater than the sum of their parts, regardless of the quality of the parts themselves.

An altogether less visceral but equally composition-based process goes into building today’s artificial intelligence (AI) platforms. One of the most powerful AI approaches in use today is deep learning, a machine learning technique that identifies patterns across different sets of input data and uses them to generate insights that inform human decision-making. Deep learning passes data through many layers of artificial neural networks, creating a ‘black box’ of calculations that is all but impossible for humans to interpret.
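
To make the ‘black box’ point concrete, the minimal sketch below (purely illustrative, with made-up layer sizes, and not any real platform) stacks a few dense layers in Python. Even this toy network holds roughly 9,000 learned parameters, none of which is individually meaningful to a human reader.

```python
# A toy feed-forward network, built only to illustrate the 'black box' point.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three hidden layers of 64 units: roughly 9,000 learned parameters in total.
layer_sizes = [10, 64, 64, 64, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)           # each layer transforms the previous one
    return x @ weights[-1]

x = rng.standard_normal((1, 10))  # one input record with 10 features
print(forward(x))                 # a prediction, with no human-readable reasoning
```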

Making a monster of AI

As with Frankenstein’s monster, not knowing how the constituent parts of an AI algorithm interact with one another ultimately undermines the quality of the individual parts themselves. Luckily for data scientists, preventing the creation of a ‘monster’ when developing AI requires an understanding of data validity, rather than the supernatural.

AI platforms built on deep learning assume that more data equals better accuracy. This generally holds true, but actionable insights produced by AI are only as good as the data ingested. That’s why frameworks like the Oxford-Munich Code of Data Ethics (OMCDE) must apply to the collection, processing, and analysis of data.

What is the Oxford-Munich Code of Data Ethics (OMCDE)?

The OMCDE is a code of conduct drawn up by academic researchers and industry representatives in Europe, designed to address both practical and hypothetical ethical situations pertaining to data science. Its stipulations are categorised into seven areas: lawfulness; competence; dealing with data; algorithms & models; transparency, objectivity and truth; working alone and with others; and upcoming challenges. Owing to the complexity of data science issues, the OMCDE assumes that even well-intentioned data professionals cannot always know and act in the best way without guidance, and it is therefore subject to continual amendment.

Why does the OMCDE apply to AI?

Being able to process and make sense of masses of data at lightning speed can often make AI a superior decision-maker to humans. This was exemplified by DeepMind’s AlphaGo when it defeated reigning world champion Lee Sedol in a five-game match of Go in 2016. But without human oversight, AI may not always produce the best results, especially given the wide range of areas where it can be applied.

Consider this example – a company uses AI to analyse its workforce, technological advances and economic trends, and to produce a model predicting which job roles are likely to be impacted and face redundancy. Without proper interrogation of the data – looking for sampling biases, issues with validity and so on – potential issues with the input risk being carried over into the output. Unlike a board game, people’s livelihoods are at stake when poor data practices are compounded in AI.
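
Interrogating the data for such problems can start very simply. The sketch below is a hedged illustration of one such check (the ‘contract’ field and the figures are hypothetical, not drawn from any real dataset): it compares the make-up of a training sample against the full workforce and flags groups whose share has drifted, the kind of sampling bias that would otherwise be carried into the model.

```python
# Flag groups whose share in a training sample drifts from the population.
from collections import Counter

def group_shares(records, key):
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: c / total for group, c in counts.items()}

def flag_sampling_bias(sample, population, key, tolerance=0.05):
    """Return {group: (sample share, population share)} where shares drift."""
    s, p = group_shares(sample, key), group_shares(population, key)
    return {g: (s.get(g, 0.0), share)
            for g, share in p.items()
            if abs(s.get(g, 0.0) - share) > tolerance}

# Hypothetical example: part-time staff under-represented in the sample.
population = [{"contract": "full-time"}] * 70 + [{"contract": "part-time"}] * 30
sample     = [{"contract": "full-time"}] * 90 + [{"contract": "part-time"}] * 10

print(flag_sampling_bias(sample, population, key="contract"))
# {'full-time': (0.9, 0.7), 'part-time': (0.1, 0.3)}
# -> part-time roles would be under-weighted by any model trained on this sample
```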

How to apply the OMCDE to AI in practice

Data analytics and AI teams typically follow a development process that starts with the decision to build an AI model, followed by designing, building, deploying and monitoring it. At every stage, those involved must ensure good data governance practices are followed. One way to implement this is via activity documentation: an auditable, time-stamped record covering the source, methods and discoveries related to the data used.
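
As a rough illustration of what such activity documentation might look like in code (the field names and file format are assumptions, not a prescribed OMCDE schema), the sketch below appends time-stamped, auditable entries to a log that is never rewritten.

```python
# A minimal append-only activity log: source, method, discovery, actor, time.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ActivityRecord:
    source: str      # where the data came from
    method: str      # what was done to it
    discovery: str   # what was found
    actor: str       # who did it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(log_path, record):
    """Append one auditable entry; existing entries are never modified."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

# Hypothetical entry for the redundancy-model example above.
append_record("activity_log.jsonl", ActivityRecord(
    source="2023 HR workforce extract",
    method="removed records with missing job codes",
    discovery="12% of part-time roles lacked job codes",
    actor="data-team"))
```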

Aside from being briefed on how to spot potentially erroneous data, all stakeholders should have full knowledge of the range of data being used, share the burden of responsibility and facilitate wider transparency. Finally, all stakeholders must uphold a professional duty to correct any misunderstandings or unfounded expectations among the colleagues, managers or decision-makers who rely on their work.

Final thoughts

The ability to store, process and transfer data has increased exponentially over the past 50 years or so. At the same time, the relative cost of doing so continues to drop. With the development of AI so closely tied to these abilities, we inevitably see AI algorithms being used in an increasing number of business applications, technological innovations and everyday situations.

A clear set of responsibilities and guidelines for those involved in the development of AI is imperative to make sure that this future is sustainable. Without these, its potential decision-making power could be its undoing – and ours too.


About the Author

Richard George is Faethm AI’s Chief Data Scientist. Responsible employers around the world rely on Faethm to navigate the Future of Work. Our sophisticated SaaS AI platform enables companies and governments to create value from the impact of emerging technologies, supporting jobs, maintaining ongoing development and retaining talent.

Featured image: “Frankenstein” by twm1340 is licensed under CC BY-SA 2.0