AI and GDPR: Unhappy Bedfellows?

Two buzzwords of 2018 are undoubtedly GDPR and AI, writes Dorothy Agnew, Partner at Moore Blatch

With data sitting at the heart of AI, what will happen to AI-based technologies once GDPR comes into force? And how may AI need to adapt if these technologies are to be fully compliant?

Some say that GDPR may be the catalyst for the digital transformation to go mainstream, not least because AI is already playing a major role in helping firms meet regulatory compliance. But how compatible are GDPR and AI? And how must AI adapt if it is to meet its potential for the fast processing of vast amounts of data whilst satisfying GDPR requirements? With just months to go before the regulation comes into force, businesses must be aware of the pitfalls to avoid if AI is to be utilised in a way that is fully GDPR compliant.

AI in support of GDPR

Few could have predicted that one of the most valuable commodities of the 21st century would not be oil, gold or hardware, but data. Alongside the explosion and exploitation of data we’re living through a digital transformation, and one occurring at a breathtaking pace. Whereas in the 20th century oil dominated the commodity market, in the 21st century we’re now grappling with how to deal with and regulate a very different resource, and one that promises to be just as valuable.

GDPR is just part of a regulatory landscape that is becoming fantastically complex, and to adhere effectively to this gamut of regulations many businesses are looking to automation, and to AI in particular, to support their compliance processes.

But GDPR presents a new and unique challenge for AI, and one that could shape the future of AI itself.

Transparency

Transparency is a key component of GDPR, with the regulation stating that information about how an organisation processes an individual’s personal data must be given in a way that is ‘concise, transparent, intelligible and easily accessible.’ But AI often relies on ‘black box’ algorithms that don’t lend themselves to external scrutiny.

Explaining decisions

The issue around transparency is part of a larger problem: explaining itself is not AI’s strong point. And as the data itself grows in complexity, in terms of volume, variety and number of sources, the technology will evolve accordingly, becoming more and more complex. Behind a decision may lie a web of complex algorithms and data. Making the decision may be easy. Explaining it, however, is hard.
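By way of a rough illustration only, consider the Python sketch below. The feature names, weights and threshold are invented for the purposes of this example rather than drawn from any real system, but they show the point: a simple, transparent scoring rule can attach a plain-language list of reasons to every decision, and it is precisely this kind of account that a complex black-box model struggles to give.

# A purely illustrative, hypothetical scoring rule; names and numbers are invented.
WEIGHTS = {"income": 0.5, "years_at_address": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0

def decide_with_explanation(applicant: dict) -> dict:
    """Return a decision plus per-feature contributions a human reviewer can read."""
    contributions = {
        feature: weight * applicant.get(feature, 0)
        for feature, weight in WEIGHTS.items()
    }
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "refer",
        "score": round(score, 2),
        # Most influential factors listed first, so the explanation leads with what mattered.
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

print(decide_with_explanation(
    {"income": 2.1, "years_at_address": 0.5, "missed_payments": 0.4}
))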

Automated decision making

Perhaps the biggest challenge of all is that the regulation gives individuals the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ The most significant word in that sentence is ‘solely’: organisations will need to have processes in place that allow decisions taken by AI to be reviewed by a real person.

This right may have little consequence for now, but as the capabilities of automation and AI grow in sophistication it could become very significant indeed. It could also become costly to administer, eroding any cost efficiencies introduced by using an AI-based decision-making process.
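The sketch below, written in Python purely for illustration (the decision fields and the review queue are assumptions, not a prescribed design), shows one way such a human-in-the-loop process might be structured: automated decisions with legal or similarly significant effects are held back until a named person has reviewed them.

# Hypothetical human-in-the-loop gate; field names and workflow are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str
    legally_significant: bool          # e.g. a credit refusal or automated recruitment rejection
    reviewed_by: Optional[str] = None  # filled in once a real person has looked at it

class ReviewQueue:
    """Holds significant automated decisions until a human confirms or overturns them."""

    def __init__(self) -> None:
        self._pending: List[Decision] = []

    def submit(self, decision: Decision) -> Decision:
        # Decisions with legal or similarly significant effects are held for review
        # rather than taking effect on the basis of automated processing alone.
        if decision.legally_significant:
            self._pending.append(decision)
        return decision

    def review_all(self, reviewer: str) -> List[Decision]:
        # A named individual signs off each queued decision.
        for decision in self._pending:
            decision.reviewed_by = reviewer
        reviewed, self._pending = self._pending, []
        return reviewed

The cost point made above follows directly from this structure: every queued decision consumes a reviewer’s time, so the more decisions the AI makes, the more human hours the right to review demands.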

Lawful processing

The potential issues here are two-pronged. Part of the massive appeal of AI and automation is their ability to sift instantly through vast amounts of data from multiple sources. But the larger the amount of data used, the more difficult it becomes to ensure you have a legal basis (for example, consent or legitimate interest) for using all the data available to you.

From a legal standpoint, where you are relying on consent as a basis for processing, the question arises as to whether you have secured valid consent to that processing. Added to that is the task of demonstrating that individuals have consented to the processing of their data, which could prove hugely costly for businesses.
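As a purely illustrative sketch, and assuming a hypothetical record format in which each data record carries its own lawful-basis labels per purpose, processing might be gated and audited along these lines:

# Hypothetical lawful-basis gate; the record structure and purpose labels are invented for illustration.
from datetime import datetime, timezone
from typing import Dict, List

def records_with_lawful_basis(records: List[Dict], purpose: str) -> List[Dict]:
    """Keep only records whose stored lawful basis covers this specific purpose."""
    permitted = []
    for record in records:
        basis = record.get("lawful_basis", {}).get(purpose)
        if basis in {"consent", "legitimate_interest"}:
            permitted.append(record)
    return permitted

def audit_entry(record: Dict, purpose: str) -> Dict:
    """Record enough detail to demonstrate, later, which basis was relied on and when."""
    return {
        "subject_id": record["subject_id"],
        "purpose": purpose,
        "basis": record["lawful_basis"][purpose],
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }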

Further, there could be a risk that the AI will find new uses for the data sets and process data for purposes different from those for which it was originally collected.

When data is no longer relevant to the purposes for which it was collected, it should be erased. But with the web of data this complex, ensuring that this happens presents huge challenges.
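One illustrative approach, again assuming hypothetical field names and retention periods rather than anything mandated by the regulation, is to tie erasure to the purpose for which each record was collected:

# Hypothetical purpose-based erasure; retention periods here are examples, not legal advice.
from datetime import datetime, timedelta, timezone
from typing import Dict, List, Optional

RETENTION = {
    "credit_scoring": timedelta(days=365),
    "marketing": timedelta(days=90),
}

def erase_stale_records(records: List[Dict], now: Optional[datetime] = None) -> List[Dict]:
    """Return only records still within the retention period for the purpose they were collected for."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for record in records:
        limit = RETENTION.get(record["purpose"])
        collected_at = record["collected_at"]  # assumed to be a timezone-aware datetime
        if limit is not None and now - collected_at <= limit:
            kept.append(record)
    return kept

Tying retention to purpose in this way also guards against the previous risk: data that has outlived its original purpose is removed before the AI can find new uses for it.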

So, what is the solution?

There are problems with AI’s compatibility with GDPR. But the irony is that AI is probably the solution to the not insignificant challenges GDPR presents. And it is this that could provide the catalyst for AI to go mainstream.

There is much talk about explainable AI, and with GDPR likely to be only the first step in a series of stringent regulations covering how our data is used, explainable AI will most likely become essential. More fundamentally, though, automated technologies will increasingly need to be designed hand in hand with the regulators. AI shouldn’t have to adapt to cope with GDPR; it should be designed at the outset with the current and likely future regulatory landscape in mind.


About Dorothy Agnew

Recognised by Legal 500, Dorothy Agnew is a partner in Moore Blatch’s commercial team, specialising in IT, intellectual property law, e-commerce, data protection, distribution, agency and general commercial contracts.