Securing Healthcare AI with Confidential Computing

The healthcare and pharmaceutical industry has found itself in the spotlight in the race to develop treatments in the fight against COVID-19.

With the brightest minds in healthcare and the life sciences working together across international boundaries, sharing research data and findings, medical researchers are under increased pressure to find the answers that everyone is looking for in the current crisis.

Not all of this research can be done manually by scientists and researchers, so we have seen a dramatic uptick in the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques, which enable researchers to press “fast-forward” on their ability to analyse data, identify trends and anomalies, and deliver meaningful results that can then be acted upon.

As a result of the vast amount of data that is now being generated, collated, processed, and stored, public attention has focused on how this data is to be secured, in line with expanding international privacy laws and regulatory requirements. Keeping the reams of personal healthcare records and the swathes of Intellectual Property (IP) contained within these AI workflows secure is of paramount importance, and this can now be achieved efficiently and effectively with the rise of a new technology – Confidential Computing.

The growth of AI – speed and accuracy

The benefits generated by the use of AI and ML have been lauded for many years. The ability of these technologies to analyse and make sense of complex data sets at speed and with a high degree of accuracy, while also offering efficiency and cost savings, has led to them being seen as an ideal solution for the analysts and scientists tackling today’s healthcare challenges.

Within the field of experimental data, the reduction in the amount of time and resources required to achieve meaningful research, coupled with the potential for genuine scientific advancement through new insights into under-studied or unknown medical conditions, demonstrates how AI can bring transformational benefits to the field of healthcare and life sciences.

Within practical clinical settings, AI assists medical professionals in real time by processing large volumes of data and determining diagnoses that are beyond the capability of any human clinician. Eric Topol’s 2019 book “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” cites the case of an eight-day-old baby boy admitted to hospital suffering continual seizures. Using AI and a natural language processing (NLP) algorithm, the child’s genome was analysed and compared against a database of genetic variations, which, combined with the child’s clinical data, led to the identification of a faulty gene that was causing the seizures. Equipped with this diagnosis, doctors were able to treat the child swiftly; he was discharged from hospital less than 36 hours after admission, without suffering any long-term effects of his initial condition.

With results like this, it is no wonder that AI and ML are rising to prominence among the range of techniques being used to develop new drugs and therapies. While this is all positive news, these capabilities come hand-in-hand with challenges that the industry must address if the public are to be confident that their personal data is being used in an appropriate and secure manner.

Securing data at rest, in transit, and in use

In a sign of the challenges facing the healthcare industry, the 2020 HIMSS Healthcare Cybersecurity Survey identified that organisations “need to make cybersecurity a fiscal, technical, and operational priority”, with HIMSS further emphasising that “patient lives depend upon the confidentiality, integrity, and availability of data, as well as reliable and dependable technology infrastructure.”

To put robust data security in place, attention must be paid to the entire lifecycle of the data. There is little value in securing one piece of the puzzle only for another to fall apart further down the line. Data encryption is often seen as the minimum and most fundamental level of security required for any organisation, as outlined in the EU General Data Protection Regulation (GDPR) and the US Health Insurance Portability and Accountability Act (HIPAA).

Where stored data is encrypted and encrypted communication methods are employed, even if attackers breach the system and exfiltrate data, the information is useless unless they also obtain the decryption keys. As long as the keys are kept secure and separate using an appropriate key management service, the confidential data should be safe. However, while providing elementary data security, this approach leaves a crucial gap where AI workflows are deployed in collaborative environments and algorithm developers need to train their models on sensitive healthcare data. At the point of use, the data must be decrypted to enable analysis, leaving it vulnerable to a breach via the computer processing the AI workload. This third critical area of data protection is often overlooked.
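The principle of keeping keys separate from ciphertext can be sketched with a deliberately simplified toy model. The `ToyKeyManagementService` class and the XOR "cipher" below are illustrative assumptions only – a real deployment would use an authenticated cipher such as AES-GCM and a hardware-backed key management service – but the sketch shows why exfiltrated ciphertext is useless without access to the separately held keys:

```python
import secrets

class ToyKeyManagementService:
    """Stand-in for a real KMS or HSM: keys are created and used here,
    but never handed out to the application that stores the data."""
    def __init__(self):
        self._keys = {}

    def create_key(self, key_id: str, length: int) -> None:
        self._keys[key_id] = secrets.token_bytes(length)

    def decrypt(self, key_id: str, ciphertext: bytes) -> bytes:
        # XOR with a random key is a toy cipher used purely to illustrate
        # key separation; it is NOT a production encryption scheme.
        key = self._keys[key_id]
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    # For XOR, encryption and decryption are the same operation.
    encrypt = decrypt

kms = ToyKeyManagementService()
record = b"patient-id:4711;diagnosis:confidential"
kms.create_key("ehr-key", len(record))

stored = kms.encrypt("ehr-key", record)  # what an attacker might exfiltrate
assert stored != record                  # ciphertext alone reveals nothing useful
assert kms.decrypt("ehr-key", stored) == record
```

Note that the application only ever holds `stored`; compromising the storage layer without also compromising the key service yields nothing readable – which is exactly the gap that re-opens the moment the data must be decrypted for processing.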

So how do we protect the data when it is in use? While encryption can stop an attacker from accessing the data when stored or when it is being transmitted across network communications, the data is exposed in the memory of the computer running an application, and this can be exploited by a potential attacker. Overcoming this vulnerability of data in use is central to the safe implementation of AI solutions in healthcare and has restricted adoption of cloud services, whose shared resources are often regarded as an untrusted computing platform. In truth, every computing platform should be regarded as vulnerable to attack, even infrastructure considered to be under a healthcare organisation’s exclusive control.

Combining AI and Confidential Computing for holistic data security

Fortunately, a new cyber security paradigm is coming to the fore and generating increased interest – Confidential Computing. At its heart, Confidential Computing protects sensitive applications and data from being compromised or tampered with at runtime by processing them in a completely isolated trusted execution environment (TEE), often referred to as a ‘secure enclave’. The TEE isolates the data and code in a secure region of the CPU memory to prevent unauthorised access, even if the infrastructure is compromised. Working across multiple environments, from on-premises to public cloud and edge devices, secure enclaves render sensitive information invisible to host operating systems, cloud provider administrators, and external attackers.

By isolating and protecting AI workloads using this new technology, Confidential Computing offers healthcare data providers, AI application developers, clinicians, and data scientists a solution to the practical problem of protecting electronic health data and the intellectual property contained in AI algorithms, even on untrusted infrastructure, while they are being processed.

Alongside security, Confidential Computing also delivers operational flexibility. The solution can be applied to any data environment, which means that organisations can freely use off-the-shelf and open-source machine learning tools without needing to create specialised frameworks or adapt their security implementation to suit specific workloads.

Using Confidential Computing technology, healthcare organisations gain seamless, auditable, and scalable data security that can be applied to the complex workloads associated with AI in all healthcare settings. With data encrypted and secure at every stage, they can reduce the development time of new AI solutions, improve clinical outcomes for patients, and realise new medical breakthroughs. This means that medical research teams will be able to bring the full power of AI to bear without being held back by the threat of data breaches and regulatory fines.

Over the next year and beyond, we can expect this important technology not only to continue contributing to the fight against COVID-19, but to revolutionise the way that medical treatments are developed and deployed within a context of increasingly stringent regulation and public scrutiny.

About the Author

Dr Richard Searle is Customer Solutions Director at Fortanix. Fortanix is a data-first multicloud security company. With Fortanix, organizations gain the freedom to accelerate their digital transformation, combine and analyze private data, and deliver secure applications that protect the privacy of the people they serve.

Featured image: ©Gennadiy Pozniyakov