Greg Martin, MD, MPH, MBA

The emergence of artificial intelligence (AI) technology is set to transform almost every aspect of healthcare and will translate into improvements in health outcomes that will be akin to the discovery of penicillin. However, as we hurtle toward this promised utopia with heart-thumping enthusiasm, it might be worth taking a moment to consider some of the risks and perils of AI in healthcare.

Before exploring the pros and cons of the application of AI in clinical settings, it’s worth taking a step back to consider what we mean when we talk about AI. The human mind (a physiological function of the brain) is our ability to think, learn, reason, make and act on decisions, and ultimately to be conscious. One definition of AI is the ability of computer systems to replicate aspects of the human ‘mind’.

Let’s consider the way in which we humans learn and make decisions. We can memorise and follow instructions; we can learn by watching someone else; we can learn complex behaviours (a collection of tasks) that are driven by rewards and punishments; and we can direct our actions through conscious choices (the subjective sensation of ‘free will’).

With the exception of the last aspect of human intelligence (consciousness), computers are currently able to mimic, and in some ways surpass, human capabilities when directed at a very specific task. The advent of artificial general intelligence (AGI) in the future might include computers that mimic human consciousness or develop some other sense of self-identity. It was in reference to AGI that Stephen Hawking and others suggested that humanity faces one of its greatest existential threats.

It’s worth mentioning at this point that AI isn’t just one thing: it comprises a multitude of different technologies that work in very different ways. For the purposes of this discussion, we’ll look at two categories of AI that will be of particular importance in the healthcare setting: supervised learning and reinforcement learning.

Supervised learning is a form of machine learning best illustrated by an AI learning to distinguish pictures that include (for example) a cat from those that do not. At first, the AI’s best guess as to whether or not a picture contains a cat will be poor and will produce a large ‘error signal’. However, by iteratively refining its algorithm over perhaps billions of attempts, the AI will eventually reduce that error signal to near zero. The key to supervised learning is having a large set of labelled training data to work with.
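The error-driven loop described above can be sketched in a few lines of Python. This is a toy logistic-regression ‘cat detector’ working on two made-up numeric features rather than real images; the data, learning rate and number of iterations are illustrative assumptions, not a real system.

```python
import math
import random

random.seed(0)

# Toy "cat vs not-cat" data: each picture is reduced to two invented
# numeric features; the label 1 means the picture contains a cat.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
        ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0)]

w = [0.0, 0.0]   # weights the model iteratively refines
b = 0.0          # bias term
lr = 0.5         # learning rate

def predict(x):
    """Probability (0..1) that picture x contains a cat."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Repeatedly guess, measure the error signal, and nudge the weights
# in the direction that shrinks it.
for step in range(2000):
    for x, label in data:
        error = predict(x) - label      # the 'error signal'
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

print(predict((0.85, 0.9)))  # close to 1: "cat"
print(predict((0.15, 0.1)))  # close to 0: "not a cat"
```

After enough iterations the error signal on the training examples approaches zero, which is exactly the behaviour described above, just at a scale of six examples rather than billions.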

The most obvious application of supervised learning in healthcare is in any diagnostic process (like radiology) that involves identifying an abnormality in an image. Fortunately, in disciplines like radiology, training data is readily available in the form of prelabelled images going back decades. Where AIs have been taught to identify common pathologies (like sarcomas), a process called ‘transfer learning’ can be used to teach the AI to build upon that capability to identify pathology for which there might be far less training data (like soft tissue sarcomas).

Reinforcement learning is a little different from supervised learning. Imagine wanting to teach a self-driving car to drive from point A to point B. Initially, the car is allowed to make random driving decisions, for which it is given positive and negative rewards. It might, for example, be given two points for driving in the right direction and have ten points subtracted if it leaves the road. The car’s AI periodically updates its driving algorithm so as to improve its “reward function”, eventually enabling it to drive with near-perfect precision from point A to point B.
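The trial-and-error loop above can be sketched with tabular Q-learning, a standard reinforcement-learning algorithm, on a one-dimensional ‘road’ running from A to B. The reward values mirror the example (points for driving in the right direction, minus ten for leaving the road); everything else, including the road length and learning parameters, is an illustrative assumption.

```python
import random

random.seed(1)

ROAD = range(6)          # positions 0..5; A is position 0, B is position 5
ACTIONS = [-1, +1]       # drive left or right
Q = {(s, a): 0.0 for s in ROAD for a in ACTIONS}   # learned action values
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    """Apply an action and return (next_state, reward)."""
    nxt = state + action
    if nxt not in ROAD:
        return state, -10                 # ten points subtracted off-road
    return nxt, (2 if action == +1 else -2)  # +2 toward B, -2 away from it

for episode in range(500):                # let the car try, fail and learn
    s = 0
    while s != 5:
        # Mostly exploit what it has learned, occasionally explore at random.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        # Periodic update that nudges the policy toward higher reward.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After learning, the greedy policy drives straight from A to B.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)]
print(policy)   # [1, 1, 1, 1, 1]
```

Early episodes are full of random swerving and off-road penalties; as the reward estimates converge, the learned policy chooses ‘drive right’ at every position.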

In the healthcare setting, this approach could allow AIs to develop and implement decisions in complex clinical scenarios that require sequential actions. The treatment of acutely ill or unstable patients in hospital could be significantly improved by an AI’s immediate response to any adverse change in their clinical condition. The alternative is to wait for the six-hourly observations undertaken by nursing staff, who would then contact a tired and overworked junior doctor, who would in turn liaise with his or her consultant before implementing any changes to the care plan at the next possible opportunity (which might be many hours after the original change in the patient’s condition).
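To make the contrast with the six-hourly cycle concrete, even a crude continuous monitor can flag a deterioration the moment a reading arrives. The vital-sign ranges below are hypothetical and purely illustrative, not clinically validated thresholds; a real system would use validated early-warning criteria (and, in the reinforcement-learning vision above, learned treatment policies rather than fixed cut-offs).

```python
# Hypothetical vital-sign ranges for illustration only -- real early-warning
# systems use clinically validated scoring criteria, not these numbers.
THRESHOLDS = {
    "heart_rate": (50, 110),    # beats per minute
    "systolic_bp": (90, 180),   # mmHg
    "spo2": (94, 100),          # % oxygen saturation
}

def check_vitals(vitals):
    """Return the observations outside their acceptable range, so an
    alert can be raised the moment a reading is streamed in, rather
    than at the next scheduled observation round."""
    alerts = []
    for name, value in vitals.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

print(check_vitals({"heart_rate": 125, "systolic_bp": 100, "spo2": 96}))
# -> ['heart_rate']
```

The gap between this instant check and a six-hourly observation round is the window in which an always-on system could act.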

While the application of AI in healthcare will, without question, translate into improvements in healthcare delivery and health outcomes, there are important ‘governance’ issues that must be addressed if we are to circumvent, or at least mitigate, some of the inherent risks. AI algorithms can be thought of as a ‘black box’ that produces an output in a way that can’t be interrogated. If an AI includes preloaded preferences (for one drug over another, for example), these would be difficult, if not impossible, to detect; we might simply trust the program that the recommended treatment is superior. Inadvertent harm could also be done by an AI that is optimised to improve health outcomes in one subset of society (for whom training data was available) but is inappropriately applied to another population. Other issues for which there seems to be no clear answer at the moment include thresholds for safety; accountability structures; the potential need for new regulatory mechanisms; the use of sensitive and person-identifiable data; issues of informed consent; and concerns over the privatisation of “knowledge”.

AI is clearly here to stay. By getting a clearer understanding of what AI is and how it might be used (and potentially abused), we’ll be able to engage in meaningful conversations to solve problems and ultimately improve lives.