A range of projects is using artificial intelligence in healthcare, yet challenges remain around both the technology and patient acceptance
In Steven Spielberg’s Minority Report, police can predict who will commit a crime before it even happens. The idea that medicine could predict health threats has a similar touch of sci-fi – yet it’s happening today.
Canada’s Artemis Project, for example, provides insights into possible future heart attacks. In the US, the Vanderbilt University Medical Center has been able to identify people at risk of developing certain auto-immune diseases. There are many other such instances, all of which use machine learning and artificial intelligence (AI) to crunch data.
The medical world has long employed AI in making its diagnoses. The sector is an ideal candidate for the technology thanks to healthcare’s data tsunami, the vast majority of it generated by imaging technology.
But recent years have seen rapid progress. Barely a week passes without some new study suggesting that AI, with its talent for pattern recognition, is on a par with or even outperforms medical specialists in their diagnoses. As part of a project with London’s Moorfields Eye Hospital, Google’s DeepMind can now recommend the correct referral decision for over 50 eye diseases with 94% accuracy.
Critically, AI performs its tasks much faster than humans. It’s also cheaper and presents less of a burden on typically stretched healthcare resources. And as doctors grapple with the chaos of a dynamic situation, AI diagnosis will likely lower the risk of medical misdiagnosis, too.
Rise of the sensors
Many of us might struggle with the notion of putting our health in the hands of a computer. However, “the pandemic has had a major impact on acceptance,” reckons Haider Raza, lecturer in computer science at the University of Essex, who’s developing a proof-of-concept system for skin cancer detection and analysis using smartphone photographs.
Raza started his project before the pandemic and had to convince people to get involved. Today, “people are signing up for self-referral, because Covid has shown them how so much healthcare can be managed online.”
That’s just as well, because healthcare tech is only going to play a greater role in our lives. As domestic medical devices – ever-smaller wearables like activity trackers, glucometers, smart inhalers, heart rate and blood pressure monitors – become more commonplace and connected in real time to the so-called internet of things, the potential could be revolutionary.
Yet AI may not be the silver bullet for medical diagnoses that it’s sometimes made out to be.
“We’re heading towards something good but there is still a lot of hype,” according to Maarten van Smeden, assistant professor of epidemiology at University Medical Center Utrecht.
A University of Birmingham review in 2019 found that machine learning algorithms were broadly on a par with doctors at assessing medical imaging. However, of the 20,530 studies on disease-detecting algorithms published since 2012, fewer than 1% were rigorous enough to be included in the review in the first place.
“If you need an AI model to make a diagnosis, ask why, because it probably means it’s hard to make a diagnosis. The fact is that AI needs many high-quality data points to distinguish between those with a disease and those without, so it becomes a circular problem,” says Van Smeden.
Shang-Ming Zhou, professor of e-health at the University of Plymouth, thinks we’re only starting to grapple with the many issues revealed by the use of AI in medicine. The data sets are often smaller than might be hoped, due to issues around privacy, patient confidentiality and data ownership. What’s more, various data sources may be inconsistent depending on how they’re produced, potentially building problems into any AI model.
Data, like doctors, comes with its own biases. An algorithm trained on Caucasian population data may provide misdiagnoses for other ethnicities. Some diseases – the likes of sickle cell or Tay-Sachs – are shaped by ethnicity; others by geography.
“You wouldn’t expect biased data to deliver fair predictions, but current AI development in healthcare isn’t addressing that kind of ethical issue,” says Zhou. Then there’s the challenge of regulation, with no current legal framework for data protection in private healthcare research.
“The other challenge is that the current AI model cannot generalise to a new population of patients or consider that healthcare practices evolve over time,” he adds.
That’s why talk of AI replacing human doctors – whose insights are born of experience and patient interactions – still seems far-fetched. “AI is a powerful tool, but a tool nonetheless,” says Van Smeden.
The use of AI diagnosis by medical professionals could remain in the background, largely unknown to patients. But transparency will be key, argues Zhou, who’s currently researching patient attitudes to AI. And that’s a problem, because AI is opaque in how it reaches its conclusions – unacceptably so for healthcare. That opacity can allow unnoticed errors to become systemic faults; a bad model can end up harming patients.
“The conclusions that AI reaches have to be fully explainable and fully interpretable,” stresses Saurabh Johri, chief science officer at Babylon, a digital healthcare specialist. “A significant proportion of the populace is informed about data and open to being better informed as a result of its use. But what they still want to know is the value of that data, and they can’t know that without transparency.”
A matter of trust
So why don’t patients trust AI? It’s not because they think it will fail to give a better diagnosis, says Zhou. Rather, it’s “because the perception is that it can only provide standardised practice and treatment – it doesn’t address the medical needs of the individual”. Each person has a unique profile, but current AI is only suited for the “average” patient.
Other big questions must also be resolved. For example, does the use of AI in diagnoses challenge the authority of the clinician? Are its diagnoses aimed solely at extending life, perhaps ignoring a patient’s wishes to instead minimise suffering? Does it undermine the traditional doctor-patient decision-making process, and does this force doctors to align their standards with that of the algorithm, or to defer their decisions to it?
But progress is being made. The impact of machine learning in healthcare will likely be profound, not just in treatment after diagnosis, but in heading off disease in the first place.
“If you want to diagnose a disease, of course that’s fine, but if you want to treat a disease you need to understand how it develops. And nowadays we can measure what genes are expressed at the single-cell level and use machine learning to pinpoint what’s important, what is driving a disease,” explains Ziv Bar-Joseph, professor of computational biology at Carnegie Mellon School of Computer Science.
“We need machine learning for the DNA analysis that will allow us to understand which diseases people are predisposed to in the first place,” he notes, adding that if we can deploy some of the AI-driven tests currently in development, “it will be amazing.”