Managing Type 1 Diabetes Is Tricky. Can AI Help?


SOURCE: https://www.wired.com/story/managing-type-1-diabetes-is-tricky-can-ai-help/
JUL 10, 2023

In a simulation, AI aided virtual patients in meeting blood glucose targets. It could be a step toward trusting machine learning to help real people too.

THE WEEK BEFORE heading off to college, Harry Emerson was diagnosed with type 1 diabetes. Without the ability to produce insulin, the hormone that lets cells absorb blood sugar for fuel, he’d need help from medical devices to survive, his doctors told him. Eager to get on with school, Emerson rushed through the process of familiarizing himself with the technology, then went off to university.

Because people with type 1 diabetes make very little or no insulin on their own, they need to keep careful track of their blood sugar as it changes throughout the day. They inject insulin when their blood sugar is too high or when it’s about to spike after a meal and keep fast-acting carbs ready to eat when it dips too low. The mental math can be dizzying. “Every time I eat, I have to make a decision,” Emerson says. “So many subtle factors have minuscule effects that add up, and it’s impossible to consider them all.”
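To make that arithmetic concrete, here is a minimal sketch of the textbook bolus calculation many patients run in their heads before every meal. The ratios below are illustrative placeholders, not dosing guidance; in practice they vary widely from person to person and even hour to hour:

```python
def meal_bolus(carbs_g, bg_mgdl, target_mgdl=110,
               carb_ratio=10.0, correction_factor=40.0, iob_units=0.0):
    """Textbook bolus arithmetic with illustrative parameters, not dosing advice.

    carb_ratio:        grams of carbohydrate covered by 1 unit of insulin
    correction_factor: mg/dL that 1 unit of insulin lowers blood glucose
    iob_units:         insulin still active from earlier doses ("insulin on board")
    """
    carb_dose = carbs_g / carb_ratio                                # cover the meal
    correction = max(bg_mgdl - target_mgdl, 0) / correction_factor  # fix a high
    return max(carb_dose + correction - iob_units, 0.0)             # never dose negative

# A 60-gram-carb dinner with blood glucose at 180 mg/dL:
# 60/10 + (180 - 110)/40 = 6.0 + 1.75 = 7.75 units, before insulin on board
print(meal_bolus(60, 180))  # 7.75
```

And that simple sum still ignores exercise, stress, illness, sleep, and every other subtle factor Emerson describes.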

For many, tracking this data means finger pricks, manually logging the results from their blood glucose monitor every few hours, and injecting insulin accordingly. But those privileged enough to access state-of-the-art devices can outsource some of their decision-making to machines. Continuous glucose monitors, or CGMs, measure blood sugar every few minutes via a tiny sensor under the skin, sending readings to a pocket-sized monitor or smartphone. Insulin pumps, tucked in a pocket or clipped on a waistband, release a steady stream throughout the day and extra doses around mealtimes. If the CGM can talk to the insulin pump in what’s called a “closed-loop” system, it can adjust doses to keep blood sugar within a target range, similar to the way a thermostat heats or cools a room.
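The thermostat comparison maps onto a simple feedback loop. Below is a minimal sketch of a toy proportional controller, with made-up gains and hypothetical device calls; real commercial systems use far more sophisticated, proprietary logic:

```python
TARGET = 120.0      # mg/dL setpoint, the "thermostat dial"
GAIN = 0.005        # extra units/hour per mg/dL above target (made-up tuning)
MAX_BASAL = 3.0     # safety clamp on the hourly insulin rate

def basal_rate(glucose_mgdl, baseline=0.8):
    """One step of a toy proportional controller: suspend insulin when low,
    otherwise raise the basal rate the further glucose sits above target."""
    if glucose_mgdl < 70:                       # hypoglycemia guard
        return 0.0
    excess = max(glucose_mgdl - TARGET, 0)
    return min(baseline + GAIN * excess, MAX_BASAL)

# Every few minutes, a closed-loop system runs something like:
#   pump.set_basal(basal_rate(cgm.read()))     # hypothetical device calls
```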

These control algorithms work, but they rely on hard-coded rules that make devices inflexible and reactive. And even the fanciest systems can’t get around life’s imperfections. Just as a phone’s fitness app can’t track steps you take when you’re phoneless, a CGM can’t send data if you forget to bring your monitor with you. Anyone who’s tracked macros knows how tricky it is to accurately count carbs. And for many, eating three predictably timed meals a day feels about as realistic as going to bed at the same time every night.

Now a PhD student at the University of Bristol’s Department of Engineering Mathematics, Emerson studies how machine learning can help people live with type 1 diabetes—without thinking about it too hard. In a June study published in the Journal of Biomedical Informatics, Emerson collaborated with University Hospital Southampton to teach a machine learning algorithm to keep virtual diabetes patients alive. The team trained the AI on data from seven months in the lives of 30 simulated patients, and it learned how much insulin to deliver in a variety of real-life scenarios. It was able to figure out a dosing strategy on par with commercial controllers, yet it needed only two months of training data to do so—less than a tenth of the data required by previously tested algorithms.

To Emerson, machine learning algorithms present an intriguing alternative to conventional systems because they evolve. “Current control algorithms are rigidly defined and derived from lengthy periods of patient observation,” he says, adding that this training is also costly. “It’s not necessarily practical to keep going about it that way.”

There’s still a long road to AI-powered diabetes tech. Under both United States and United Kingdom medical device regulations, commercially available automated insulin delivery systems—without AI—fall in the highest risk class. AI-driven systems are in the early stages of development, so conversations about how they should be regulated are only just beginning.

Emerson’s experiment was entirely virtual—testing AI-assisted insulin delivery in people raises a host of safety concerns. In a life-or-death situation like insulin dosing, giving control to a machine could be dicey. “By the nature of learning, you could absolutely take a step in the wrong direction,” says Marc Breton, a professor at the University of Virginia’s Center for Diabetes Technology who was not involved in this project. “A small deviation from the prior rule can create massive differences in the output. That’s the beauty of it, but it’s also dangerous.”

Emerson focused on reinforcement learning, or RL, a machine learning technique based on trial and error. In this case, the algorithm was “rewarded” for good behavior (meeting a blood glucose target) and “punished” for bad behavior (letting blood sugar get too high or low). Because the team couldn’t test on real patients, they used offline reinforcement learning, which draws on previously collected data, rather than learning on the fly.
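The study’s exact reward function isn’t spelled out in the article, but a minimal sketch of the reward-and-punishment scheme it describes, using the standard 70–180 mg/dL clinical target range, might look like this:

```python
def reward(glucose_mgdl):
    """Toy reward for an insulin-dosing agent: positive inside the standard
    70-180 mg/dL target range, negative outside it, with dangerous lows
    punished far harder than highs."""
    if 70 <= glucose_mgdl <= 180:
        return 1.0
    if glucose_mgdl < 70:
        return -2.0 * (70 - glucose_mgdl)     # hypoglycemia is acutely dangerous
    return -0.1 * (glucose_mgdl - 180)        # hyperglycemia harms over time

# Offline RL never experiments on a live patient: it replays logged
# (glucose, dose, next_glucose) transitions and scores them after the fact.
logged = [(145, 1.2, 130), (130, 0.0, 62), (62, 0.0, 95)]
for glucose, dose, next_glucose in logged:
    print(glucose, dose, next_glucose, reward(next_glucose))
```

The asymmetry in the penalties is the key design choice: a low can kill within hours, while a high does its damage over years.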

Their 30 virtual patients (10 kids, 10 adolescents, and 10 adults) were synthesized by the UVA/Padova Type 1 Diabetes Simulator, a Food and Drug Administration-approved replacement for preclinical testing in animals. After the algorithm trained offline on the equivalent of seven months of data, the team let it take over the virtual patients’ insulin dosing.

To see how it handled real-life mistakes, they put it through a series of tests designed to mimic device faults (missing data, inaccurate readings) and human errors (miscalculating carbs, irregular mealtimes)—tests most researchers without diabetes wouldn’t think to run. “The majority of systems only consider two or three of these factors: their current blood glucose, insulin that’s been dosed previously, and carbohydrates,” says Emerson.

Offline RL successfully handled all of these challenging edge cases in the simulator, outperforming current state-of-the-art controllers. The biggest improvements appeared in situations where some data was missing or inaccurate, simulating what happens when someone steps too far from their monitor or accidentally squashes their CGM.

In addition to cutting training time by 90 percent compared to other RL algorithms, the system kept virtual patients in their target blood glucose range an hour longer per day than commercial controllers. Next, Emerson plans to test offline RL on data previously collected from real patients. “A large percentage of people with diabetes [in the US and UK] have their data continuously recorded,” he says. “We have this great opportunity to take advantage of it.”
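Time in range, the metric behind that extra hour per day, is straightforward to compute from CGM logs. A minimal sketch, assuming evenly spaced readings and the standard 70–180 mg/dL target range:

```python
def time_in_range(readings_mgdl, low=70, high=180):
    """Fraction of CGM readings inside the target range, and the equivalent
    minutes per day (assumes evenly spaced samples)."""
    frac = sum(low <= g <= high for g in readings_mgdl) / len(readings_mgdl)
    return frac, frac * 24 * 60

# A day of five-minute CGM readings is 288 samples; toy data here:
day = [110, 150, 190, 65, 120] * 58
frac, minutes = time_in_range(day)
print(f"{frac:.0%} in range, about {minutes:.0f} minutes per day")
```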

But translating academic research to commercial devices requires overcoming significant regulatory and corporate barriers. Breton says that while the study results show promise, they come from virtual patients—and a relatively small group of them. “That simulator, however awesome it is, represents a tiny sliver of our understanding of human metabolism,” he says. The gap between simulation studies and real-world application, Breton continues, “is not unbridgeable, but it’s large, and it’s necessary.”

The medical device development pipeline can feel maddeningly stalled, especially to those living with diabetes. Safety testing is a slow process, and even after new devices come to market, users don’t have much flexibility, thanks to a lack of code transparency, data access, or interoperability across manufacturers. There are only five compatible CGM-pump pairs on the US market, and they can be pricey, limiting access and usability for many people. “In an ideal world, there would be tons of systems,” letting people choose the pump, the CGM, and the algorithm that works for them, says Dana Lewis, founder of the open source artificial pancreas system movement (OpenAPS). “You’d be able to live your life without so much thinking about diabetes.”

Some members of the diabetes community have started to speed up the pipeline on their own. Lewis uses her past data to fine-tune insulin delivery for her artificial pancreas, which is made from commercial devices and open source software, and she shares code online to help people rig their own versions. “I can’t imagine doing diabetes without it,” she says. (Her website notes that because OpenAPS is not commercially sold, it’s “not an FDA-approved system or device.” Users are essentially running an experiment on themselves.)

Although Lewis doesn’t see RL taking full control of systems like hers anytime soon, she envisions machine learning supplementing existing controllers. Making a small fix to a real problem, as opposed to “trying to boil the ocean,” can be a game changer, she says.

Demonstrating that AI will perform as intended is one of the biggest challenges researchers, developers, and policymakers face, says Daria Onitiu, a postdoctoral researcher at the Oxford Internet Institute. Currently, if a new device is substantially different from an existing one, it needs a new certification from regulatory bodies. The inherent adaptability of AI complicates this framework, Onitiu says. “An autonomous AI algorithm can modify its internal workings and update its external output.” Under current regulatory guidance, she says, “If the change alters the device’s intended use, you would need to get it recertified.”

AI in health care, Onitiu points out, is not entirely new. The FDA lists 521 AI-enabled medical devices on the market in the US alone as of October 2022. However, most of these leverage AI for tasks like analyzing urine samples or diagnosing biopsies—decisions that may be helpful to clinicians but don’t involve dosing medication or otherwise treating a patient in real time.

Two months ago, Breton’s research group applied for, and received, an Investigational Device Exemption from the FDA that will allow them to test an AI-powered insulin pump in humans. Until then, he says, “it wasn’t clear at all that the FDA would allow a neural net anywhere near insulin dosing because it’s very difficult to demonstrate that it’s going to do exactly what you want it to do.”

But, Breton points out, the slow dance between academia and regulatory bodies happens for a reason. Academics have the freedom to explore with low stakes: If a simulation fails, the consequences are virtual. Industry is constrained by safety and consumer interest. “Academia pushes the envelope, and the FDA draws boxes,” says Breton. “But we need to be careful when characterizing the FDA as a hurdle. They want advancement, but they don’t want it to hurt people.”

Just last week, the first person with diabetes to try out an artificial pancreas run entirely by machine learning checked into a clinical trial. Led by Breton’s colleagues at the University of Virginia, this study will test a pump controlled by an artificial neural network on 20 people with type 1 diabetes while they stay in a hotel with round-the-clock care for 20 hours. The AI will be on a tight leash: It won’t be allowed to adapt after its initial offline training, and it will be restricted to learning the same control methods as the commercial devices it’s being compared to.

But it’s an important step toward testing whether an AI can be granted more control in the future. In diabetes research, that trust will be built one drop at a time.