Speech-Recognition Tool May Predict Worsening Congestion in HF


SOURCE: TCTMD.COM
MAY 22, 2022

MADRID, Spain—There’s more encouraging, albeit preliminary, evidence that a smartphone app using speech-analysis technology may help identify heart failure patients at risk of impending decompensation leading to hospitalization.

According to William Abraham, MD (The Ohio State University, Columbus), who presented the Cordio HearO Community Study during a late-breaking clinical trial session at the European Society of Cardiology Heart Failure (ESC HF) Congress 2022, the technology has the potential to reduce acute decompensated heart failure (ADHF) hospitalizations and improve patient quality of life, while also saving on the costs of unnecessary hospital admissions.

“The idea really came from clinical observations that, with patients who we see coming into the clinic or being admitted to the hospital when their heart failure is decompensated, you can hear changes in their voice—their respiration, their speech, it's just different,” Abraham told TCTMD. “So the idea is, could you use speech analysis processing and algorithms to detect those changes earlier, before you’d actually be able to hear them? This is intended to provide early warning of decompensating heart failure.”

Other studies using implantable hemodynamic monitors have established that patients actually begin to get congested about 2 to 3 weeks before needing acute care, Abraham continued.

“It's not like all of a sudden they develop pulmonary edema overnight and end up in the hospital. It is sort of a slow process of decompensation,” he explained. “Our hope was that through speech analysis, we could detect these very early changes and provide a very simple noninvasive tool to determine patients who are at risk for worsening heart failure or heart failure hospitalization.”

Changes in Lung Fluid

The Cordio HearO speech analysis technology uses a patient-specific model to detect changes in lung fluid that can lead to a number of physiologic changes, including swelling of the soft tissues in the vocal tract as well as swelling of the vocal folds.

Abraham and colleagues have previously reported results from the ACUTE study using the same technology showing that automated speech analysis could differentiate between a “wet” clinical state during a hospitalization for ADHF, as compared with a “dry” clinical state at the time of hospital discharge. The current study is focused on stable patients who recorded daily speech samples that were then analyzed retrospectively to see if detectable changes predicted subsequent decompensation and hospitalization.

The Cordio HearO Community Study set out to analyze patients’ speech data during periods of HF remission to see whether an impending deterioration could be detected in speech patterns and, if so, how early this would be possible. The study has enrolled 430 congestive heart failure patients to date, who record five sentences into a smartphone app every morning.

In the preliminary analysis reported by Abraham at ESC HF, 460,000 recordings have been analyzed from 180 patients who, over a 2-year period, have experienced a total of 49 heart failure decompensation events, of which 39 were first events.

As Abraham showed here, about half of the 180 patients from Israel enrolled so far are women and the majority were Hebrew or Russian speakers, with a minority speaking Arabic or English. Just 15% had a reduced ejection fraction (LVEF < 40%) and mean NYHA class was 2.5.

When first HF decompensation events—either hospitalization or medication changes—were analyzed in relation to speech-detected changes, the speech analysis algorithm successfully predicted 82% of events, while delivering a false negative in the remaining 18%.

A sensitivity of 82% from a noninvasive technology is “really fantastic when you put it into the context of the current standard of care, which is monitoring of daily weight change, which only has a sensitivity of about 10% to 20% for predicting a heart failure hospitalization,” Abraham told TCTMD.
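For readers who want to see where that figure comes from, the arithmetic can be sketched in a few lines. Note that the 32/7 split below is inferred from the reported 39 first events and ~82% sensitivity; the exact counts were not stated in the presentation.

```python
# Illustrative sensitivity (recall) calculation for the HearO first-event analysis.
# Counts are inferred from the reported 39 first events and ~82% sensitivity,
# not taken directly from the study data.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity = TP / (TP + FN): the share of real events the system flagged."""
    return true_positives / (true_positives + false_negatives)

detected = 32  # first decompensation events preceded by a speech alert (inferred)
missed = 7     # first events with no preceding alert, i.e., false negatives (inferred)

print(f"Sensitivity: {sensitivity(detected, missed):.0%}")  # → 82%
```

By the same definition, a daily-weight strategy with 10% to 20% sensitivity would flag only about 4 to 8 of those 39 first events.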

A false positive occurred about once every 5 months per patient, Abraham said. This suggests the app can also pick up changes in pulmonary status unrelated to heart failure, but he noted that a similar activation of the healthcare cascade can occur when patients report symptoms that do not, ultimately, prove to point to worsening heart failure.

Of note, the system was able to predict subsequent HF events approximately 18 days prior, he pointed out, “which fits very nicely with what we know about the time course for decompensation and suggests that the speech analysis processing application could detect worsening heart failure events well in advance. That may give us a window of opportunity then to intervene and to prevent those worsening heart failure events.”

In general, said Abraham, patients had positive experiences with the app, with approximately four out of five being highly compliant with their daily recordings. A similar phenomenon has been seen with CardioMEMS, which has allowed patients to be “more deeply engaged” with the technology and as a result, he told TCTMD, “do a better job taking care of themselves. I think they feel like they’re regaining a little bit more control over their illness as well.”

The next step for this technology will be an interventional study to determine whether acting on alerts from the system with treatment modifications might help stave off hospital admissions. “This approach has the potential to reduce ADHF hospitalizations and improve patient quality of life and economic outcomes, but of course we need to show that now in larger, randomized clinical studies,” Abraham said.

To TCTMD, Abraham stressed that a vast range of speech-recognition and processing technologies are currently under investigation, both within the heart failure sphere and beyond, calling it an exciting field of research. In terms of this specific technology, “this is the first time we've studied this technology in the ambulatory setting” for heart failure patients, he noted, “which is really exciting.”

Following his late-breaking presentation Saturday, panelists and audience members had questions about whether he’d looked into whether the voice recognition technology fared better or worse with different accents or languages, or whether things like humidity and temperature, which might affect speech, made a difference.

Addressing the accents and languages question first, Abraham noted that a trial is only now getting underway in the US, so it will be interesting to see how the technology fares against, for example, Southern US accents versus Northern ones. “I think this has performed well across multiple different languages, so it will likely perform well across different dialects across the US and other geographies as well,” he predicted.

As for how climate or geography might influence interpretation, “that’s a great question,” he said. “As far as I know it performs the same across different types of environmental conditions,” Abraham continued. “But don’t take that answer to the bank, because I don’t know that that’s well studied. I’ll have to look into that for you.”

by Shelley Wood

Shelley Wood is Managing Editor of TCTMD and the Editorial Director at CRF.