AI ‘improves’ nurse–doctor collaboration in spotting deterioration
Artificial intelligence (AI) can be used to improve collaboration between nurses and medical staff, helping them to better detect patient deterioration, according to US researchers.
Their study primarily found that the implementation of an AI-based model was associated with a significantly decreased risk of escalations in care among hospital patients.
“This model is powered by AI, but the action it triggers, the intervention, is basically a conversation that otherwise may not have happened”
Ron Li
However, those behind the research said it also demonstrated the potential of AI as a “facilitator” that helped nurses and doctors connect to achieve more efficient, effective patient care.
Their study, published in JAMA Internal Medicine, describes an AI-based model in use at Stanford Hospital in California that predicts when a patient is declining and notifies the patient’s clinicians.
Researchers said the algorithm pulls data – such as vital signs, information from electronic health records and lab results – in near-real time to predict whether a patient is about to decline.
They noted that clinicians were unable to monitor all of these data points for every patient all of the time, so the model runs in the background, looking at these values about every 15 minutes.
It then uses AI to calculate a risk score representing the probability that the patient will deteriorate and, if the patient appears to be declining, sends an alert to the care team.
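To illustrate how such a pipeline might fit together, below is a minimal sketch of a background scoring loop. It assumes a hypothetical trained model (deterioration_model), an EHR query helper (fetch_latest_observations), a notification hook (notify_care_team) and an example alert threshold; none of these names or values come from the study, which does not describe the implementation at this level of detail.

```python
# Illustrative sketch only; the article does not describe Stanford's actual code.
# deterioration_model, fetch_latest_observations, notify_care_team and the
# threshold value are all hypothetical placeholders.
import time  # used by the example scheduler loop at the bottom

ALERT_THRESHOLD = 0.8             # example cut-off; the real threshold is not reported
SCORE_INTERVAL_SECONDS = 15 * 60  # the model re-scores roughly every 15 minutes


def score_and_alert(patients, deterioration_model,
                    fetch_latest_observations, notify_care_team):
    """Re-score every monitored patient and alert the care team when the
    predicted risk of deterioration crosses the threshold."""
    for patient in patients:
        # Pull near-real-time inputs: vital signs, EHR data and lab results
        features = fetch_latest_observations(patient)
        # Hypothetical method returning a probability between 0 and 1
        risk = deterioration_model.predict_risk(features)
        if risk >= ALERT_THRESHOLD:
            # The alert goes to the nurse and physician simultaneously,
            # prompting a conversation about the patient's care
            notify_care_team(patient, risk)


# A background scheduler would repeat this on each cycle, for example:
# while True:
#     score_and_alert(admitted_patients, model, fetch_latest_observations, page_team)
#     time.sleep(SCORE_INTERVAL_SECONDS)
```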
During the study, 9,938 patients were admitted to one of the hospital’s four units, with 963 of them forming the subset analysed to assess the effect of the intervention.
The AI-based intervention was associated with a 10.4 percentage point absolute risk reduction in the combined primary outcome of rapid response team activation, transfer to an intensive care unit (ICU), or cardiopulmonary arrest during admission.
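For readers unfamiliar with the terminology, a percentage-point absolute risk reduction is the difference between the event rates with and without the intervention, rather than a relative (proportional) reduction. The figures below are invented purely to show the arithmetic; the underlying rates are not given in this article.

```python
# Hypothetical arithmetic only; these rates are not from the study.
control_event_rate = 0.30                              # assumed escalation rate without the alerts
intervention_event_rate = control_event_rate - 0.104   # 10.4 points lower with the alerts

absolute_risk_reduction = control_event_rate - intervention_event_rate
relative_risk_reduction = absolute_risk_reduction / control_event_rate

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")   # 10.4%
print(f"Relative risk reduction: {relative_risk_reduction:.1%}")   # ~34.7% under these assumed rates
```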
The authors said: “Findings of this study suggest that use of an AI deterioration model-enabled intervention was associated with a decreased risk of escalations in care during hospitalisation.
“These results provide evidence for the effectiveness of this intervention and support its further expansion and testing in other care settings,” they added, writing in the journal.
But, as well as intervening to prevent patients from deteriorating and ending up in intensive care, the alert system also helped clinicians connect more “efficiently and effectively”, said the researchers.
Senior author Dr Ron Li, a clinical associate professor of medicine and medical informatics director for digital health, said it “fosters clinician connection in a ceaselessly buzzing hospital environment”.
“This model is powered by AI, but the action it triggers, the intervention, is basically a conversation that otherwise may not have happened,” said Dr Li.
While nurses and doctors had conversations and handovers when changing shift, he said it was difficult to “standardise these communication channels due to busy schedules and other hospital dynamics”.
He said: “The algorithm can help standardise it and draw clinicians’ attention to a patient who may need additional care.
“Once the alert comes into the nurse and physician simultaneously, it initiates a conversation about what the patient needs to ensure they don’t decline to the point of requiring a transfer to the ICU.”
Dr Li noted that, originally, the system had been set up to send an alert when the patient was already deteriorating, which he said clinicians did not find very helpful.
As a result, he and his team subsequently adjusted the model to focus on predicting ICU transfers and other indicators of health decline.
“We wanted to ensure the nursing team was heavily involved and felt empowered to initiate conversations with physicians about adjusting a patient’s care,” highlighted Dr Li.
He said: “When we evaluated the tool, which we had running for almost 10,000 patients, we saw a significant improvement in clinical outcomes – a 10.4% decrease in deterioration events… among a subset of 963 patients with risk scores within a ‘regression discontinuity window’, which basically means they’re at the cusp of being high risk.
“These are patients whose clinical trajectory may not be as obvious to the medical team,” he said. “For that group of patients, this model was especially helpful for encouraging physicians and nurses to collaborate to determine which patients need extra tending.”
However, Dr Li acknowledged that the model was “far from perfect” and that nurses and doctors had responded to it in different ways.
“The reactions have overall been positive, but there is concern about alert fatigue, since not all alerts are flagging a real decline,” he noted.
“When the model was validated…, we calculated that about 20% of patients flagged by the model did end up experiencing a deterioration event within six to 18 hours.
“At this point, even though it’s not a completely accurate model, it’s accurate enough to warrant a conversation,” he said. “It shows that the algorithm doesn’t have to be perfect for it to be effective.”
He added: “With that said, we want to improve the accuracy; you need to do that to improve trust. That’s what we’re working on now.”
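The “about 20%” figure Dr Li cites is, in effect, the model’s positive predictive value over a six-to-18-hour window: the share of flagged patients who actually went on to deteriorate. A toy calculation, using invented counts rather than study data, makes the ratio explicit.

```python
# Invented counts for illustration only; not study data.
flagged_patients = 500        # alerts raised by the model
true_deteriorations = 100     # flagged patients who deteriorated within 6-18 hours

positive_predictive_value = true_deteriorations / flagged_patients
print(f"PPV: {positive_predictive_value:.0%}")  # 20% - roughly the figure quoted above
```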