United States: Oncologists play a key role in helping cancer patients make tough decisions, yet those conversations are often overlooked. At the University of Pennsylvania, an AI algorithm now helps doctors start crucial conversations about treatment and end-of-life preferences by predicting a patient's chances of survival.
It is not, however, a "set it and forget it" kind of tool. A routine technical checkup found that the algorithm deteriorated during the COVID-19 pandemic, becoming 7 percentage points worse at predicting who was likely to die, according to a 2022 study.
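To make the idea of a routine technical checkup concrete, here is a minimal illustrative sketch of how an analyst might compare a mortality-prediction model's discrimination before and during the pandemic. It is not Penn Medicine's actual monitoring pipeline; the file name, column names, and alert threshold are assumptions made for the example.

```python
# Illustrative sketch of a periodic model-performance check (not the actual
# Penn Medicine pipeline). Assumes a CSV of predictions with hypothetical columns:
# 'risk_score' (predicted probability of death within 6 months),
# 'died_within_6mo' (observed outcome, 0/1), and 'period' ('pre' or 'pandemic').
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("mortality_predictions.csv")  # hypothetical file

# Discrimination (AUC) in each time window
auc_by_period = {
    period: roc_auc_score(group["died_within_6mo"], group["risk_score"])
    for period, group in df.groupby("period")
}

drop = auc_by_period["pre"] - auc_by_period["pandemic"]
print(f"AUC pre-pandemic:    {auc_by_period['pre']:.3f}")
print(f"AUC during pandemic: {auc_by_period['pandemic']:.3f}")

# Flag the model for review if discrimination fell by more than an assumed
# tolerance (0.05 here); a drop on the order of the 7 percentage points
# reported in the study would trip an alert like this one.
if drop > 0.05:
    print(f"ALERT: performance dropped by {drop:.1%}; manual review recommended")
```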
As reported by Medical Xpress, the lapse likely had real-world consequences. Study lead author Ravi Parikh, an oncologist at Emory University, told KFF Health News that the tool missed the mark hundreds of times, failing to prompt doctors to start that critical conversation, which can help patients avoid unnecessary chemotherapy, with those who might have benefited from it.
He believes that many algorithms designed to improve medical care deteriorated during the pandemic, and not only at Penn Medicine. "Several institutions are not frequently evaluating the performance" of their products, Parikh said.

Algorithm glitches are one facet of a dilemma that computer scientists and doctors have long acknowledged but that is starting to confound hospital executives and researchers: artificial intelligence systems require steady staffing to build and to keep working effectively.
In essence: you need people, and more machines, to make sure the new tools don't mess up.
"Everybody thinks that AI will help us with our access and capacity and improve care and so on," said Nigam Shah, chief data scientist at Stanford Health Care. "All of that is nice and good, but if it adds 20% to the cost of care, then it isn't viable."
Government officials are concerned that hospitals lack the resources to adequately evaluate whether these technologies are effective.
"I have looked far and wide," Food and Drug Administration Commissioner Robert Califf said at a recent agency panel on AI. "I don't think there is a single health system in the United States that is equipped to validate an AI algorithm once it has been integrated into a clinical care system."