Less burnout for doctors, better clinical trials, among the benefits of AI in health care

  • Artificial intelligence has traditionally been used to make health care safer and better. Now generative AI is making efficiency a priority.
  • A recent study found that using AI to generate draft replies to patient inbox messages reduced burden and burnout scores for medical professionals, even though they spent the same amount of time on the task.
  • AI-enabled solutions are on the horizon for efficiently matching potential participants to clinical trials, expediting drug development, and completing the time-consuming aspects of translating documents for non-English speaking patients and trial participants.

Over the last few decades, traditional artificial intelligence has largely been in service of making health care safer and better (the Institute of Medicine's 2000 report "To Err Is Human" estimated that nearly 100,000 people died annually from medical errors in hospitals). It is only its successor, generative AI, that has made efficiency a priority.

Nvidia, known primarily as a hardware and chip company, has been working to optimize the health care space for 15 years. Kimberly Powell, Nvidia's vice president of health care, and her team build domain-specific applications for health care, including in the realm of imaging, computing, genomics and drug discovery, under the umbrella of the "Clara" suite.

"It's really just taking these mini applications, wiring them up so that they can perform and deliver a valuable service to an end market," said Powell.

Health care is one of the largest data industries, Powell says. Naturally, it is also a massively regulated industry, and new tools must be brought to market with care.

"Some come at it from the idea that we're late to the game. I'm not sure that's true," said Dr. Josh Fessel, director of the office of translational medicine at the National Institutes of Health. "You're dealing with human beings and you have to be incredibly careful with issues of privacy, security, transparency."

Translational medicine, Fessel's bread and butter, is how a good idea becomes a thing that is actually poised to help people. In that pursuit, AI is the quest of the moment.

AI is already being deployed to streamline contact centers, modernize code to make institutions cloud native and create documents to help reduce medical burnout (the career-ending burnout Adam Kay describes in his memoir "This Is Going To Hurt" is not an anomaly). However, the thing about document creation, says Dr. Kaveh Safavi, senior managing director for Accenture's global health care business, is that medical professionals must learn to verbalize their findings in the exam room. "That's all part of the reality," he said. "The technology requires the human to change in order to gain the benefit."

A March study found that using AI to generate draft replies to patient inbox messages reduced burden and burnout scores in medical professionals, but didn't reduce the amount of time they spent on this task. But time is not the only factor that's important, Fessel says.

AI to address nursing shortage

Meanwhile, AI-enabled solutions are on the horizon for efficiently matching potential participants to clinical trials, expediting drug development, and completing the time-consuming aspects of translating documents for non-English speaking patients and trial participants. Safavi says that globally, the nursing shortage is the biggest problem in health care (an Accenture report calls it a "global health emergency"), and he anticipates new technologies will begin to deploy within the next year to address this pressing concern.

Amid all this, there are still kinks to work out. For example, the Clinical & Translational Science Award (CTSA) Program for the Mount Sinai Health System found in October that predictive models that use health record data to determine patient outlooks end up influencing the real-world treatments that providers give those patients. In other words, if the algorithm does what it's supposed to do, it changes the data; it then operates on data different from what it learned, and its predictions become less accurate. "It changes its own world, basically," said Fessel. "It raises the question: What does continuing medical education for an algorithm look like? We don't know yet."

To combat knowledge gaps like this, Fessel argues for a team approach across institutions. "Sharing what we're learning is absolutely vital," he said. Having a chief AI officer in place at a health institution can be helpful as long as they are empowered to bring in other brains and resources, he says.

Nvidia practices this by partnering with a range of organizations to deploy "microservices," or software that integrates into an institution's existing applications. In addition to helping navigate evolving regulatory terrain (like looming requirements for software as a medical device, or SaMD, per the U.S. Food & Drug Administration), this approach puts transformation within closer reach. For example, Nvidia partnered with a company called Abridge on one of its first applications, which integrates into the electronic health record system Epic to streamline medical summaries.

Meanwhile, Nvidia is collaborating with Medtronic, which uses computer vision to identify 50% more potentially cancer-causing polyps in colonoscopies. And in tandem with the Novo Nordisk Foundation, it is developing a national center for AI innovation in Denmark that will house one of the most powerful AI supercomputers in the world.

Right now, what provider organizations are largely prioritizing is getting ready for generative AI, says Safavi. This includes getting their technology house in order to prepare for cloud-native tools that need to be able to access the data.

A human is the 'last mile'

This also involves developing a responsible AI posture that protects privacy and intellectual property but dissuades the use of technology for diagnosis, Safavi said. "We want the human to be the last mile of the judgment," he said.

Safavi said his biggest fear of AI in the health care space is that organizations won't employ policies against technological diagnosis, and something bad will happen as a result. "There's a reason to be proactive around putting boundaries," he said. "In the absence of that, a bad outcome is likely to result in an overly generalized regulatory schema, which none of us benefits from."

In March, the European Union adopted the Artificial Intelligence Act, which addresses AI safeguards, biometric identification systems, social scoring and rights to complaint and transparency. Safavi has worked in about 25 countries over the last 15 years and says any regulatory system the U.S. adopts will likely reflect that of the E.U., but we're not there yet.

Even with all these evolutions, there is still so much unknown about how various health conditions develop and the role the environment plays. "To pretend that there are no black boxes in medicine is not true," said Fessel. Redefining how health care operates gives the field an opportunity to re-examine many fundamental ideas about how we deliver care and learn new things, he adds. "That, to me, is one of the things that makes it so potentially transformative."

Copyright CNBC