Source: Modern Healthcare
Article link: AI in healthcare raises need for guidelines to protect patients
Author: Gabriel Perna
Date: December 20, 2022
As health systems and insurance companies ramp up adoption of artificial intelligence and machine learning technology, experts fear clinical algorithms are not ready for prime time—with potential consequences for patient safety and outcomes.
The White House Office of Science and Technology Policy released a blueprint for an “AI Bill of Rights” in October to help healthcare and other sectors navigate the potential perils of the technology. But the development of AI in healthcare has greatly outpaced fledgling government efforts to control it. Some leading health systems are putting their own guardrails in place. Finding the tech talent needed to oversee an expansive, self-regulated AI division has proven challenging for others, however.
While 85% of healthcare leaders said in a Deloitte survey from June that they expect to increase their AI investments in the next year, only 57% said their organizations are prepared to handle failures or bad decisions stemming from the use of AI. Nearly half said they are not prepared to handle new or changing regulations concerning AI.
“It’s the Wild West and it’s becoming embedded in every organization,” said Keith Figlioli, partner at venture firm LRVHealth. Before joining LRV, Figlioli was a member of the Office of the National Coordinator for Health Information Technology’s Standards Committee for three years.
He said guidelines should be enacted around how an AI model is introduced, tested and applied to various demographics, and health systems should publicly clarify what protocols are in place when something goes wrong.
“The promise of AI is so great that we sometimes forget these algorithms are programmed by people. And assumptions are made by people,” said Dr. Vindell Washington, chief clinical officer for Alphabet’s life sciences firm, Verily, and a former national coordinator for health IT. Washington noted such assumptions could lead to faulty information being fed to models.
“Even if the algorithms are working perfectly, the places and sources of data are sometimes imperfect as they’re collected and delivered,” he said.
AI Bill of Rights
Over the past few years, several studies have raised potential issues with using AI in healthcare settings.
A 2019 analysis published in the journal Science found that a commercial algorithm from Optum, used by a health system to select patients for a care management program, assigned less healthy Black patients the same risk scores as healthier white patients, meaning Black patients were less likely to be identified as needing extra care.
An Optum spokesperson said in a statement that the algorithm is not racially biased and that the researchers mischaracterized a cost prediction algorithm based on one health system’s incorrect, unrecommended use of the tool.
“The algorithm is designed to predict future costs that individual patients may incur based on past healthcare experiences and does not result in racial bias when used for that purpose—a fact with which the study authors agreed,” the spokesperson said.
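The disagreement comes down to what the score is asked to predict. The study's core argument, that a model predicting future cost can understate the needs of patients whose historical spending is lower for reasons unrelated to illness, can be illustrated with a small synthetic sketch. Nothing below reflects Optum's actual model; the scoring function, weights and patient figures are invented for illustration.

```python
# Illustrative only: synthetic numbers, not Optum's model or data.
# A score trained to predict future cost ranks patients by expected
# spending, not by underlying illness. Two equally sick patients can
# land in different risk tiers if one has historically generated
# lower costs (for example, because of unequal access to care).

def cost_based_risk_score(past_annual_cost, chronic_conditions):
    """Toy score dominated by past spending, lightly weighted by illness."""
    return 0.8 * (past_annual_cost / 10_000) + 0.2 * chronic_conditions

# Two hypothetical patients with the same illness burden.
patient_a = {"past_annual_cost": 12_000, "chronic_conditions": 4}
patient_b = {"past_annual_cost": 7_000, "chronic_conditions": 4}

score_a = cost_based_risk_score(**patient_a)  # 1.76
score_b = cost_based_risk_score(**patient_b)  # 1.36

print(f"Patient A: {score_a:.2f}  (more likely to be flagged for care management)")
print(f"Patient B: {score_b:.2f}  (lower score despite identical illness burden)")
```

In this framing, the score can be an accurate cost predictor, as Optum maintains, and still understate clinical need if that cost prediction is treated as a measure of who requires extra care.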
In 2021, researchers at the University of Michigan Medical School published a peer-reviewed study that found a widely used sepsis prediction model from electronic health record giant Epic Systems failed to identify 67% of people who had sepsis. The model also increased sepsis alerts by 43%, even though the hospital's overall patient volume decreased by 35% in the early days of the pandemic. Epic did not make the team that worked on the AI sepsis model available for an interview.
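As context for those figures, missing 67% of cases is another way of saying the model's sensitivity was roughly 33%, and a 43% rise in alerts against a 35% drop in patient volume implies alerts per patient roughly doubled. A minimal arithmetic sketch, using hypothetical counts rather than the study's data:

```python
# Illustrative arithmetic only; the case counts below are hypothetical.

# Sensitivity: the share of chart-confirmed sepsis cases the model alerted on.
true_sepsis_cases = 300
cases_alerted_on = 99
sensitivity = cases_alerted_on / true_sepsis_cases
print(f"Sensitivity: {sensitivity:.0%}")        # ~33%, i.e. ~67% of cases missed

# Alert burden: 43% more alerts while patient volume fell 35%
# means each remaining patient generated far more alerts.
alert_change = 1.43    # alerts rose 43%
volume_change = 0.65   # patient volume fell 35%
print(f"Alerts per patient: ~{alert_change / volume_change:.1f}x the prior rate")
```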
The White House Office of Science and Technology Policy included both instances, without naming the companies, in a report accompanying its “AI Bill of Rights” blueprint, meant as a guidance for multiple industries.
While the framework does not have an enforcement mechanism, it includes five rights to which the public should be entitled: Algorithms should be safe and effective, be nondiscriminatory, be fully transparent, protect the privacy of those they affect and allow for alternatives, opt-outs and feedback.
Jeff Cutler, chief commercial officer at Ada Health, a healthcare AI company offering symptom checking for patients, said his organization follows the five principles when developing and deploying algorithms.
“It’s really important that the industry takes the ‘Bill of Rights’ very seriously,” Cutler said. “It’s important that users and enterprises embracing these platforms are asking the right questions around clinical efficacy, accuracy, quality and safety. And it’s important that we’re being transparent with users.”
But experts say real regulation is needed to make a difference. Although the Food and Drug Administration is tasked with overseeing software as a medical device, including AI, experts say the agency has a hard time responding to the increasing number of algorithms that have been developed for clinical use. Congress could step in to define AI in healthcare and outline mandatory standards for health systems, developers and users.
“There’s going to have to be enforcement and oversight in order to ensure that algorithms are being developed with discrimination, bias and privacy in mind,” said Linda Malek, chair of the healthcare practice at law firm Moses & Singer.
Dr. John Halamka, president of Mayo Clinic Platform, a portfolio of businesses from the Rochester, Minnesota-based health system focused on integrating new technologies, including AI, into healthcare, said more policies may be on the way.
The Office of the National Coordinator is expected to coordinate much of the regulatory guidance from various government agencies, including the FDA, the Centers for Disease Control and Prevention and the National Institutes of Health, as well as federal agencies outside HHS, said Halamka, who has advised ONC and the federal government on numerous healthcare technology initiatives but is not directly involved with oversight.
Halamka expects significant regulatory and subregulatory guidance within the next two years.
DIY guardrails
In the absence of robust regulation or legislation, larger health systems at the leading edge of deploying algorithms are creating their own safety and efficacy standards.
“I think we’re at a time where you’re going to see early adopters move forward with an understanding regulation is to come,” Halamka said.
HCA Healthcare, a Nashville, Tennessee-based for-profit health system operating in 19 states, has developed AI models for administrative tasks, physician documentation and sepsis detection.
Dr. Michael Schlosser, senior vice president for care transformation and innovation, said the system does not prioritize what he sees as higher-risk AI applications: true clinical decision support, in which algorithms determine therapies and diagnoses.
“I’m not saying that we’re not interested in that at all, but we’re very much focused on what I would say is the lower-hanging fruit,” Schlosser said.
“So, use [AI] to eliminate redundancy: to automate tasks that are not those high-level decision-making tasks, but simple, administrative tasks to make the hospital run more efficiently,” he said.
HCA employs a dedicated staff member to advise the system on ethical ramifications. It has also worked with Deloitte and Google to develop best practices and ensure algorithmic quality.
“This is a relatively new space for healthcare, but we can learn a lot from others that have gone before us,” Schlosser said.
He resisted the idea of additional government oversight, saying norms will develop over time.
“I don’t know that we need additional entities coming in and then providing us additional support,” Schlosser said. “I think that the industry groups … coming together, combined with the regulation we already have, gives us a lot of guardrails.”
Dr. Emily Webber, chief medical information officer at Indianapolis-based Indiana University Health and Riley Children’s Health, noted best practices are still being developed.
“If you look in the literature about where AI tools have been validated for use in clinical areas, it’s not universal” in terms of proven applications, Webber said.
Webber’s team has adopted what she called a “do no harm approach,” which includes consulting with physicians on the appropriate use cases before deploying models.
The health system has implemented tools, such as sepsis detection, intended to augment decision-making rather than alter the patient-provider relationship. Its software can flag a patient who has not yet received a flu shot, for example.
“I think a lot of us would appreciate, maybe not a lot of restrictive rules, but I think some guardrails and some universal standards,” Webber said.
A step further
While many providers are focused on developing models that prompt care teams to consider addressing certain conditions, others are taking the technology a step further.
According to Halamka, Mayo Clinic has developed diagnostic algorithms that can more effectively detect polyps on colonoscopy images.
Nationally, he said, the rate at which physicians miss findings on colonoscopies is around 20%. But the model his team developed reduced the miss rate to around 3%, he said.
A peer-reviewed study published earlier this year in the journal Gastroenterology found the use of AI at eight facilities halved the miss rate of colorectal neoplasia, such as polyps. The authors, who included a gastroenterologist at Mayo Clinic Jacksonville, supported the use of AI to reduce human error in detecting small lesions.
“A human performing a colonoscopy without augmented intelligence, I wouldn’t call it malpractice, but you know … maybe in the future, [it] won’t be standard of care,” Halamka said.
The health system has also developed cardiology algorithms to predict future heart disease progression.
Mayo is working with other providers to develop AI standards, Halamka said. He pointed to its partnership to share de-identified patient information with Mercy, announced this summer.
According to Halamka, the patient populations served by St. Louis-headquartered Mercy—which has hospitals in Missouri, Oklahoma, Arkansas and Kansas—are different from Mayo’s, enabling algorithms to be trained on richer and more diverse data sets.
Other health systems have also attempted to bring AI further into care delivery. Phoenix Children’s Hospital developed a model using patients’ medications, lab results and visit history, in addition to their body mass index, to help detect malnutrition and order a consult with a nutritionist.
“Our algorithm was just as effective in spotting those patients that a physician was ordering the consult by themselves,” said David Higginson, executive vice president and chief innovation officer at Phoenix Children’s Hospital.
After running the model in stealth mode, in the background and out of physicians' view, Higginson found it was between 60% and 80% effective at identifying patients presenting with malnutrition. Clinicians, other executives and the hospital's legal team agreed it should be deployed, so long as it continued producing similar results.
“Let the AI … order the consult as if it was the physician,” Higginson said. “The nutritionist doesn’t know either way, they just show up. And we’ve been retaining that 60% to 80% accuracy.”
Higginson said the implementation has led to malnutrition diagnoses for an additional six to eight children per week, out of an average 25 to 30 weekly cases.
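The silent-trial workflow Higginson describes, logging what the model would have done and comparing it with the consults physicians actually ordered before letting it place orders on its own, can be sketched roughly as below. The field names, records and numbers are hypothetical, not Phoenix Children's implementation.

```python
# A minimal sketch of a "stealth mode" evaluation, with hypothetical
# field names and toy records; not Phoenix Children's actual system.
from dataclasses import dataclass

@dataclass
class Encounter:
    patient_id: str
    model_flags_malnutrition: bool   # the model's silent prediction
    physician_ordered_consult: bool  # what actually happened

def shadow_mode_agreement(encounters):
    """Of encounters where a physician ordered a nutrition consult,
    return the fraction the model also flagged."""
    ordered = [e for e in encounters if e.physician_ordered_consult]
    if not ordered:
        return 0.0
    return sum(e.model_flags_malnutrition for e in ordered) / len(ordered)

# Toy log collected while the model ran silently in the background.
log = [
    Encounter("p1", True, True),
    Encounter("p2", False, True),
    Encounter("p3", True, True),
    Encounter("p4", True, False),   # the model flags a case no consult was ordered for
    Encounter("p5", False, False),
]

print(f"Agreement with physician-ordered consults: {shadow_mode_agreement(log):.0%}")
```

Only once agreement stayed in an acceptable range during the silent period would the model be allowed to order consults itself, the step Higginson describes above.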
“Malnutrition is one of 5,000 diagnoses we make at the hospital,” Higginson said. “How could it always be top of mind?”
Higginson and other experts said smaller hospitals will have a hard time attracting talent and resources to replicate this kind of AI implementation.
“I would say for the big to medium-sized hospitals, this is tenable. For our small community hospitals or our small practices, this is not ready yet,” he said. “I don’t know that it will be for a while yet.”
Talent and trust
Over the next 10 years, Halamka predicted, clinical and administrative AI will run in the background, without hurdles, at many health systems.
But the road there could be complicated. Providers seeking to expand their AI capabilities are the same ones squeezed by rising labor costs. Many find themselves competing for expensive employees who can implement high-quality models and ensure patient safety.
“We don’t have the luxury of having data scientists on staff,” said Higginson, who is the only employee dedicated to developing and implementing AI at Phoenix Children’s.
“It’s very hard to recruit and retain those people. If you can get them, they tend to last for 18 months before they move on to something else,” he said.
Higginson has largely given up searching for prospective employees who possess both technical skills and an understanding of healthcare.
“I don’t even know if that mythical unicorn exists,” Higginson said. “So, we’re faced with either taking on someone with hospital knowledge and teaching the data science, or [taking] a data scientist and [teaching about] the hospital.”
Some leaders say partnerships with larger systems or technology companies can help with staffing challenges.
“Hiring and retaining the machine learning professionals in a healthcare setting is hard, so that’s why it’s so important to partner with industry and startups that can hire and retain these experts,” Halamka said.
Partnerships can also help providers understand the legal risks of implementing clinical AI, experts said. Although laws and regulations in the space are still taking shape, Malek said a hospital should form a joint venture with any vendor it uses to develop AI and contractually ensure the health system is protected from legal liability in case of errors.
Beyond finding talent and forming partnerships, experts said building support for technology—by explaining its tangible benefits to the teams who will be working with a model, for example—is paramount to deploying an AI tool that can improve outcomes.
“The biggest overarching challenge is gaining provider trust, or clinical trust,” said Suchi Saria, an assistant professor at Johns Hopkins University and CEO at Bayesian Health, a healthcare AI platform.
“Ultimately, you can have the best technology in the world, but if [care teams] don’t trust it, they won’t use it, and you can’t see any benefit,” Saria said.