HEALTH
Regenstrief VP co-authors National Academy report on AI's potential to improve health
Seminal report focuses on hope, hype, promise, and peril of AI use in medical arena
A Regenstrief Institute research scientist and vice president is a key contributor to a groundbreaking new publication exploring opportunities, issues and concerns related to artificial intelligence and its role in improving human health.
Eneida Mendonca, M.D., Ph.D., an expert in natural language processing, machine learning, predictive analytics and AI adoption, is a co-author of "Artificial Intelligence in Health Care: The Hope, The Hype, The Promise, and The Peril," a National Academy of Medicine (NAM) Special Publication. Other authors are from Harvard University, the Mayo Clinic, Johns Hopkins, Stanford University, Columbia University, Vanderbilt University and the Gates Foundation.
In this seminal report, Dr. Mendonca is a co-lead author of the chapter that considers the potential tradeoffs and unintended consequences of AI (chapter 4) and a co-author of the chapter that explores deploying AI in clinical settings (chapter 6). Each chapter includes recommendations.
Regenstrief Institute President and Chief Executive Officer Peter Embí, M.D., who participated in a high-profile NAM conference on artificial intelligence and health care in November, said, "The potential for AI is enormous and we are optimistic about AI's promise to bring improvements in health, but we must also be cautious, thoughtful and act wisely. As Dr. Mendonca and co-authors write in the NAM report, 'though there is much upside in the potential for the use of AI in medicine, like all technologies, implementation does not come without certain risks.'"
To mitigate these risks, Dr. Mendonca and her NAM report co-authors call for transparency in the collection and use of the data that drive AI solutions, as well as in the design of the complex computational processes that make AI possible. For example, the report suggests considering the establishment of review bodies to oversee AI in medicine.
The authors also underscore workforce issues, noting that advanced technologies will almost certainly change roles. "Instead of trying to replace medical workers, the coming era of AI automation can instead be directed toward enabling a broader reach of the workforce to do more good for more people, given a constrained set of scarce resources," they write. Turning to security issues, the report encourages development of AI systems resistant to misuse by bad actors.
In a section titled "Net Gains, Unequal Pains," the authors caution that "if high tech healthcare is only available and used by those already plugged in socioeconomically, such advances may inadvertently reinforce health care disparity."
Turning to AI deployment, Dr. Mendonca and co-authors believe that AI will play a major role in the image interpretation processes of radiology, ophthalmology, dermatology and pathology, as well as in the signal processing used in ECG, audiology and EEG tests. They also note AI's potential to "assist with prioritization of clinical resources and management of volume and intensity of patient contacts, as well as targeting services to patients most likely to benefit." They see great opportunity for AI in areas outside the point of care, including population health and the management of administrative tasks.
"We need to focus on clinical safety and carefully monitor uses and outcomes after implementation as we integrate AI within our electronic medical record systems," said Dr. Mendonca. "As we wrote in the National Academy report, 'Virtually none of the more than 320,000 health apps currently available and which have been downloaded nearly 4 billion times, has actually been shown to improve health.' "
"While being beneficial, AI has the potential to create unintended consequences so must be subject to regulation and be ethically implemented. A regulatory framework would be better established proactively, rather than in response to specific issues," said Dr. Mendonca. "Health systems must take steps to ensure the technology is enhancing care for all patients. System leaders must make efforts to avoid introducing social bias into the use of AI applications, which includes demanding transparency in the data collection and algorithm evaluation process. General IT governance structures must be adapted to manage AI and, if possible, the technology should be used in the context of a learning health system so its impact can be constantly evaluated and adjusted to maximize benefit."
"We are in the early developmental stages with many hurdles to overcome, but AI clearly holds immense potential in the healthcare arena," said Dr. Embí. "Leveraging AI to focus on clinical safety and effectiveness as well as stakeholder and user engagement are of paramount importance as are continual monitoring and evaluation. The bottom line is humans need to be intelligent about artificial intelligence."
NAM noted in a press statement, "AI has the potential to revolutionize health care. However, as we move into a future supported by technology together, we must ensure high data quality standards, that equity and inclusivity are always prioritized, that transparency is use-case-specific, that new technologies are supported by appropriate and adequate education and training, and that all technologies are appropriately regulated and supported by specific and tailored legislation."
The NAM special publication on AI is viewed as a reference document for all stakeholders involved in AI, health care or the intersection of the two. It emphasizes caution in the implementation of this technology, prioritization of human connections between clinicians and patients, and an unwavering focus on equity and inclusion. NAM, established in 1970 as the Institute of Medicine, is an independent organization of eminent professionals from diverse fields including health and medicine; the natural, social, and behavioral sciences; and beyond. The new publication and associated resources can be downloaded at http://www.nam.edu/AIPub.