The challenge of validating AI for healthcare

There is a lot of excitement in healthcare about the use of artificial intelligence (AI) to improve clinical decision making.

Pioneered by the likes of IBM Watson Health and DeepMind Health, AI promises to help specialists diagnose patients more accurately. Two years ago, McKinsey co-produced a report with the European Union’s EIT Health to explore the potential of AI in healthcare. Among the key opportunity areas the report’s authors identified were healthcare operations, diagnostics, clinical decision support, triage and diagnosis, care delivery, chronic care management and self-care.

“First, the solutions have the potential to address the low-hanging fruit of routine, repetitive and largely administrative tasks, which absorb a significant amount of doctors’ and nurses’ time, optimizing healthcare operations and increasing uptake,” they wrote. “In this first phase, we will also include AI applications based on imaging, which are already being used in specialties such as radiology, pathology and ophthalmology.”

The world of healthcare AI has not stood still, and in June the European Parliament published a study, Artificial Intelligence in Healthcare, focusing on applications, risks, and ethical and social implications. The paper’s authors recommend that risk assessment of AI should be domain-specific, as clinical and ethical risks vary across medical fields such as radiology or pediatrics.

The paper’s authors wrote: “In future regulatory frameworks, the validation of medical AI technologies should be harmonized and strengthened so that multifaceted risks and limitations can be assessed and identified, not only by assessing model accuracy and robustness, but also algorithmic justification, clinical safety, clinical acceptability, transparency and traceability.”

Validation of medical AI technology is the main focus of research conducted at Erasmus University Medical Center (Erasmus MC) in Rotterdam. Earlier this month, Erasmus MC began working with health technology company Qure.ai to launch an AI Innovation Lab for Medical Imaging.

The initial program will run for three years and conduct detailed research on abnormality detection by AI algorithms for infectious and non-infectious disease conditions. The researchers hope to understand the potential use cases of AI in Europe and guide clinicians on best practices for adopting technology specific to their needs.

Jacob Visser, radiologist, chief medical information officer (CMIO) and assistant professor of value-based imaging at Erasmus MC, said: “It is important to realize that we have big challenges, such as an aging population, and we have a lot of technology that needs to be used in a responsible manner. We are investigating how we can bring value to clinicians and patients using new technologies and how we can measure those advances.”

In his role as CMIO, Visser serves as a bridge between the medical side and technologists. “As a medical professional, the CMIO wants to steer IT in the right direction,” he said. “Physicians are interested in the possibilities that IT can offer. New technological developments trigger medical people to see more opportunities in areas such as precision medicine.”

Erasmus MC will run the laboratory, conducting research projects using Qure’s AI technology. The initial research project will focus on musculoskeletal and chest imaging. Visser says that when evaluating AI models, “it’s easy to verify that a fracture is correctly identified”.

This makes it possible to evaluate how well the AI copes, allowing researchers to gain meaningful insight into how often it misses an actual fracture (a false negative) or incorrectly classifies an X-ray as showing a fracture (a false positive).
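To make that concrete, the kind of tally involved might look like the minimal Python sketch below. It is purely illustrative and not drawn from the Erasmus MC or Qure.ai work; the function name, labels and predictions are hypothetical, assuming radiologist readings as binary ground truth (1 = fracture, 0 = no fracture).

# Illustrative sketch only: tallies errors for a hypothetical fracture-detection
# model against radiologist ground truth (1 = fracture, 0 = no fracture).
def evaluate_fracture_model(ground_truth, predictions):
    pairs = list(zip(ground_truth, predictions))
    tp = sum(1 for g, p in pairs if g == 1 and p == 1)
    tn = sum(1 for g, p in pairs if g == 0 and p == 0)
    fn = sum(1 for g, p in pairs if g == 1 and p == 0)  # missed fractures (false negatives)
    fp = sum(1 for g, p in pairs if g == 0 and p == 1)  # false alarms (false positives)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # share of real fractures caught
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # share of normal scans cleared
    return {"false_negatives": fn, "false_positives": fp,
            "sensitivity": sensitivity, "specificity": specificity}

# Toy example with made-up labels
print(evaluate_fracture_model([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))

A real evaluation would run over much larger, clinically curated datasets, but the false-negative and false-positive counts are exactly the quantities the researchers describe.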

Discussing the level of scrutiny placed on the use of AI in healthcare, Visser said: “Medical algorithms need to be approved, for example by the Food and Drug Administration [FDA] in the US, or achieve CE certification in Europe.”

Looking at the partnership with Qure.ai, he added: “We see AI adoption in healthcare at a critical juncture, with clinicians seeking expert advice on how to best evaluate technology adoption. In Qure’s work to date, it is clear that they have gathered detailed insights into the effectiveness of AI in healthcare settings and together we will be able to evaluate effective use cases in European clinical environments.”

But there are plenty of challenges in using AI for healthcare. Even if an algorithm is approved by the FDA or CE certified, that doesn’t mean it will work in local clinical practice, Visser said. “We need to ensure that the AI algorithm meets the needs of our local practice,” he added. “What are the clinically relevant parameters that may be affected by the results produced by AI?”

The challenge is that a healthcare AI algorithm is developed on a specific dataset. Consequently, the resulting model may not be representative of actual patient data in the local community. “When you externally validate an algorithm, you see performance degradation,” Visser said.

This is analogous to pharmaceutical trials, where side effects may vary between populations. The pharmaceutical sector monitors usage, which feeds into the product development cycle.
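As a purely illustrative sketch of the external validation Visser describes, the comparison might be expressed in Python as follows. None of this comes from the Erasmus MC or Qure.ai work; the model, the datasets and the use of plain accuracy as the metric are all hypothetical placeholders.

# Illustrative sketch only: compares a model's accuracy on its internal
# (development) test set against an external (local hospital) dataset
# to quantify the performance degradation seen in external validation.
def accuracy(ground_truth, predictions):
    return sum(1 for g, p in zip(ground_truth, predictions) if g == p) / len(ground_truth)

def external_validation_report(internal_set, external_set, model):
    # Each dataset is a list of (features, label) pairs; `model` is any callable
    # returning a 0/1 prediction. Both are hypothetical stand-ins here.
    internal_acc = accuracy([y for _, y in internal_set], [model(x) for x, _ in internal_set])
    external_acc = accuracy([y for _, y in external_set], [model(x) for x, _ in external_set])
    return {"internal_accuracy": internal_acc,
            "external_accuracy": external_acc,
            "degradation": internal_acc - external_acc}

# Toy usage with a dummy threshold "model" and made-up data
toy_model = lambda x: 1 if x > 0.5 else 0
internal = [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0)]
external = [(0.55, 0), (0.4, 1), (0.8, 1), (0.3, 0)]
print(external_validation_report(internal, external, toy_model))

The size of the degradation figure is what tells a local team whether an approved algorithm still meets the needs of their own practice.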

Looking ahead to the research he hopes will come out of the new lab, Visser said: “I hope, within a year, to show that the algorithms work, to prove the accuracy of their diagnoses, and I hope we start evaluating how these algorithms work in daily clinical practice.”


