
Establishing a Practical Way to Monitor AI in the Field

Assessing the value of artificial intelligence in healthcare to increase trust and open conversations by Elad Walach, CEO, Aidoc.

The International Data Corporation (IDC) has forecast that global spending on artificial intelligence (AI) will grow at a steep compound annual rate to reach US$52.2 billion by 2021. The cognitive AI use cases that will see the largest increases in total spending will be in medical diagnosis and treatment systems, along with the telecommunications industry and work automation. We are at the cusp of a revolution that is about to affect both the way people work and their quality of life, thanks to AI innovation in healthcare. The value of medical AI lies in its ability to analyze vast amounts of data to support more efficient diagnoses, increase the quality of care, and reduce costs.

It is commonly accepted that medical AI can have tremendous benefits. However, realizing its full potential hinges on establishing trust among all the parties involved (practitioners, patients, payers and regulators). This, in turn, requires reliable assessment and measurement of AI-based applications.

Defining collaboration in AI

AI has been making significant strides in the field of radiology, where detailed data analysis is essential for a successful diagnosis. For example, according to a study published in Radiology, an AI model was trained to identify normal chest radiographs with a 73% positive predictive value and a 99% negative predictive value. The question is how such a result would affect clinical practice. In this case, the overarching result was a significant reduction in average reporting delays for critical or urgent cases. However, this progress would have been impossible without close collaboration between physicians, AI vendors and service providers.
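Positive and negative predictive values are simple ratios over a confusion matrix: PPV is the fraction of positive calls that are correct, NPV the fraction of negative calls that are correct. The sketch below uses hypothetical counts chosen only to illustrate how figures like the study's 73% and 99% arise; the counts are not from the study itself.

```python
def predictive_values(tp: int, fp: int, tn: int, fn: int):
    """Return (PPV, NPV) from confusion-matrix counts.

    PPV = TP / (TP + FP): of all cases the model flags positive, how many are truly positive.
    NPV = TN / (TN + FN): of all cases the model flags negative, how many are truly negative.
    """
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative counts (not from the Radiology study) that yield ~73% PPV and ~99% NPV
ppv, npv = predictive_values(tp=73, fp=27, tn=990, fn=10)
print(f"PPV={ppv:.0%}, NPV={npv:.0%}")  # PPV=73%, NPV=99%
```

Note that both metrics depend on disease prevalence in the tested population, which is one reason the article's later question about generalizability across patient populations matters.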

At an expert panel hosted at the Aidoc booth at RSNA 2018, Jacques Gilbert, Senior Director of Strategy at Nuance Healthcare; Dr Bibb Allen Jr, Chief Medical Officer of the ACR Data Science Institute; Dr Axel Wismuller, Director of the AI Radiology Laboratory at URMC; and I discussed the need to validate AI in the clinical environment at various medical centers. This is essential to establish a high level of trust.

Dr. Bibb Allen pointed out that “as physicians, we want to be a part of the process that provides opportunities for developers to get their tools into clinical practice in a safe and effective way.” This is one of the reasons why the American College of Radiology Data Science Institute (ACR DSI) has introduced the Assess-AI initiative. It is designed to monitor algorithm performance in clinical practice by capturing real-world data during clinical use. Assess-AI provides reliable longitudinal algorithm performance data.

This data can be used by developers to enhance their offerings. Indeed, as a developer, it is essential for me to know whether our AI software fails somewhere (or sometimes) and whether we need to train it on more data from a specific setting.

In addition, it opens the door for regulatory monitoring (including FDA post-market requirements). This, in turn, may enable a more streamlined regulatory process, reducing time to market.

Finally, it allows clinicians to assess the value of the AI solutions and to devise optimal ways to integrate them into their clinical practice.  

As Dr. Allen suggests, this gives us an “opportunity to drive the progress through the metadata collected by our registry programmes.”

The infrastructure is already in place – reporting programmes such as Nuance’s PowerScribe 360 and M*Modal’s Fluency can feed information to the registries and allow for seamless monitoring of AI in clinical practice. Reports sent to the Assess-AI registry can be compared with actual patient outcomes, allowing for constant, real-time validation of AI systems. What’s exciting about this is the network effect – every institution and AI company adding its data to the validation process makes it more robust and trustworthy, enabling faster adoption of AI.
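The validation loop described here – AI findings cross-checked against eventually confirmed patient outcomes, with performance tallied continuously – can be pictured as a running registry. The sketch below is a hypothetical illustration of that idea; the class and method names are assumptions for this example, not the Assess-AI interface.

```python
import math
from collections import Counter

class PerformanceRegistry:
    """Accumulates (AI finding, confirmed outcome) pairs and reports live precision.

    Hypothetical sketch of registry-style monitoring; not an actual Assess-AI API.
    """

    def __init__(self):
        # Keys are (ai_positive, outcome_positive) pairs; values are case counts
        self.counts = Counter()

    def record(self, ai_positive: bool, outcome_positive: bool) -> None:
        """Log one case once its real-world outcome is confirmed."""
        self.counts[(ai_positive, outcome_positive)] += 1

    def ppv(self) -> float:
        """Running positive predictive value over all recorded cases."""
        tp = self.counts[(True, True)]
        fp = self.counts[(True, False)]
        return tp / (tp + fp) if (tp + fp) else math.nan

registry = PerformanceRegistry()
registry.record(ai_positive=True, outcome_positive=True)   # AI flag confirmed
registry.record(ai_positive=True, outcome_positive=False)  # AI flag refuted
print(registry.ppv())  # 0.5
```

The network effect the panel describes corresponds to many sites feeding one such tally: the more (finding, outcome) pairs are pooled, the tighter the estimate of each algorithm's real-world performance becomes.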

AI in radiology: a use case

The University of Rochester Medical Center is one of the early adopters of the Aidoc AI solution, and Dr. Axel Wismuller, Director of the AI Radiology Laboratory, emphasized the importance of collaboration in making this solution work.

“It is clear that radiology is going to change and nobody knows which products will be clinically useful in the long run so it has to be an interactive process of testing and discussion that takes effective things forward,” he said. “I started my career in AI in radiology 25 years ago and there has been no way to deploy the developments that we’ve published in academic papers. Now, this small startup, Aidoc, has opened the window to bring all these academic advances into the field.”

Dr. Wismuller has established several new studies investigating turnaround times and the integration of in-house developments to reinforce the value of learning and improving accuracy. “We want to know how these new technologies will affect our clinical workflow – will we get better or faster? This is something that has not been explored in the field yet.”

Can AI be trusted across multiple data acquisition systems and across diverse patient populations? Is it really generalizable? The goal is to ensure that AI doesn’t just work in a specific setting but in a variety of settings across imaging centers, emergency rooms and more. Collaboration across AI assessment, real-world monitoring and real-time performance is exactly what the industry needs to increase trust and open dialogues. Naturally, this data transparency will be a two-way street. Some AI applications may (and will) fail. On the other hand, the AI industry, as a whole, will blossom and become an integral part of the standard radiology workflow.

Another intriguing idea is to use Assess-AI databases for a new generation of evaluation of scientific publications. Obviously, there are many more hurdles to overcome. However, we believe that crucial first steps have been taken on the road to a universal, comprehensive and objective system for the clinical evaluation of AI systems. Given the wide range of potential benefits, one can only hope for fast, widespread adoption of the Assess-AI approach.

More on this can be found on the American College of Radiology blog.
