As a decision maker, you’re faced with an interesting dilemma when introducing computer-vision tools into your imaging department.
On one hand, the imaging department is often the ideal testing ground for innovative technologies, and a good place to understand their full potential value, whether that's improved clinical outcomes or reduced physician burnout.
On the other hand, there’s always the lingering question of how novel technologies may impact your standard of care. As such, the introduction of new technologies is a decision you can’t take lightly.
Based on my experience working with hundreds of hospitals and healthcare providers of all shapes and sizes around the world, here are three things you ought to consider when thinking about introducing AI-based tools into your environment:
Coming from a technical or analytical background, we tend to focus on the numbers; knowing that AI is all about statistics, we fixate on metrics like the sensitivity, specificity or PPV of an algorithm. But such a narrow focus misses the forest for the trees.
For instance, an algorithm applied to head CTAs and an algorithm applied to chest X-rays may have similar specificity, yet the former can be a solid value add to the workflow while the latter becomes a huge burden, yielding dozens of false alerts on a daily basis, simply because the chest X-ray service reads far more studies.
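The arithmetic behind this is easy to sketch. The study volumes, prevalence figures and specificity below are illustrative assumptions, not real service numbers, but they show how the same specificity produces very different daily false-alert counts:

```python
def expected_daily_false_alerts(daily_volume, prevalence, specificity):
    """Estimate daily false positives: the number of negative studies
    seen per day times the false-positive rate (1 - specificity)."""
    negatives_per_day = daily_volume * (1 - prevalence)
    return negatives_per_day * (1 - specificity)

# Hypothetical services: ~40 head CTAs a day versus ~500 chest X-rays,
# both analyzed by algorithms with 95% specificity.
head_cta = expected_daily_false_alerts(40, 0.10, 0.95)    # ~1.8 false alerts/day
chest_xr = expected_daily_false_alerts(500, 0.05, 0.95)   # ~23.8 false alerts/day
```

Same specificity on paper; one tool interrupts the reading room twice a day, the other two dozen times.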
Instead, ask what value you're looking to get after implementation. Is it triage? Do you want to measure turnaround times of positive cases before and after deployment? Do you hope to use AI to limit reliance on subspecialty reads during late-night shifts? Asking specific questions about the value you hope to get from your AI solutions is the perfect jumping-off point.
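The turnaround-time question, for example, reduces to a simple computation once you have timestamped report logs. This is a minimal sketch; the record fields are illustrative, not a reference to any specific RIS/PACS export:

```python
from datetime import datetime
from statistics import median

def median_turnaround_minutes(cases):
    """Median minutes from study acquisition to final report,
    restricted to positive cases."""
    deltas = [
        (c["reported_at"] - c["acquired_at"]).total_seconds() / 60
        for c in cases
        if c["positive"]
    ]
    return median(deltas)

# Hypothetical report-log extract.
cases = [
    {"positive": True,
     "acquired_at": datetime(2024, 1, 3, 9, 0),
     "reported_at": datetime(2024, 1, 3, 9, 45)},   # 45 min
    {"positive": True,
     "acquired_at": datetime(2024, 1, 3, 11, 0),
     "reported_at": datetime(2024, 1, 3, 12, 15)},  # 75 min
    {"positive": False,
     "acquired_at": datetime(2024, 1, 3, 13, 0),
     "reported_at": datetime(2024, 1, 3, 16, 0)},   # excluded
]
print(median_turnaround_minutes(cases))  # 60.0
```

Run the same computation on a pre-deployment and a post-deployment window and you have a concrete before/after answer instead of an impression.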
One of the greatest challenges of AI implementation is gaining the trust of physicians, and one of the most common reasons for a lack of trust is inconsistency. While any AI-based tool will occasionally produce false alarms and missed detections, what we often find more frustrating for clinicians are the cases in which the AI didn't run at all.
At the same time, ensuring that an algorithm's coverage is optimal can be a tricky task, as data standardization remains one of the greatest challenges facing medical imaging today. You want to address this question with your AI partner, and try to be as inclusive as possible when selecting the protocols that will be sent to the algorithm. A solid AI partner will have tools to select the images that qualify for analysis by their algorithm, rather than leaving that selection work to the physician.
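In practice, such selection tools are rule checks against DICOM header fields. The sketch below uses real DICOM attribute names (Modality, BodyPartExamined, SliceThickness), but the specific thresholds and the shape of the rules are illustrative assumptions, not any vendor's actual logic:

```python
# Hypothetical eligibility check: does a series' DICOM header match the
# protocols a head-CT algorithm was validated on?
ELIGIBLE_VALUES = {
    "Modality": {"CT"},
    "BodyPartExamined": {"HEAD"},
}
MAX_SLICE_THICKNESS_MM = 5.0  # illustrative cutoff

def series_is_eligible(header):
    """Return True if every tagged value is allowed and the slice
    thickness is within the validated range."""
    for tag, allowed in ELIGIBLE_VALUES.items():
        if header.get(tag) not in allowed:
            return False
    return header.get("SliceThickness", float("inf")) <= MAX_SLICE_THICKNESS_MM

print(series_is_eligible(
    {"Modality": "CT", "BodyPartExamined": "HEAD", "SliceThickness": 2.5}
))  # True
print(series_is_eligible(
    {"Modality": "CR", "BodyPartExamined": "CHEST"}
))  # False
```

The point is where this logic lives: upstream in the routing layer, maintained by the AI partner, not in the radiologist's head.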
A solid AI algorithm is not enough on its own; its value has to be introduced into the physician's workflow in a helpful way. A classic example is image transfer times: if data reaches the algorithm only after the physician needs the results, an amazing algorithm is rendered completely useless.
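One way to make the transfer-time problem measurable is to track how often the AI result actually arrived before the radiologist opened the study. This is a minimal sketch with made-up event fields (timestamps in minutes since acquisition), not a reference to any specific monitoring product:

```python
# Hypothetical integration metric: fraction of studies where the AI
# result was available before the radiologist opened the study.
def fraction_results_in_time(events):
    in_time = sum(
        1 for e in events if e["ai_result_at"] <= e["study_opened_at"]
    )
    return in_time / len(events)

events = [
    {"ai_result_at": 5,  "study_opened_at": 12},  # arrived in time
    {"ai_result_at": 30, "study_opened_at": 12},  # too late to help
]
print(fraction_results_in_time(events))  # 0.5
```

A tool whose results land in time for half the reads delivers half its promised value, however good its AUC looks on paper.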
Work with your AI partner to understand what the setup needs to look like for the tool to succeed, and consider how these metrics will be measured. A few setup quality metrics worth considering: