An often overlooked value proposition of AI in radiology is its potential to mitigate bias in interpreting medical images. Bias, whether unconscious or systemic, has long been a challenge in the medical profession, including in radiology.
Medical errors are a substantial contributing factor to patient morbidity and mortality. Radiology, as a diagnostic tool, plays a pivotal role in modern healthcare, offering the ability to provide precise diagnostic information for the treating clinical team.
However, radiologists are susceptible to diagnostic errors, defined as an incorrect or missed diagnosis. To understand the complex nature of diagnostic errors in radiology, one needs to examine human decision-making processes within the context of heuristics and biases.
In a paper published in Radiographics, Lindsay P. Busby, Jesse L. Courtier and Christine M. Glastonbury shared that it’s important to understand the dual-process framework set forth by Amos Tversky and Daniel Kahneman in 1974. Kahneman, who won the Nobel Prize in 2002, posited that humans process information using two systems: Type 1, which is fast, intuitive and largely automatic, and Type 2, which is slower, deliberate and analytical.
With this framework in mind, we can explore the mechanics behind a missed diagnosis in the reading room. As an experienced neuroradiologist analyzes a traumatic brain CT, their mind kicks into Type 1 thinking. Because of their familiarity with the subject matter and the muscle memory built from reading this type of scan again and again, their brain runs on near autopilot, empowering them to quickly reach a diagnosis and move on to the next scan on their list.
Within the framework of Type 1 thinking, the radiologist is prone to systematic errors and cognitive bias: heuristics take over, the search pattern can lapse, and reading becomes more intuitive than analytical. In stark contrast, an inexperienced radiologist reading a similar scan may take longer to complete their search pattern for a traumatic brain CT because they lean on Type 2 thinking to reach a diagnosis.
Granted, reaching a radiologic diagnosis requires a combination of Type 1 and Type 2 thinking. Nevertheless, mistakes are made in the reading room when physicians aren’t aware of the biases built into their individual heuristics.
Another form of cognitive bias in radiology is satisfaction of search. Through residency, radiologists are trained to develop a specific search pattern for every imaging modality and body part, so that their mind adheres to a strict formula when examining any patient. However, once a radiologic finding is made, a radiologist may unintentionally relax their attentiveness for the rest of the search pattern, relying on the notion that they have already made the pertinent finding in the scan. Unbeknownst to them, additional important findings remain in the scan, missed because of this cognitive bias.
Imaging data is enormous. As the radiologist sifts through slices of an MRI, they’re searching for an answer to a clinical question. Anchoring bias occurs when the radiologist stays loyal to the first diagnostic impression they formed while reading the scan, ignoring pertinent radiologic information that appears later in the imaging sequences. Consequently, imaging data presented early in the search process may sway the radiologist toward a given diagnosis.
Alliterative error occurs when a radiologist formulates a diagnosis that mirrors the one on the previous report. The bias results in the radiologist perpetuating the same clinical framework and diagnosis without entertaining a novel interpretation of the images they’re currently analyzing. In a study published in the American Journal of Roentgenology, Young W. Kim and Liem T. Mansfield reported that alliterative error is the fifth most common cause of diagnostic errors.
Bias in radiology can manifest in a variety of forms, and for a more academic analysis of this topic I recommend the Radiographics review article by Dr. Lindsay Busby and colleagues cited above.
For more than a decade, AI has revolutionized the way radiologists interpret medical images. By leveraging deep learning algorithms, AI systems help identify patterns and abnormalities in images that might not be immediately apparent to the human eye.
Beyond diagnostic accuracy, AI offers the ability to reduce the impact of cognitive biases by providing an objective second opinion, improving standardization in radiological assessments, and eliminating human limitations like fatigue.
At the core of diagnostic errors generated by Type 1 thinking is human decision fatigue. Radiologists need a form of metacognition to pull themselves back from operating solely within the Type 1 framework. One way AI can help prevent bias in radiology is by ensuring a consistent element of Type 2 thinking on each and every scan.
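As a rough illustration of what that “consistent element of Type 2 thinking” could look like in practice, the sketch below compares an algorithm’s flagged findings against the findings mentioned in a radiologist’s draft report and surfaces any discrepancy for a deliberate second look. The data structures and function names here are hypothetical, intended only to make the concept concrete; they do not describe any particular product’s workflow.

```python
from dataclasses import dataclass

@dataclass
class StudyReview:
    study_id: str
    ai_findings: set[str]       # findings flagged by the algorithm
    report_findings: set[str]   # findings mentioned in the radiologist's draft report

def findings_needing_second_look(review: StudyReview) -> set[str]:
    """Return AI-flagged findings that are absent from the draft report.

    Each discrepancy prompts a deliberate, analytical (Type 2) re-read
    before the report is finalized.
    """
    return review.ai_findings - review.report_findings

# Hypothetical example: the algorithm flags a subdural hematoma the draft omits.
review = StudyReview(
    study_id="CT-head-001",
    ai_findings={"intracranial hemorrhage", "subdural hematoma"},
    report_findings={"intracranial hemorrhage"},
)
print(findings_needing_second_look(review))  # {'subdural hematoma'}
```

The point of the sketch is not the code itself but the design choice it represents: the algorithm never replaces the radiologist’s read; it simply guarantees that an analytical cross-check happens on every study, regardless of fatigue or time of day.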
In addition, the analysis of pixels by an AI algorithm, though susceptible to its own forms of bias, may alleviate some of the aforementioned forms of cognitive bias. With an AI solution engineered and trained on large, diverse datasets, the radiologist can trust that the algorithm’s processing functions as a consistent, purely analytical check, the machine equivalent of Type 2 thinking. Algorithms with a near 100% negative predictive value, trained on hundreds of thousands of prior cases, may offer radiologists an objective second opinion that is free of human cognitive bias.
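For readers unfamiliar with the metric, negative predictive value is simply the fraction of algorithm-negative scans that are truly negative. The snippet below is a minimal sketch of that calculation; the counts are made up for illustration and are not results from any actual validation study.

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): of all scans the algorithm calls negative,
    what fraction are truly negative?"""
    return true_negatives / (true_negatives + false_negatives)

# Illustrative counts only -- not drawn from any real study.
tn, fn = 9_850, 12   # 9,850 correctly cleared scans, 12 missed findings
print(f"NPV: {negative_predictive_value(tn, fn):.4f}")  # NPV: 0.9988
```

A high NPV is what makes the “objective second opinion” framing credible: when the algorithm and the radiologist disagree on a negative call, the NPV tells the radiologist how much weight that disagreement deserves.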
Bias in radiology is a well-documented issue that can lead to incorrect diagnoses and healthcare disparities. However, AI has the potential to mitigate these biases by providing radiologists with enhanced tools to consistently leverage Type 2 thinking for every scan analyzed, at any time of the day or night. While challenges remain in data quality, the integration of AI into radiology holds promise for reducing bias and improving the diagnostic process that’s so important to providing great patient care.
Interested in learning how AI can support your health system?
Set up a demo today.