
How to Ensure AI is Not Discriminating Against Your Patients

Imagine a woman denied a crucial diagnostic test because an algorithm incorrectly flagged her as low-risk simply because of her gender. Or a patient with a disability missing out on life-changing therapy because an AI tool didn’t account for their unique needs. These are not hypothetical scenarios; they’re the very real risks of AI bias in healthcare.

Healthcare organizations have a responsibility to ensure the AI tools they use don’t contribute to discriminatory practices, even if they didn’t develop the technology themselves. This means actively engaging with AI partners and asking the right questions to verify their commitment to fairness and compliance.

Choosing the right AI partner isn’t only about features and price — it’s also about aligning with ethical and legal responsibilities. By asking the right questions, healthcare organizations can ensure equitable care, foster trust in the AI solutions they adopt and mitigate legal risks.

Which nondiscrimination laws apply to your software?

  • Why this matters: It’s important to confirm that your AI partner understands the ethical and legal requirements around nondiscrimination. This question helps you determine whether they’ve considered how their software might be used in real-world clinical situations where discrimination can arise. An ONC/ASTP footnote in the full version of HTI-1 really drives this home:

“However, we note it would be a best practice for users to conduct such affirmative reviews in an effort to identify potentially discriminatory tools, as discriminatory outcomes may violate applicable civil rights law.” 

  • In simpler terms: You’re basically asking, “Are you aware of the laws against discrimination in healthcare, and have you made sure your software doesn’t contribute to that?”

What steps has your company taken to ensure compliance with these nondiscrimination laws or guidance?

  • Why this matters: You want to know that your AI partner takes nondiscrimination seriously and has actively worked to prevent bias in their software.
  • In simpler terms: You’re asking, “Show me how you’ve built fairness and equity into your software throughout its lifecycle.”

What patient data does your software use or receive, and does it include protected characteristics?

  • Why this matters: It’s crucial to understand what information the AI uses to make decisions. If it directly considers factors like race or ethnicity, or even receives them as inputs, there’s a higher risk of unintended bias. (A minimal illustration of this kind of input review follows below.)
  • In simpler terms: You’re asking, “Does your software – intentionally or not – make decisions based on a patient’s race, gender or other personal characteristics that could lead to unfair treatment?”
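
To make this concrete, here’s a minimal sketch of the kind of input review a vendor or purchaser might run: list the model’s input columns, flag any that directly name a protected attribute, then check whether the remaining inputs correlate strongly with one. Everything here (the column names, the data and the 0.5 threshold) is a hypothetical illustration, not a regulatory standard.

```python
import pandas as pd

# Hypothetical model-input table; columns and values are illustrative only.
df = pd.DataFrame({
    "age": [34, 71, 58, 45],
    "sex": ["F", "M", "F", "M"],
    "creatinine": [0.9, 1.4, 1.1, 1.3],
    "prior_admissions": [0, 3, 1, 2],
})

PROTECTED = {"race", "ethnicity", "sex", "gender", "religion", "national_origin"}

# 1) Direct use: does any input column name a protected attribute?
direct = [c for c in df.columns if c.lower() in PROTECTED]
print("Directly protected inputs:", direct or "none")

# 2) Proxy risk: do remaining numeric inputs correlate strongly with a
#    protected attribute that is available for auditing (here, 'sex')?
sex_encoded = df["sex"].astype("category").cat.codes
for col in df.columns.difference(["sex"]):
    if pd.api.types.is_numeric_dtype(df[col]):
        r = df[col].corr(sex_encoded)
        if abs(r) > 0.5:  # illustrative threshold, not a standard
            print(f"Possible proxy for 'sex': {col} (r = {r:.2f})")
```

A real review would use the vendor’s actual feature list and far more rigorous proxy analysis; the point is that this question is answerable with evidence, not just assurances.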

What measures have you implemented to mitigate potential biases in your software?

  • Why this matters: It’s not enough to simply avoid using obviously biased information. You need to know that your partner has a proactive strategy for identifying and addressing hidden biases that could creep into their AI. (One common check is sketched after this list.)
  • In simpler terms: You’re asking, “How do you make sure your software doesn’t unintentionally discriminate against certain groups of patients?”
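
As one illustration of what “building fairness in” can look like in practice, here is a minimal sketch of a subgroup performance check: comparing the model’s sensitivity (true positive rate) across demographic groups on a validation set. The data, group labels and column names are invented for the example; real bias testing is considerably more involved.

```python
import pandas as pd

# Hypothetical validation results: one row per case, with the ground-truth
# label, the model's prediction and a demographic group (values invented).
results = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1,   1,   0,   1,   1,   1,   0,   1],
    "y_pred": [1,   1,   0,   1,   1,   0,   1,   0],
})

def sensitivity(d: pd.DataFrame) -> float:
    """Share of true-positive cases the model actually catches (TPR)."""
    positives = d[d["y_true"] == 1]
    return float((positives["y_pred"] == 1).mean())

per_group = {g: sensitivity(d) for g, d in results.groupby("group")}
print("Sensitivity by group:", per_group)
print("Gap:", max(per_group.values()) - min(per_group.values()))
```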

How do you ensure the ongoing fairness and equity of your AI solutions?

  • Why this matters: While the vast majority of currently deployed AI solutions do not retrain in real time, that doesn’t mean their performance is static; neither are the patients and data they work with. You need assurance that your partner is committed to keeping their AI fair and unbiased over time, even as the model and the environment in which it operates change. (A simple drift check is sketched after this list.)
  • In simpler terms: You’re asking, “How do you make sure your software doesn’t become discriminatory in the future, even after it’s been released?”
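
One simple way to keep watch on a shifting environment is to compare the demographic or clinical mix of current cases against the population the model was validated on, for example with a population stability index (PSI). The distributions below are invented for illustration, and PSI is just one of several drift measures a vendor might reasonably use.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, eps: float = 1e-6) -> float:
    """Population stability index between two categorical distributions,
    given as proportion vectors over the same categories."""
    e = np.clip(expected, eps, None)
    a = np.clip(actual, eps, None)
    return float(np.sum((a - e) * np.log(a / e)))

# Hypothetical age-band mix at validation time vs. in production today.
validation_mix = np.array([0.20, 0.35, 0.30, 0.15])
production_mix = np.array([0.10, 0.30, 0.35, 0.25])

score = psi(validation_mix, production_mix)
# Common rule of thumb: < 0.1 is stable, > 0.25 suggests a major shift.
print(f"PSI = {score:.3f}")
```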

Do you monitor in-production performance to ensure your software doesn’t inadvertently discriminate against protected groups? If yes, what are the frequency and criteria of such audits?

  • Why this matters: Even with the best intentions, biases can still sneak into AI. Regular audits are like checkups to make sure the AI is still working fairly for everyone; a sketch of such a checkup follows this list.
  • In simpler terms: You’re asking, “Do you have a system in place to catch and fix any unfairness in your software, and how often do you check for problems?”
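
In practice, a periodic audit can boil down to something like the sketch below: recompute subgroup metrics over a recent window of production cases and raise a flag whenever the disparity exceeds a pre-agreed tolerance. The metric, the quarterly window and the 0.05 threshold are placeholders; a real audit plan would set these deliberately and document them.

```python
from datetime import date

# Hypothetical per-group sensitivity measured on the last quarter of
# production cases (values invented for illustration).
quarterly_tpr = {"group_A": 0.91, "group_B": 0.84, "group_C": 0.89}

MAX_ALLOWED_GAP = 0.05  # tolerance agreed in the audit plan, not a standard

gap = max(quarterly_tpr.values()) - min(quarterly_tpr.values())
if gap > MAX_ALLOWED_GAP:
    print(f"[{date.today()}] AUDIT FLAG: subgroup TPR gap {gap:.2f} "
          f"exceeds tolerance {MAX_ALLOWED_GAP:.2f}; trigger review.")
else:
    print(f"[{date.today()}] Audit passed: TPR gap {gap:.2f} within tolerance.")
```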

How do you ensure transparency around nondiscrimination compliance?

  • Why this matters: You need to be able to trust your AI partner, and that means they need to be open about how they’re ensuring their software is fair and unbiased.
  • In simpler terms: You’re asking, “What are you doing to prove to me that your software isn’t discriminatory, and how can I verify that for myself?”

Do you provide training to your staff and users on nondiscrimination and best practices in healthcare software?

  • Why this matters: Even the best AI can be accidentally misapplied or misused. Training ensures that everyone involved understands how to use the software responsibly and ethically.
  • In simpler terms: You’re asking, “Do you educate your own team and your clients on how to use your software in a way that’s fair and doesn’t discriminate?”

Why These Questions Matter

Asking these questions helps healthcare organizations:

  • Promote equity in patient care: By choosing AI partners committed to nondiscrimination, you can help reduce the risk of biased outcomes.
  • Ensure compliance: These questions help you verify that your partners are meeting necessary requirements to protect your organization from legal risks.

How Aidoc Approaches the Risk of Bias

At Aidoc, we’re committed to building AI that’s fair and unbiased and that promotes equitable care. Here’s how we approach compliance:

  • Bias mitigation is built in: We address potential bias at every stage of development, from design and training to validation and monitoring.
  • Diverse data: We use data from a wide range of sources and patient populations to train our AI, reducing the chance of it favoring one group over another.
  • Continuous monitoring: We constantly track how our AI performs for different patient groups and retrain models as needed.
  • Regular audits: We conduct frequent audits to identify and address any potential bias, ensuring ongoing compliance.
  • Transparency: We’re open about our compliance processes, providing detailed documentation and explainable AI outputs.
  • Training and support: We provide training and resources for both our staff and our clients to promote responsible and equitable AI use.
  • Regulatory approvals and reviews: Where appropriate, we secure the necessary regulatory clearances for our AI models, such as FDA clearance in the U.S. and CE marking (and, soon, AI Act conformity) in the EU. The FDA’s rigorous review process, for example, includes verification of bias assessment and mitigation strategies, providing external validation of our internal processes and ensuring compliance. Even for solutions not cleared by the FDA, we bring the same “Safety by Design” principles to bear.

Ultimately, navigating AI bias is about upholding the fundamental principle of equitable care for every patient. By asking the right questions, healthcare organizations can ensure their AI partners share this commitment and help build a future where technology serves the needs of all.

Note: This blog post is intended to provide general information and should not be construed as legal advice. Please consult with legal counsel for specific guidance on your organization’s obligations under local regulations and laws.


Amalia Schreier
Senior Vice President, Regulatory Affairs and Legal
Amalia Schreier serves as the Senior Vice President of Regulatory Affairs and Legal at Aidoc, guiding our company’s and products’ regulatory strategies and ensuring alignment with AI-focused medical device compliance requirements. Since her tenure began, she has streamlined our FDA clearance processes, emphasizing a meticulous approach that underscores our commitment to product and clinical quality. With a solid foundation from her legal background and leadership roles in AI startup regulatory departments, Schreier brings invaluable insights and expertise to our regulatory framework. Prior to her tech world experience, she worked as a human-rights lawyer and legal policy scholar, with a BA and an LL.M. in law from the Hebrew University of Jerusalem.
Demetri Giannikopoulos
Chief Transformation Officer
Demetri Giannikopoulos brings expertise in healthcare technology implementation, clinical workflow and care coordination optimization, and interdisciplinary team development to his role as Chief Transformation Officer at Aidoc. Over two decades in the field of healthcare technology have shaped his highly collaborative and creative approach. He's active in the healthcare community and has served as an industry and patient representative on committees, boards and councils. These include the Coalition for Imaging & Bioengineering Research (CIBR) Executive Steering Committee (2018-2020), American Board of Artificial Intelligence in Radiology (ABAIR), American College of Radiology (ACR) Patient and Family Advisory Council and the ACR Committee on Appropriateness Criteria. He also serves as a Patient-Centered Outcomes Research Institute (PCORI) Ambassador. Giannikopoulos draws on his technical background in computer science as well as professional and personal experiences as he works with the Aidoc team and partners to optimize care pathways. He graduated cum laude with a bachelor's degree in computer science from Florida State University.
Elinor Kaminer Goldfainer
Team Lead, Regulatory Affairs
Elinor Kaminer Goldfainer is the Regulatory Affairs Team Lead at Aidoc, where she oversees regulatory strategy and compliance for AI-driven medical imaging solutions. With a robust legal background, including an LL.B. and an LL.M. from Tel Aviv University, and experience in human rights law, she brings a unique perspective to the intersection of technology and healthcare.