With state laws emerging and global regulations evolving, healthcare organizations must take a proactive approach to AI governance. What does a strong governance framework look like, and how can health systems navigate compliance challenges?
In the halcyon days of yore, relying on well-worn paths for software and device purchasing offered teams looking to onboard new solutions a relatively straightforward, if often grueling, acquisition pathway. Then AI came crashing onto the scene, and all of a sudden questions like “Will this replace me?,” “How do you mitigate bias?” and “Tell me about your training datasets” became the new norm.
The great news is that many of the initial use cases and point solutions fell into the existing federal FDA framework, so at least you had a source to do some of the lifting for you. The FDA even requires those vendors to publish useful summaries on its website! But again, out of left field a couple of years ago, large language models (LLMs) and generative transformers changed the world’s day-to-day life. With this incredible innovation and widespread interest came questions and, in many cases, concerns.
We now find ourselves in a world where many of the potential healthcare use cases for AI are in an unregulated space at the federal level, and many states are stepping in to fill the void.
In 2024 alone, more than 700 AI bills were proposed, and every state whose legislature was in session, except one, introduced at least one (45 states in total). Of those bills, 113 were enacted into law.
Do you know if your state is one of the ones that passed one of those 113 bills? Do your vendors?
Two key positions have emerged:
Regulations like HTI-1 — ONC/ASTP’s implementation of specific requirements from the 21st Century Cures Act — have a very narrow scope of coverage and don’t apply to the vast majority of healthcare IT/AI vendors.
The FDA’s oversight is broader in scope, though it still doesn’t apply to every AI use case, and its clearance summaries serve as excellent sources of information on items such as safety, intended use and intended users.
Even with their scope limitations, these federal regulatory approaches have been key in shaping the behavior and preferences of healthcare institutions consuming AI, and they have fostered growth in the adoption of FDA-cleared use cases.
This brings us back to the states.
Recognizing the wide adoption of AI (not just in healthcare), they’ve taken it upon themselves to define the boundaries of acceptable deployment and use in ways that are impactful and that add a second layer of requirements for models developed for the healthcare industry, which already holds a high standard for regulation relative to other sectors. In many instances, AI solutions that don’t clearly fall within the ONC/ASTP or FDA scope will instead be governed by the emerging state AI laws.
A number of AI bills have already been enacted into law, and the speed of iteration among states is only increasing.
Virginia’s governor recently vetoed a bill regulating AI, citing the potential for “harm [to] the creation of new jobs, the attraction of new business investment and the availability of innovative technology in the Commonwealth of Virginia.”
This is happening even as the Colorado legislature reviews potential changes to its AI regulatory statute, ahead of its implementation slated for 2026, to update, for example, what constitutes a “consequential decision.”
Texas also overhauled its proposed AI legislation out of a similar concern about creating a constricting environment for tech innovators.
So what can health systems do? First, know everything there is to know about the AI your institution is deploying.
A nutritional facts label for any AI model you’re deploying is a great place to start, but it’s only one tool to leverage as part of a wider AI governance strategy. An AI model that doesn’t have a model card, or otherwise easily referenceable information about the training data, training methods, bias and risk mitigation methods used in development and more, will leave you in the dark about how to address the potential risks of using that solution. Consider a model card table stakes. If your vendor can’t supply a 3- to 4-page explanation using an industry-standard card, like the Coalition for Health AI’s (CHAI) or the Health AI Partnership’s (HAIP), they just saved you a whole lot of time.
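To make that concrete, here is a minimal sketch, in Python, of the kinds of facts a deployer should be able to pull straight off a vendor’s model card. The field names and structure are hypothetical and for illustration only; they are not the CHAI or HAIP card formats.

```python
# Illustrative sketch only -- hypothetical field names, not the CHAI or HAIP format.
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class ModelCardSummary:
    """Key facts a deployer should be able to pull from a vendor's model card."""
    intended_use: Optional[str] = None            # clinical task the model supports
    intended_users: Optional[str] = None          # e.g., radiologists, care coordinators
    training_data_summary: Optional[str] = None   # sources, time range, demographics
    training_methods: Optional[str] = None        # how the model was built and validated
    bias_mitigation: Optional[str] = None         # subgroup analyses, fairness checks
    risk_mitigation: Optional[str] = None         # known failure modes and safeguards
    known_limitations: Optional[str] = None       # populations or settings to avoid
    regulatory_status: Optional[str] = None       # e.g., FDA-cleared, or outside FDA scope


def missing_fields(card: ModelCardSummary) -> list[str]:
    """Return the card fields the vendor has not filled in."""
    return [f.name for f in fields(card) if getattr(card, f.name) is None]


# Usage: if a vendor can't populate most of these, that tells you something.
card = ModelCardSummary(intended_use="Flag suspected pulmonary embolism on CT")
print(missing_fields(card))  # everything except intended_use
```

The point isn’t the data structure itself; it’s that every one of these questions should have a ready, referenceable answer before a model enters your governance pipeline.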
Model cards are just the start. As with any other evaluation of a new solution that touches your clinical workflows and patients’ lives, reference calls are often the most valuable way to understand real-world performance.
Speak with other entities that have already leveraged a specific solution, and learn how their experience matches your expectations as well as those enumerated on the model card. With this information, your organization can put a prospective model through a robust governance process like any other piece of software under review to be deployed within your system.
The information gained from peer institutions is key to scenario planning and to communicating to the developer any additional efforts that might be required to comply with specific state regulations. Meanwhile, as state governments continue to enact their own AI regulations, certain trends have begun to emerge.
None of them is as ubiquitous as the requirement for transparency. Regardless of jurisdiction, transparency is vital for the safe, responsible deployment of any model. It is not solely a function of the information AI developers share; it also depends on a developer’s ability to work closely with deployers, and even end users, to proactively address issues and shortcomings throughout the lifecycle of a model’s deployment.
An iterative AI model is almost akin to a living thing that must adapt and operate in a changing environment. In this rapidly evolving state regulatory landscape, transparency becomes an increasingly salient factor, and it must be actively maintained by those with the knowledge to do so.
All of this must be considered up front in order to effectively fold AI — with its new and unique challenges — into the governance processes we already know work for evaluating software and other medical devices.
In the process of Choose, Integrate, Adopt and Govern, the first step, Choose, can only be undertaken with AI models that enable your institution to make an informed choice. Otherwise, given the rate of change in the regulatory landscape and the number of AI models available, it will be near impossible to filter out which models and model developers can effectively be tailored to meet the regulatory requirements of a given jurisdiction.
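As a rough illustration of what that first-pass Choose screen might look like in practice, the sketch below filters candidate models on whether an informed choice is even possible. The criteria, thresholds and field names are assumptions for illustration, not a formal governance standard.

```python
# A hedged sketch of a first-pass "Choose" screen; criteria are illustrative only.
from dataclasses import dataclass


@dataclass
class CandidateModel:
    name: str
    has_model_card: bool           # industry-standard card supplied by the vendor
    reference_sites: int           # peer institutions willing to take a call
    fda_cleared: bool              # falls under the FDA framework, with a public summary
    state_compliance_notes: bool   # vendor documents how it meets your state's AI law(s)


def passes_initial_screen(model: CandidateModel) -> bool:
    """Only models you can make an informed choice about move to full governance review."""
    if not model.has_model_card:
        return False               # table stakes: no card, no review
    if model.reference_sites < 1:
        return False               # need at least one real-world reference call
    if not (model.fda_cleared or model.state_compliance_notes):
        return False               # need some regulatory anchor, federal or state
    return True


candidates = [
    CandidateModel("Vendor A triage model", True, 3, True, False),
    CandidateModel("Vendor B chatbot", False, 0, False, False),
]
shortlist = [m.name for m in candidates if passes_initial_screen(m)]
print(shortlist)  # ['Vendor A triage model']
```

A filter like this doesn’t replace governance; it simply keeps the full Integrate, Adopt and Govern workload focused on models that can actually be evaluated against your jurisdiction’s requirements.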