Keeping up with new AI terms can be tricky, especially when attempting to separate meaningful innovations from marketing hype. One emerging technology expected to dominate the conversation in 2025 is the “foundation model,” but what exactly is it?
Although the concept isn’t entirely new – the term was introduced by Stanford’s Institute for Human-Centered Artificial Intelligence in 2021 – interest in foundation models surged in 2023. Why? Because these models will transform the way AI is developed and applied across various industries, including healthcare.
For clinical leaders and practitioners, understanding foundation models is key to shaping the future of AI-driven solutions and crafting the right strategy for your health system. Let’s explore what foundation models are and what they can do for healthcare.
In simple terms, foundation models are a new class of AI technology trained on massive datasets from diverse sources. Unlike traditional AI, which is designed for a single task (e.g., identifying fractures in X-rays), foundation models can adapt to perform many tasks.
But what does that actually mean? Imagine a child says they want to learn every sport. You wouldn’t start by teaching them basketball, soccer, football and baseball one by one – it would take forever. Instead, it’s far more effective to focus on foundational skills like running, jumping, throwing and teamwork. Once they’ve mastered those basics, they can quickly learn new sports because they already understand the core principles.
Foundation models work the same way. They’re a type of AI that learns a wide range of general knowledge – like how to understand language, recognize images or process patterns – before being fine-tuned for specific tasks. Think of it as a model with strong “general knowledge” that can adapt to different problems, like writing essays, translating languages or analyzing medical imaging data, with just a little extra training.
This is different from traditional AI models, which act more like specialists. Traditional AI is trained to do one thing well, like detecting spam emails, recommending movies or finding a single pathology in an imaging scan, but it can’t easily learn a new task without starting from scratch.
So how does this become relevant in healthcare? Imagine a universal tool capable of identifying tumors, measuring organ sizes or improving image quality with minimal additional training. That’s a foundation model. This versatility and adaptability distinguishes foundation models from their predecessors and makes them more powerful.
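To make the “little extra training” idea concrete, here is a minimal sketch in PyTorch. It uses a generic pretrained vision model from torchvision as a stand-in for a foundation model; the two-class fracture task, the head design and the training step are hypothetical placeholders for illustration, not Aidoc’s implementation.

```python
# A minimal sketch of adapting a pretrained backbone with "a little extra
# training": freeze the general knowledge, train only a small task head.
# The task and dataset here are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Load a general-purpose pretrained model (stand-in for a foundation model).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the representations learned during broad pretraining.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a small head for the new task, e.g. "fracture" vs. "no fracture".
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step over a small batch of labeled task data."""
    logits = backbone(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the sketch is the ratio of effort: the heavy lifting happened during pretraining, and only a thin task-specific layer needs new labeled data.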
Foundation models differ significantly from traditional machine learning (ML) and deep learning systems: they’re pretrained on broad, diverse data rather than a single narrow dataset, they learn general-purpose representations rather than a single task, and they can be adapted to new problems through fine-tuning instead of being rebuilt from scratch.
While foundation models are a significant advancement in AI, there are a few misconceptions worth clarifying:
Misconception: Foundation models are just bigger versions of traditional AI models.
Reality: Although size is a factor, foundation models fundamentally differ in their ability to generalize across tasks and domains. They’re specifically designed for broad adaptability, unlike task-specific traditional models.
Misconception: Foundation models work perfectly out of the box.
Reality: Foundation models serve as a flexible starting point but often require fine-tuning or prompt engineering to excel at specific tasks. They’re not inherently perfect out of the box.
Misconception: A foundation model is the same thing as an AI platform.
Reality: Foundation models are a component of an AI platform, not the entire system. An AI platform is the infrastructure used to develop, deploy and manage AI applications, while a foundation model is one of the components it supports. The distinction matters because while foundation models enable powerful AI-driven use cases, an AI platform ensures their integration and usability in real-world settings. In other words, you need a platform to truly realize the potential of a foundation model.
Healthcare has no shortage of data – an incomprehensible 2.3 zettabytes, the equivalent of 2.3 trillion DVDs (remember those?) – and 97% of it goes unused. Because foundation models are trained on massive and diverse datasets (e.g., imaging scans, EHRs, lab reports), they offer a way to leverage untapped information more effectively.
These models learn broad patterns and representations, making them adaptable to a wide range of clinical tasks, from diagnosing diseases to predicting patient outcomes and assisting with treatment planning.
What sets foundation models apart from other AI systems, including Aidoc’s current algorithms, is their versatility. Today, algorithms are trained on specific datasets (e.g., radiology images for identifying intracranial hemorrhages), but foundation models can be tailored for multiple applications with minimal effort, representing a significant leap forward for clinical AI.
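One way to picture that versatility is a single pretrained encoder shared across several lightweight task heads, each trainable with far less data and compute than building a separate model per task. The sketch below is a hypothetical illustration only; the torchvision encoder stands in for a foundation model, and the task names, output sizes and head designs are assumptions, not a description of any real product.

```python
# Conceptual sketch: one frozen, pretrained encoder shared by several small
# task heads. Tasks and head designs are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskImagingModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared "general knowledge" encoder, frozen after pretraining.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop classifier
        for p in self.encoder.parameters():
            p.requires_grad = False

        feat_dim = resnet.fc.in_features  # 2048 for ResNet-50
        # Lightweight heads, each fine-tuned on a small task-specific dataset.
        self.tumor_head = nn.Linear(feat_dim, 2)       # tumor present / absent
        self.organ_size_head = nn.Linear(feat_dim, 1)  # regression: organ volume
        self.quality_head = nn.Linear(feat_dim, 3)     # low / medium / high quality

    def forward(self, images: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.encoder(images).flatten(1)     # shared representation
        return {
            "tumor": self.tumor_head(features),
            "organ_size": self.organ_size_head(features),
            "image_quality": self.quality_head(features),
        }

model = MultiTaskImagingModel()
outputs = model(torch.randn(4, 3, 224, 224))  # a dummy batch of 4 images
```

Adding a new application in this picture means adding another small head, not collecting a new mountain of data and training a new model end to end.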
Though foundation models are still in the early stages of adoption, applications are already emerging, such as Aidoc’s Clinical AI Reasoning Engine, Version 1 (CARE1™) – a groundbreaking clinical-grade foundation model for CT imaging and the first step in a multi-year investment.
As with any new technology, foundation models are not without challenges, and adopting them will require careful evaluation.
Foundation models are paving the way for faster, more efficient, accurate and adaptable AI solutions. Their ability to quickly adapt to new tasks and analyze complex datasets has the potential to significantly accelerate clinical workflows, enabling quicker decision-making and helping to reduce the time from diagnosis to treatment. As this technology matures, its potential will only grow. Now is the time to understand foundation models and the implications for the future of medicine.