
Introduction to Foundation Models: What Are They and Why Do They Matter?

Keeping up with new AI terms can be tricky, especially when attempting to separate meaningful innovations from marketing hype. One emerging technology expected to dominate the conversation in 2025 is the “foundation model,” but what exactly is it?

Although the concept isn’t entirely new – the term was introduced by Stanford’s Institute for Human-Centered Artificial Intelligence in 2021 – interest in foundation models surged in 2023. Why? Because these models are poised to transform the way AI is developed and applied across industries, including healthcare.

For clinical leaders and practitioners, understanding foundation models is key to shaping the future of AI-driven solutions and crafting the right strategy for your health system. Let’s explore what foundation models are and what they can do for healthcare. 

What Are Foundation Models?

In simple terms, foundation models are a new class of AI technology trained on massive datasets from diverse sources. Unlike traditional AI, which is designed for a single task (e.g., identifying fractures in X-rays), foundation models can adapt to perform many tasks. 

But what does that actually mean? Imagine a child says they want to learn every sport. You wouldn’t start by teaching them basketball, soccer, football and baseball one by one – it would take forever. Instead, it’s far more effective to focus on foundational skills like running, jumping, throwing and teamwork. Once they’ve mastered those basics, they can quickly learn new sports because they already understand the core principles.

Foundation models work the same way. They’re a type of AI that learns a wide range of general knowledge – like how to understand language, recognize images or process patterns – before being fine-tuned for specific tasks. Think of it as a model with strong “general knowledge” that can adapt to different problems, like writing essays, translating languages or analyzing medical imaging data, with just a little extra training.
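
To make “a little extra training” concrete, here’s a minimal fine-tuning sketch in Python using PyTorch, a common deep learning library. The backbone, task and class count are illustrative placeholders, not a description of any particular product:

```python
# Minimal fine-tuning sketch (PyTorch): the pretrained backbone supplies the
# "general knowledge"; only a small new head is trained for the specific task.
import torch
import torch.nn as nn
from torchvision import models

# 1. Start from a backbone pretrained on a large, diverse dataset.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# 2. Freeze the general-purpose weights so they are not retrained.
for param in model.parameters():
    param.requires_grad = False

# 3. Swap in a small head for the new task
#    (a hypothetical 3-class classification problem).
model.fc = nn.Linear(model.fc.in_features, 3)

# 4. "A little extra training": only the new head's parameters are updated.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```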

This is different from traditional AI models, which act more like specialists. Traditional AI is trained to do one thing well, like detecting spam emails, recommending movies or finding a single pathology in an imaging scan, but it can’t easily learn a new task without starting from scratch.

So how does this become relevant in healthcare? Imagine a universal tool capable of identifying tumors, measuring organ sizes or improving image quality with minimal additional training. That’s a foundation model. This versatility and adaptability distinguish foundation models from their predecessors – and make them more powerful.
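
As an illustration of that versatility, the sketch below shows how one shared backbone could serve several clinical tasks through lightweight, task-specific heads. The tasks and output sizes are hypothetical examples, not Aidoc’s implementation:

```python
# Illustrative sketch: one shared foundation backbone, several lightweight
# task heads. Each head is cheap to train; the tasks here are hypothetical.
import torch.nn as nn

class MultiTaskImagingModel(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int):
        super().__init__()
        self.backbone = backbone  # pretrained once, shared by every task
        self.heads = nn.ModuleDict({
            "tumor_detection": nn.Linear(feature_dim, 2),  # present / absent
            "organ_size":      nn.Linear(feature_dim, 1),  # e.g., volume estimate
            "image_quality":   nn.Linear(feature_dim, 5),  # 5-point quality score
        })

    def forward(self, x, task: str):
        features = self.backbone(x)        # general-purpose representation
        return self.heads[task](features)  # task-specific, minimal extra training
```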

How Are Foundation Models Different From Traditional Machine Learning?

Foundation models differ significantly from traditional machine learning (ML) and deep learning systems. Here’s how: 

Scale of Training Data:

  • Foundation Models: Trained on enormous, diverse datasets, enabling them to learn general-purpose patterns that make them adaptable to various tasks.
  • Traditional Models: Limited to smaller, task-specific datasets.

Versatility:

  • Foundation Models: Adaptable to a wide range of tasks through fine-tuning or prompts without retraining. For example, a foundation model trained on medical images can be adapted to multiple pathologies. 
  • Traditional Models: Purpose-built for a single task (e.g., identifying pulmonary nodules) and require retraining for new applications.

Self-Supervised Learning:

  • Foundation Models: Learn patterns from data without needing large amounts of labeled examples. For instance, they can be trained to detect pulmonary nodules in CT scans by analyzing patterns in the accompanying radiology reports, enabling greater scalability (a minimal sketch of this idea follows the list).
  • Traditional Models: Usually trained with supervised learning, relying heavily on labeled data for specific tasks.
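
The radiology-report example above can be sketched as contrastive image–text pretraining (the approach popularized by CLIP-style models): each scan and its paired report supervise one another, so no hand-labeled examples are needed. The encoders below are hypothetical stand-ins:

```python
# Self-supervised (report-supervised) pretraining sketch, CLIP-style:
# paired scans and reports act as each other's labels.
import torch
import torch.nn.functional as F

def contrastive_step(image_encoder, text_encoder, scans, reports):
    # Embed each scan and its paired report into a shared vector space.
    img_emb = F.normalize(image_encoder(scans), dim=-1)   # (batch, dim)
    txt_emb = F.normalize(text_encoder(reports), dim=-1)  # (batch, dim)

    # Similarity of every scan to every report in the batch.
    logits = img_emb @ txt_emb.t() / 0.07  # 0.07 is a temperature constant

    # Each scan's true match is its own report (the diagonal of the matrix).
    targets = torch.arange(len(scans))
    loss = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
    return loss  # minimizing this teaches the model clinically rich features
```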

Common Misconceptions About Foundation Models

While foundation models are a significant advancement in AI, there are a few misconceptions worth clarifying:

Misconception 1: “Foundation models are just bigger versions of traditional models.”

Reality: Although size is a factor, foundation models fundamentally differ in their ability to generalize across tasks and domains. They’re specifically designed for broad adaptability, unlike task-specific traditional models.

Misconception 2: “Foundation models can do everything perfectly without additional training.”

Reality: Foundation models serve as a flexible starting point but often require fine-tuning or prompt engineering to excel at specific tasks. They’re not inherently perfect out of the box.

Misconception 3: “Foundation models are the same thing as an AI platform.”

Reality: Foundation models are a component of an AI platform, not the entire system. An AI platform is the infrastructure used to develop, deploy and manage AI applications, while a foundation model is one of the applications running on it. The distinction matters: foundation models enable powerful AI-driven use cases, but an AI platform ensures their integration and usability in real-world settings. In other words, you need a platform to truly realize the potential of a foundation model.

What Do Foundation Models Mean for Clinical AI?

Healthcare has no shortage of data – an incomprehensible 2.3 zettabytes, the equivalent of 2.3 trillion DVDs (remember those?) – and 97% of it goes unused. Because foundation models are trained on massive and diverse datasets (e.g., imaging scans, EHRs, lab reports), they offer a way to leverage untapped information more effectively.

These models learn broad patterns and representations, making them adaptable to a wide range of clinical tasks, from diagnosing diseases to predicting patient outcomes and assisting with treatment planning. 

What sets foundation models apart from other AI systems, including Aidoc’s current algorithms, is their versatility. Today’s algorithms are trained on specific datasets (e.g., radiology images for identifying intracranial hemorrhages), but foundation models can be tailored to multiple applications with minimal effort, representing a significant leap forward for clinical AI.

Though the technology is still in the early stages of adoption, applications are emerging, such as Aidoc’s Clinical AI Reasoning Engine, Version 1 (CARE1™) – a groundbreaking clinical-grade foundation model for CT imaging and the first step in a multi-year investment.

What Are the Limitations and Challenges of Foundation Models?

As with every new technology, foundation models are not without challenges:

  • Resource-Intensive Training: Building these models requires significant computational power and energy, raising concerns about cost and environmental impact.
  • Regulatory Hurdles: Meeting stringent standards, like FDA clearance, is more complex given these models’ potentially broader capabilities and the fact that the technology is also new to regulators.
  • Workflow Integration: Adopting foundation models may require shifts in how clinicians interact with AI outputs, necessitating careful interface design.
  • Overselling and Underdelivering: Some AI systems may be marketed as foundation models but lack the broad adaptability that defines true foundation models. For example, a model trained exclusively on chest CTs to detect suspected pneumonia may perform well for that specific task but can’t generalize to other imaging tasks, such as identifying suspected fractures or brain anomalies. Similarly, decision-support tools relying on fixed clinical guidelines or rule-based logic might seem comprehensive but are inherently rigid.
  • Data Accessibility and Scale: Developing truly robust foundation models demands access to massive, diverse datasets – often millions of clinical cases spanning various imaging modalities, patient demographics and disease profiles. This creates a significant barrier to entry, as only organizations with access to extensive and varied data can realistically develop and train these models. Claims of a foundation model built on a relatively small dataset (e.g., 500,000 exams) should be met with healthy skepticism, as such a model might lack the breadth to perform well across a wide range of clinical scenarios.

The Future of Foundation Models in Healthcare

Foundation models are paving the way for faster, more efficient, accurate and adaptable AI solutions. Their ability to quickly adapt to new tasks and analyze complex datasets has the potential to significantly accelerate clinical workflows, enabling quicker decision-making and helping to reduce the time from diagnosis to treatment. As this technology matures, its potential will only grow. Now is the time to understand foundation models and the implications for the future of medicine.


Idan Bassuk
Chief R&D and AI Officer
Idan Bassuk is the Chief R&D and AI Officer at Aidoc, with extensive experience in AI, software engineering and the management of large-scale technological projects. Since joining Aidoc at its inception, Bassuk has led the company’s efforts to develop and scale algorithms for detecting life-threatening findings in medical images, and he is responsible for the architecture of Aidoc’s unique infrastructure and methodologies for medical AI at scale – the engine that enables Aidoc to build solutions for new pathologies every two months. Bassuk began his career in the elite Israel Defense Forces technology program “Talpiot,” from which he graduated with highest honors. Prior to Aidoc, he led cutting-edge international technological projects from science-fiction ideas into successful products that won Israel’s highest technological award.
Tamar Lin
Vice President, Radiology Product Management
Tamar Lin is Vice President of Product Management, Radiology. With a strong background in product management and research science, she excels at integrating research insights into product development to drive innovation in digital healthcare and pharma.