
How Did We Get Here? The Evolution of Foundation Models in Healthcare

Built on decades of advancements in machine learning (ML) and neural networks, foundation models stand to address long-standing AI data and training limitations while introducing unmatched adaptability.

While foundation models are still emerging in healthcare, their principles are rooted in earlier successes across other industries, making the transition to healthcare a natural progression.

Foundation Models in Action: Real-World Examples

Foundation models have already made a significant impact in areas like language processing, paving the way for a new generation of versatile, AI-driven solutions. Here are three examples that show how foundation models are being used today:

Large Language Models

  • GPT-4 (OpenAI)
    • Why it’s a type of foundation model: Trained on vast datasets, GPT-4 adapts to diverse tasks, such as summarization, question answering and content generation.
    • Example use: Summarizing complex research papers or medical policy documents to save clinicians and researchers valuable time.
  • Gemini (Google)
    • Why it’s a type of foundation model: Gemini is designed for complex reasoning and multilingual tasks, leveraging diverse datasets for rapid adaptation to specialized use cases.
    • Example use: Supporting multilingual patient communications or powering real-time clinical decision support in global healthcare settings.

Multi-Modal Model

  • CLIP (OpenAI)
    • Why it’s a type of foundation model: CLIP bridges text and images, processing multimodal data to link visual and textual information effectively.
    • Example use: Serving as the basis for improved image analysis algorithms that identify inappropriate or harmful content, assisting with content moderation.

It’s important to note that the examples above are not specific to healthcare, though they do have possible healthcare applications. Purpose-built healthcare foundation models, however, are already emerging: Aidoc’s CARE1™ (Clinical AI Reasoning Engine, Version 1) is a clinical-grade foundation model specifically designed for CT imaging.

Trained on millions of cases and anatomies, CARE1™ will soon enable comprehensive, real-time detection of suspected critical conditions across various medical imaging modalities, opening new possibilities for diagnostics. Notably, to our knowledge, it is the first foundation model submitted for FDA clearance.

The Pathway to Foundation Models

Foundation models didn’t emerge overnight; their development reflects a series of interconnected advancements in AI, each building upon the previous era’s achievements:

  • Machine Learning (ML): Early ML systems were task-specific, using structured datasets and rule-based algorithms. While effective for narrowly defined problems, they lacked adaptability for complex or variable contexts.
  • Neural Networks: Neural networks introduced the ability to process unstructured data, such as medical imaging and clinical notes. However, they remained constrained by single-task design and required developing new models for new applications.
  • Foundation Models: The leap to foundation models came through breakthroughs in transformer architectures and, most importantly, large-scale training on vast amounts of raw data. Their capacity for generalization across domains – while retaining domain-specific expertise – positions them as a transformative tool in healthcare.

The Innovations Powering Foundation Models

As noted above, the current rise of foundation models is underpinned by two transformative advancements that have reshaped the landscape of AI:

Transformers:
The transformer architecture, with its attention mechanisms, allows models to focus on the most relevant elements within large datasets, including electronic health records (EHRs) and diagnostic images. This capability makes foundation models particularly adept at identifying complex patterns, enabling precise and context-aware predictions.
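
To make the attention idea more concrete, here is a minimal, illustrative sketch of scaled dot-product attention in Python. It is a toy example with hypothetical shapes and random data, not Aidoc’s or any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention.

    Q, K, V: arrays of shape (num_tokens, d_model). Each output row is a
    weighted mix of the rows of V, where the weights reflect how relevant
    each token is to the query token.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # mix values by relevance

# Hypothetical input: 4 "tokens" (e.g., text fragments or image patches), 8 features each
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (4, 8)
```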

Data and Computational Power:
Advances in computational power enable foundation models to learn at scale through self-supervised techniques, extracting insights without the need for manual annotation. These techniques allow them to process vast amounts of unannotated healthcare data while developing a nuanced understanding of diverse patient populations and clinical scenarios.
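
As a rough illustration of the self-supervised idea, the sketch below derives training targets directly from raw, unannotated data by masking part of it; the model’s job would be to reconstruct what was hidden, so no manual labels are required. The array, mask rate and values are hypothetical stand-ins for real clinical data:

```python
import numpy as np

def make_masked_examples(sequence, mask_rate=0.15, mask_value=-1.0, seed=0):
    """Build a self-supervised training example by hiding random positions.

    sequence: 1-D array of raw values (a stand-in for pixels, tokens or lab values).
    Returns (masked_input, targets, mask), where the targets are simply the
    original values at the masked positions -- labels come from the data itself.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(sequence.shape) < mask_rate
    masked_input = sequence.copy()
    masked_input[mask] = mask_value          # the model only sees the corrupted input
    return masked_input, sequence[mask], mask

raw = np.arange(20, dtype=float)             # hypothetical unannotated data
masked_input, targets, mask = make_masked_examples(raw)
print(masked_input)                          # input with some positions hidden
print(targets)                               # reconstruction targets derived from the data
```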

What Sets Foundation Models Apart in Healthcare?

The adaptability of foundation models is what sets them apart from traditional AI solutions. Here’s what makes that possible:

  • Rich Representations: Foundation models are trained on massive, diverse datasets, allowing them to develop a broad understanding of medical knowledge. This enables them to tackle complex, nuanced tasks, such as synthesizing data from imaging, clinical notes and lab results, which would overwhelm narrower AI systems.
  • Predictable Performance Growth: Foundation models improve as their size and training data grow, making them inherently scalable. This ensures their utility across a wide range of specialties, from radiology to population health management, while maintaining consistent, reliable performance.
  • Multitask Capabilities: Unlike traditional AI, which is often limited to single-task applications, foundation models can perform multiple functions simultaneously. For example, they can identify medical pathologies, interpret diverse imaging modalities and generate clinical reports – all with minimal fine-tuning. This versatility accelerates deployment and reduces the resources needed for new applications, as sketched below.
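
As a rough sketch of the “minimal fine-tuning” idea above, the example below (PyTorch) freezes a shared, pretrained backbone and attaches small task-specific heads, so each new task reuses the same learned features and only a few parameters need training. The layer sizes and task names are hypothetical and do not represent Aidoc’s architecture:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained foundation-model backbone
backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
for p in backbone.parameters():
    p.requires_grad = False                      # freeze the shared features

# Small task-specific heads -- the only parameters that would be trained
heads = nn.ModuleDict({
    "pathology_detection": nn.Linear(128, 2),    # e.g., finding present / absent
    "report_drafting":     nn.Linear(128, 32),   # stand-in for a report-generation head
})
optimizer = torch.optim.Adam(
    [p for head in heads.values() for p in head.parameters()], lr=1e-3
)

features = backbone(torch.randn(8, 512))         # one shared forward pass
for task, head in heads.items():
    print(task, head(features).shape)            # each task reuses the same features
```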

Transformative Potential in Healthcare

Though still in early stages within healthcare, foundation models hold immense promise:

Short-Term Impact (1-3 Years): Foundation models will see broader use in imaging and diagnostics, helping clinicians manage workloads more efficiently and accurately. They will achieve this by enhancing image analysis for greater precision, automating routine tasks, offering evidence-based decision support, adapting to various medical specialties and streamlining workflows through integration with systems like PACS and EHRs.

Long-Term Impact (5-10 Years): Over the long term, foundation models will drive breakthroughs in personalized medicine by helping to tailor treatments to individual patients, advance predictive analytics to foresee health trends and risks, enhance clinical decision support with real-time insights and enable integrated healthcare systems that leverage AI to deliver comprehensive, patient-centered care across specialties and departments.

By addressing the limitations of earlier systems, bridging data silos and scaling across diverse medical applications, these models offer unparalleled versatility and are poised to accelerate AI adoption.


Idan Bassuk
Chief R&D and AI Officer
Idan Bassuk is the Chief R&D and AI Officer at Aidoc. He has extensive experience in AI, software engineering and the management of large-scale technological projects. Since joining Aidoc at its inception, Bassuk has led the company’s efforts to develop and scale algorithms for the detection of life-threatening findings in medical images, and he is responsible for the architecture of Aidoc’s unique infrastructure and methodologies for medical AI at scale. This infrastructure is the engine that enables Aidoc to build solutions for new pathologies every two months. Bassuk began his career in the elite Israel Defense Forces technology program “Talpiot,” from which he graduated with highest honors. Prior to Aidoc, he led cutting-edge international technological projects from the science-fiction-idea stage through to successful products that won the highest Israeli technological award.
Tamar Lin
Vice President, Radiology Product Management
Tamar Lin is Vice President of Product Management, Radiology. With a strong background in product management and research science, she excels at integrating research insights into product development to drive innovation in digital healthcare and pharma.