Healthcare AI Blog: AI and ML in Healthcare

ExplainerAI - Transforming Healthcare with AI Transparency

Written by Boudey Aasman | Mar 6, 2025 9:15:04 PM

Explainable AI in Healthcare: Making AI Transparent, Ethical, and Actionable

Explainable AI (XAI) is a rapidly evolving field within artificial intelligence that strives to make AI systems more transparent, interpretable, and comprehensible to humans. This approach is particularly crucial in the healthcare domain, where transparency not only fosters stakeholder trust but also proves instrumental in gaining deeper insights into a patient’s medical journey. By demystifying AI decision-making processes, XAI paves the way for more informed, ethical, and patient-centered healthcare practices.

What does it mean to have XAI in Healthcare?

Explainable AI refers to a set of techniques, methods, and processes that allow humans to comprehend and trust the results and outputs generated by machine learning algorithms. It seeks to open the “black box” of AI decision-making, providing insights into how and why AI systems arrive at particular conclusions or recommendations. In general, we try to abide by the four core principles put forward by the National Institute of Standards and Technology (NIST):

Core Principles of Explainable AI

  • Explanation: AI systems should provide clear explanations for their actions and decisions.
  • Meaningful: Explanations must be understandable and relevant to humans, especially non-experts.
  • Explanation Accuracy: The explanation should accurately reflect the AI system’s processes.
  • Knowledge Limits: The system should operate only under conditions for which it was designed and when it reaches sufficient confidence in its output.

My colleague Christos Kritikos expanded on the importance of AI explainability, transparency, and ethics just a few weeks ago. At Cognome, we’ve woven these principles into the fabric of every workflow, making them a cornerstone of our customer-facing applications. Our flagship tool, ExplainerAI, embodies these ideas and values, bringing them to life in a practical, user-friendly format.

What does ExplainerAI do, and how does it impact ML in healthcare?

The ExplainerAI dashboard brings transparency and interpretability to healthcare ML models. Some of the key features include:

Key Features of ExplainerAI

  • Real-Time Patient Insights: Live model outputs, prediction severity, and feature importance rankings for individual patients.
  • Bias Explorer: Analyzes model performance across demographic subgroups to ensure fairness.
  • Model Development Insights: Provides feature importance, data dictionary, and performance metrics.
  • Drift Monitoring: Tracks model and data drift to maintain reliability.

These features deliver value across every facet of AI-empowered healthcare by:

  • Enhancing trust in AI-assisted tools
  • Improving personalized patient care
  • Ensuring fairness across diverse populations
  • Facilitating continuous model improvement
  • Empowering all stakeholders with relevant insights
  • Promoting responsible and ethical AI use
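
Drift monitoring, listed among the key features above, is often quantified with a statistic such as the Population Stability Index (PSI). The sketch below is a minimal, standard-library-only illustration of that idea; it is not ExplainerAI's actual implementation, and the binning scheme and alarm threshold are common conventions rather than Cognome specifics.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline feature distribution
    and a live one -- one common way to quantify data drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Smooth empty buckets to avoid log(0).
        return [max(c, 1) / len(values) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# A PSI above roughly 0.2 is conventionally treated as a drift alarm.
```

A monitor like this, run per feature on each batch of live data, is enough to flag when a model's inputs no longer resemble its training data.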

How does it work? The Technical Framework of ExplainerAI

ExplainerAI’s powerful capabilities are built on a robust technical foundation that combines cutting-edge XAI techniques with efficient data pipeline management and seamless integration into existing healthcare systems. Here’s a glimpse into how it works:

XAI Computation Engine

At the core of ExplainerAI is our proprietary library built around SHAP (SHapley Additive exPlanations) and other XAI frameworks. This engine computes critical metrics for each feature in each model, providing detailed insights into how different factors contribute to predictions and decisions.
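
To make the SHAP idea concrete without depending on the `shap` library's API, here is the underlying Shapley attribution computed exactly for a toy model using only the standard library. The model, instance, and baseline are illustrative; the proprietary engine described above is not shown.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley attributions for one prediction: each feature's
    average marginal contribution over all coalitions. Features absent
    from a coalition are set to their baseline value."""
    n = len(instance)

    def value(coalition):
        x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis
```

For a linear model, each feature's attribution reduces to its coefficient times its deviation from baseline, and the attributions always sum to the difference between the prediction and the baseline prediction; that additivity is what makes SHAP-style explanations easy to present per patient.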

Automated ML Pipeline Management

We leverage Dagster, a modern data orchestration platform, to automate our ML pipelines. This framework manages everything from data ingestion to model deployment, ensuring a streamlined and reproducible process. By adhering to a set of predefined interfaces, we maintain consistency across all models, making it easier to scale and maintain our system.
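
The core idea Dagster automates, each stage declaring its upstream dependencies so the orchestrator resolves execution order, can be sketched in plain Python. This is a stdlib-only stand-in, not Dagster's actual API, and the stage names and outputs are invented for illustration.

```python
# Stand-in for an orchestrated pipeline: each stage declares its upstream
# dependencies, and the runner executes stages once their inputs are ready.
# Stage names and outputs are illustrative, not Cognome's actual pipeline.
PIPELINE = {
    "ingest":   {"deps": [], "run": lambda inputs: {"rows": 1000}},
    "features": {"deps": ["ingest"], "run": lambda inputs: {"n_features": 42}},
    "train":    {"deps": ["features"], "run": lambda inputs: {"auc": 0.87}},
    "explain":  {"deps": ["train", "features"],
                 "run": lambda inputs: {"shap_computed": True}},
}

def run_pipeline(pipeline):
    """Topologically execute an acyclic stage graph, passing each stage
    the outputs of its declared dependencies."""
    done, outputs = set(), {}
    while len(done) < len(pipeline):
        for name, stage in pipeline.items():
            if name not in done and all(d in done for d in stage["deps"]):
                outputs[name] = stage["run"]({d: outputs[d] for d in stage["deps"]})
                done.add(name)
    return outputs
```

Because every model pipeline conforms to the same stage interface, adding a new model means adding stages, not rewriting the runner, which is the consistency property the paragraph above describes.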

Centralized Data Storage

Our Tapestry database, built on PostgreSQL, serves as a central repository for analytical data, metadata, and other critical information. This standardized approach to data storage enables rapid retrieval and analysis of model outputs and explanations.
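
The benefit of standardized storage can be illustrated with a toy schema. The table layout below is purely illustrative (Tapestry's actual PostgreSQL schema is not public), and sqlite3 stands in for PostgreSQL so the sketch runs without a database server.

```python
import sqlite3

# Illustrative schema only -- the real Tapestry store runs on PostgreSQL
# and its actual tables are not public. sqlite3 is used here purely so
# the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE model_runs (
        run_id INTEGER PRIMARY KEY,
        model_name TEXT NOT NULL,
        trained_at TEXT NOT NULL
    );
    CREATE TABLE feature_attributions (
        run_id INTEGER REFERENCES model_runs(run_id),
        patient_id TEXT NOT NULL,
        feature TEXT NOT NULL,
        shap_value REAL NOT NULL
    );
""")
conn.execute("INSERT INTO model_runs VALUES (1, 'case_cancellation', '2025-03-01')")
conn.executemany(
    "INSERT INTO feature_attributions VALUES (1, ?, ?, ?)",
    [("p42", "lead_time_days", 0.31), ("p42", "prior_no_shows", 0.22)],
)
# With one shared schema, a patient's explanation is a single query away.
rows = conn.execute(
    "SELECT feature, shap_value FROM feature_attributions "
    "WHERE run_id = 1 AND patient_id = 'p42' ORDER BY shap_value DESC"
).fetchall()
```

Keeping every model's outputs and attributions in the same shape is what lets downstream consumers, dashboards included, query any model the same way.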

Dynamic Dashboard Generation

Thanks to our standardized pipeline and data storage approach, new dashboards with all XAI components are automatically generated for each model. This automation significantly reduces the time and effort required to deploy new models and their corresponding explanations.
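
Automatic dashboard generation follows directly from that standardization: since every model emits metadata in the same shape, one template function can cover all of them. The sketch below assumes a hypothetical metadata shape and panel names; it is not ExplainerAI's actual dashboard code.

```python
def dashboard_spec(model_name, features, subgroups):
    """Build a dashboard description from a model's stored metadata.
    Because every pipeline emits the same metadata shape, this single
    function covers every model. Panel names are illustrative."""
    return {
        "title": f"ExplainerAI: {model_name}",
        "panels": [
            {"type": "feature_importance", "features": features},
            {"type": "bias_explorer", "subgroups": subgroups},
            {"type": "drift_monitor", "features": features},
        ],
    }
```

Deploying a new model then only requires registering its metadata; the dashboard comes for free.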

Seamless Epic Integration

Clinicians can access ExplainerAI dashboards directly within Epic, their familiar EHR environment. This integration allows healthcare providers to view all relevant explainer metrics alongside patient data, facilitating informed decision-making at the point of care.

ExplainerAI in Action at Montefiore Hospital

The true value of ExplainerAI becomes evident when we look at its real-world application. At Montefiore Health System, we deployed a case cancellation model for surgical appointments through ExplainerAI, with remarkable results.

Immediate Intervention and Reduced Cancellations

One of the most impactful features of ExplainerAI is its ability to provide real-time updates. This capability allowed the Montefiore team to intervene immediately when patients were flagged as high-risk for cancellation. By engaging directly with these patients, the team was able to address concerns, provide support, and significantly reduce cancellation rates.

Targeted Interventions through Feature Importance

ExplainerAI’s clear insights into feature importance proved invaluable for the operational staff. Instead of relying on broad, one-size-fits-all approaches, the team could create targeted interventions based on the specific factors most likely to lead to cancellations. This data-driven approach resulted in more effective strategies and better resource allocation.

Rapid Stakeholder Buy-In

One of the most notable outcomes was how quickly the operational staff embraced the model. ExplainerAI’s transparency allowed them to understand not just the impact on patients, but also the reasoning behind the model’s decisions. This clarity fostered trust and encouraged active engagement with the AI system.

Empowering Staff with Performance Insights

The ability to see a breakdown of performance rates empowered the operational staff to fine-tune their own processes. They could adjust alert thresholds and flag rates based on real-world performance data, leading to a more refined and effective system over time.
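
One simple way to tune an alert threshold to a target flag rate, which is the kind of adjustment described above, is to pick the score cutoff that flags the desired fraction of patients. This is a generic sketch of that idea; Montefiore's actual tuning process may differ.

```python
def threshold_for_flag_rate(scores, target_rate):
    """Pick an alert threshold so roughly `target_rate` of patients are
    flagged, letting staff match alert volume to their capacity.
    (Illustrative helper, not ExplainerAI's actual tuning logic.)"""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * target_rate))
    return ranked[k - 1]
```

Staff can then compare the precision of alerts at each candidate flag rate against real outcomes and settle on the operating point that best uses their intervention capacity.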

Accelerated Adoption through Epic Integration

The seamless integration of ExplainerAI into Epic, the hospital’s existing EHR system, dramatically sped up the adoption process. New staff members could quickly learn and utilize the tool within their familiar work environment, significantly reducing the typical learning curve associated with new technology implementation.

The deployment at Montefiore demonstrated more than theoretical benefits: it delivered tangible improvements in patient care, operational efficiency, and staff empowerment. By making AI truly explainable and actionable, we’re not just predicting outcomes – we’re changing them for the better.

Conclusion

ExplainerAI is not just a tool; it’s a paradigm shift in healthcare AI. By making machine learning models transparent, actionable, and integrated into existing workflows, we’re empowering healthcare professionals to deliver more personalized, efficient, and ethical patient care – truly transforming healthcare one explanation at a time.