
Cognome and Intel Partner to Bring Clinically Validated AI to Healthcare at Scale

Come see us at the Hewlett Packard Enterprise booth 12138 at HIMSS26, March 9–12 in Las Vegas. We will be demoing a live deployment that's already transforming value-based care for a health system spanning 50+ hospitals.

I've been in technology for a long time in a lot of different contexts (ad tech, media, gaming, healthcare). I've seen a lot of partnership announcements that amount to a logo swap and a press release. This one is different and I'm excited!

Cognome has officially partnered with Intel. We've spent months optimizing our AI ensemble (the models that power AutoChart and ExplainerAI) specifically for Intel's Xeon 6 processor architecture. The results are real, benchmarked, and in some configurations they beat an NVIDIA A100 GPU. I'll get to the numbers in a moment.


Why Intel, and Why Now

Healthcare AI has a deployment problem. Most health systems don't want their patient data leaving their walls, and they shouldn't have to. Cognome was built from day one on the principle that intelligence comes to the data, not the other way around. Everything we do runs in the hospital's environment. No PHI ever leaves.

Intel's Xeon 6 platform fits that model perfectly. It's infrastructure that health systems already have or can deploy on-premise. And with our optimization work, it's now capable of running our full LLM ensemble at production scale. You don't need a GPU farm to get Cognome-grade AI. That's a big deal for accessibility, for budget, and for the vast majority of health systems that aren't sitting on cutting-edge GPU infrastructure.


The Performance Numbers

We did rigorous benchmarking across multiple models and configurations. Three results stand out:

  • Llama 3.1 8B (2 nodes): Xeon 6 hit 94.0 tokens/second vs. 72.0 on an A100 GPU, or 130% of A100 performance.
  • Qwen3 14B (2 nodes): Xeon 6 achieved 103.8% of A100 performance with comparable total runtime.
  • Qwen3 32B (2 nodes): Xeon 6 hit 31 tokens/second, 54.6% of A100 performance.

These aren't edge cases; they're production-relevant model sizes for clinical AI workloads. We've found that an ensemble built from models of these sizes delivers 99.8% accuracy in our AutoChart model. What that means in practice: health systems can run our AI models on Intel Xeon infrastructure they already own or can readily procure, without sacrificing the performance needed for real-time clinical applications. That changes the economics of healthcare AI deployment significantly.
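The relative-performance percentages quoted in the benchmarks are straightforward throughput ratios. A minimal sketch of that arithmetic (the helper name `relative_performance` is illustrative, not part of any benchmarking tool):

```python
def relative_performance(xeon_tps: float, a100_tps: float) -> float:
    """Return Xeon 6 throughput as a percentage of A100 throughput."""
    return round(xeon_tps / a100_tps * 100, 1)

# Llama 3.1 8B on 2 nodes: 94.0 tokens/sec on Xeon 6 vs. 72.0 on an A100.
print(relative_performance(94.0, 72.0))  # → 130.6
```

The same ratio underlies the other figures; for example, 54.6% for Qwen3 32B implies the A100 baseline was roughly 57 tokens/second.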


What We're Actually Demoing: From Retrospective to Prospective Care

Benchmarks are great. Live deployments are better. At HIMSS, we'll be showing a real application already running at a health system that serves a network of more than 50 hospitals.

The Problem

This health system was participating in a value-based care program that required ongoing quality measurement and reporting. Before Cognome, they could only do historical reporting: looking back at what happened, and only once a year, which made it incredibly hard to improve performance. They couldn't see what was happening right now, and they certainly couldn't act in time to improve outcomes for the current measurement period.

The Solution

Our solution connects directly to their Epic EHR system, ingests and normalizes clinical data into a structured common data model, and then layers AutoChart and ExplainerAI on top. AutoChart surfaces the AI-driven insights; ExplainerAI makes every recommendation transparent and auditable, so you can see exactly why the model flagged a patient or a care gap.
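The ingest → normalize → flag flow described above can be sketched in a few lines. This is a hypothetical illustration only: the names (`CommonDataRecord`, `normalize`, `flag_care_gaps`) and the shape of the raw payload are assumptions for the example, not Cognome's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class CommonDataRecord:
    """Simplified stand-in for a structured common data model record."""
    patient_id: str
    encounter_type: str
    codes: list = field(default_factory=list)  # e.g. quality-measure codes

def normalize(raw: dict) -> CommonDataRecord:
    """Map a raw EHR payload into the common data model."""
    return CommonDataRecord(
        patient_id=raw["pid"],
        encounter_type=raw.get("type", "unknown"),
        codes=raw.get("codes", []),
    )

def flag_care_gaps(record: CommonDataRecord, required: set) -> list:
    """Return quality-measure codes the patient's record is missing."""
    return sorted(required - set(record.codes))

raw = {"pid": "p-001", "type": "outpatient", "codes": ["A1C"]}
rec = normalize(raw)
print(flag_care_gaps(rec, {"A1C", "BP", "EYE_EXAM"}))  # → ['BP', 'EYE_EXAM']
```

In production the "why was this flagged" question is answered by ExplainerAI; the point of the sketch is only the pipeline shape: raw EHR data in, normalized records in the middle, auditable flags out.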

The Result

The health system can now see where they stand against quality benchmarks in real time, identify patients who need intervention before the measurement period closes, and actually move the needle on program performance. That's the shift from "here's what happened" to "here's what you can still do." For value-based care, that's everything.


Find Us at HIMSS26 — HPE Booth #12138

We're thrilled to be joining Intel's partner Hewlett Packard Enterprise at HIMSS26 in Las Vegas, March 9–12. You'll find us at HPE booth #12138, running live demos of this value-based care application on Intel Xeon 6 infrastructure.

If you're attending HIMSS and working on value-based care, quality improvement, or AI governance in healthcare, come find us. We'd love to show you what this looks like in practice, not just on slides.

This partnership with Intel is about making clinical AI genuinely deployable: in your environment, on infrastructure you control, with results you can explain and defend. That's what we've been building at Cognome since day one. We're just getting started.


About Cognome

Cognome is a healthcare AI company built on 15+ years of R&D from leading academic medical centers. Our platform — including AutoChart and ExplainerAI — runs entirely within the client's environment, so patient data never leaves. We specialize in turning complex clinical data into actionable, explainable AI for health systems navigating value-based care, quality improvement, and AI governance. Learn more at cognome.com.