Healthcare AI Blog: AI and ML in Healthcare

A view from the trenches: The top 5 AI adoption trends in healthcare for 2026

Written by James Green | Jan 9, 2026 3:09:33 PM

For the last couple of years at Cognome, I feel like I have been watching everyone's AI strategy from the inside out - like watching the engine bay of a car rather than seeing it speeding around the track. This is because the majority of our work happens within a healthcare system's data "trenches". We've seen what internal systems look like and watched as organizations have begun to leverage AI - sometimes with great success, and sometimes not. And it's with this lens that we've put together our list of the Top 5 observations on the current and near-term state of AI adoption. Given the dizzying pace of AI adoption, we're going to update this quarterly.

1. AI becomes universal - even for those without a strategy

By the end of 2026, the vast majority of health systems will have meaningful GenAI use cases, whether they planned them or not. The only question is whether those use cases sit in the governed portfolio or the "Shadow AI" one, as clinical and administrative staff use private accounts to get work done. The "strategy gap" won't prevent adoption; it will just shift adoption into higher-risk channels. And even for organizations with AI observability capabilities, Shadow AI will remain a real force to be reckoned with.

So what: Assume GenAI is already in your organization; the risk is unmanaged adoption. Invest now in sanctioned tools, guardrails, governance, and detection of Shadow AI.
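
If you want a feel for what Shadow AI detection can look like at its simplest, here is a sketch that counts employee traffic to consumer GenAI sites from web-proxy logs. To be clear, this is an illustration, not our product: the domain list is a tiny sample, and the log format (a CSV with user and host columns) is an assumption.

```python
import csv

# Illustrative sample only; a real program maintains a curated,
# continuously updated inventory of GenAI endpoints.
CONSUMER_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(proxy_log_path: str) -> dict[str, int]:
    """Count requests per user to consumer GenAI domains, given a
    web-proxy log exported as CSV with 'user' and 'host' columns
    (the format here is assumed for illustration)."""
    counts: dict[str, int] = {}
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in CONSUMER_AI_DOMAINS:
                counts[row["user"]] = counts.get(row["user"], 0) + 1
    return counts
```

Even something this crude tends to reveal that adoption is already well underway; real programs layer on DNS telemetry, CASB data, and a far larger domain inventory.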

2. Governance shifts from committees to instrumentation

Most health systems that have started their AI journey and established a best practices-based committee structure will stop debating principles, finish writing their policies, and start measuring and monitoring. Everything will be monitored, including: hallucinations (yes, this is possible!), model drift and performance, prompt and output logging, PHI egress detection, model/version provenance, human-in-the-loop proof, and other features that support responsible AI use. "If it isn't observable, it isn't governable" becomes the operating doctrine, especially as cyber pressure rises.
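
To make "instrumented governance" concrete, here is a minimal sketch of wrapping a single model call so it leaves an audit trail. Everything in it is illustrative rather than a description of ExplainerAI™ or any vendor's API: call_model stands in for whatever LLM client you use, and the PHI regexes are a tiny sample of what real egress detection requires.

```python
import json
import re
import uuid
from datetime import datetime, timezone

# Illustrative patterns only; real PHI egress detection needs far
# broader coverage (names, addresses, MRN formats vary by system).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def flag_phi(text: str) -> list[str]:
    """Return the names of any PHI patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

def governed_call(call_model, prompt: str, model_id: str, user: str) -> str:
    """Wrap a model call with the audit trail governance needs: who
    asked what, which model/version answered, and whether PHI appeared
    on either side of the exchange."""
    output = call_model(prompt)  # your actual LLM client goes here
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model_id": model_id,               # model/version provenance
        "prompt": prompt,                   # prompt + output logging
        "output": output,
        "phi_in_prompt": flag_phi(prompt),  # PHI egress detection
        "phi_in_output": flag_phi(output),
        "human_reviewed": False,            # human-in-the-loop proof, set on sign-off
    }
    print(json.dumps(record))  # ship to your log pipeline instead
    return output
```

The specific fields matter less than the principle: every call produces a record that governance can query, which is what turns policy documents into practice.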

So what: Governance that isn’t instrumented won’t scale. Prioritize observability, audit trails, and continuous monitoring over more committees and policy documents.

[Note: we have a dog in this fight - ExplainerAI™ adoption continues to grow.]

3. Data reality check: your EHR isn't the source of truth, and neither is your data warehouse!

Teams will discover that a patient's story diverges across EHR, claims, imaging, labs, patient-reported data, CRM, and rev-cycle tools, and then learn that "just stitch it together" fails without normalization and provenance. Most will still apply band-aids rather than build a true common data model for the training and operational data feeding their AI solutions. Those band-aids and stopgap solutions will create downstream AI weirdness: inaccuracies, unintended consequences, governance problems and, possibly, compliance and cybersecurity headaches.
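
As a toy example of what "normalization and provenance" buys you, consider two feeds reporting the same glucose lab with different field names and units. The schema, field names, and mappings below are invented for illustration; a real common data model is far richer.

```python
from dataclasses import dataclass
from datetime import date

# A toy common-data-model record: every normalized fact carries
# provenance so downstream AI can be audited back to its source.
@dataclass
class Observation:
    patient_id: str
    concept: str        # normalized code, e.g. LOINC for labs
    value: float
    unit: str
    source_system: str  # provenance: where this fact came from
    source_field: str
    recorded: date

def normalize_ehr_glucose(row: dict) -> Observation:
    """The EHR reports glucose in mg/dL under a local field name."""
    return Observation(
        patient_id=row["mrn"],
        concept="LOINC:2345-7",  # glucose, serum/plasma
        value=float(row["glu"]),
        unit="mg/dL",
        source_system="ehr",
        source_field="glu",
        recorded=date.fromisoformat(row["drawn"]),
    )

def normalize_lab_feed_glucose(row: dict) -> Observation:
    """The reference lab sends mmol/L; convert so models see one unit."""
    return Observation(
        patient_id=row["patient"],
        concept="LOINC:2345-7",
        value=round(float(row["result"]) * 18.016, 1),  # mmol/L -> mg/dL
        unit="mg/dL",
        source_system="lab_feed",
        source_field="result",
        recorded=date.fromisoformat(row["collected"]),
    )
```

Without the unit conversion, a model sees a 5.5 and a 99 as wildly different results for the same patient; without the source_system and source_field tags, nobody can trace the discrepancy back to where it entered.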

[Note: we do data transforms as well which is why we are so aware of this failing.]

So what: If your data isn’t normalized and traceable, your AI will be confidently inconsistent. Fix provenance and standardization before you scale high-stakes use cases.

4. Your data or my data? Multi-tenant hosting wins deals and accretes value - but at what cost to the health system?

Most leaders understand that keeping their data local is safer and more secure. However, the convenience of AI-as-a-Service solutions, including vendor-hosted copilots, will continue driving PHI into shared environments - expanding the blast radius when things go wrong. This concern applies to Big Tech as much as, if not more than, it does to AI startups. A major vendor compromise affecting a large number of health systems is no longer hypothetical; the Oracle/Cerner incident unfolding as I type this is the kind of template that will stay on repeat in 2026.

So what: Convenience-driven centralization increases breach blast radius. Demand clear isolation, contractual controls, and exit/incident terms before moving sensitive workloads to multi-tenant AI.

5. Data rights, consent & secondary use become increasingly contentious in contract negotiations

As vendors expand network datasets and cross-tenant learning, scrutiny will shift from “how good is the model?” to “who has the right to use this data — and for what?”

Expect 2026 to bring sharper debate (and tougher contract language) around consent scope, de-identification claims, retention, audit rights, and whether "operational use" quietly became "product improvement" at scale. Legal departments won't be arguing edge cases; they'll be defining new red lines.

So what: The biggest AI risk may be contractual, not technical. Lock down consent scope, de-identification standards, reuse rights, and auditability before your data becomes someone else’s product.

I hope you found these observations informative and worthy of your time. One of our New Year's resolutions is to produce regular newsletters like this, and we look forward to sharing more thoughts and concepts throughout the year ahead. Click on the link if you want to learn more about how Cognome works with health systems to deliver Smarter, Safer and Secure AI for healthcare, or email us at info@Cognome.com.