The Question Nobody in Healthcare AI Wants to Answer
Nobody wants to be the person holding the bag when AI in healthcare screws up.
Especially when the stakes involve patients, clinicians, and courtrooms.
I met a VP of Product who builds AI tools for clinicians. I asked him one question: when the AI gets it wrong, who owns it?
He looked at me, smiled, and said something I was not expecting.
“Everyone thinks they know the answer. Almost no one does.”
We talked for a while. I learned a lot. And I think it is worth sharing, because if you are building anything with AI in healthcare, this conversation will matter to you eventually.
Governance Comes Before the Model. Every Time.
“Where do you even start with this?” I asked.
“Governance,” he said. No hesitation. “Before the model. Before the roadmap. Before anything. You define who is responsible for every decision the AI touches.”
Not the model. Not the vendor. A human being with a name and a title. Someone who can walk into a room and explain exactly what happened. To a regulator, to a patient’s family, to a judge. Without using the phrase “the algorithm decided.”
“Because that phrase?” he said. “It doesn’t hold up anywhere.”
The Liability Problem Nobody Talks About Openly
“So what happens when a vendor’s AI gets something wrong?” I asked.
“Most vendors disclaim liability in their contracts,” he said. “Aggressively.”
Which means the hospital holds the bag. I sat with that for a second.
“And the clinician who followed the AI recommendation?”
“Potentially liable.”
“And the one who ignored it?”
“Also potentially liable.”
“So everyone is exposed.”
“Welcome to healthcare AI in 2026,” he said.
This is not a theoretical problem. It is, in fact, the default state for most teams who have not thought carefully about governance before they ship.
How Accountable Teams Build Differently
“How do the good teams handle this?” I asked.
“They build like accountability is a product requirement,” he said. “Not a legal team problem. Not an afterthought. A core design decision from day one.”
He walked me through four things those teams do differently.
First, they establish a hard policy on where humans must be in the loop. Not “ideally.” A non-negotiable checkpoint: a licensed professional who reviews, approves, and signs off before the AI’s output influences a real decision. “In Illinois,” he told me, “this is not even optional anymore. The law requires a licensed human in the loop for AI used in therapy. The guardrail became legislation.”
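He didn’t show me code, but the shape of that checkpoint is easy to sketch. Here is a minimal illustration in Python; every name in it (Recommendation, SignOff, release_to_workflow) is my own invention, not his system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Recommendation:
    model_version: str
    patient_ref: str        # opaque reference, never raw PHI
    suggestion: str
    created_at: datetime

@dataclass
class SignOff:
    reviewer_id: str        # the licensed professional, by name
    license_number: str
    approved: bool
    note: str
    signed_at: datetime

def release_to_workflow(rec: Recommendation, signoff: SignOff | None) -> bool:
    """Hard gate: no AI output reaches a real decision without a sign-off."""
    if signoff is None or not signoff.approved:
        return False        # output stays quarantined until a human approves
    return True
```

The point of the sketch is the shape, not the fields: the gate is a function the workflow cannot route around, and the sign-off carries a person’s name and license, not just a boolean.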
Second, they document every time a human overrules the model. “That sounds tedious,” I said. “It’s how you catch model drift before it becomes a crisis,” he replied. If the AI is quietly getting worse over time and nobody tracks the overrides, you find out the hard way.
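Again, a minimal sketch of what “document every override” might look like in practice, with hypothetical names and an append-only JSONL file standing in for whatever store a real team would use:

```python
import json
from datetime import datetime, timezone

def log_override(model_version: str, rec_id: str, reviewer_id: str,
                 reason: str, path: str = "overrides.jsonl") -> None:
    """Append-only record of every time a human overrules the model."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "recommendation_id": rec_id,
        "reviewer_id": reviewer_id,
        "reason": reason,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def override_rate(total_recommendations: int,
                  path: str = "overrides.jsonl") -> float:
    """Crude drift signal: if this number climbs week over week, investigate."""
    with open(path) as f:
        overrides = sum(1 for _ in f)
    return overrides / max(total_recommendations, 1)
```

A rising override rate doesn’t prove drift, but it is exactly the kind of early warning that never exists if nobody writes the overrides down.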
Third, they build an audit log that could survive a courtroom. Every input, every output, every downstream action, timestamped, traceable, and complete. “Not because we expect to get sued,” he said. “Because if we ever are, the audit log is the difference between a bad week and the end of the company.”
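He didn’t describe a format, but one common way to build a log that holds up to scrutiny is to make it append-only and tamper-evident, with each record chained to the previous one by hash. A sketch of that idea, not his implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(event: dict, prev_hash: str,
                       path: str = "audit.jsonl") -> str:
    """Append one event (input, output, or downstream action) to a
    hash-chained log, so any edit or deletion breaks the chain visibly."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True)
    this_hash = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({**record, "hash": this_hash}) + "\n")
    return this_hash  # feed into the next call to continue the chain
```

Seed the chain with a fixed string, carry the returned hash forward, and anyone who later deletes or alters a record leaves a visible break.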
Fourth, they treat the model like a new hire on probation. “You wouldn’t give an unsupervised junior employee access to your most critical patient decisions,” he said. “Then why would you do it with a model?”
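One concrete version of that probation period is shadow deployment: the model sees real cases, but its output is only recorded and compared, never acted on, until it has earned trust. A sketch under that assumption; ShadowDeployment and model.predict are placeholders, not anything he named:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowDeployment:
    """Probation pattern: the model runs on every case, but its output is
    only logged and compared; a human still makes the actual decision."""
    model: object
    log: list = field(default_factory=list)

    def review(self, case_id: str, features: dict, human_decision: str) -> str:
        model_decision = self.model.predict(features)  # advisory only
        self.log.append({
            "case": case_id,
            "model": model_decision,
            "human": human_decision,
            "agree": model_decision == human_decision,
        })
        return human_decision  # the human's call is the one that counts

    def agreement_rate(self) -> float | None:
        """How often does the model match the supervising human?"""
        if not self.log:
            return None
        return sum(r["agree"] for r in self.log) / len(self.log)
```

Just like a new hire, the model gets its work checked on every case, and the agreement rate is its performance review.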
The Mistake Most Product Teams Make
“What’s the thing most product teams get wrong?” I asked.
He didn’t have to think about it.
“They treat governance like a checkbox. Something you do at the end. Something the compliance team handles. Something that lives in a document nobody reads.”
And then, inevitably, the model surfaces a bad recommendation. A clinician acts on it. Suddenly everyone is looking at each other trying to figure out whose problem it is.
“And the answer is?” I asked.
“Everyone who didn’t build the governance structure to prevent it.”
The Upside Is Real. So Is the Risk.
“Does the tech still excite you after all this?” I asked.
“More than ever,” he said. “Clinicians are catching things earlier. Patterns that took years to spot are surfacing in seconds. Kids with complex conditions are getting better, more consistent care.”
The upside, in other words, is very real. But so is the risk. The teams doing this right hold both at the same time. They don’t let excitement about the potential make them sloppy about the downside.
I asked him how he’d sum it all up.
“Most teams ask: can we ship this?” he said. “The good ones ask: can we defend this? There’s a big difference.”
Everything I Took Away From That Conversation
- Governance is not the last thing you think about; it is the first
- AI vendors disclaim liability; the deploying organization holds the bag
- Clinicians are exposed whether they follow the AI or ignore it. There is no safe default
- “The algorithm decided” is not a sentence that survives a courtroom
- Hard rules about human review are not a constraint on AI. They are the design
- Document every override; that is how you catch drift before it becomes a headline
- Build your audit log like it will face legal scrutiny, because it might
- Treat the model like a new hire: watch it, give it feedback, don’t leave it unsupervised
- The best product teams in healthcare hold excitement and responsibility simultaneously
- “Can we ship this?” is the wrong question. “Can we defend this?” is the right one
Healthcare AI is genuinely changing lives. But the people doing it right are not just moving fast. They are building carefully, with someone’s name on every decision and a paper trail that could hold up anywhere.
And that is not slowing them down. That is what makes the whole thing work.
