The most commonly held thesis in healthcare investing is that the system is bloated. Too much admin, too many phone calls, faxes, coding errors, paperwork. Twenty-five cents of every healthcare dollar is waste. Just automate it.
That thesis is right. Healthcare is fat. But it’s a different kind of fat than most people think.
Subcutaneous fat is the kind you can see. Think of belly fat. It collects in obvious places, separate from vital organs. You can point at it, measure it and remove it. That’s the mental model most people bring to healthcare waste. Identify the bloat, deploy AI, cut costs. Liposuction.
Healthcare waste isn’t subcutaneous fat. It’s visceral fat, or, simply put, organ fat. It’s distributed across every critical function in the system, wrapped around the things that keep patients alive and payments flowing. Each piece of waste is woven into workflows that are complex, sequential, conditional, and full of embedded human judgment. It’s not sitting in a pocket waiting to be suctioned out. It’s attached to something that matters.
This sounds like a semantic distinction. It isn’t. It changes what you build, how you deploy it and whether it works.
I’ll give you an example I lived through.
At Circle Medical, a company I co-founded that grew to $114M in revenue run rate on $12M in capital, one of the most obvious automation targets was medical billing coding. This is where clinical encounters get translated into billing codes that determine provider reimbursement. It’s repetitive, error-prone, time-consuming, and expensive. A textbook case for AI.
So you build an AI coder. Except here’s what you find when you start pulling the thread.
Accurate billing codes require high-quality provider notes. If the note is incomplete, the code is wrong. So before you can automate coding, you need to ensure note quality upstream. You start building AI-assisted note quality checks, first for completeness, then for clinical accuracy.
But not every gap in a note is an error. In telemedicine, a provider can’t perform a physical exam, so certain fields are legitimately empty. A patient who didn’t take their measurements or follow instructions creates a note that looks incomplete but accurately reflects what happened. Now your AI needs to distinguish between genuine omissions and legitimate absences. That’s not a data problem. It’s a clinical judgment problem embedded in a product design problem.
You push forward. You build the coding model. To deploy it safely you need benchmarks and evaluation infrastructure with a human in the loop. You need to know whether the model is coding accurately.
This is where it gets uncomfortable.
Your ground truth isn’t clean. Different teams coding the same encounters produce different results. At scale, it becomes hard to determine what the “correct” code even is, because coding involves interpretation. There are conditionals around time-based and complexity-based billing where reasonable professionals disagree.
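You can put a number on that disagreement with a standard inter-rater agreement statistic. A minimal sketch, assuming two coding teams have each coded the same encounters (the codes below are illustrative, not real data):

```python
from collections import Counter

def cohens_kappa(codes_a: list[str], codes_b: list[str]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: probability both raters pick the same code at
    # random, given each rater's own code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two teams coding the same five E/M encounters (made-up example).
team_1 = ["99213", "99214", "99213", "99215", "99214"]
team_2 = ["99213", "99213", "99213", "99215", "99214"]
print(round(cohens_kappa(team_1, team_2), 2))  # prints 0.69
```

If your own coding teams only agree with each other at, say, kappa 0.7, that number is a ceiling on how precisely you can score any model against that ground truth.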
Then you discover product issues feeding into the noise, like system defaults being inconsistently overridden. Fixing those defaults adds manual steps for providers (more work, not less) but is necessary for accuracy and compliance.
So you have a model that codes faster. But your benchmarks are uncertain, your ground truth has noise from human variation and product artifacts, and early analysis suggests the model’s outputs differ from existing patterns in ways you can’t fully explain. Is that because the model is wrong? Or because existing patterns had errors the model is now exposing? Without a trustworthy baseline, you can’t confidently answer that.
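Without a trustworthy baseline, the only honest move is triage, not trusting either side. A hedged sketch of that routing logic (function and bucket names are mine, not a real pipeline):

```python
from collections import Counter

def triage(model_code: str, historical_code: str, humans_disagreed: bool) -> str:
    """Route a model/baseline divergence for human review.

    `humans_disagreed` means independent coders also produced different
    codes for this encounter, i.e. the ground truth itself is contested.
    """
    if model_code == historical_code:
        return "agree"
    if humans_disagreed:
        # Contested ground truth: this encounter can't score the model.
        return "exclude_from_benchmark"
    # Humans agreed but the model differs: either a model error or a
    # historical error the model is exposing. Only adjudication can tell.
    return "send_to_adjudication"

# Illustrative batch: (model code, historical code, did coders disagree?)
batch = [
    ("99213", "99213", False),
    ("99214", "99213", True),
    ("99215", "99214", False),
]
print(Counter(triage(m, h, d) for m, h, d in batch))
```

The adjudication bucket is where the real cost lives: every encounter in it needs a clinician-level review before you can even say whether the model helped.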
That’s organ fat.
Every one of those layers (note quality, clinical judgment, product defaults, benchmark infrastructure, ground truth reliability) is real work. None of it is visible from the outside. From the outside, billing coding looks like one problem: automate it. From the inside, it’s six interlocking problems, each wrapped around clinical, legal or operational functions you can’t disrupt carelessly.
Billing coding is one example. Multiply it across charting, referrals, scheduling, insurance verification, prescription management, prior authorizations, RCM and you see the full picture.
I don’t think any of this means healthcare waste is unsolvable. The opposite, actually. AI, and particularly agentic systems with the right data, context, tools and permissions, is uniquely capable of navigating this kind of complexity. But deploying it requires an aligned foundation, clear scope, precise instruction, careful monitoring and rigorous measurement.
The technology isn’t the bottleneck. The diagnosis is.
Before you pick up the scalpel, you need imaging. You need to understand what the fat is attached to: the dependencies, the conditional logic, the upstream gaps, the judgment embedded in what looks like rote process. The thesis is right. The fat is real. The part that’s easy to miss is that you can’t just cut it out.
Healthcare’s problem is organ fat. Start with the imaging.
Brent LaRue is a 2x Y Combinator founder and co-founder of Circle Medical (YCS15), which grew to $114M revenue run rate on $12M in capital serving 400,000 patients. He’s currently exploring & building at the intersection of healthcare and AI from Zurich.