Last quarter, I watched a health plan celebrate capturing $30 million through retrospective risk adjustment. The CEO praised the team. The board approved bonuses. Everyone felt great. Then someone asked the $50 million question nobody wanted to hear: “What about the codes we didn’t find?”
That silence? That’s the sound of money left on the table. Because for every health plan celebrating what they found, there’s an invisible ledger of what they missed. And based on industry benchmarks, it’s usually bigger than what they captured.
The Denominator Problem
We’ve gotten really good at measuring what we find through retrospective risk adjustment. Charts reviewed: 10,000. HCCs identified: 5,000. Revenue captured: $30 million. Success! Except that’s only measuring the numerator. What’s the denominator? How many HCCs actually existed in those charts? Nobody knows.
This isn’t a small gap. When plans conduct deep-dive audits using multiple vendors and advanced technology, they consistently find 30-50% more HCCs than their standard retrospective review captured. Think about that. Your successful program that found $30 million missed another $15-25 million. Every. Single. Year.
The denominator problem exists because we measure activity, not coverage. We track how many charts we reviewed, not what percentage of opportunity we captured. We celebrate finding codes without knowing how many we missed. It’s like measuring sales without knowing market size. The metric becomes meaningless.
I consulted for a plan that discovered this reality the hard way. They’d been reporting 95% coding accuracy for three years. Then a new analytics vendor revealed they were only capturing 60% of legitimate HCCs. The accuracy was real, on the codes they found. But they weren’t finding 40% of the codes that existed. Accurate but incomplete is still failure.
The Invisible Patterns
The codes we systematically miss follow patterns. Once you know these patterns, the invisible becomes obvious. But most organizations never look for patterns in what they didn’t find. How could they? You can’t analyze what you don’t know exists.
Specialty spillover represents the biggest miss category. When endocrinologists document diabetic complications, but your retrospective review focuses on primary care, those HCCs vanish. When nephrologists identify chronic kidney disease progression, but you’re not reviewing specialist notes systematically, that revenue disappears. The missing codes aren’t hidden; they’re in places you’re not looking.
Time-decay blindness creates another massive gap. Conditions documented early in the year fade from review focus by December. Your retrospective process might catch fourth-quarter diagnoses while missing first-quarter conditions that weren’t re-documented. The $10,000 HCC from January gets forgotten in the December rush.
Cross-venue gaps multiply the problem. Hospital admissions generate incredibly rich documentation for risk adjustment. But if your retrospective review focuses on outpatient records, you’re missing the most valuable clinical documentation available. Each admission typically contains 3-5 HCCs. Missing those is like ignoring hundred-dollar bills on the sidewalk.
The Sampling Delusion
Here’s the uncomfortable truth about sampling strategies: they’re designed to make you feel good about incomplete reviews. “We reviewed a statistically significant sample” sounds rigorous. But statistical significance doesn’t equal comprehensive capture.
When you sample 20% of charts, you’re not finding 20% of all HCCs. You’re finding 100% of HCCs in 20% of charts and 0% in the rest. The conditions in those unreviewed charts don’t statistically distribute to reviewed ones. They’re simply lost. Forever.
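The arithmetic here is simple enough to sketch. A minimal illustration, using assumed numbers (the chart count and HCC density below are made up for the example, not industry benchmarks):

```python
# Hypothetical illustration: expected HCC capture under chart sampling.
# All inputs are assumptions for the example, not benchmarks.
total_charts = 10_000
avg_hccs_per_chart = 0.5   # assumed average HCC density
sample_rate = 0.20         # review 20% of charts

total_hccs = total_charts * avg_hccs_per_chart
captured = total_hccs * sample_rate   # found only in reviewed charts
lost = total_hccs - captured          # in unreviewed charts: never found

print(f"HCCs in population: {total_hccs:.0f}")
print(f"Captured via 20% sample: {captured:.0f}")
print(f"Lost in unreviewed charts: {lost:.0f}")
```

Under these assumptions, a 20% sample captures 1,000 of 5,000 HCCs and permanently forfeits the other 4,000, no matter how accurate the reviewers are on the charts they see.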
The economics of sampling made sense when manual review was the only option. Reviewing everything was impossible, so we sampled and extrapolated. But technology has eliminated the economic constraint. AI can review 100% of charts for less than the cost of manually reviewing 20%. Yet we stick with sampling because that’s what we’ve always done.
Even worse, our samples are biased toward easy wins. We review high-dollar members, recent encounters, and complete documentation. We systematically avoid complex cases, old encounters, and fragmented records: exactly where the missed opportunities concentrate. Our sampling strategy guarantees we miss the hardest-to-find value.
The Measurement Revolution
Leading organizations have stopped measuring what they found and started measuring what they’re missing. This requires a completely different analytical approach, but the insights transform program performance.
Start with coverage analytics. What percentage of your members had ANY retrospective review? What percentage of encounters got reviewed? What percentage of providers had their documentation analyzed? These coverage metrics reveal gaps that activity metrics hide.
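As a concrete starting point, coverage metrics can be computed from encounter-level data. A minimal sketch, assuming a simple record with member, provider, and a reviewed flag (these field names are illustrative, not a standard schema):

```python
# Sketch of coverage analytics over encounter records.
# Field names (member_id, provider_id, reviewed) are illustrative.
from dataclasses import dataclass

@dataclass
class Encounter:
    member_id: str
    provider_id: str
    reviewed: bool

def coverage_metrics(encounters):
    """Return the share of members, encounters, and providers
    touched by ANY retrospective review."""
    members = {e.member_id for e in encounters}
    reviewed_members = {e.member_id for e in encounters if e.reviewed}
    providers = {e.provider_id for e in encounters}
    reviewed_providers = {e.provider_id for e in encounters if e.reviewed}
    return {
        "member_coverage": len(reviewed_members) / len(members),
        "encounter_coverage": sum(e.reviewed for e in encounters) / len(encounters),
        "provider_coverage": len(reviewed_providers) / len(providers),
    }
```

The point of metrics like these is what they expose: a program can review 10,000 charts and still have 40% of members with zero review activity.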
Implement redundant discovery processes. Run different vendors on the same population. Use multiple AI engines on identical charts. Compare outputs to identify what each approach misses. The discrepancies reveal systematic blind spots in your primary process.
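Comparing two independent review passes is ultimately a set-difference exercise. A hedged sketch, assuming each pass produces a mapping from chart to the HCC codes it found (the identifiers below are made up):

```python
# Compare HCC findings from two independent review passes on the
# same chart set. Chart IDs and HCC codes here are illustrative.
def blind_spots(findings_a, findings_b):
    """findings_*: dict mapping chart_id -> set of HCC codes found.
    Returns what each reviewer missed that the other caught."""
    charts = set(findings_a) | set(findings_b)
    missed_by_a, missed_by_b = {}, {}
    for chart in charts:
        a = findings_a.get(chart, set())
        b = findings_b.get(chart, set())
        if b - a:
            missed_by_a[chart] = b - a  # B found these, A did not
        if a - b:
            missed_by_b[chart] = a - b  # A found these, B did not
    return missed_by_a, missed_by_b
```

Neither output is "the truth"; the value is in the asymmetry. If vendor A consistently misses codes that vendor B surfaces in specialist notes, you have located a systematic blind spot, not a random error.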
Create miss-pattern databases. When you do find previously missed codes, analyze why they were missed. Which specialties generate misses? Which time periods? Which documentation types? Build systematic understanding of your blind spots.
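A miss-pattern database can start as nothing more than a tally across a few dimensions. A minimal sketch, assuming each recovered miss has been tagged with the attributes below (the tag names are illustrative choices, not a required taxonomy):

```python
# Tally recovered misses by specialty, time period, and documentation
# type. The dimension names are illustrative assumptions.
from collections import Counter

def miss_patterns(misses):
    """misses: iterable of dicts like
    {"specialty": "nephrology", "quarter": "Q1", "doc_type": "inpatient"}.
    Returns one frequency count per dimension."""
    return {
        dim: Counter(m[dim] for m in misses)
        for dim in ("specialty", "quarter", "doc_type")
    }
```

Even this crude tally answers the questions above directly: which specialties, which quarters, and which documentation types concentrate your misses.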
The reconciliation exercise is painful but necessary. Take 100 charts you've already reviewed. Have a completely different team, vendor, or technology review them again with no knowledge of prior findings. Compare results. The delta between first and second review? That's your miss rate. And it's only a floor, because codes both reviews missed stay invisible.
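The delta computation itself is a one-liner once findings are normalized. A sketch, assuming each review pass is reduced to a set of (chart, HCC code) pairs:

```python
# Miss-rate floor from the dual-review reconciliation exercise.
# Each pass is assumed to yield a set of (chart_id, hcc_code) pairs.
def miss_rate(first_pass, second_pass):
    """Fraction of all distinct findings that the first pass missed.

    This is a floor on the true miss rate: codes missed by BOTH
    passes never appear in either set.
    """
    all_findings = first_pass | second_pass
    missed = all_findings - first_pass
    return len(missed) / len(all_findings) if all_findings else 0.0
```

If the blind second pass surfaces one new finding for every two the first pass caught, your first-pass process is missing at least a third of what exists.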
The Incremental Revolution
You don’t need to solve the entire denominator problem immediately. Start by acknowledging it exists. Stop celebrating incomplete success. Start questioning what you’re not finding.
Pick one specialty, maybe cardiology, and do a deep dive. Review every cardiology note for your Medicare population. Compare what you find to what your standard process caught. The gap will shock you. Now multiply that across all specialties.
Or choose one month and review it exhaustively. Every encounter, every provider, every venue. Compare that to your sampled approach for the same period. The difference represents millions in missed revenue that your current process will never find.
The organizations that acknowledge and address the denominator problem don’t just capture more revenue. They build systematic capabilities that compound over time. They stop measuring success by what they found and start measuring it by what remains to find. Because the real $50 million question isn’t “What did we capture?” It’s “What are we still missing?”