Healthcare Organizations (and Providers) Liable for Inaccurate Coding, Not Technology Vendors. Not Fair, but a Reality.

By Brian Murphy
Who will watch the vendor?
Will vendors be held accountable for inaccurate coding, for errors and the repayments that follow, when the machines they manufacture are doing more and more of the work?
Tesla has delivered self-driving cars, but these incredibly sophisticated machines are not without fault. In 2019, two pedestrians were killed when one of Tesla's automobiles failed to recognize them or the crosswalk they were navigating.
In October of last year, Tesla won the first U.S. trial over allegations that its Autopilot driver-assistance feature led to a death, a major victory for the automaker. It won an earlier trial by arguing that it tells drivers its technology requires human monitoring, despite the "Autopilot" and "Full Self-Driving" names.
The lesson? You still need your hands on the wheel.
But machines keep getting better, and manufacturers, eager to separate themselves from the competition, claim ever more for them. A simple Google search turns up coding products that describe themselves as fully autonomous, requiring no human intervention.
Who is accountable for their inevitable errors? The device or the operator?
For now, it’s you.
If an NLP solution prompts clarification on a borderline low sodium reading, the provider still must sign off on a hyponatremia diagnosis, which makes the provider and/or the healthcare organization liable for recoupments or false claims action.
But the act of prompting, again and again, is a steady drip that can't be ignored. It influences behavior. Even something as primitive as a drop-down menu that puts a desired code (a CC or MCC) at the top leads to a higher incidence of reporting.
As we further trust artificial intelligence to augment our work, it stands to reason that we will outsource thinking, and even decision-making, to machines.
It’s not just hospitals and providers at risk either. I read report after report of Medicare Advantage organizations getting hit with Office of Inspector General (OIG) audits for inaccurate coding, principally for reporting acute conditions in office settings when a “history of” code is warranted. Undoubtedly these organizations are using some version of NLP or other products that elevate or auto-suggest diagnoses based on documentation in the record. No one codes from books alone anymore.
Who is accountable?
Right now it’s still the operator. But is that fair?
We all need these tools to do the work, and as the symbiosis of human and machine deepens, it seems reasonable that liability be spread out as well.
We are seeing the formation of responsible AI frameworks in healthcare that may trickle down to the mid-revenue cycle. Take a look at your vendor contracts and see what they say about liability.
I'd love to hear your perspective on this issue, particularly from vendors. I welcome your comments and insights. Send me an email at brian.murphy@norwood.com