
By Brian Murphy
A recent study blaming upcoding for the rise in high-acuity DRG reporting has me thinking about data more broadly—and its limitations.
A few weeks ago Health Affairs recapped a study in which researchers, using all-payer discharge-level data from five states, claimed that “upcoding” was the cause of a 41% increase in discharges with the highest MS-DRG coding intensity.
Upcoding. Not CDI efforts, not standardization of organizational diagnostic definitions, not the use of assistive technology. It was laid at the feet of upcoding, and the fraudulent or at least questionable practice this term implies.
Well-intentioned though it may (or may not) be, this study misses some important context.
Data does not and cannot tell the story.
I like data. We need data. We make decisions based on data. My scale offers important data that I track and use to adjust my diet.
But data is by definition a discrete piece of evidence, and it does not always tell the truth. It requires human interpretation to make sense of it.
A high BMI is a piece of data … until you see the jacked bodybuilder it belongs to, and your assumption of obesity crumbles.
Football teams can and often do win despite getting outgained in yardage and first downs.
In 2018 a data analytics firm called IntegraMed filed lawsuits against two large healthcare organizations, Providence Health and Baylor Scott and White. Using publicly reported claims data, IntegraMed claimed these organizations’ higher-acuity diagnoses and higher reimbursement relative to their peers were clear evidence of fraud. Their case was built almost entirely on data.
Thankfully, IntegraMed lost. In his dismissal, a judge said that educating physicians and CDI professionals to document in a way that allows accurate coding of CCs/MCCs is “not in and of itself one to submit false claims and is equally consistent with a scheme to improve hospital revenue through accurate coding of patient diagnoses in a way that will be appropriately recognized and reimbursed by CMS commensurate with the type and amount of services rendered.”
I don’t think most CDI and coding professionals realize how important these wins were.
Painting outliers as problems through data sets a terrible precedent. It flattens out high achievers. It ignores the Pareto principle, which states that for many outcomes, roughly 80% of consequences derive from 20% of causes.
It’s rather insulting as well. Good CDI and coding professionals should not be penalized. Imagine penalizing physicians for having better outcomes than their peers.
One hospital may fail to capture certain conditions due to a lack of CDI or coding expertise. Those same conditions will be captured by a peer hospital with a robust CDI program or solution. Does that make the latter guilty of upcoding? I hope your answer is no.
We need data to support our work, or to get us aimed in the right direction. But in isolation, it’s insufficient.
References
- Integra Med Analytics LLC v. Providence Health & Servs: https://casetext.com/case/integra-med-analytics-llc-v-providence-health-servs
- Note from the ACDIS Director: $188.1M Providence Health lawsuit has profound implications for CDI, healthcare: https://acdis.org/articles/note-acdis-director-1881m-providence-health-lawsuit-has-profound-implications-cdi
- MS-DRG “upcoding” results in 2/3 of growth of high-intensity discharges, study says—but there’s more to the story: https://www.norwood.com/ms-drg-upcoding-results-in-2-3-of-growth-of-high-intensity-discharges-study-says-but-theres-more-to-the-story/