AI arms race escalates battle of healthcare denials

By Brian Murphy

 

Are we headed for a healthcare AI arms race? A hellish AI dystopia?

 

Some days I wonder.

 

I was reading a breathless interview with the chief technology officer of a revenue cycle management technology company. The CTO states:

 

“Think of genAI as giving coders superpowers. It’s helping them work faster and perform better. Now, imagine that kind of potential across an entire revenue cycle.”

 

I agree with the sentiment, and I have seen first-hand amazing technologies that dramatically improve CDI and coding. They allow mundane encounters to be largely automated so that more attention can be paid to complex cases.

 

But the CTO here is leaving out the other half of the equation. 

 

Payers are using the same technology to add jet fuel to their denials: ChatGPT-generated letters by the thousands, for example. I fear this very advance in AI might turn out to be a net wash as a result.

 

Providers then enter the fray. I recall a recent MedPage Today article where an enterprising young rheumatologist “typed a prompt into ChatGPT requesting that the chatbot write a letter to a medical insurance company to explain why a patient with systemic sclerosis should be approved for an echocardiogram. Seconds later, on camera, the program started writing a full letter, complete with appropriate heading and formatting.”

 

A win! But the battle intensifies.

 

We now have a class-action lawsuit in which attorneys for the families of two deceased UHC Medicare Advantage policyholders allege that the company uses AI to systematically deny skilled nursing facility (SNF) claims and shirk its responsibility to adhere to Medicare’s coverage determination standards. The practice allegedly saves money by taking length-of-stay determinations out of physicians’ hands, at the expense of patient care.

 

Maybe we should just let the machines fight it out while we sit on the sidelines eating popcorn and placing bets on the outcome.

 

Except that of course the patient is in the middle.

 

I suppose this is what happens when revolutionary new technologies are introduced. It’s part of the process. We’ll see how it all shakes out.

 

Aside: Is anyone else getting annoyed by the obviously generative AI-written comments on their LinkedIn posts, or email responses? I’m sure some of these are well-meaning, but they’re often stilted and occasionally utter non sequiturs.

 

I think I may start replying using ChatGPT. Let the AIs talk to each other, I suppose. 

 

In the meantime, feel free to send me your ChatGPT-generated comments. The more awkward, the better.

 

Agree or disagree with this column? Email Brian at brian.murphy@norwood.com 
