The Norwood Staffing Blog

News & Insights

Artificial Intelligence a Great Tool, but No Replacement for Human Intelligence

Is artificial intelligence (AI) coming for your jobs, CDI and coding professionals?

The answer is no. Not in any foreseeable future, at least.

AI has been billed by some as a panacea. It’s not. Smart, clinically minded CDI and coding professionals with critical thinking skills are still needed to ensure full and complete documentation, and the accurate reporting of medical codes that reflect hospital quality and support accurate reimbursement.

That’s not to say AI-powered tools such as prioritization, computer-assisted coding (CAC), and computer-assisted physician documentation (CAPD) are worthless. Far from it. These tools:

  • Improve efficiency by picking up clinical indicators of a more specific diagnosis from the record and highlighting them for review by the CDI professional or physician.
  • Prioritize cases, allowing CDI professionals to review records with more financial opportunity or quality impact, and spend less time on well-documented records.
  • Auto-suggest codes, allowing coding professionals to get close to their target—and sometimes right on the bullseye.
  • Provide at-your-fingertips references, including quality numerators and denominators, to help with accurate reporting of risk.

“It gives you that big picture. It allows you to go in and give you your set diagnoses, tell you where they’re located, and allow you to say, ‘Oh yeah, I do agree this is met here, let’s go ahead and put it in the encoder,’” says Sandra Love, BSN, RN, CCDS, CCDS-O, CPC, senior manager of CDI for Norwood, of AI. “You don’t always need to read straight from the medical record.”

But, more often than not, AI-powered support tools only land a CDI or coding professional in a broad ballpark. While we don’t have reliable and independently verified statistical data, some users report only about a 20% accuracy rate when querying for a specific diagnosis.

AI is a pattern-detection tool that augments narrow bands of tasks typically performed by humans. AI cannot reason.* It draws inferences from data and takes actions and/or offers suggestions based on that data.

Lacking good data, lacking context, or lacking data altogether, AI makes mistakes.

AI gets better over time and with use (i.e., machine learning). But in an era of copy-and-paste, outdated problem lists, and harried physicians entering non-specific diagnoses, the data these machines draw upon for inferences is often unreliable, leading to false positives or the de-prioritization of cases that hold real opportunity.

New variables are not AI’s friend. Deep pools of clean, reliable data are, and AI does not always have access to them.

Better to think of AI products as tools, not fire-and-forget machines or robots you wind up and turn loose. They require human oversight, auditing, and validation of results.

Following are two tips for using these tools effectively with your (human) CDI team.

Tip 1: Reduce auto-suggested diagnoses by validating keywords

AI can elevate diagnoses for review or auto-assignment but miss the context of the surrounding language, resulting in false positives. Some examples include:

  • Post-operative, or status post (these terms are not always built into the AI)
  • Last dose (i.e., AI recommends a query based on documentation of a medication, but the patient was receiving a “last dose” prescribed prior to admission)
  • States no… (i.e., the physical exam documentation states that a patient denies x diagnosis, or denies having certain symptoms, but AI nevertheless flags the information as a query or auto-suggested code)
  • Present on admission (POA) (i.e., AI elevates a query but does not indicate whether the finding appeared in the ED notes, missing the opportunity to clarify whether it was POA)

Other terms AI can fail to recognize include “resolved,” “ruled out,” and “no change.” AI might read the diagnosis that follows or precedes these terms, but not the modifier, resulting in an unwarranted auto-query.

“We want to make sure we reduce unnecessary autosuggested query opportunities,” Love says. “When we are seeing medical diagnoses, we want to make sure they are chosen and interpreted appropriately. And when the AI asks us for a specific diagnosis, the technology should be able to query for POA status—or we (CDI) need to do it.”
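The kind of context validation Tip 1 describes can be pictured as a simple screening pass over flagged text. The sketch below is purely illustrative — the term list, window size, and function name are assumptions for demonstration, not any vendor’s actual logic:

```python
# Hypothetical sketch: screen an auto-suggested diagnosis for nearby
# negating or qualifying terms before surfacing a query.
# Term list and window size are illustrative assumptions, not a vendor rule set.

NEGATING_TERMS = [
    "states no", "denies", "ruled out", "resolved", "no change",
    "last dose", "status post", "post-operative",
]

def needs_human_review(note_text: str, diagnosis: str, window: int = 60) -> bool:
    """Return True if the diagnosis appears near a negating or qualifying
    term, meaning the auto-suggestion should be held for CDI review."""
    text = note_text.lower()
    dx = diagnosis.lower()
    start = text.find(dx)
    while start != -1:
        # Look at the characters surrounding this mention of the diagnosis.
        context = text[max(0, start - window): start + len(dx) + window]
        if any(term in context for term in NEGATING_TERMS):
            return True
        start = text.find(dx, start + 1)
    return False

# A diagnosis followed by "ruled out" should be held rather than auto-queried.
print(needs_human_review("Sepsis ruled out; continue monitoring.", "sepsis"))  # True
```

Real systems use far more sophisticated natural language processing, but the principle is the same: the words around a diagnosis matter as much as the diagnosis itself, and a tool that skips that check will generate unwarranted queries.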

Tip 2: Understand and review for opportunities bypassed by AI

AI is not always programmed to review certain areas of the medical record, and/or deprioritizes particular diagnoses that are perceived as less meaningful or of lesser impact. For example, AI might:

  • Bypass chronic conditions as part of its programmed high priority scoring
  • Miss bedside surgical procedures that can move an MS-DRG from medical to surgical
  • Fail to flag pediatric procedures or secondary diagnoses

“If you’ve got someone with congestive heart failure, and they’re coming in for a bacterial pneumonia, and your congestive heart failure is not considered for moderate to high prioritization, to me that puts you at risk, especially if the patient goes into respiratory failure due to fluid overload and might need BiPAP,” Love says. “To me, that’s very important to have those chronic conditions in there. People look at those as minor issues, but they are very important, it places the patient at a higher risk of mortality.”

Some AI does not review anesthesia notes (pre- and post-operative) or pathology results, fails to pick up surgical procedures (cardiac catheterizations, cardioversions, and colonoscopies), and doesn’t recognize scanned documents (paramedics’ notes, code blue notes) or nursing and clinician telephone notes. These review gaps lead to missed CDI and coding opportunities.

“I use nursing notes all the time when I’m looking at query opportunities,” Love says. “I’ll see what a physician wrote, and then I’ll go back and look at the nursing notes, and sometimes I’ll see, ‘oh my gosh, the nurse stated a diagnosis,’ and I’ll include that information in the query.”

Case examples

Let’s bring this all home with a couple of case examples that demonstrate AI’s successful use and its ultimate limitations.

Example 1: AI finding of acute respiratory failure

Information from autosuggested query includes the following:

ED Report:

Physical Exam: Pulmonary effort is normal with no respiratory distress, breathing sounds normal, comments that patient is on BiPAP.

Patient has history of COPD, asthma, sleep apnea, and obesity.

Blood gas: PcO2 76.3 H

RR 17-18 with 98% sat

Information the CDI found to support the diagnosis:

Nurses Note:

Respiratory continues to decline since yesterday. Patient was placed directly on BiPAP since o2 sats 88% and using extreme accessory muscles.

Verdict?

In this example we agree with the findings and AI suggestion for CDI to query acute respiratory failure. However, even in this case the AI only flagged the BiPAP and PcO2; it did not read the nursing notes, which provided important clinical support for the respiratory failure diagnosis. It’s important to include this information as validation and to strengthen the case in the event of an audit.

Example 2: AI finding of pneumonia

Information from autosuggested query includes the following:

H&P:

Patient presents with nasal congestion, rhinorrhea and shortness of breath. Patient appears to have wheezing associated with viral lower respiratory tract infection, since there has been no fever this makes pneumonia lower on differential.

Assessment and Plan:

WARI 2/2 lower respiratory tract infection

Albuterol

Prednisone

Continuous pulse oximetry

Supplement O2 via NC <1ml

Consider creating asthma management plan prior to discharge

Verdict?

We disagree with the AI suggestion; the pneumonia query is inaccurate. The physician placed pneumonia low on the differential, and upon CDI review the CXR showed no opacities and no antibiotics were given. The patient is 72 years old, had been admitted previously with a lower respiratory infection and associated wheezing, and is a non-smoker treated for asthma.

These two cases (real AI examples) demonstrate why your critical thinking skills as a CDI specialist are still needed today. They also demonstrate the importance of auditing your technology for compliant use.

* AI is often confused with artificial general intelligence (AGI). AGI refers to machines that are as intelligent as humans, and can perform the same intellectual tasks with the same or superior results. AGI is currently not on the horizon. If we ever develop and apply AGI, CDI and coding jobs would be in jeopardy. But fully deployed AGI would transform the entire world economy, including clinical medicine. Just know that today, your skills are needed more than ever.

About Sandra Love

Sandra Love, BSN, RN, CCDS, CCDS-O, CPC, is senior manager of CDI for Norwood. Sandra has extensive experience in CDI, including but not limited to pediatric and outpatient settings. Contact her at sandra@norwood.com.

About the author

Brian Murphy is the founder and former director of the Association of Clinical Documentation Integrity Specialists (2007-2022). In his current role as Branding Director of Norwood he enhances and elevates the careers of mid-revenue cycle healthcare professionals.
