
What Are The Negatives of AI in Healthcare?

8/29/2024
by Keith Grunig

[Image: decision tree symbolic of AI in healthcare]


Everywhere you turn, there seems to be something about AI. It's everywhere, and understandably so. But is AI more than a curiosity, something you prompt just to see whether you get a good response? Is it simply asking ChatGPT, Gemini, Llama, or another model for information and assuming it's correct? How does that translate to healthcare? What can AI actually do in healthcare, and how can it benefit me or my agency?

These are all valid questions, but there are many considerations in using AI in healthcare: some uses are promising, and some are risky. We'll outline the risks we've seen in our internal AI R&D, what others are saying about it, and what you can find from CMS about AI.

How is AI Used in Healthcare?

This question is an iteration of the question people have been asking forever: how can I make my job easier so I can focus less on paperwork and more on patients? That's really what people want. In a time of inflationary pressure, staffing shortages, increased regulatory and administrative burden, budget cuts, and a few hundred other challenges agencies are facing, these are fair questions. What tools can we use to make life easier not only for front-line staff but also for back-office staff?

Technology is advancing rapidly. A few years ago, ChatGPT lit up the stage with what it could do for people. There are benefits to it, and there are also challenges.

AI is a broad term, and there are several types of AI out there. Machine learning teaches a system to learn from experience to do intelligent things. Deep learning goes further, using that experience to handle more complex tasks, including understanding human language. Large language models (LLMs) are trained on massive amounts of data (think the whole internet) and can understand and generate human-sounding text. Generative AI can create diverse content, including realistic images, based on learned patterns. These approaches are starting to be combined to make life "easier." However, AI isn't a panacea that makes every challenge go away.

One positive use of AI is voice-to-text scribing during visits. Home health clinicians can use these products to cut down on additional charting time. Clinicians know the situation well: carrying too large a caseload because of shortages, performing visits and assessments all day, trying to keep some work/life balance, and then completing charting late into the night or over the weekend, only for the cycle to repeat. Some clinicians are using these tools to record the visit, import the recording, and turn it into clinical documentation.
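As a rough sketch of how such a scribing workflow hangs together, here is a minimal example. The transcribe_audio and draft_narrative functions are hypothetical stand-ins for whatever vendor speech-to-text and summarization services an agency uses, not real APIs; the key design point is that the output is a draft for the clinician to review, never finished documentation.

```python
# A minimal sketch of a visit-scribing workflow, assuming hypothetical
# vendor services. transcribe_audio() and draft_narrative() are
# placeholders, not real APIs.

def transcribe_audio(recording_path: str) -> str:
    # Placeholder: a real implementation would call a speech-to-text service.
    return "Patient ambulated 20 feet with rolling walker, min assist."

def draft_narrative(transcript: str) -> str:
    # Placeholder: a real implementation would call a summarization model.
    return f"DRAFT visit note (clinician review required): {transcript}"

def scribe_visit(recording_path: str) -> str:
    """Turn a visit recording into a draft note -- never a final chart."""
    draft = draft_narrative(transcribe_audio(recording_path))
    # The clinician must review, correct, and sign the draft. As the next
    # paragraph notes, OASIS answers must be selected by the clinician,
    # never auto-filled from the draft.
    return draft

print(scribe_visit("visit_recording.wav"))
```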

There is a risk to this, particularly with OASIS. The regulations are clear: the one-clinician rule exists, and a system cannot fill out an OASIS for a clinician. The clinician must select the answers to the OASIS questions. Agencies need to resist the urge to sacrifice compliance for efficiency or cost reduction.

What Are the Negatives of AI in Healthcare?

AI is very new on the scene, and healthcare has historically been slower to adopt technology.

Some risks of AI in healthcare are:

  1. Compliance. HIPAA is non-negotiable. With many technology providers located overseas, HIPAA compliance has to be top of mind for all parties. Remember that HIPAA regulations are only enforceable on US soil, so offshore tech providers may not place the same emphasis on HIPAA compliance as US-based entities. We're not saying offshore providers aren't HIPAA compliant; we are saying that an offshore HIPAA violation ultimately leaves the agency responsible. (This also applies to offshore coding providers.) Home Care Answers only uses US-based employees.
  2. Hallucination. AI, specifically an LLM, effectively uses the whole internet to create content, whether visually in images or in writing in the form of articles, emails, document summaries, and the like. However, as Scientific American puts it bluntly, it's not all true; it is predictive text. We know of the lawyer who used ChatGPT to help write arguments in a trial he was working. The trouble is, it made up the cases and the case law. In healthcare, this can't happen. You can reference it here.
  3. Clinical judgement. In all of our R&D, we haven't been able to get AI to exercise clinical judgement correctly. There have been numerous scrubbers on the market for years: SHP, Matrix Scrubber, and Gold Book, among others. When we use these products, we find very similar trends: they're often wrong, or incomplete. These products give suggestions based on fixed criteria, so they do produce an answer. However, we believe a person gives the most accurate answer, because clinical judgement is being used.
  4. Constantly changing rules. LLMs use technical logic and rules, and one of the challenges is getting them to follow only the current regulation. And it changes. A lot. Coding is very complex: there are assumed relationships, coding-order rules, Excludes1 and Excludes2 rules, acceptable diagnosis codes for payment, gender-specific codes, and other rules that must be followed (see the sketch after this list). Our experience is that EMR scrubbers are often late or delayed in applying new rules and guidance. As with death and taxes, CMS implements new codes and rules every October 1. Guidance changes. Codes are deleted, new codes are added, and codes are expanded. It's tough to track. LLMs often try to combine all of that into their logic, but they can't, because it's against the rules; see #1 on compliance.
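To make rule 4 concrete, here is a minimal sketch of the kind of deterministic check coding requires. The EXCLUDES1 table below is an illustrative placeholder, not real ICD-10-CM guidance, and in practice it would have to be rebuilt from the current code set every October 1. The point is that these are hard lookups against this year's rules, not patterns an LLM can safely improvise.

```python
# Minimal sketch of an Excludes1 conflict check over a diagnosis list.
# The table below is illustrative only; real Excludes1 notes must come
# from the current year's ICD-10-CM code set.

EXCLUDES1 = {
    # code prefix -> prefixes that may not be coded alongside it
    "E11": {"E10"},  # e.g., type 2 diabetes excludes type 1 (illustrative)
}

def excludes1_conflicts(codes: list[str]) -> list[tuple[str, str]]:
    """Return every pair of codes that violates an Excludes1 note."""
    conflicts = []
    for code in codes:
        for prefix, excluded in EXCLUDES1.items():
            if not code.startswith(prefix):
                continue
            for other in codes:
                if any(other.startswith(x) for x in excluded):
                    conflicts.append((code, other))
    return conflicts

print(excludes1_conflicts(["E11.9", "E10.9", "I10"]))
# [('E11.9', 'E10.9')] -- these two cannot be reported together
```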

Our internal R&D, along with other published articles, is showing roughly 30-60% accuracy for AI coding in home health. This article explains well some of the challenges agencies are facing here.

CMS is noticing. Here are some links from CMS showing how it is organizing and staffing for AI; read that here. CMS has also put out a site dedicated to AI that gives some information on it.

This video offers a good explanation of some of the limitations of AI. You can watch it here.

Conclusion

We're not saying AI is bad. We're saying it's not completely reliable... yet. We're not satisfied with our current AI R&D. We're not abandoning it, but we're also not willing to release it to the world. There are new companies out there offering AI coding, and we haven't seen how they handle new code sets, new OASIS updates, new guidance, or even new government regulation of AI. We're continuing down the AI path because it can be helpful for things like document summarization and streamlining operations. We're just not ready to release it to our partners at large. There's too much risk. Yes, the cost is low, but the risk is high, and we're not willing to take that risk.

Further, various AI companies have engaged us to help with their AI. In each case, our human, clinical-judgement-based review has "beaten," if you will, the competitor. We've found dollars left on the table due to coding and OASIS inaccuracies. We track our changes, and we aren't afraid to admit when a solution is better for an agency than we are. We believe data should drive decisions. Complete data. Accurate data.

Below are examples from AI companies that engaged Home Care Answers to help their organizations increase accuracy. In each case, we made additional changes and increased accuracy on charts that had already been coded and marked "correct."

Note that across the ADL, HHVBP, and new OASIS E items, we made changes to several charts, not just one.
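Each row in the tables below reads: OASIS item, number of charts we changed out of the number reviewed, and the resulting change rate. The arithmetic is simple; here is a quick sketch using the first Company 1 row (6 of 7 charts changed) as the example.

```python
# Change rate as shown in the tables: charts changed / charts reviewed.
changed, reviewed = 6, 7
print(f"{changed}/{reviewed} changed = {changed / reviewed:.2%}")
# 6/7 changed = 85.71%
```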

The first company (Company 1) is an AI company working to improve dictation and translate it into OASIS accuracy. Its product is live with a few clients but hasn't been broadly released because accuracy wasn't up to standards.

Here's what we changed with the new OASIS E items:

New OASIS E Items
N0415 High Risk Drug IS TAKING: 6/7 changed (85.71%)
N0415 High Risk Drug INDICATION: 6/7 changed (85.71%)
N0415I2: 4/7 changed (57.14%)
N0415I1: 4/7 changed (57.14%)
N0415E2: 3/7 changed (42.86%)
N0415E1: 3/7 changed (42.86%)
N0415A2: 2/7 changed (28.57%)
N0415A1: 2/7 changed (28.57%)
J0520: 2/7 changed (28.57%)
N0415H2: 1/7 changed (14.29%)
N0415H1: 1/7 changed (14.29%)
O0110Z1A: 0/7 changed (0.00%)
O0110C2A: 0/7 changed (0.00%)
O0110C1A: 0/7 changed (0.00%)
O0110 Treatment ADMISSION: 0/7 changed (0.00%)
N0415J2: 0/7 changed (0.00%)
N0415J1: 0/7 changed (0.00%)
C0100: 0/7 changed (0.00%)

PDGM OASIS Questions
M1830 Bathing: 5/7 changed (71.43%)
M1033: 5/7 changed (71.43%)
M1850 CRNT TRNSFRNG: 4/7 changed (57.14%)
M1840 CRNT TOILTG: 3/7 changed (42.86%)
M1810 CRNT DRESS UPPER: 2/7 changed (28.57%)
M1860 CRNT AMBLTN: 1/7 changed (14.29%)
M1820 CRNT DRESS LOWER: 1/7 changed (14.29%)
M1000 DC NONE 14 DA: 1/7 changed (14.29%)
M1000 DC IPPS 14 DA: 1/7 changed (14.29%)

HHVBP OASIS Items
M1830 CRNT BATHG: 5/7 changed (71.43%)
M1850 CRNT TRNSFRNG: 4/7 changed (57.14%)
M2020 CRNT MGMT ORAL MDCTN: 3/7 changed (42.86%)
M1840 CRNT TOILTG: 3/7 changed (42.86%)
M1845 CRNT TOILTG HYGN: 2/7 changed (28.57%)
M1810 CRNT DRESS UPPER: 2/7 changed (28.57%)
M1800 CRNT GROOMING: 2/7 changed (28.57%)
M1400 WHEN DYSPNEIC: 2/7 changed (28.57%)
GG0170I1: 2/7 changed (28.57%)
GG0130C1: 2/7 changed (28.57%)
M1860 CRNT AMBLTN: 1/7 changed (14.29%)
M1820 CRNT DRESS LOWER: 1/7 changed (14.29%)
GG0170J1: 1/7 changed (14.29%)
GG0170F1: 1/7 changed (14.29%)

Company 2 asked us to look at coding and OASIS; it has proprietary AI that it wants to market. Our finding was that the charts were all marked nearly identically. That's a red flag: no two patients are the same, so the scoring should not be the same. A simple way to surface that red flag is sketched below.
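Here is a minimal sketch of that check, with made-up chart data: for each OASIS item, collect the answers across a batch of reviewed charts and flag items where every chart received the identical score.

```python
# Sketch: flag OASIS items scored identically across every chart in a
# review batch. The chart data below is made up for illustration.
from collections import defaultdict

charts = [
    {"M1830": 2, "M1840": 1, "M1860": 3},
    {"M1830": 2, "M1840": 1, "M1860": 2},
    {"M1830": 2, "M1840": 1, "M1860": 4},
]

answers = defaultdict(set)
for chart in charts:
    for item, score in chart.items():
        answers[item].add(score)

for item in sorted(answers):
    if len(answers[item]) == 1:
        print(f"{item}: identical score on all {len(charts)} charts -- red flag")
# M1830: identical score on all 3 charts -- red flag
# M1840: identical score on all 3 charts -- red flag
```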


PDGM OASIS Questions
M1033: 9/10 changed (90.00%)
M1840 CRNT TOILTG: 7/10 changed (70.00%)
M1830 CRNT BATHG: 2/10 changed (20.00%)
M1860 CRNT AMBLTN: 1/10 changed (10.00%)
M1850 CRNT TRNSFRNG: 1/10 changed (10.00%)
M1820 CRNT DRESS LOWER: 0/10 changed (0.00%)
M1810 CRNT DRESS UPPER: 0/10 changed (0.00%)

New OASIS E Items
N0415 High Risk Drug IS TAKING: 5/10 changed (50.00%)
N0415 High Risk Drug INDICATION: 4/10 changed (40.00%)
N0415E2: 3/10 changed (30.00%)
N0415E1: 3/10 changed (30.00%)
N0415J2: 2/10 changed (20.00%)
N0415J1: 2/10 changed (20.00%)
N0415I1: 2/10 changed (20.00%)
N0415I2: 1/10 changed (10.00%)
O0110Z1A: 0/10 changed (0.00%)
O0110J2A: 0/10 changed (0.00%)
O0110J1A: 0/10 changed (0.00%)
O0110 Treatment ADMISSION: 0/10 changed (0.00%)
N0415Z1: 0/10 changed (0.00%)
N0415H2: 0/10 changed (0.00%)
N0415H1: 0/10 changed (0.00%)
N0415F2: 0/10 changed (0.00%)
N0415F1: 0/10 changed (0.00%)
C0100: 0/10 changed (0.00%)

HHVBP OASIS Items
M2020 CRNT MGMT ORAL MDCTN: 9/10 changed (90.00%)
M1840 CRNT TOILTG: 7/10 changed (70.00%)
GG0170I1: 4/10 changed (40.00%)
GG0170J1: 3/10 changed (30.00%)
M1830 CRNT BATHG: 2/10 changed (20.00%)
M1400 WHEN DYSPNEIC: 2/10 changed (20.00%)
GG0170E1: 2/10 changed (20.00%)
GG0170D1: 2/10 changed (20.00%)
GG0130C1: 2/10 changed (20.00%)
M1860 CRNT AMBLTN: 1/10 changed (10.00%)
M1850 CRNT TRNSFRNG: 1/10 changed (10.00%)
M1845 CRNT TOILTG HYGN: 1/10 changed (10.00%)
GG0170F1: 1/10 changed (10.00%)
GG0130A1: 1/10 changed (10.00%)
M1820 CRNT DRESS LOWER: 0/10 changed (0.00%)
M1810 CRNT DRESS UPPER: 0/10 changed (0.00%)
M1800 CRNT GROOMING: 0/10 changed (0.00%)

Agencies are doing everything they can to reduce cost. We understand; so is Home Care Answers. But we have to recognize that PDGM happened because of years of not-great data. Do we want to leave our fate to AI, feeding mediocre data into the process that determines policy? PDGM for Dummies explains how we got here and the basics of how PDGM works. Our view is that until we're 1,000% confident that AI is accurate, we're not ready to release it for you. Clinical judgement is what separates decent data from great data. Ask us how we can help.