
How does AI impact coding and OASIS?

8/14/2025
by Keith Grunig

Image of AI with a Human

 

AI is all the rage. We get it. It's cool. At a recent trade show, more AI companies approached me looking to partner with us than agencies did. It was pretty wild. We asked each of the AI providers the same questions: "Where are your servers?" "How do you ensure accuracy with your AI program?" "Where is your team based?" Several didn't have great answers. Some because we're all still learning what AI can (and can't) do. Some because there aren't answers yet. And some gave answers we (Home Care Answers) aren't comfortable with... yet.

How Does AI Impact Coding and OASIS?

AI is really good at a lot of things. Other things, AI isn't good at yet. AI can't make accurate clinical judgment calls, and we're seeing that. AI still carries a real risk of hallucination. Algorithms can suggest and correlate diagnoses, but every patient is different, with different diagnosis codes and OASIS performance based on that individual. Add in coding rule updates and new code sets, and healthcare coding becomes a genuine challenge for AI.

AI can be very helpful in summarizing large amounts of data. One concern there is accuracy. If hallucinations continue to happen, a human still has to follow behind the AI to validate its findings, and if a human is re-reading everything after the AI, how helpful is the AI? AI can save a lot of time in certain areas, especially where tasks are repetitive and not subjective: if this, then that. Unfortunately, coding and OASIS are not "if this, then that." They're nuanced, a balance of art and science. So if agencies save time and money but potentially sacrifice accuracy, are they really saving anything? If AI codes something for a patient that doesn't exist, that's a problem. If AI codes something not quite right, that's a problem. If AI misinterprets a note, that's a problem. (If a human does those things, it's also a problem, to be clear.) Some of the data we're seeing puts AI coding accuracy at about 60% at best. Internally, our results are pretty close to that too (full disclosure: we are also exploring how AI can help us and our agency partners). No one can afford 60% accuracy on claims. That means missed diagnoses and also made-up diagnoses. If an agency has to choose between the two, the choice needs to be missed diagnoses; making diagnoses up is called fraud.
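To make the 60% figure concrete, here is a minimal sketch of what that accuracy rate means at agency scale. The monthly claim volume is a hypothetical number chosen for illustration; only the 60% accuracy rate comes from the text above.

```python
# Illustrative arithmetic only: the claim volume below is hypothetical.
claims_per_month = 500          # hypothetical agency volume
per_claim_accuracy = 0.60       # accuracy figure cited above

flawed_claims = round(claims_per_month * (1 - per_claim_accuracy))
print(f"Claims with coding errors: {flawed_claims} of {claims_per_month}")
# → Claims with coding errors: 200 of 500
```

At that rate, two of every five claims would need correction before submission, which is why a human still has to re-read everything the AI produces.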

What are the risks of AI in Medical Coding?

We have written about some of the risks of AI coding. You can read about it here. The state of Texas recently settled a case about hallucination (inaccuracy) in coding; you can read about that here. More recently, a university hospital system was fined $23 million for AI-assisted upcoding. From the article: "A recent Reuters Legal News analysis highlighted a $23 million False Claims Act (FCA) settlement by an academic medical center, related to an automated coding system improperly assigning CPT® codes to emergency department visits. The U.S. Department of Justice (DOJ) alleged that this led to overpayments from Medicare and Medicaid. The case underscores a growing regulatory theme: AI-assisted “upcoding” – when AI assigns codes that don’t match the documentation – isn’t just a software glitch. It’s a compliance risk.... Let’s start with the basic tension: AI is great at speed, but not at nuance. When applied to documentation or coding, natural language processing (NLP) and machine learning tools can misinterpret a physician’s notes or assign codes based on incomplete context. In many cases, AI may infer services or diagnoses that were implied but not explicitly documented, thereby inflating claim values.

This is especially tempting in evaluation & management (E&M) coding, where subtle language shifts – like “reviewed by physician” versus “performed by physician” – can change billing levels. If an AI system “reads between the lines” and bumps the level of service without appropriate justification in the record, it becomes textbook upcoding."
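The failure mode the quoted article describes can be sketched in a few lines. This is a deliberately naive, hypothetical rule (not any vendor's actual system, and `naive_em_level` and its baseline level are invented for illustration): a keyword match that treats any physician mention as justification for a higher level of service.

```python
# Hypothetical sketch of the upcoding failure mode described above:
# a careless keyword rule that "reads between the lines."

def naive_em_level(note: str) -> int:
    """Assign an E&M level from note text (deliberately oversimplified)."""
    level = 3  # hypothetical baseline level of service
    # The flaw: any mention of "physician" triggers a bump, regardless
    # of whether the service was performed or merely reviewed.
    if "physician" in note.lower():
        level += 1
    return level

print(naive_em_level("Exam performed by physician."))  # → 4 (arguably supported)
print(naive_em_level("Chart reviewed by physician."))  # → 4 (textbook upcoding)
```

Both notes get the same bump, even though only one documents the physician actually performing the service. That gap between what the documentation says and what the code claims is exactly the compliance risk the settlement turned on.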

In other words, agencies and contractors continue to carry risk if they use a loose AI strategy. Is saving a few bucks now worth a $23 million fine later?

Bottom line: agencies can't afford to upcode. If AI is enabling upcoding, there's a massive compliance risk, not to mention HIPAA risk. (See Texas's new law requiring EHR data to be stored in the US here.) Data and HIPAA rules continue to be a necessary part of the conversation. Offshore work, in both tech and coding and OASIS review, can be a challenge for agencies in both HIPAA compliance and coding and OASIS accuracy.

AI Home Health Coding and OASIS Review

Home Care Answers continues to look for ways to leverage technology to improve efficiency and accuracy. However, we're not willing to sacrifice accuracy for efficiency (or cost). We recognize that AI capable of producing accurate summaries of long H&Ps would save a lot of time (and money). But our internal AI development hasn't yet yielded results we're comfortable with, and the secondary audits we're performing against other AI providers continue to support our data. It's not quite there yet. We're not suggesting AI can't be helpful. It can. It can be massively helpful. But right now, it isn't accurate enough for us to release to the public and risk inaccuracy and fines.

A lot of people are claiming accuracy with AI, but most can't support that claim with a quantifiable number. What good is AI if a human has to follow up and re-read the information to confirm accuracy? And if that doesn't happen, agencies risk compliance issues and fines. Those are risks we're not comfortable with... yet.
