The families of two deceased UnitedHealth patients are suing the company over claims denials for extended care deemed necessary by doctors.
While claims denials are a common issue, the lawsuit alleges that the company used an AI-powered software tool called nH Predict that incorrectly overrode the doctors' recommendations and denied the claims.
As a result, the patients were kept from receiving proper care and forced to pay out of pocket for the treatment their doctors had recommended.
UnitedHealth had employed naviHealth to create the AI model, which the plaintiffs' lawyers claim was known to be inaccurate in 90% of cases.
The class action lawsuit claims that “the elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary medical care, all because [UnitedHealth’s] AI model ‘disagrees’ with their real live doctors’ determinations.”
The families claim that UnitedHealth continually denies that the AI model was incorrect, playing a long game that eventually forces families to pay out of pocket.
According to the lawsuit, only 0.2% of all patients actually take the time to appeal denials.
The case shines a spotlight on the use of AI to make healthcare determinations without human oversight.
As AI models become more widespread, the danger posed by improperly trained models will only grow.
Other examples of faulty algorithms in healthcare include sepsis-detection software that failed to recognize the infection in 67% of the patients who developed it.
While AI can provide many benefits if used properly, there are too many cases of poorly conceived algorithms being deployed without any oversight policy.
This lawsuit seeks to uncover whether this was intentional neglect by UnitedHealth or a case of the company moving too fast to implement a cost-saving system.