Trends in AI Insurance Coverage and Claims Handling

Marshall Gilinsky, Jamie O'Neill

February 19, 2025

Policyholders need to monitor developments at the intersection of AI and insurance in three main areas: 1) possible changes in risk exposures and policy wordings; 2) the development and deployment of AI tools in insurance company operations, especially claims handling; and 3) insurance regulations and legislation targeting the use of AI by insurance companies. 

The deployment of AI tools by insurance companies could bring mutual benefits for the insurance industry and policyholders. However, it could also lead to zero-sum claims handling strategies that would violate policyholders’ rights. Here are a few recent developments worth noting:

AI-Focused Policy Wordings Are Slow to Emerge and Limited to Cyberrisks

Whether they are exclusions or enhancements, new policy wordings have historically emerged in response to a change in exposures faced by policyholders. Examples include the advent of pollution and asbestos exclusions in the late 1980s, terrorism exclusions after September 11, 2001, and virus exclusions after the 2002 SARS outbreak. New products offering coverage enhancements often follow exclusions, as happened with pollution policies, sexual abuse and molestation policies and cyber policies. Occasionally, wordings change because of perceived risks that never manifest, as with the Y2K exclusions rolled out in 1999.

So far, there has not been a noticeable change in policyholders’ risk exposures across most insurance product lines based on their use of AI tools. Accordingly, insurance companies have not deployed exclusions restricting coverage for AI-related risks or losses. This is unsurprising: it would be very difficult to draft an exclusion that defines AI precisely enough to avoid sweeping away vast amounts of coverage and undermining demand for the insurance company’s products.

In the cyber area, however, the AI risks companies face are evolving, and there have been a few notable wording and product developments. Recently, AXA released a new endorsement for cyber policies that addresses generative AI risks. Coalition also released an endorsement for its cyber policies that “expands the definition of a security failure or data breach to include an AI security event” and “expands the trigger for a funds transfer fraud (FTF) event to include fraudulent instruction transmitted through the use of deepfakes” or other AI technology.

Class Actions Target Health Insurers’ Use of Claims Handling Algorithms

More than half of Medicare beneficiaries are enrolled in Medicare Advantage (MA) plans, health insurance policies sold by private insurance companies as an alternative to traditional Medicare. MA plans, which are paid on a per-enrollee basis by the federal government, are required to provide all the benefits of traditional Medicare but attract customers by providing additional benefits such as dental, hearing and vision coverage. They must also offer an annual limit on out-of-pocket expenses, which traditional Medicare lacks. Most MA plans also bundle a Part D prescription drug plan at no additional cost.

Nearly all MA plans require patients and providers to obtain prior authorization for some treatments, and they deny or limit coverage for select treatments (notably post-acute care) at much higher rates than traditional Medicare. In March 2023, a series of exposés published by STAT News reporters Casey Ross and Bob Herman revealed that in recent years, coverage denials for post-acute care and other expensive treatments for severely ill MA enrollees escalated after UnitedHealth Group’s Optum unit acquired NaviHealth, a company that deploys AI to assess coverage requests.

In July 2023, STAT reported that UnitedHealth was using NaviHealth algorithms to set target dates for the discharge of patients based on the average hospital or rehab stay following a procedure, and refusing to stray from those targets even in situations where the patient demonstrated a need for continued medical care.
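To make the mechanics concrete, here is a minimal, hypothetical sketch of the rigid decision rule STAT described. Every procedure name, average-stay figure and function below is invented for illustration; nothing is drawn from NaviHealth’s actual system.

```python
# Hypothetical sketch of the logic STAT described: a discharge target set
# from the average stay for a procedure, applied without clinical override.
# All codes, averages and the patient scenario are invented.
AVG_STAY_DAYS = {"hip_replacement": 17, "stroke_rehab": 21}  # illustrative averages

def discharge_target(procedure: str, admitted_day: int) -> int:
    """Target discharge day = admission day + cohort-average stay."""
    return admitted_day + AVG_STAY_DAYS[procedure]

def coverage_continues(procedure: str, admitted_day: int, today: int,
                       clinician_documents_need: bool) -> bool:
    # As alleged, the target controls even when the treating clinician
    # documents a continued need for care: the flag below is never consulted.
    return today < discharge_target(procedure, admitted_day)

# Day 18 of a hip-replacement stay, clinician documents continued need:
print(coverage_continues("hip_replacement", admitted_day=0, today=18,
                         clinician_documents_need=True))  # False: coverage ends
```

The point of the sketch is the absence of any path from the clinician’s documentation to the coverage decision, which is precisely the gap the litigation described below targets.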

The Senate Permanent Subcommittee on Investigations (PSI) subsequently launched an investigation into the barriers seniors enrolled in Medicare Advantage face in accessing care. The subcommittee’s October 2024 report found that UnitedHealthcare’s denial rate for prior authorization requests for post-acute care rose significantly at the same time the company was launching initiatives to automate the review process, climbing from 10% in 2020 to 16.3% in 2021 and 22.7% in 2022. The investigation also found that CVS was using AI to reduce spending at post-acute facilities.

Several class action suits alleging unlawful deployment of algorithms or AI to deny patients needed care followed STAT’s reporting. In July 2023, plaintiffs in Kisting-Leung v. Cigna Corporation sued Cigna in California federal court, alleging an “illegal scheme to systematically, wrongfully and automatically deny its insureds the thorough, individualized physician review of claims guaranteed to them by California law.” Filed in Minnesota federal court in November 2023, Estate of Lokken v. UnitedHealth Grp. Inc. alleges that UnitedHealthcare illegally deployed AI in place of real medical professionals to deny coverage owed to elderly patients under Medicare Advantage plans. A similar suit, Barrows v. Humana, Inc., was filed the following month in federal court in Kentucky.

Notably, the abuses reported by STAT and alleged in these class action suits involve situations where humans retained significant discretion and oversight over the computer-generated claims recommendations. It will be critical to watch whether AI tools gain more direct control over claims handling decisions in the future; if they do, transparency, explainability and accountability will be essential to protecting policyholders’ rights.

New Legislation and Insurance Regulations Target AI in Claims Handling

In 2024, numerous state insurance departments adopted initial regulations based on a December 2023 bulletin issued by the National Association of Insurance Commissioners (NAIC). The regulations, which apply to all types of insurance, are notable in the following respects:

  • Recognition of risk: The regulations note, “AI systems can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability and lack of transparency and explainability.”
  • Extension of bad faith law to AI use: The regulations specify that “actions taken by insurers...must not violate the Unfair Trade Practices Act or the Unfair Claims Settlement Practices Act, regardless of the methods the insurer used to determine or support its actions.”
  • Notice to policyholders is required: The regulations require insurance companies to implement artificial intelligence system programs that “include processes and procedures providing notice to impacted consumers that AI systems are in use and provide access to appropriate levels of information.”
  • Documentation and testing of the development, training and implementation of AI tools is required: The regulations require “detailed documentation of the development and use of the predictive models;” “assessments such as interpretability, repeatability, robustness, regular tuning, reproducibility, traceability, model drift, and the auditability of these measurements where appropriate;” and “validating, testing and retesting...outputs upon implementation, including the suitability of the data used to develop, train, validate and audit the model.”
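For a sense of what one of these assessments might involve in practice, below is a minimal, hypothetical sketch of a “model drift” check of the kind the regulations contemplate. The metric, threshold and scores are all invented for illustration; real monitoring programs would be far more extensive.

```python
# Hypothetical sketch of a "model drift" assessment: compare the model's
# average output on recent claims against its output at validation time.
# The metric, threshold and data below are invented for illustration.
from statistics import mean

def drift_check(baseline_scores: list[float], recent_scores: list[float],
                threshold: float = 0.05) -> dict:
    """Flag drift when the mean model output shifts more than `threshold`."""
    shift = abs(mean(recent_scores) - mean(baseline_scores))
    return {
        "baseline_mean": mean(baseline_scores),
        "recent_mean": mean(recent_scores),
        "shift": shift,
        "drift_flagged": shift > threshold,  # would trigger retesting and audit
    }

# Denial-propensity scores at validation time vs. in production (made up):
report = drift_check([0.12, 0.10, 0.14, 0.11], [0.21, 0.19, 0.22, 0.20])
print(report["drift_flagged"])  # True: the model's behavior has shifted
```

Under the documentation and testing requirements above, a flagged shift like this would prompt re-validation of the model and a documented record of the result.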

In addition, the California Legislature responded to emerging AI abuses by insurance companies when it passed the Physicians Make Decisions Act (SB-1120), effective January 1, 2025. The act specifies that, in reviews of health and disability insurance claims, only a “licensed physician or a licensed health care professional...may deny or modify requests for authorization of health care services...for reasons of medical necessity.” The law implicitly acknowledges that AI tools might be used in the claims analysis process but specifies that the AI tool or algorithm:

  • Can base its analysis only on the policyholder’s medical history, individual clinical circumstances presented by the policyholder’s doctor or provider, or other clinical information in the policyholder’s clinical record
  • Cannot base its analysis solely on a group dataset
  • Cannot supplant health care provider decision making
  • Cannot deny, delay or modify health care services based in whole or in part on medical necessity
  • Is accompanied by written procedures that are disclosed to policyholders, providers and the public upon request
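As a rough illustration of what compliance plumbing might look like, here is a minimal, hypothetical sketch of a claims workflow that respects these constraints: the algorithm reads only the individual clinical record and cannot itself deny, delay or modify care. All names, fields and the scoring stub are invented; this is not drawn from any insurer’s actual system.

```python
# Hypothetical sketch of a workflow honoring SB 1120's limits: the AI reads
# only the individual clinical record, and any denial or modification must
# come from a licensed physician. All names and the scoring stub are invented.
from dataclasses import dataclass

@dataclass
class ClinicalRecord:              # individual patient data, not a group dataset
    medical_history: list[str]
    provider_notes: list[str]

def ai_recommendation(record: ClinicalRecord) -> float:
    """Stub: an advisory score based solely on the individual record."""
    return 0.4  # placeholder output

def adjudicate(record: ClinicalRecord, physician_decision: str | None) -> str:
    score = ai_recommendation(record)  # advisory only; cannot decide coverage
    if physician_decision is None:
        # The algorithm cannot deny, delay or modify care on its own.
        return f"pending physician review (advisory score: {score:.2f})"
    return physician_decision          # a licensed physician decides necessity

record = ClinicalRecord(["hip fracture"], ["patient needs 10 more days of rehab"])
print(adjudicate(record, None))  # pending physician review (advisory score: 0.40)
```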

Such regulations are a good start, but effective monitoring and enforcement are essential, along with continued regulation that anticipates and responds to problematic claims handling practices as they emerge and evolve.

Marshall Gilinsky is a shareholder in Anderson Kill’s Boston office and represents policyholders in claims involving property and business interruption, commercial general liability, errors and omissions, directors and officers, and life insurance.
Jamie O'Neill is an attorney in Anderson Kill’s New York office. She focuses her practice on insurance recovery.