Mind the Machine: The Potential Hidden Risks of Utilizing AI in Medicare Set-Asides
As Artificial Intelligence (AI) becomes more embedded in daily life, particularly in healthcare and claims administration, its use in Medicare Set-Aside (MSA) allocation preparation has been increasing. Many MSA vendors tout their use of AI, claiming that AI-generated MSAs are superior in quality to reports prepared by humans, offer quicker turnaround times, and come at a reduced flat-rate cost.
From predictive analytics to automated medical record review, AI offers the alluring promise of speed, consistency, and cost-efficiency. Overreliance on AI, however, raises not only compliance red flags but also serious concerns about the accuracy of the resulting MSA projection. When it comes to protecting Medicare’s future interests, the Medicare beneficiary’s future care, and the workers’ compensation payer’s exposure to MSP liability, these risks are often overlooked. This blog explores the critical dangers workers’ compensation insurance payers face when relying too heavily on AI-driven tools in the MSA process and offers guidance for responsible integration.
What Medicare Set-Asides Are, and Why Accuracy Matters
A Medicare Set-Aside is a financial arrangement utilized primarily in workers' compensation settlements to allocate a portion of the settlement for future medical expenses that would otherwise be covered by Medicare.
The Medicare Secondary Payer (MSP) statute requires parties to reasonably recognize Medicare’s interests at the time of settlement. And while MSAs are voluntary, the Centers for Medicare & Medicaid Services (CMS) essentially requires that MSAs be adequately funded and accurately calculated to prevent cost-shifting to Medicare. Misestimating these costs can result in CMS not recognizing the underlying settlement, denying payment for a Medicare beneficiary’s injury-related future medical care, and potentially forcing the claimant to spend down up to their total workers’ compensation settlement amount on otherwise Medicare-covered treatment and expenses. For example, a claimant who settles for $200,000 with an underfunded MSA could be required to exhaust that entire $200,000 on injury-related, Medicare-covered care before Medicare resumes payment.
Where AI Enters the Picture
AI is increasingly being used to: scan and summarize medical records using natural language processing (NLP); estimate future treatment costs based on historical claims data; generate automated allocation reports; and identify inconsistencies or anomalies in records received by the vendor.
While these tools can significantly reduce manual workload and expedite the MSA process, their use raises complex concerns, particularly where AI fails to project an MSA amount appropriate to the specific injury and claim.
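To illustrate how a small extraction error can compound into a materially inaccurate projection, consider the simplified sketch below. The arithmetic, inputs, and figures are hypothetical; actual WCMSA pricing follows CMS fee schedules and the methodologies described in the WCMSA Reference Guide.

```python
# Simplified, hypothetical sketch of the arithmetic behind one MSA
# cost component. Real WCMSA pricing follows CMS fee schedules and
# the WCMSA Reference Guide; these inputs are for illustration only.

def project_component_cost(unit_cost: float, units_per_year: float,
                           life_expectancy_years: float) -> float:
    """Project the lifetime cost of a single treatment component."""
    return unit_cost * units_per_year * life_expectancy_years

# Suppose a medication costs $85 per fill and the claimant has a
# 22-year life expectancy.
monthly = project_component_cost(85.00, 12, 22)  # records read as monthly fills
weekly = project_component_cost(85.00, 52, 22)   # records read as weekly fills

print(f"If filled monthly: ${monthly:,.2f}")  # $22,440.00
print(f"If filled weekly:  ${weekly:,.2f}")   # $97,240.00
```

If an automated record review misreads "weekly" as "monthly" (or vice versa), the same prescription produces projections roughly $75,000 apart, precisely the kind of error a trained clinical reviewer is positioned to catch.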
The Risks When Automation Goes Too Far
CMS Compliance Risk: CMS guidance provides that where an MSA is submitted to the Workers’ Compensation Review Contractor (WCRC) for review and approval, “all of the WCRC reviewers are licensed healthcare professionals, including registered nurses, physicians, nurse practitioners, and professional counselors” (See Section 9.4.2 of the WCMSA Reference Guide).
An AI-generated MSA report cannot match the dynamic expertise of the WCRC reviewers. In generating an MSA, AI is likely to miss critical nuances in an individual’s medical circumstances, state-law questions of compensability, and CMS/WCRC review trends. Medical mitigation opportunities that could drive a more reasonable MSA allocation are also unlikely to be identified by AI and incorporated into the report.
HIPAA Concerns: AI systems used in MSAs may process large volumes of protected health information (PHI), including medical records, treatment histories, and payment data. If these systems are not properly secured, there is a risk of unauthorized access, breaches, or data leakage. For example, using a non-HIPAA-compliant AI platform to summarize medical records could expose PHI to third parties without proper safeguards.
Further, HIPAA's minimum necessary standard requires that only the minimum amount of PHI necessary for a task be used or disclosed. AI tools might access or retain more data than necessary for their intended function. For example, an AI model trained to summarize full medical histories may retain extraneous sensitive information not required for the MSA. These risks alone warrant caution before entrusting PHI to AI tools in the MSA process.
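To make the minimum necessary concern concrete, the sketch below shows one way obvious identifiers might be stripped from record text before it ever reaches an AI summarization tool. This is a minimal illustration under stated assumptions, not a complete de-identification method: the patterns catch only a few identifier formats, and true HIPAA de-identification (whether Safe Harbor or expert determination) requires far more.

```python
import re

# Minimal, illustrative pre-processing step: replace a few obvious
# identifier formats with redaction markers before any text is sent
# to an AI tool. This is NOT a complete HIPAA de-identification method.
EXAMPLE_PATTERNS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN REDACTED]"),              # Social Security numbers
    (r"\b\d{2}/\d{2}/\d{4}\b", "[DATE REDACTED]"),             # dates such as birth dates
    (r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE REDACTED]"),  # phone numbers
]

def redact_identifiers(text: str) -> str:
    """Replace the example identifier patterns with redaction markers."""
    for pattern, marker in EXAMPLE_PATTERNS:
        text = re.sub(pattern, marker, text)
    return text

record = "Claimant DOB 04/12/1967, SSN 123-45-6789, follow-up on lumbar fusion."
print(redact_identifiers(record))
# Claimant DOB [DATE REDACTED], SSN [SSN REDACTED], follow-up on lumbar fusion.
```

Even with a pre-processing step like this, the underlying obligation does not change: PHI sent to any AI platform must be covered by appropriate safeguards and, where applicable, a business associate agreement.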
Legal Exposure: Over- or under-allocating MSA funds due to flawed AI logic can expose parties to liability. Again, if CMS later determines the MSA amount allocated at the time of settlement was insufficient, CMS may refuse to recognize the underlying settlement or to cover the claimant’s injury-related care.
Black Box Algorithms: Many AI tools operate as "black boxes," meaning their decision-making processes are not transparent. In the event of litigation or a CMS inquiry, the inability to explain how an allocation figure was reached, and therefore how it reasonably recognizes Medicare’s future interests, could become a serious liability.
Lack of Clinical Judgment: AI may miss nuances in medical records, such as off-label drug use, comorbidities, or treatment variations that significantly impact cost projections. A trained human reviewer can fully account for these variables.
Industry Concerns: While it appears no high-profile CMS enforcement actions have explicitly targeted AI-generated MSAs (yet), industry professionals have reported increased CMS scrutiny in MSA review processes. There are also growing concerns about the use of AI in vendor services that promise faster turnaround times without disclosing the extent of automation used.
Best Practices: Using AI Responsibly in MSAs
Workers’ compensation payers should demand transparency from their MSA vendor about AI’s role in the allocation process, require documentation of how AI tools were used in generating the MSA, and validate those methodologies. Most importantly, payers should ensure that all AI-generated outputs flow from secure, HIPAA-compliant sources and are, at a minimum, reviewed and signed off on by qualified medical and legal professionals.
At Sanderson Firm, we believe in the strong value of our combined clinical and legal approach to MSA generation. We use only validated technologies that are HIPAA-secure and transparent, and all of our MSAs are written entirely by credentialed clinical allocators and licensed attorneys without the use of AI. We are not ruling out the use of AI in the future, but it is our position that AI technology is not yet advanced enough to alleviate the serious concerns outlined above.
In conclusion, there is little doubt that AI has the potential to enhance efficiency in MSA allocations, but it should never fully replace human expertise and judgment. Compliance with the MSP statute and protection against legal exposure require a careful, transparent, and balanced approach. In this high-stakes environment, responsible use of AI is not just best practice; it is essential.