There’s a new twist on the old saying that people who represent themselves in court have fools for clients.
Today, more consumers, and even some lawyers, are using artificial intelligence in their legal battles, especially in insurance disputes. Plaintiffs are using AI tools to write complaints, appeal denied claims, and even argue motions in court, often to save on hiring a lawyer.
“Self-represented litigants using AI are starting to show up in insurance cases, mostly in smaller coverage disputes and claim denials,” Joseph Raetzer, an attorney and legal content reviewer at LawDistrict, told Insurify.
AI-assisted court filings look credible, with clean formatting and a professional tone, but lawyers say the content is often lacking.
“The filings look clean,” Raetzer said. “Then you read the substance, and it’s clear the person doesn’t actually understand the policy or the rules they’re arguing under.”
Small AI mistakes can have lasting consequences
Insurance disputes can be tough, especially for people representing themselves with AI. Insurance policies use strict definitions, exclusions, and tight deadlines. Even small mistakes can ruin a case.
“Miss a notice requirement or cite the wrong state regulation, and the case collapses fast,” Raetzer added.
In an oft-cited case, Mata v. Avianca Inc., a lawyer used ChatGPT to generate legal citations, and the AI platform fabricated several of them. The court sanctioned the attorney, imposing a $5,000 penalty.
“It became one of the first major cautionary tales about using AI in court without actually verifying what it spits out,” said Ron Harper, a licensed paralegal.
In Lacey v. State Farm General Insurance Co., a former Los Angeles County district attorney submitted a brief containing multiple AI-generated errors, including nonexistent cases and misquoted legal authorities. A court-appointed special master found that roughly a third of the citations were flawed and that the plaintiffs had failed to properly review the AI-assisted draft before filing it.
The court struck the filings and imposed more than $31,000 in sanctions, calling the conduct reckless and warning that reliance on AI without verification can mislead the court and waste judicial resources.
The pattern repeated in a Georgia case several months ago, when a man representing himself had his homeowners insurance dispute against State Farm dismissed. The judge noted that his legal submissions “included multiple citations to nonexistent cases” and said the plaintiff was “attempting to perpetrate a fraud on the Court” through those false citations.
The risks increase for AI use in insurance cases
Legal experts warn that insurance cases are especially risky for AI misuse. Policies are dense and technical and vary by state, making them hard to understand without legal training.
“In insurance cases, you’re dealing with complicated policy language, exclusions, and deadlines,” said Yosi Yahoudai, co-founder of J&Y Law. “Insurance companies have teams that handle this every day.”
The gap between experienced insurance companies and people representing themselves is obvious in court.
But AI can make self-represented litigants feel more confident than they should be.
“The documents often display an illusion of legitimacy because of the consistent use of case/law references and legal quotations,” said Alan Heimlich, founder of Heimlich Law. But, he adds, these documents often miss important procedural steps, which is a common reason cases fail.
“They contain little or no reference to procedural rules,” Heimlich said. “That creates significant exposure for self-represented parties.”
AI court miscues are a failure few are watching
Many cases in which plaintiffs represent themselves without a lawyer fly under the radar. Outcomes of self-represented insurance disputes are rarely published.
“A homeowner challenges a denied property claim, files AI-written motions, loses or settles, and the case never gets reported,” Raetzer said. “It just disappears into the docket.”
Since most results aren’t published, it’s hard to measure the extent to which AI is affecting legal outcomes.
However, the independent Damien Charlotin database, which tracks legal decisions involving hallucinated content and other AI-generated arguments, has logged more than 1,300 instances of judges reprimanding parties for AI misuse. That total has nearly tripled in the past six months.
Attorneys say insurers are already adapting to the growing use of AI in court cases.
Heimlich predicted a rise in early motion denials and procedural dismissals instead of substantive decisions.
In practical terms, it means insurers may move faster to challenge flawed filings before cases reach deeper legal arguments.
What’s next? Experts predict people will keep risking AI’s shortcomings to save money
While AI’s failures can get a case tossed immediately, experts expect the potential cost savings to keep drawing people to the technology. That may be particularly true for smaller claims, where legal fees often exceed the compensation sought for auto repairs or minor damage.
“Hiring a lawyer can cost more than the claim itself,” Raetzer said.
And while AI is a cheaper option that can make a plaintiff feel more confident, Yahoudai says people can’t afford to blindly trust the answers AI gives them.
“AI might encourage more people to file claims because they feel like they have help,” Yahoudai said. “But some AI answers are completely made up. Make sure those links actually work.”