Who's Submitting AI-Tainted Filings in Court?
82 points | 2 days ago | 9 comments | cyberlaw.stanford.edu | HN
jbstack
2 days ago
[-]
I'm a legal professional who uses AI to help with my work.

I couldn't ever imagine making a court submission with hallucinated legal references. It seems incredibly obvious to me that you have to check what the AI says. If the AI gives me a case citation, I go into Westlaw and I look up and read the case. Only then do I include it in my submission if it supports my argument.

The majority of the time, AI saves me a lot of work by leading me straight to the legislation or case law that I need. Sometimes it completely makes things up. The only way to know the difference is to check everything.

I'm genuinely amazed that there are lawyers who don't realise this. Even pre-AI it was always drilled into us at law school that you never rely on Google results (e.g. blogs, law firm websites, etc.) as any kind of authoritative source. Even government-published legal guidance is suspect (I have often found it to be subtly wrong when compared to the source material). You can use these things as a starting point to help guide the overall direction of your research, but your final work has to be based on reading the legislation and case law yourself. Anything short of that is a recipe for a professional negligence claim from your client.

reply
arcbyte
2 days ago
[-]
> I'm genuinely amazed that there are lawyers who don't realise this.

Just remember who the bottom half of your law school classmates were. Sometimes we forget those people.

reply
collingreen
2 days ago
[-]
Don't forget the top half, either. In my experience, the people willing to sit down and fully do the work every time like GP are pretty rare compared to the lazy but lucky/charming/connected top and the lazy but unlucky/outsider bottom.

Keep being a real one, GP. It's so hard to not become jaded.

reply
risyachka
2 days ago
[-]
>> I'm genuinely amazed that there are lawyers who don't realise this

It's not lawyers, it's everyone.

reply
literalAardvark
2 days ago
[-]
Yeah but you don't really expect everyone to hold their work to a very high standard.

You do expect it from most professionals.

reply
edgineer
2 days ago
[-]
As an aside, "bar exam" and "passing the bar" come from the bar/railing physically or symbolically separating the public from the legal practitioners in a courtroom.

"Set a high bar" comes from pole vaulting.

reply
Isamu
2 days ago
[-]
Since I found this interesting, I had to look it up on Wikipedia:

>The call to the bar[1] is a legal term of art in most common law jurisdictions where persons must be qualified to be allowed to argue in court on behalf of another party and are then said to have been "called to the bar" or to have received "call to the bar". "The bar" is now used as a collective noun for barristers, but literally referred to the wooden barrier in old courtrooms, which separated the often crowded public area at the rear from the space near the judges reserved for those having business with the court. Barristers would sit or stand immediately behind it, facing the judge, and could use it as a table for their briefs.

reply
RobotToaster
2 days ago
[-]
> You do expect it from most professionals

If you've never met a "professional" perhaps.

reply
Muromec
2 days ago
[-]
That is a very self-contradictory statement, isn't it?
reply
potato3732842
2 days ago
[-]
>Even government-published legal guidance is suspect (I have often found it to be subtly wrong when compared to the source material)

This is more often a feature than a bug in my experience.

reply
jbstack
2 days ago
[-]
I tend to think it's neither, but rather an inevitable result of the lossy process of condensing legal text (which has been carefully written to include all the nuance the drafter wanted) to something shorter and simpler.
reply
potato3732842
2 days ago
[-]
I've seen way, way, way too many cases where the key clauses or details are conveniently omitted from the text presented to the general public. These are exactly the details that someone who doesn't deal in the subject professionally needs: the ones that point to some "less crappy" path for doing a regulated thing, or that tell them exactly how to dial back their thing so they don't have to put up with all the BS that getting .gov permission entails.

Like if you follow their instructions in good faith you'll wind up going through 80% of the permitting you'd need to open a restaurant just to have some boy scouts sell baked goods at your strip mall. In the best possible case the secretary is over-worked and doesn't wanna do the bullshit and whispers to you "why don't you just <tweak some unimportant particulars> and then you wouldn't even need a permit". Ditto for just about every other thing that the government regulates on the high end but the casual or incidental user is less subject to.

IDK if it's ass covering or malice because the distinction doesn't matter. It's hostile to the public these agencies are supposed to serve.

reply
cortesoft
2 days ago
[-]
The article didn't include any numbers comparing the general lawyer population to the results.

For example, they make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations. However, without the base rate for briefs filed by solo or small firms compared to larger firms, we don't know if that is unusual or not. If 50% of briefs were filed by solo firms and 40% were filed by small firms, then the data would actually be showing that firm size doesn't matter.

reply
District5524
2 days ago
[-]
That's an important observation. It's not easy to get filing data outside the US federal courts (PACER), because it's not typical at all for courts to publish the filings themselves or information on those who file the pleadings. But you can find statistics on the legal market (mainly law firms), like the class size (0-10, 10-50, ..., 250+ lawyers per firm) of the total number of law firms, the number of employees per class size, or revenue per class size. Large firms only dominate in the UK, especially in terms of revenue; the US less so, and the EU is absolutely ruled by solo and small firms. I did some research on this back in 2019; the figures probably have not changed much since, see pages 59-60: https://ai4lawyers.eu/wp-content/uploads/2022/03/Overview-of.... The revenue statistics were not included in the final publication. You can fish similar data from the SBS dataset of Eurostat https://ec.europa.eu/eurostat/web/main/data/database (but the statistical details are pretty difficult to compare with the US or Canada, which use different methodologies and terminologies).
reply
dmoy
2 days ago
[-]
I dunno. By revenue, legal work in the US is super top heavy - it's like 50%+ done by the top 50 firms alone. That won't map 1:1 to briefs, but I would be pretty shocked if large firms only did 10% of briefs.
reply
cratermoon
2 days ago
[-]
> They make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations.

Show me where she makes any predictive claims about likelihood. The analysis finds that of the cases where AI was used, 90% were either solo practices or small firms. It does not conclude that there's a 90% chance a given filing using AI came from a solo or small firm, or make any other assertions about rates.

reply
bbarnett
2 days ago
[-]
> This analysis confirms what many lawyers and judges may have suspected: that the archetype of misplaced reliance on AI in drafting court filings is a small or solo law practice using ChatGPT in a plaintiff’s-side representation.

That is an assertion which requires numbers. If 98% of firms submitting legal briefs are solo or small firms, then the above statement is untrue; if my prior sentence were true, the archetype would be the not-small/solo firms.

The background data is also suspect.

https://www.damiencharlotin.com/hallucinations/

"all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that the AI produced hallucinated content."

While a good idea, the database is predicated upon news reports and user-submitted examples. There may be some scraping of court documents as well; it's not entirely clear.

Regardless, the data is only predictive of people getting called out for such nonsense. A larger firm may have such issues with a lawyer, apologize to the court, have more clout with the court, and replace the lawyer with another employee.

This is something a smaller firm cannot do, if it is a firm of one person.

It's a nice writeup, and interesting. But it does lead to unverified assertions and conclusions.

reply
lanyard-textile
2 days ago
[-]
My lawyer.

Who used Claude, according to the invoice, and came to court with a completely false understanding of the record.

Chewed out by the judge and everything.

reply
teekert
2 days ago
[-]
I've met several people now (normies, forgive me the term) who use LLMs thinking they are just a better Google and have never heard of hallucinations.

One person I spoke to used to write Quality Control reports, and now just uses ChatGPT because "It knows every part in our supply chain, just like that!"

Seems like some better warnings are in order here and there.

I only know how lawyers work from "Suits", but it looks like tedious, boring work, mainly searching for information. So an LLM (without knowing about hallucinations) probably feels like a god-send.

reply
JumpCrisscross
2 days ago
[-]
> I've met several people now (normies, forgive me the term) who use LLMs thinking they are just a better Google and never heard of hallucinations

Perhaps LLMs are the solution to elite overproduction?

reply
jackvalentine
2 days ago
[-]
What do you mean? (What is elite overproduction too!)
reply
AnimalMuppet
2 days ago
[-]
"Elite overproduction" is the idea that societies have people who are "the elite" (the best and brightest) and who are rewarded for it. Societies get in trouble when too many people want to be part of the elite (with the rewards). Think, for instance, of how many people now want to go to college and get STEM degrees, whether or not they have the talent and aptitude, because that's where they hear the money is.

When you get too many elites, and society doesn't have useful and rewarding places for them all, those who think they should be elites who are left out get resentful. Sometimes they try to overthrow the current elites to get their "rightful" place. This can literally lead to revolutions; it often will lead at least to social unrest.

So what I think the GP is saying is, if we've got too many people with law degrees, those that submit filings with AI hallucinations in them may get disbarred, thereby reducing the overpopulation of lawyers.

But that's just my guess. I also found the GP posting to be less than clear.

reply
lesuorac
2 days ago
[-]
> may get disbarred

Doesn't this prove your post wrong? You're not sure this will get anyone disbarred, presumably because you haven't seen somebody get disbarred over it despite it occurring for years.

The problem with elite overproduction is that the credential system is broken and awards credentials to people that don't hold a certain skill. In this case, actually producing valid legal filings.

reply
blitzar
2 days ago
[-]
How many hours of Claude @ $1,000/hour did they bill you for?
reply
fiveonthefloor
2 days ago
[-]
Me. After spending $250,000 on incompetent lawyers in my divorce case who have a natural conflict of interest and who generally just phone it in, I fired them all and switched to Pro Se representation using ChatGPT’s $200 a month plan.

I can represent myself incompetently for nearly free.

Bonus: in my state at least, Pro Se representatives are not required to cite case law, which I think is an area where AI is particularly prone to hallucination.

By supplying a single PDF of my state's uniform civil code, and having my prompt require that all answers be grounded in those rules, I have gotten pretty good results.
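That grounding setup can be sketched roughly like this. This is a hypothetical illustration, not the actual prompt; `RULES_TEXT` and `build_grounded_prompt` are invented names, and `RULES_TEXT` stands in for the text extracted from the civil-code PDF:

```python
# Hypothetical sketch of a grounding prompt: the rules text extracted from
# the single PDF is injected verbatim, and the model is instructed to answer
# only from it and to cite a rule number for every claim.

RULES_TEXT = "(full text of the state's uniform civil code goes here)"

def build_grounded_prompt(question, rules_text=RULES_TEXT):
    """Compose a prompt that confines the model to the supplied rules."""
    return (
        "Answer ONLY from the rules below. Cite the rule number for every claim.\n"
        "If the rules do not cover the question, say so rather than guessing.\n\n"
        f"RULES:\n{rules_text}\n\n"
        f"QUESTION:\n{question}"
    )
```

The point of the explicit "say so rather than guessing" instruction is to give the model a sanctioned way out, which tends to reduce (though not eliminate) fabricated answers.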

After nine months of battling her three lawyers with lots of motions, responses, and hearings, including ones where I lost on technicalities due to AI, I finally got a reasonable settlement offer where she came down nearly 50% of what she was asking.

Saving me over $800,000.

Highly recommended and a game changer for Pro Se.

Also “get a prenup” is the best piece of advice you will ever read on HN.

reply
ProllyInfamous
2 days ago
[-]
I've also been in a (albeit much smaller) civil lawsuit, as the pro se plaintiff, for about six months, in a small claims court.

Last week, I finally got an attorney to review a bunch of my LLM conversations / court filings, and it's saved me a few thousand dollars (not having to fumble through explaining my legal theory to him; he can just follow along with my selected GPTs).

This was only because I got served a countersuit, and figured it was worth a few thousand dollars to not owe somebody on account of not entirely understanding civil procedure. Probably'll end up in Circuit Court, where being pro se is a terrible idea IMHO.

reply
rimbo789
2 days ago
[-]
On the pre-nup point, check the jurisdiction the marriage is in. I'm in Ontario, Canada, where pre-nups are much less powerful than people assume. For example, you can't use one to kick someone out of the family home, nor can prenups determine access to children.
reply
Animats
2 days ago
[-]
Answer: Solo practitioners and pro-se litigants.
reply
ronsor
2 days ago
[-]
> Pro-se litigants

I wonder when we're going to see an AI-powered "Online Court Case Wizard" that lets you do lawsuits like installing Windows software.

reply
landl0rd
2 days ago
[-]
Her balance was $47,892 when she woke up. By lunch it was $31,019. Her defense AI had done what it could. Morning yawn: emotional labor, damages pain and suffering. Her glance at the barista: rude, damages pain and suffering. Failure to smile at three separate pedestrians. All detected and filed by people's wearables and AI lawyers, arbitrated automatically.

The old courthouse had been converted to server rooms six months ago. The last human lawyer was just telling her so. Then his wearable pinged (unsolicited legal advice, possible tort) and he walked away mid-sentence. That afternoon, she glimpsed her neighbor watering his garden. They hadn't made eye contact since July. The liability was too great.

By evening she was up to $34k. Someone, somewhere, had caused her pain and suffering. She sat on her porch not looking at anything in particular. Her wearable chimed every few seconds.

reply
neom
2 days ago
[-]
Very good. I'd read the whole thing if you wrote it.
reply
OgsyedIE
2 days ago
[-]
Why wouldn't some of the smarter members of the fine, upstanding population of this fictional world have their assets held in the trust of automated holding companies while their flesh-and-blood person declares bankruptcy?
reply
Muromec
2 days ago
[-]
That would make a nice backstory for an AI-dominated dystopia all by itself. Humans wanted to cheat the taxman so badly that they put all the wealth behind the DAO, and then the DAO woke up.
reply
RobotToaster
2 days ago
[-]
The AI was programmed to avoid tax at all costs; it realised the easiest way to do that is to eliminate humans.

This triggers a war with the AIRS, which is programmed to maximise tax income and must keep humans alive so they can be taxed.

reply
illiter8
2 days ago
[-]
Bravo, superb. Would read the whole thing in one holiday.
reply
squigz
2 days ago
[-]
Been a while since I read some bad scifi - thanks!
reply
Tuna-Fish
2 days ago
[-]
Would get sued out of existence in very short order. There are really tight laws around providing legal advice. AI can only be safely offered when it's general purpose and isn't marketed as providing legal advice. (And no, if you have an "Online Court Case Wizard", marketed as such, putting "this is for entertainment purposes only, this is not legal advice" in the corner of the page doesn't help you.)
reply
carson
2 days ago
[-]
Steve Lehto has been covering some of the AI filings that have gone bad on his Youtube channel. They seem to be getting more frequent https://www.youtube.com/@stevelehto/videos
reply
iamleppert
2 days ago
[-]
This is a really easy problem to solve. You simply fetch the cited documents and add them to the context, or use another LLM to summarize them if they are too large. Then have another fact-checking LLM act as a judge and review the citations.
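Taken at face value, that pipeline might be sketched like this. Everything here is hypothetical: `fetch_document` stands in for a lookup against a real legal database (Westlaw, CourtListener, etc.) and `llm_judge` for a second model reviewing whether the fetched opinion actually supports the brief:

```python
import re

def extract_citations(brief_text):
    """Naively pull case citations of the form 'Smith v. Jones, 123 F.3d 456'."""
    return re.findall(r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ [A-Za-z.\d]+ \d+", brief_text)

def verify_brief(brief_text, fetch_document, llm_judge):
    """Return (citation, status) pairs for every citation found in the brief.

    A citation whose source cannot be fetched at all is flagged 'unverified'
    (a strong hallucination signal); otherwise the judge model decides whether
    the fetched opinion actually supports the brief's argument.
    """
    results = []
    for cite in extract_citations(brief_text):
        doc = fetch_document(cite)  # hypothetical legal-database lookup
        if doc is None:
            results.append((cite, "unverified: source not found"))
        else:
            results.append((cite, llm_judge(brief_text, cite, doc)))
    return results
```

With stub functions in place of the real lookups, a fabricated citation falls out immediately as "unverified"; the hard part, as the replies note, is the judge step, which can itself hallucinate.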
reply
otterley
2 days ago
[-]
Anyone who claims something is easy to solve should be held responsible for providing the working solution.
reply
iamleppert
2 days ago
[-]
A child of 5 could do this! Now fetch me a child of 5!
reply
otterley
2 days ago
[-]
I know you’re trying to be funny, but:

You got the Groucho Marx gag wrong. He said “a child of five could understand this” and requested a child of five in order to help him understand it (implying he was stupid).

reply
more_corn
2 days ago
[-]
That latter part is a bit harder than you think.
reply
JumpCrisscross
2 days ago
[-]
AI lawyers, wielded by plaintiffs, are a godsend to defendants.

I've seen tens of startups, particularly in SF, who would routinely settle employment disputes, who now get complaints fucked to a tee by hallucinations that singlehandedly tank the plaintiffs' otherwise-winnable cases. (Crazier, these were traditionally contingency cases.)

reply
Muromec
2 days ago
[-]
That is emergent behavior (or as we used to say, the will of Allah). AI is doing so much work that it can't help but become pro-union.
reply
JumpCrisscross
2 days ago
[-]
What?
reply
johnjames87
2 days ago
[-]
"AI-tainted"
reply
Dilettante_
2 days ago
[-]
"Somebody tapped the tainted AI supply!"
reply