I couldn't ever imagine making a court submission with hallucinated legal references. It seems incredibly obvious to me that you have to check what the AI says. If the AI gives me a case citation, I go into Westlaw and look up and read the case. Only then, and only if it supports my argument, do I include it in my submission.
The majority of the time, AI saves me a lot of work by leading me straight to the legislation or case law that I need. Sometimes it completely makes things up. The only way to know the difference is to check everything.
I'm genuinely amazed that there are lawyers who don't realise this. Even pre-AI it was always drilled into us at law school that you never rely on Google results (e.g. blogs, law firm websites, etc.) as any kind of authoritative source. Even government-published legal guidance is suspect (I have often found it to be subtly wrong when compared to the source material). You can use these things as a starting point to help guide the overall direction of your research, but your final work has to be based on reading the legislation and case law yourself. Anything short of that is a recipe for a professional negligence claim from your client.
Just remember who the bottom half of your law school classmates were. Sometimes we forget those people.
Keep being a real one, GP. It's so hard to not become jaded.
It's not lawyers, it's everyone.
You do expect it from most professionals.
"Set a high bar" comes from pole vaulting.
>The call to the bar[1] is a legal term of art in most common law jurisdictions where persons must be qualified to be allowed to argue in court on behalf of another party and are then said to have been "called to the bar" or to have received "call to the bar". "The bar" is now used as a collective noun for barristers, but literally referred to the wooden barrier in old courtrooms, which separated the often crowded public area at the rear from the space near the judges reserved for those having business with the court. Barristers would sit or stand immediately behind it, facing the judge, and could use it as a table for their briefs.
If you've never met a "professional" perhaps.
This is more often a feature than a bug in my experience.
Like if you follow their instructions in good faith you'll wind up going through 80% of the permitting you'd need to open a restaurant just to have some boy scouts sell baked goods at your strip mall. In the best possible case the secretary is over-worked, doesn't wanna do the bullshit, and whispers to you "why don't you just <tweak some unimportant particulars> and then you wouldn't even need a permit". Ditto for just about everything else the government regulates heavily at the high end but that the casual or incidental user is, in practice, less subject to.
IDK if it's ass covering or malice because the distinction doesn't matter. It's hostile to the public these agencies are supposed to serve.
For example, they claim that solo and small firms are the most likely to file AI hallucinations because they account for 50% and 40%, respectively, of the instances of legal briefs with hallucinations. However, without the base rate of briefs filed by solo or small firms compared to larger firms, we don't know whether that is unusual. If 50% of all briefs were filed by solo firms and 40% were filed by small firms, then the data would actually be showing that firm size doesn't matter.
Show me where she makes any predictive claims about likelihood. The analysis finds that of the cases where AI was used, 90% involved either solo practices or small firms. It does not conclude that there's a 90% chance a given filing using AI was done by a solo or small firm, or make any other assertions about rates.
That is an assertion which requires numbers. If 98% of firms submitting legal briefs are solo or small firms, then the above statement is untrue; if my prior sentence is true, the archetype would be the not-small/solo firms.
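To make the base-rate point concrete, here's a minimal sketch with entirely made-up numbers (none of these counts come from the article or the database):

```python
# Illustrative only: every number below is hypothetical, chosen to show
# why raw incident shares say nothing without base rates.
briefs_filed = {"solo": 50_000, "small": 40_000, "large": 10_000}  # assumed totals filed
hallucinated = {"solo": 50, "small": 40, "large": 10}              # 50% / 40% / 10% of incidents

for firm, total in briefs_filed.items():
    rate = hallucinated[firm] / total
    print(f"{firm}: {hallucinated[firm]} of {total:,} briefs = {rate:.3%}")

# With these numbers, every firm size hallucinates at the same 0.100% rate,
# even though solo + small firms account for 90% of the raw incidents.
```

That's exactly the scenario where "90% of incidents are solo/small firms" would tell you nothing about which kind of firm is riskier.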
The background data is also suspect.
https://www.damiencharlotin.com/hallucinations/
"all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that the AI produced hallucinated content."
While a good idea, the database relies on news reports and on people submitting examples. There may be some scraping of court documents as well; it's not entirely clear.
Regardless, the data only captures people getting called out for such nonsense. A larger firm may have such issues with a lawyer, apologize to the court, have more clout with the court, and replace the lawyer with another employee.
This is something a smaller firm cannot do, if it is a firm of one person.
It's a nice writeup, and interesting. But it does lead to unverified assertions and conclusions.
Who used Claude, according to the invoice, and came to court with a completely false understanding of the record.
Chewed out by the judge and everything.
One person I spoke to used to write Quality Control reports, and now just uses ChatGPT because "It knows every part in our supply chain, just like that!"
Seems like some better warnings are in order here and there.
I only know how lawyers work from "Suits", but it looks like tedious, boring work, mainly searching for information. So an LLM (without knowing about hallucinations) probably feels like a god-send.
Perhaps LLMs are the solution to elite overproduction?
When you get too many elites, and society doesn't have useful and rewarding places for them all, those who think they should be elites who are left out get resentful. Sometimes they try to overthrow the current elites to get their "rightful" place. This can literally lead to revolutions; it often will lead at least to social unrest.
So what I think the GP is saying is, if we've got too many people with law degrees, those that submit filings with AI hallucinations in them may get disbarred, thereby reducing the overpopulation of lawyers.
But that's just my guess. I also found the GP posting to be less than clear.
Doesn't this prove your post wrong? You're not sure that this will get anyone disbarred, presumably because you haven't seen somebody get disbarred over it despite it occurring for years.
The problem with elite overproduction is that the credential system is broken and awards credentials to people that don't hold a certain skill. In this case, actually producing valid legal filings.
I can represent myself incompetently for nearly free.
Bonus. In my state at least, Pro Se representatives are not required to cite case law, which I think is an area where AI is particularly prone to hallucination.
By supplying a single PDF of my state's uniform civil code, and having my prompt require that all answers be grounded in those rules, I have gotten pretty good results.
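For anyone who wants to replicate this outside the chat UI, here's a minimal sketch of the same grounding idea via API, assuming the openai Python client and pypdf; the file name, model, and sample question are placeholders, not what the parent actually used:

```python
# Sketch: pin a civil-code PDF's text into the system prompt and require
# rule-grounded answers. Assumes `pip install openai pypdf`.
from pypdf import PdfReader
from openai import OpenAI

# Extract the full text of the civil-code PDF (placeholder file name).
rules_text = "\n".join(
    page.extract_text() or "" for page in PdfReader("uniform_civil_code.pdf").pages
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the rules below. Cite the rule number for "
                "every claim. If the rules don't cover the question, say so "
                "rather than guessing.\n\n" + rules_text
            ),
        },
        {"role": "user", "content": "What is the deadline to respond to a motion to dismiss?"},
    ],
)
print(response.choices[0].message.content)
```

The obvious caveats: a whole civil code may not fit in the model's context window, and grounding instructions reduce, but don't eliminate, fabricated citations, so you still have to check every cited rule against the PDF.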
After nine months of battling her three lawyers with lots of motions, responses, and hearings, including ones where I lost on technicalities due to AI, I finally got a reasonable settlement offer where she came down nearly 50% of what she was asking.
Saving me over $800,000.
Highly recommended and a game changer for Pro Se.
Also “get a prenup” is the best piece of advice you will ever read on HN.
Last week, I finally got an attorney to review a bunch of my LLM conversations / court filings, and it's saved me a few thousand dollars (I didn't have to fumble through explaining my legal theory to him; he can just follow along with my selected GPTs).
This was only because I got served a countersuit, and figured it was worth a few thousand dollars to not owe somebody on account of not entirely understanding civil procedure. Probably'll end up in Circuit Court, where being pro se is a terrible idea IMHO.
I wonder when we're going to see an AI-powered "Online Court Case Wizard" that lets you do lawsuits like installing Windows software.
The old courthouse had been converted to server rooms six months ago. The last human lawyer was just telling her so. Then his wearable pinged (unsolicited legal advice, possible tort) and he walked away mid-sentence. That afternoon, she glimpsed her neighbor watering his garden. They hadn't made eye contact since July. The liability was too great.
By evening she was up to $34k. Someone, somewhere, had caused her pain and suffering. She sat on her porch not looking at anything in particular. Her wearable chimed every few seconds.
This triggers a war with the AIRS that is programmed to maximise tax income, and must keep humans alive so they can be taxed.
You got the Groucho Marx gag wrong. He said "a four-year-old child could understand this" and requested a four-year-old child in order to help him understand it (implying he was stupid).
I've seen dozens of startups, particularly in SF, who would routinely settle employment disputes, who now get complaints so fucked up by hallucinations that they single-handedly tank the plaintiffs' otherwise-winnable cases. (Crazier, these were traditionally contingency cases.)