US v. Heppner (S.D.N.Y. 2026) no attorney-client privilege for AI chats [pdf]
Sevii
45 minutes ago
[-]
How is this not effectively a ban on representing yourself in court? The lawyers and judge are going to be using AI. But the layman isn't allowed to use it?
reply
flkiwi
51 minutes ago
[-]
Obviously this (along with the original unwritten order a few weeks ago) is causing a stir, but this decision isn't as weird as it sounds. The defendant's assertion was essentially a retroactive application of privilege: he didn't use Claude to draft documents at his attorney's request but instead used Claude effectively in lieu of an attorney and later provided the Claude-drafted materials to his attorney (heavily paraphrasing here). Privilege is not a bandage that closes self-inflicted wounds.

I have some concerns about some of the reasoning, namely the practical implications of referencing Claude's TOS in a world where public AI features are creeping into everything, but I expect some of the reasoning is based on this particular defendant likely being more sophisticated than an average person.

reply
pvtmert
1 hour ago
[-]
People point out in sibling comments: would a phone call then be outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example. Can it be "extracted" for the same purpose?

Another point: to make it safer, one could share the "chat" with the lawyer; that way it becomes the medium of communication.

reply
asdfasgasdgasdg
1 hour ago
[-]
Well, what type of phone call? You mean a phone call between a lawyer and a client? If so, then of course it is protected, because it is communication between the lawyer and the client. It is not a good analogy for Claude chats, because those chats are not communication between a lawyer and a client.

The concept of sharing the chat with the lawyer will not work, since as the ruling points out, you cannot turn a non-privileged document into a privileged one by sharing it with your lawyer after the fact.

reply
jerf
57 minutes ago
[-]
The law has a concept of a "carrier" [1], and has the ability to judge whether or not the carrier in question is responsible for what it is carrying.

I'm not making a blanket statement that that means everything is a carrier, because a good chunk of the page I linked is devoted to endless legal nuances and I defer the details of the concept to those who know better. I'm just saying that the law has a well-established concept for this sort of situation, such that it is not the case that just because a third party is involved instantly all protections dissolve. If you really want to dig into the details, that's something an AI that hits the web and digests things would be pretty good at, as long as you're not planning on legal action based on that. Sometimes the hardest part of learning about something is just finding the term for it that lets you dig in.

[1]: https://en.wikipedia.org/wiki/Common_carrier

reply
SpicyLemonZest
55 minutes ago
[-]
> another point to make it safer would be sharing the "chat" with the lawyer, this way it becomes media of communication.

This guy made the same argument, but as the court detailed, this is a misunderstanding of attorney-client privilege. Sharing an unprivileged conversation with your lawyer doesn't make it privileged. A phone call to your lawyer is privileged, but a phone call to your cousin Jimbo about what you should tell your lawyer is not.

reply
dathinab
10 minutes ago
[-]
The headline is a bit misleading.

It's not "no attorney-client privilege for AI chats" in general.

Rather, it's a situation where the same would apply if, instead of going to a chat bot, the person had gone to some random third party who is not an attorney.

As in:

- the documents were not communication between the defendant and their attorney, but between the defendant and the AI

- the AI is not an attorney

- the attorney didn't instruct the defendant to use the AI / the court found the defendant did not communicate with the AI for the purpose of seeking legal counsel

- the communications with the AI (provider) were not confidential, because a) it's an arbitrary 3rd party and b) they explicitly exclude usage for legal cases in their TOS

Still, this isn't a nothing-burger, as some of the things the court pointed out could become highly problematic in other contexts. Like the insistence that attorney-client privilege is fundamentally built on a trusting human relationship, rather than simply a trusting relationship. Or the idea that AI isn't just part of facilitating communication, the way a spell checker, word processor, voicemail box, or legal reference book is: all of those potentially involve third parties, none of them is by itself communication with a human, yet all of them can be part of facilitating that communication.

reply
fny
57 minutes ago
[-]
I highly recommend everyone actually read the opinion. It's such a thorough legal takedown of Heppner, you'll learn how the law works and why it doesn't apply to a lot of the made up cases in this thread:

TLDR:

- Claude told him IANAL

- Claude privacy policies say they "may disclose personal data to third parties in connection with claims, disputes, or litigation"

- The work product doctrine does not apply in the same way to plaintiffs

- The lawyers did not direct him to use Claude (i.e. they did not direct him to do research for the case using a specific tool)

My takeaway is that, as things stand, I should not do any work in plaintext or without a VPN. Everything else was up for grabs even before this case.

reply
asdfasgasdgasdg
53 minutes ago
[-]
Is a VPN really going to help here? I guess it might, if you can figure out a way to pay Claude anonymously. But if you are charged with a crime and your computer is seized, and there is some way to discover your Claude account from the contents of your computer, then you will be up a creek either way.

My takeaway is: don't do crime, and if you must do crime, don't use AI in the commission of a crime, in a similar way as it is unwise for criminals to keep recordings of their own phone conversations or what have you (a surprisingly common habit for criminals!).

reply
randallsquared
23 minutes ago
[-]
That's a great takeaway, but may not be practically achievable in the world where

> The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day.

-- https://www.amazon.com/Three-Felonies-Day-Target-Innocent/dp...

reply
mmastrac
2 hours ago
[-]
There is no way that this state of things survives long-term. Rationally, it's really no different than any other tool involved in production of your work product.

FWIW not all cases have gone the same way, so there is likely to be a higher reckoning on this in multiple countries: https://fingfx.thomsonreuters.com/gfx/legaldocs/mypmyjwdzpr/...

reply
fny
1 hour ago
[-]
> “Plaintiff, as a pro se litigant, has a right to assert work product protection over such material.”

This just argues attorneys have this protection--which is true. Typical plaintiffs do not have the same level of protection.

reply
altairprime
1 hour ago
[-]
They’d have to pass a Senate bill modifying copyright and granting corporate-nonperson status with legal rights to hosted, certified by the bar, registered and renewed AIs only. Otherwise the work that’s markov’d as ‘legal advice’ has no origination of record from a legally-recognized entity and therefore can’t be affirmed to be legal advice (legal advice is not public domain, or else protections would be drastically weakened; and, provided by A to B test fails: no such entity A), and anyone could claim the entirety of their email as protected from discovery by ‘cc’ing AI’ for legal advice on every email for a vacation responder reply emitted by a self-hosted trepanned agent (a corrupted lawyer can still give protected legal advice).

Or, they’d have to assert that content generated by AI on behalf of a user is protected — there’s no way to tell whether it’s legal advice so it all must be treated as such (can’t trust the AI to judge this, given how hallucinatory they are in legal filings!) — at which point AI companies would be refused the right to harvest your AI conversations for further training and profit-extraction (which would subject them to prosecution for, of all things, illegal wiretap under §2511(1)(e)(i) if not others). Google would never allow that to happen, seeing as how that’s literally their entire business.

I fully expect someone to set up the equivalent of HIPAA for legal advice AIs and for that to be found acceptable for instances hosted in protected enclaves, but the big four’s main products aren’t likely to qualify for that until they solve hallucinations and earn back judges’ trust.

(I am not your lawyer, this is not legal advice. Ironically, I wouldn’t have to say this if it was AI writing. Heh.)

reply
mystraline
15 minutes ago
[-]
I'm not surprised at all. Corporate LLM chats are saved, used as training corpus, and are a definite target for discovery.

Running your own LLM on your own hardware is how you can do this without getting hit with discovery.

And also, you want to run an LLM that's abliterated and larger. And if you connect to the internet, USE A VPN.

reply
MengerSponge
1 hour ago
[-]
I'm guessing a self-hosted chat remains privileged?
reply
asdfasgasdgasdg
1 hour ago
[-]
Definitely not, unless you are acting as your own advocate. Self-hosting does not offer any form of protection. Just like notes you write yourself on your PC, a self-hosted chat could be used as evidence against you.
reply
siliconc0w
1 hour ago
[-]
This is a pretty terrible decision and inconsistent with all sorts of other standards. If I did legal research in Google Docs, it'd be covered. If I went to a law library and took notes, it'd be covered, etc.
reply
bobro
34 minutes ago
[-]
Chatting with Claude strikes me as fundamentally different from writing your own notes.
reply
jeffbee
2 hours ago
[-]
Heppner's argument was dumb but it opens a field of interesting questions. If I use a document processor (like Google Docs) to compose a message to my attorney, which message itself would be privileged, but I use some sidebar feature of Google Docs/Gemini to clean up a sentence that I thought was clunky, and elsewhere I have, for whatever reason, enabled features that permit Google to use inputs and outputs to train or refine their models, has that destroyed the privilege?
reply
erikerikson
59 minutes ago
[-]
The brief linked above[0] was easy to read. IANAL but in it the author seems to say that online tools fail to meet the confidentiality "test" and explains the ruling in clear language.

[0] https://news.ycombinator.com/item?id=47779377

reply
mrhottakes
2 hours ago
[-]
Yes, you lose the privilege if your attorney-client communications are not intended to be confidential. If you agree to share those communications with a third party, you don't intend them to be confidential.
reply
robterrell
1 hour ago
[-]
But that communication is clearly intended to be confidential. Also isn't having one attorney on a multi-party communication marked confidential sufficient to create privilege?
reply
mtlynch
1 hour ago
[-]
When I worked at two different FAANG companies, both legal orientation sessions taught this specific scenario as an example of something that's not attorney-client privileged.

If you email your lawyer to ask legal questions, that's privileged communication.

If you just cc a lawyer on a thread while you talk to other people, adding the lawyer doesn't make the conversation privileged or protected.

reply
hedora
1 hour ago
[-]
That is an erosion of the social contract from the early days of SaaS.

The law in the US is based on the expectation of privacy. If companies and the US government repeatedly egregiously share private data in violation of terms of service and the law, then what expectation is there?

25 years ago, I'd say "Checking the 'do not train on my data' button in an Anthropic account would pretty clearly create an expectation of privacy." These days? OpenAI had to send all such data to the New York Times, the government has been illegally wiretapping the whole planet for decades, the US CLOUD Act exists, and companies retroactively change terms of service all the time.

Heck, Meta has been secretly capturing lewd bedroom videos and paying people to watch them, and it barely made the news, just like the allegations from the WhatsApp content moderation team, who claimed they have access to WhatsApp E2EE content (what other content could they be moderating?!?)

reply
hrimfaxi
1 hour ago
[-]
Right, so calling my attorney is the same, since I'm sharing the call with the phone company.
reply
altairprime
1 hour ago
[-]
Nope. The relevant wiretapping-law precedent is known as 'minimization': when a legal tap is obtained of your phone lines, the expectation is that every effort will be taken not to tap attorney-client calls, lest your entire evidence packet get thrown out for failure to do so. That precedent is not automatically transitive to AI just because one thinks it ought to be. Telephone lines between human beings are protected both by extensive case law and by actual statute; neither yet applies between a human and a third-party corporation offering an AI, especially when at least one major AI is contractually declared in shrinkwrap to be 'for entertainment use only'.
reply
margalabargala
1 hour ago
[-]
What constitutes "sharing with a third party", though? Using a third-party email service like Outlook or Gmail? Using a third-party docs service like Google Docs?

It doesn't seem right that a Google Doc would be privileged, but if you use the fancy spellcheck button, it no longer is.

reply
shimman
1 hour ago
[-]
The onus is on the companies to make this clear. If they aren't willing to tell users the dangers of using their own tools, that kind of tells you everything you need to know: they don't care about their customers, only $$$.

Be upset at Google for not taking privacy seriously, they never have and never will.

reply
jeffbee
1 hour ago
[-]
Right, exactly. It is also too much to expect that a user who enabled the "personalization" button in the Gemini app, for unrelated reasons, now can't compose a privileged email to their counsel. It's a minefield.
reply
lokar
1 hour ago
[-]
Well, at Google people get legal advice from in house lawyers via Gmail. Are they not sharing that with at least some of the Gmail team (who could read the email)?
reply
jeffbee
1 hour ago
[-]
Gmail users (correctly and reasonably) do not expect the "gmail team" to read their emails, except using glass-breaking incident response privileges that leave audit trails and trigger review. Users expect that email is private. Anyway, both Google's privacy policy and American jurisprudence segregate things like emails, voice calls, and video calls into a separate "communications" category, while Google's privacy policy treats Google Docs as "other content you create", even though the difference seems immaterial if you know how these systems work.
reply
jeffbee
2 hours ago
[-]
I don't think that hot take will survive much contact with the near future, at least not without a good deal of controversy.
reply
leni536
1 hour ago
[-]
What about email?
reply