If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts, or a draft email in Gmail, constitute "revealing information from a lawyer to a third party"?
What if Google has enabled AI features on these? Feels like this area really needs clarity for users rather than waiting for courts to rule on it.
Absolutely wrong in the U.S. The police can't just break into your home and demand it, but a judge can 100% mandate discovery or a subpoena if there is reason to believe that evidence exists which is relevant to the case.
The 4th amendment prohibits UNREASONABLE search and seizure, and we let judges make that determination. You never have absolute privacy rights.
There is some protection of personal private documents for civil cases. But for a criminal case, there is no 4th or 5th amendment protection for stuff you wrote in your diary.
And local LLM logs are no different from a txt file or the command history on your computer: they could still be requested in discovery.
First off, the Fifth Amendment right to not self-incriminate is rather narrower than you might expect. With regard to document production, it only privileges you from having to produce documents if the act of producing those documents would in effect incriminate you. So if you tell people "I've got a diary where I've been keeping track of all the crimes I've committed..." the government can force you to turn over that diary.
Second, the default assumption whenever you send something to another person is that it's unprivileged communication. IANAL, but even using cloud storage for things I'd want to remain privileged is something I'd want to ask a lawyer about before relying on. Although that's also as much because the default privacy policy of most services is "fuck you."
Which is what happened here. Claude's privacy policy says that Anthropic reserves the right to share your chats with third parties for various reasons, which means you have no reasonable expectation of privacy in those communications in the first place and automatically defeats any other confidential privileges. What happened is therefore little different from the defendant texting his attorney's responses to his friends, which is a fairly time-worn way of defeating attorney-client privilege.
Seems an opportune time to remember that every day is STFU Friday. And, to quote The Wire, is you taking notes on a criminal fucking conspiracy?
Is that true? I would expect that any notes I have, in any form, could be requested during discovery (attorney-client privilege being one of the few exceptions, and narrower than people assume).
Seems like there might be demand for chat clients with end-to-end encryption.
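To illustrate the end-to-end idea: the encryption key lives only on the user's device, so the chat provider stores ciphertext it cannot read and has nothing meaningful to hand over. The sketch below is a toy data-flow illustration using only the standard library; the XOR "cipher" is a teaching stand-in, NOT a secure scheme, and all names are hypothetical. A real client would use a vetted cryptographic library.

```python
import hashlib
import secrets

# Toy keystream: hash the key, a per-message nonce, and a counter.
# Illustrative only -- not a secure construction.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)  # fresh per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)  # generated and kept on the device only
nonce, ct = encrypt(key, b"draft question for my attorney")
assert ct != b"draft question for my attorney"  # server stores only this
assert decrypt(key, nonce, ct) == b"draft question for my attorney"
```

The point is architectural, not cryptographic: whatever the cipher, if only ciphertext ever leaves the device, a subpoena to the provider yields nothing readable.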
'No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote'.
A local model, or Venice, is still a platform; it just runs locally.
Nerd smarts seldom survive real world smarts. Reminds me of this: https://xkcd.com/538/
Sure but you can delete the logs yourself.
Seems dumb, and like it will cause quite a few issues until it is overturned.
Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.
We call them "chatbots" and anthropomorphize LLMs, but, despite the name of Claude's parent company, Claude is not a person.
If so, then does a Google doc for your attorney written with Google AI auto enabled have attorney client privilege?
If so, the AI chats for figuring out what you want to say to your attorney would seem to fall under the same category. And so there is either a contradiction or an unintended widening of scope.
Chatbots are not people. They are computer programs. And there's no other realm I can think of where merely interfacing with a computer program breaks attorney-client privilege.
It is equivalent to saying an email to your lawyer breaks privilege because you communicated with gmail. And it gets turbofucked when you consider that a program may be sending your information to an LLM. Would this same judge rule that having copilot installed in Outlook also breaks privilege because they "chatted with an outside party" while drafting an email (even if they didn't intend to send it to copilot)?
I can't think of a reason this isn't about the technology.
You send a prompt to a neutral third party who then sends it to an AI model and then routes the response back to you?
Meanwhile, sensible people perform sensitive defense- and prosecution-related chats anonymously, facilitated via local LLMs or cryptocurrency.