All Those A.I. Note Takers? They're Making Lawyers Nervous
133 points | 6 hours ago | 18 comments | nytimes.com
burningion
4 hours ago
[-]
The main point raised in the article is that these bots may void attorney-client privilege.

But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

reply
coffeebeqn
4 hours ago
[-]
Plus they are super inaccurate. Gemini gets one of its three bullets subtly or very majorly wrong almost every time. Just a few weeks ago Gemini said we’re rolling out our payment setup in Russia. You know, the place we have 20+ sanctions packages on? We were talking about France in the meeting.
reply
operation_moose
4 hours ago
[-]
We've found they're surprisingly good if everyone on the call is using a decent headset.

The problems start when using conference room audio or when someone is on their laptop mic. If they miss a word they never output [unintelligible]; they just start playing Mad Libs based on the rest of the sentence.

We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.

reply
user_7832
4 hours ago
[-]
> If they miss a word they never output [unintelligible]; they just start playing Mad Libs based on the rest of the sentence.

IMO this is the single biggest flaw of LLMs. They're great at a lot of things, but not knowing when they're wrong (or don't have enough information to actually work with) is a critical weakness.

IMO there's no structural reason they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context/fill in the dots rank better on what people like... at the cost of accuracy.

reply
netdevphoenix
2 hours ago
[-]
It's just a token predictor; what do you expect? What we need are tools that embrace that and ping the agent to validate or double-check what it just said. But the trade-off is that this might hamper their capabilities to some degree.
reply
SlinkyOnStairs
1 hour ago
[-]
> It's just a token predictor what do you expect?

The point isn't that it's unexpected. It's that prior speech-to-text systems were much better about this particular failure mode: prone to spitting out entirely incorrect words, but not to rephrasing entire sentences.

This is a particularly bad failure mode because people don't notice it.

> What we need are tools that embrace that and ping the agent to validate what it just said or double check.

This is not a problem that can be fixed by throwing more AI at it. It's a problem shared by all such systems, whether they're audio-to-text transformers or LLMs. Agentic review would just push the system further towards producing output that looks correct but is not.

LLM translation does the same, yielding more natural text but generally not better translation. In several cases, especially the "easy" translation between similar languages (e.g. within a language group like Germanic or Nordic), LLM-powered translation is notably worse than more primitive "word & phrase book" systems: it tends to change the meaning of the text in order to have good grammar, whereas the older systems would give crude or grammatically incorrect translations that still retained the core meaning.

reply
Semaphor
24 minutes ago
[-]
I often (ish) translate between English and German, two languages I speak very well. The quality of translation is amazing and far better than what old systems did.

Maybe it depends on topics or length, for me it's usually 1-2 paragraphs of a German article to share online.

reply
ffsm8
2 hours ago
[-]
While you're correct about what the audio models are - at least somewhat (they're not exactly like text-based LLMs) - you seem to brush his point away too quickly before fully exploring it.

This is a solvable issue; the current models and harnesses just aren't built with that assumption - hence they do "best effort, guessing if unsure".

Give it a few more months to years and things will likely settle the way he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.

Currently there is basically only one mode - and it's optimized for conversation. The note taking is just glued on with that functionality as the backbone, and that's probably not going to stay.

reply
repelsteeltje
1 hour ago
[-]
> Give it a few more months to years and things will likely settle the way he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.

I'm hesitant to admit even that. Like any computational linguistics problem, accuracy relies on coverage at all levels: from morphology, through syntax and semantics, to speech acts and world knowledge.

I worked with state-of-the-art speech recognition in a healthcare setting. The model was specifically trained on a small set of languages, with an emphasis on covering medical terminology.

It worked great for conversations most of the time, but sometimes messed up very badly - for instance, when a patient would mention the name of a relative, a street address, or a phone number. Spelling out an email address would mess it up completely.

It's just like when you're a horrible typist and rely on spell checking: the red squiggles are gone, but the story no longer makes sense. Or when you "autofix" a syntax error, but the meaning diverges from your intention.

As the technology improved, the number of mistakes decreased, but the ones that remained got more severe.

reply
jghn
1 hour ago
[-]
> what do you expect?

If the prediction strength is below X, put an indicator that it couldn't make a valid prediction?

reply
freejazz
43 minutes ago
[-]
>It's just a token predictor what do you expect?

Someone tell Altman

reply
r_lee
3 hours ago
[-]
I don't think it's a training issue; it's simply that there's no inherent "I don't know" in the transformer architecture. Unless the input is something completely unknown, the nearest neighbor will be chosen, and that will be whatever sounds similar or is relevant, even if it causes a problem.
reply
feoren
1 hour ago
[-]
The final output of the neural network part of an LLM is a vector with weights for every token, that is then usually softmaxed and picked from. Can we not quantify the uncertainty by looking at the distribution of weights of the top 10 options? Like we expect for a note-taking app that the top choice would be something like 98% certain, and if we see that the model gives a weight of 60% to "Russia" and 30% to "France", that's just not enough certainty to simply output "Russia". That's exactly when it should say "<uncertain>" or something instead.
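A toy sketch of that idea (not from any real product; the logits, threshold, and names here are invented for illustration, and production note takers don't expose per-token weights):

```python
import math

def pick_token(logits, threshold=0.9, unsure="<uncertain>"):
    """Softmax the vocabulary scores and emit the top token only when
    its probability clears the threshold; otherwise emit an explicit
    uncertainty marker instead of a confident-looking guess."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    top_tok, top_e = max(exps.items(), key=lambda kv: kv[1])
    top_p = top_e / total
    return top_tok if top_p >= threshold else unsure

# A roughly 60%/30% split between "Russia" and "France" is nowhere
# near 98% certain, so the sketch refuses to commit:
print(pick_token({"Russia": 2.0, "France": 1.3, "Spain": 0.0}))
```

The open question this glosses over is calibration: the softmax weights of a trained model aren't guaranteed to be honest probabilities, so a fixed threshold like 0.9 is itself a guess.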
reply
aspenmartin
3 hours ago
[-]
It's not inherent in the transformer architecture; we do try to ingrain a sense of uncertainty, but it's difficult not only technically but also philosophically/culturally. How confident do you want the model to be in its answer to “why did Rome fall”?

Lots of tools in our toolbelts to do better uncertainty calibration but it trades off against other capabilities and actually can be rather frustrating to interact with in agentic contexts since it will constantly need input from you or otherwise be indecisive and overly cautious. It’s not technically a limitation of transformer architecture but it is more challenging to deal with than other architectures/statistical paradigms.

Like, you can maintain a belief state, generate conditioned on it, and train to ensure the belief state stays stable and performant. But evals reward guessing at this point, and it's very, very hard to evaluate calibration in these open-ended contexts. Still, we're slowly getting there, just not nearly as fast as other capabilities.

reply
fluoridation
2 hours ago
[-]
>How confident do you want the model to be in its answer to “why did Rome fall”?

The confidence level can be any, as long as it's reported accurately often enough. "This is my conjecture, but", "I'm not completely sure, but", and "most historians agree that" are all perfectly valid ways to start a sentence, which LLMs never use. They state mathematical truth, general consensus, hotly debated stances, and total fabrication, with the exact same assertiveness.

reply
user_7832
2 hours ago
[-]
The thing is, if LLMs are stochastic parrots predicting the next word (aka a partially decent autocomplete), there's no reason they can't complete <specific question they can't answer> with "I don't know" - as that's a perfectly valid sentence too.

That's why I'm still cautiously optimistic about LLMs someday being good enough. I don't know if or when someone will manage to do it, but I'm hopeful.

reply
moffkalast
2 hours ago
[-]
It's a benchmark and eval issue. Guessing gets them the right result sometimes, so the models rank better on error rate than they would otherwise. We need benchmarks that penalize being wrong WAY more than saying "I don't know".

Of course, there's a secondary problem that the model may then overuse the unintelligible option, but that's a matter of training it properly against that eval.

You could also try thresholding the output based on perplexity to remove the parts that the model is less sure about, but that's not going to be super accurate I think.
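A rough sketch of that perplexity thresholding (the segment format, log-probabilities, and threshold are all invented for illustration; real ASR APIs expose this differently):

```python
import math

def flag_shaky_segments(segments, max_perplexity=2.0):
    """Replace transcript segments whose token-level perplexity is too
    high with an explicit marker, rather than keeping a fluent guess.
    perplexity = exp(-mean token log-probability)."""
    out = []
    for text, logprobs in segments:
        ppl = math.exp(-sum(logprobs) / len(logprobs))
        out.append(text if ppl <= max_perplexity else "[unintelligible]")
    return out

segments = [
    ("we launch in France", [-0.1, -0.2, -0.1, -0.15]),  # clean audio
    ("we launch in Russia", [-0.1, -0.2, -0.1, -2.8]),   # one token was a guess
]
print(flag_shaky_segments(segments))
```

As noted above, this isn't going to be super accurate: one shaky token sinks the whole segment here, and a fluent-but-wrong guess can still have low perplexity.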

reply
user_7832
2 hours ago
[-]
Yeah, I broadly agree with you. I've tried explicitly adding a prompt to "ask questions and clarify", and even fairly decent models like Gemini Pro (2.5 or 3) tend to ask questions for the sake of it.

Which reminds me that that's another big issue with LLMs - they'll blindly do whatever you ask them to, without pushback. (Again, I miss 3.5/3.6 era Sonnet which actually had half a spine. Fuck anthropic for blindly chasing coding benchmarks at the cost of everything else.)

I've engaged in several "CMVs" (or "tell me why X is bad") with LLMs, and very often it's clear it's just saying stuff to say it, giving very terrible points on unjustifiable positions that collapse the moment I counter argue even slightly rationally.

reply
steveBK123
8 minutes ago
[-]
> The problems start when using conference room audio

RTO problems

reply
pjc50
4 hours ago
[-]
Given how financial services can impose silent inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I'm wondering at what point the AI automatically reports people for sanctions violation based on its mishearing.
reply
camdenreslink
1 hour ago
[-]
The AI note summaries in meetings I'm in are frequently totally inaccurate. They are actually inaccurate in two ways: they fabricate things that were never said (but always kind of close to something that was said), and they emphasize the totally wrong thing (e.g. acting like the entire conversation was about one topic when that was just a very small part).

I sincerely hope these aren't used in court.

reply
rayiner
1 hour ago
[-]
They will be discovered and used in litigation, and the results will be hilarious. Think about how much lawyers pick apart language (like statutes or the constitution) that was written deliberately by humans and subject to review and revision. Now we're going to have lawyers, e.g., seizing on word choice in AI notes that might have a sinister connotation when the original wording was innocuous.
reply
stego-tech
3 hours ago
[-]
This. The fact LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence, much faster.

I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold, it’s more like oxygen in a room in that the higher the concentration, the more likely it is to poison the inhabitants or explode with an errant spark (lawsuit).

Collect only what’s needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.

reply
mock-possum
2 hours ago
[-]
What are you trying to get away with I wonder?
reply
stego-tech
10 minutes ago
[-]
Chillax Palantir, your pro-surveillance throwaway incidentally makes such large data harvesting companies a larger target.

Limiting data retention doesn't mean hiding bad things, it means limiting exposure in general. The more of a thing - anything - that you have, the bigger a target you are to bad actors. By extension, companies holding vast sums of data beyond what's needed to process a given transaction or remain compliant with the law end up placing themselves at risk of being targeted and said data used as leverage against them.

You don't limit data to hide bad shit you're doing, you limit it to avoid others using it to do bad shit against you or your customers. If someone or something is engaged in bad shit, there will always be evidence somewhere regardless of data retention policies.

reply
LanceH
3 hours ago
[-]
> But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

I would add that there is no guarantee they are correct, as well.

reply
mock-possum
2 hours ago
[-]
You’d use a computer-generated transcript as a guide, not as proof - the proof is the recording of the person actually saying the thing, not the LLM's best guess of what it imagined the person saying.

“At timestamp X, person Y said Z” says the robot, and then you dutifully scrub the audio to timestamp X to verify.

reply
LanceH
2 hours ago
[-]
Is audio always kept in addition to transcripts? (genuine question, I rarely record either)
reply
papageek
1 hour ago
[-]
Never write if you can speak; never speak if you can nod; never nod if you can wink. -Lomasney (has aged well it seems)
reply
Bombthecat
1 hour ago
[-]
Not only there.

Social settings will also change when everything you say in every meeting stays on record forever...

reply
infecto
2 hours ago
[-]
The nuance here, too, is that just because someone is concerned about materials being discoverable does not mean the company is doing something illegal. Corporate law as it pertains to legislation (US in this perspective) is a dance between the company and the current administration. When it comes to antitrust and other related legislation, the equilibrium is shades of gray that shift between administrations, and sometimes within the same administration. Companies look to optimize their outcomes, and the government is optimizing not so much for legality but for whatever the current administration sets as its main concern.
reply
yagizdagabak
1 hour ago
[-]
My fear exactly. Same with something like Meta glasses. And I feel like we have moved quickly from "regulatory problem" to "'tis a fact of life".
reply
watwut
4 hours ago
[-]
Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.
reply
nz
3 hours ago
[-]
No, that would be a strict improvement. The AI note-takers can easily "mishear" or "misreport" non-existent illegal and unethical things. They also seem to easily mess up numbers (which is a big problem, because a lot of decisions hinge on precise numbers -- imagine inflating an inventory by an order of magnitude, and then imagine having to pay a tariff on something that never existed).

I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, they had the IT department go on a kind of "inquisition", wherein they forced employees to disable the summarization function that came with MS Teams, and threatened to fire them if they did not. The resistance to this demand was surprising -- most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings, because the AI was producing summaries, which they would then never read.

The effect of the technology was that it made meetings infinitely more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ it made the meetings a liability (incorrectly summarized meetings, that could be used in the discovery process, sure, but could also be sold by MSFT as a kind of market-research-data to competitors in the space).

Nothing illegal has to happen in these meetings at all, for this tech to cause an infinity of problems for the corporation. Every employee that uses these is effectively an unwitting spy. And if that is the case, then the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].

[1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.

reply
kjs3
2 hours ago
[-]
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him" - Cardinal Richelieu

Be careful what you wish for. Particularly when it involves tech that often gets it very, very wrong.

reply
triceratops
2 hours ago
[-]
That's an argument for recording everyone on earth 24/7. Is that what you mean?
reply
sdellis
2 hours ago
[-]
With the level of surveillance and erosion of privacy, that is essentially what is happening. We all know that we are being watched and surveilled. There is no longer an "argument". Anything you say in public or private could potentially be used against you in the future.
reply
triceratops
2 hours ago
[-]
No there's the potential of that happening. That isn't what actually happens. If everyone's phone was continuously recording and storing everything 24/7 we'd need much bigger batteries for one thing.
reply
flir
2 hours ago
[-]
It'll just happen. Can't really fight technological progress.
reply
sdellis
2 hours ago
[-]
Actually, many people fight this kind of "progress". Just look at what is happening to Flock right now. True "technological progress" would be using technology to empower humans, not to exploit and subjugate them.
reply
triceratops
2 hours ago
[-]
Is it progress though?
reply
chvid
3 hours ago
[-]
Show me the man and I will show you the crime.

Modernized. Industrial AI scale.

reply
SecretDreams
3 hours ago
[-]
Going to also be harder to hide completely legal, but not ideal stuff. Like randomly complaining about your boss to a colleague or casually discussing a feature you're stuck working on that you think is a bad idea.
reply
derektank
2 hours ago
[-]
>casually discussing a feature you're stuck working on that you think is a bad idea.

I’ll be honest, this is something that I hope AI note taking tools capture and incorporate into summaries of the company’s status. Especially if they act as an intermediary without revealing the specific person who said it. There’s a lot of information latent within organizations that doesn’t get properly shared due to concerns of retaliation or simply embarrassment that would benefit everyone by being communicated sooner.

reply
kjs3
2 hours ago
[-]
The people supplying this technology explicitly want it to tell them what their serfs are doing. There will be no "honest but anonymous informing of upper management".
reply
SecretDreams
2 hours ago
[-]
That information is often intentionally not cascaded up the chain because the higher up you go, the more rigid the thinking gets - at least in my experience. Upstream doesn't want to hear the bad news or hear about how their idea is dumb. They want us to just do the bad idea and if the bad idea doesn't work out, they want to hang the ICs out to dry.

Maybe some smaller shops are not like this, but the bigger your company is, the more you'll find this type of thinking to persist.

In theory, I do like your idea - anonymously cascading feedback upstream. I just see no avenue for this to succeed in practice.

reply
gwbas1c
2 hours ago
[-]
Back when I was in college, in a fraternity, we always assumed that the phones were tapped. Specifically, we never spoke about alcohol or marijuana (now legal) on the phone.

Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.

The same applies to speaking with lawyers. You never know when some motivated asshole wants to twist your words out of context, and the possibility of a recording just enables that behavior.

---

I know enough about security and encryption to know that unless I've exchanged keys physically with someone else, there really is no guarantee that someone hasn't compromised a certificate somewhere. (I.e., a "secure" connection on the internet is only secure enough for a credit card.)

reply
xoxxala
8 minutes ago
[-]
When I say something inappropriate on the phone or over Discord, I always tell the NSA guy listening it was a joke.
reply
skinfaxi
1 hour ago
[-]
> Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.

This is horrifying. Why do you feel the necessity to self-censor? What consequences do you anticipate?

reply
BeetleB
45 minutes ago
[-]
Not the person who said this, but it's easy: Grow up in a country where the government listening in is common (without any transparent due process), and it becomes second nature.

And then add to that how easy it is to record phone conversations with today's phones (I've done it), it's easier on the brain to assume it's being recorded as opposed to wondering if it is.

But yes, I don't care about my dirty jokes being recorded :-) Illegal activity? Sure. But I solve that problem by not doing illegal things.

reply
Kirby64
1 hour ago
[-]
It's a good policy, generally. Treat anything written down, email, etc, as something that could become public later. Anything that could be recorded and saved for later can be used against you if it's taken the wrong way. A questionable joke could become an HR complaint, as an example.
reply
jkingsman
1 hour ago
[-]
Adding on to this question, do you anticipate the same people capable of tapping phones to think less of you for a dirty joke? The people whose opinion of me would lower for something off-color and the people who possess the ability to wiretap me are a disjoint set lol.
reply
dnnddidiej
1 hour ago
[-]
The point is it would be usable against you - but at the point you are wiretapped and it can be used in court, all bets are off; you are probably in so much trouble you may as well tell the joke!
reply
jkingsman
52 minutes ago
[-]
yeah exactly haha; the threat model implies a level of "you're hosed" that something private I'd say to a friend isn't moving the needle on.
reply
some_random
55 minutes ago
[-]
Have you missed the last decade and a half of people having their lives ruined by social media mobs for minor slights?
reply
EvanAnderson
1 hour ago
[-]
> Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family.

Adding to that: if you live in a one-party-consent state, assume you're being recorded by any of the parties in a face-to-face conversation, too.

Yeah-- it sucks that the world is this way. I deal with it. What I don't want to see are draconian controls on technology (which will ultimately be ineffective) in an attempt to put the genie back in the bottle.

reply
1vuio0pswjnm7
8 minutes ago
[-]
Alternative to archive.is

No Javascript, no CAPTCHA, no geoblocking, no DDoS directed at blog

https://static.nytimes.com/narrated-articles/synthetic/artic...

reply
swyx
7 minutes ago
[-]
you forgot to mention it requires 13 minutes of audio listening
reply
bottlepalm
24 minutes ago
[-]
How many times have you been in a meeting, not realized someone turned on AI notes, and said things you would rather not have had recorded and auto-emailed to everyone after the meeting? For me it's happened more than a few times, and while maybe there's a silver lining in people hearing the unvarnished truth, I think it's going to change the dynamics of meetings: people stop talking honestly, because you have to put on a show for the AI note taker.
reply
AstroBen
22 minutes ago
[-]
It's not just work meetings. This is being taken to healthcare settings, also.
reply
samuelknight
1 hour ago
[-]
AI meeting notes are not transcripts. While they do cause an unprecedented amount of record creation (as the article notes), there are also challenges that a defense can use. Note takers get small details wrong all the time, they are often making notes FOR someone (which biases what is documented), their prompting is opaque, and they can't be cross-examined. We will likely see situations where the note taker and a witness who participated in the meeting disagree.
reply
RobotToaster
12 minutes ago
[-]
There's some irony in how, in an article like this, the phrase "recorders that use A.I. to log live interactions have become a product category" is linked to an article on the best AI note takers...
reply
atonse
2 hours ago
[-]
This is where I think real-time transcription (or just-in-time transcription followed by deleting everything) will be the end state.

Real-time transcription where the AI actually takes notes (instead of recording every word and keeping a dump of it somewhere) is especially appealing. Then there isn't any record of the raw sentences, and things that aren't relevant are immediately discarded without any written record.

OpenAI's realtime whisper and other such models will become the default over time.

reply
rpaddock
4 hours ago
[-]
Some companies want no records at all, see:

"2028 – A Dystopian Story By Jack Ganssle":

http://www.ganssle.com/articles/2028adystopianstory.htm

Known as ’The Rule of 26’, this is sometimes given as a reason NOT to keep engineering notebooks etc. Under Federal Rule 26 you must volunteer relevant records before they are requested - including any backups.

From Cornell Law:

LII Federal Rules of Civil Procedure Rule 26. Duty to Disclose; General Provisions Governing Discovery

Rule 26. Duty to Disclose; General Provisions Governing Discovery

(a) Required Disclosures.

(1) Initial Disclosure.

(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:

(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;

(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …

https://www.law.cornell.edu/rules/frcp/rule_26

reply
kjs3
1 hour ago
[-]
Much of my experience with corporate counsel is one of 2 extremes: "keep everything"[1] or "keep nothing". Keep everything, because then you can't be caught out deleting something possibly relevant, which looks very, very bad in court. Keep nothing, because then opposing counsel can't catch you out only keeping things that make you look good in court.

[1] There's actually a subset of this, which includes "...until you are legally allowed to delete it, then delete everything". This is driven by regulation (e.g. SOX in the US).

reply
djoldman
2 hours ago
[-]
This was interesting and sent me down a research hole.

General conclusion:

Corporate litigation is mostly just a series of self-investigations so that both sides can learn what both sides actually know, given that neither side knows much about themselves OR the other side. At the same time both sides are trying to stop the other side from getting the judge to order them to do more investigating.

reply
next_xibalba
4 hours ago
[-]
See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.
reply
redmaple892
55 minutes ago
[-]
Surprised healthcare isn't called out specifically. AI note takers have exploded in popularity in the US.
reply
pfortuny
4 hours ago
[-]
Honest question:

Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?

I am totally baffled by the trust people put on these systems, sharing with them the most obviously private data.

reply
dsr_
3 hours ago
[-]
Most services have privacy policies that boil down to:

- we promise not to share PII (defined as narrowly as possible)

- we promise not to share payment information except with our payment system

- if you pay us, we promise not to train LLMs on your data

- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".

reply
cj
3 hours ago
[-]
> I am totally baffled by the trust people put on these systems

The average person doesn't care about online privacy.

reply
sdellis
2 hours ago
[-]
They care, but realize that there is no such thing as privacy anymore. The amount of obsession required to maybe maintain some degree of privacy is not something most people are willing to do.
reply
cj
1 hour ago
[-]
When the average person thinks about "online privacy" they think about keeping things private from other people. They don't think about keeping their data private from the companies hosting/processing their data.
reply
daft_pink
3 hours ago
[-]
If you are in a trusted industry like finance or healthcare, the popular ones generally have industry-wide privacy certifications like HIPAA compliance, SOC 2 Type II, etc.
reply
sandworm101
4 hours ago
[-]
>> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.

Total oversimplification. The fact is, the privilege is a rule entirely in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court - each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected the communications to be privileged.

As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She had regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But these interviews were still covered by attorney-client privilege. No court would allow such evidence, but that doesn't mean the prison could not use it for jail safety. Why does this matter? Because the presence of a third party doesn't mean anything. This isn't magic. An eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules followed in the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.

Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.

reply
hugh-avherald
1 hour ago
[-]
It sounds like the prison recordings were compulsory, which is a different kettle of fish. The key phrase "if they share" implies voluntary and deliberate action, and is not much of an oversimplification IMO.

> What matters is that the parties intended and expected communications to be privileged.

I would contend that your summary, not theirs, is an oversimplification. Jurisdictions will obviously differ, but privilege does not attach merely because of the intent and beliefs of the lawyer and client.

reply
sandworm101
1 hour ago
[-]
Well, I try to avoid the R word. The actual legal term would be reasonable intention. Literal expressed intention, i.e. putting an A-C warning on every email, won't be enough on its own.

IMHO we should just assume the R word before every verb in every legal discussion. That is how reality works. These are not spells. If I express that I intend something to be private, then announce it using a megaphone at a basketball game, my intention is no longer reasonable regardless of what magic words I have thrown into my communication. Act like an idiot and a court will treat you like an idiot.

reply
testfoobar
2 hours ago
[-]
I would be concerned about transcription errors (e.g. with non-native speakers) in contexts where precision matters: engineering, compliance, regulation, legal, etc.
reply
jgalt212
1 hour ago
[-]
Stringer Bell would be furious.
reply
analogpixel
3 hours ago
[-]
unrelated to the article, but how do you make a page that prevents the mouse scroll wheel from working? that's pretty impressive.
reply
bilekas
2 hours ago
[-]
It's not impressive, it's scummy hiding news behind a paywall. They simply use some CSS trickery to set the height of the content to the size of your viewport, so there is nowhere to scroll to.
reply
vintagedave
4 hours ago
[-]
Paywall: can anyone share what the issue is?

Inaccuracy in meeting minutes?

Leaking private info, re security of notes?

I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.

reply
WillAdams
4 hours ago
[-]
Reminds me of when I worked for a small shop which had the copier maintenance contract at a local college --- when something went wrong and wasn't properly addressed, my bosses found themselves being held to account with their own words from prior phone calls being quoted back to them verbatim --- which they were mystified by until I explained that the administrators had all come up from the clerical pool and knew shorthand.
reply
bearjaws
2 hours ago
[-]
The main risk is attorney-client privilege, and it's already been tested in New York: if you transcribe a call you need to turn over the transcriptions, and they can subpoena the company doing the transcription for the records if you refuse.
reply
LanceH
3 hours ago
[-]
They are saying that it could invalidate attorney-client privilege because the transcription could technically be available to an outside party.

I suspect what isn't being said by the lawyers is they want to keep attorney client privilege so they can outright lie.

reply
close04
4 hours ago
[-]
It's in the viewable text on the page.

> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.

By now everyone knows that AI notes that aren't curated by a human will catch every silly thing that was said in the meeting while omitting the context of the tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI can break attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.

I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.

[0] https://perkinscoie.com/insights/update/federal-court-rules-...

[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...

[2] https://natlawreview.com/article/when-ai-takes-notes-protect...

[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...

reply
vintagedave
3 hours ago
[-]
> It's in the viewable text on the page.

Not for me - there was no viewable text.

reply
pjc50
4 hours ago
[-]
People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.
reply
lukewarm707
4 hours ago
[-]
the doofus lawyer probably didn't realise, i wouldn't call it opt in
reply
close04
3 hours ago
[-]
If a lawyer takes notes and puts them in a computer, or a cloud drive, or sends them over email, they are still covered by attorney-client privilege, right? If they use an AI to do it, it's treated more like a third party no longer covered by the same privilege. Without a court decision on this, it only takes one bad assumption to screw up by using AI.

To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.

reply