Today, you invite someone to a private repo and the code gets exfiltrated the moment a collaborator running whatever AI tool opens their IDE.
Or you send someone an e2ee message on Signal, but their AI reads the screen or text to summarize it, and now that message is exfiltrated.
Yes, I know it’s “nothing new”, that “in principle this could happen because you don’t control the client”. But opsec is also about what happens when well-meaning participants become accomplices in data collection. I used to trust my friends enough to assume they wouldn’t share our conversations. Now the default assumption is that text & media on even private messaging will be harvested.
Personally, I’m never giving the keys to the kingdom to a remote data-hungry company, no matter how reputable. I’ll reconsider when local or self-hosted AI is a viable alternative.
I would seriously reëvaluate my trust level in a friend or colleague who installs a non-ADA screen reader on their phone, at least to the level of sharing anything sensitive with them.
Is your position that anyone who's not tech-savvy enough to constantly fight the onslaught of shady business practices and dark patterns that most tech companies throw at them is not worthy of their friends' trust?
For most people, guaranteeing that their own devices won't spy on them is a tall order.
Trust is a function of character and competence. Not understanding how your technology may be compromising you is, within the scope of keeping secrets, a failure of competence.
I can’t repair a car, and my friends would be correct not to trust me under their cars’ hoods unsupervised. Similarly, a friend or colleague who doesn’t understand the device they’re using can’t be trusted with matters of confidence in that context.
I may be a dinosaur, but I was shocked at how casual they made this look (I know, it's just an ad); I would be fired almost instantly at $ENTERPRISE if I did this. It almost looks like it's designed for corporate espionage.
----
EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via its Slack/Zoom/etc. integrations is by itself legally wrong:
> "In fact, if the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host," the lawsuit alleges. "What Otter has done is use its Otter Notetaker meeting assistant to record, transcribe, and utilize the contents of conversations without the Class members' informed consent."
I'm surprised the NPR article doesn't touch on the possible liability of whoever added Otter in the first place - surely the buck stops there?
IANAL, but companies providing a product have certain responsibilities too, especially when the product is intended for a given purpose (i.e. recording meetings with other people on the call). Most call-recording software I come across has a recording notice that can't be disabled, presumably to avoid lawsuits like this.
> EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via its Slack/Zoom/etc. integrations is by itself legally wrong:
Note that the preceding paragraph also says that even when the integrations aren't used, Otter only obtains consent from the meeting host. In all-party-consent states, that's clearly not sufficient.
>because the fact it's "AI" isn't really relevant here
Again, IANAL, but "recording" laws might not apply if they're merely transcribing the audio? To take an extreme case, it's (probably) legal to hire a stenographer to sit next to you during meetings and transcribe everything on the call, even if you don't tell any other participants. Otter is a note-taking app, so they might have been in the clear if they weren't recording for AI training.
And even beyond security, there's the question of their ability to keep promises made about the data in the event of a private-equity takeover, a rogue employee, etc.
To that end, I've been working on open-sourcing https://dontrecord.me as a side project. I'm putting together a fork of Whisper that will follow the opt-out signal, too. If anyone wants to help, please connect.
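For a rough idea of the shape of it, here's a minimal sketch in Python. The /api/optout endpoint is purely hypothetical (dontrecord.me hasn't published an API), and the real opt-out signal may look quite different; only the whisper calls reflect the actual openai-whisper package:

```python
# Minimal sketch: gate transcription on an opt-out check before running Whisper.
# The dontrecord.me endpoint below is a hypothetical placeholder, not a real API.
import requests
import whisper

def has_opted_out(participant_id: str) -> bool:
    """Check a (hypothetical) opt-out registry for this participant."""
    resp = requests.get(
        f"https://dontrecord.me/api/optout/{participant_id}", timeout=5
    )
    return resp.ok and resp.json().get("opted_out", False)

def transcribe_respecting_optout(audio_path: str, participants: list[str]):
    # Refuse to transcribe if any participant has opted out.
    if any(has_opted_out(p) for p in participants):
        return None
    model = whisper.load_model("base")  # standard openai-whisper usage
    return model.transcribe(audio_path)["text"]
```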
They recorded the call and sent it to all participants. It's not their fault the users are idiots.
Otter's defense is that it’s up to their users to inform other participants and get their consent where necessary; the lawsuit's claim is that Otter is deliberately making a product which does not make it obvious that the call is being recorded, and by default does not send a pre-meeting notice that it will be joining and recording.
I've never used this service so I don't know if the user was being particularly clueless or if some dark pattern was at play; I suspect it's probably a little bit of both.
The tape recorder manufacturer also doesn’t claim the right to permanently own anything its users record, with or without permission.
I’ve used Otter to record convos without it joining the meeting.
Notion’s AI meeting recording also works without any participants being alerted. Same with Limitless.ai (check out the Limitless Pendant for the most extreme example of no-consent recording).
Most of these AI meeting recording services make it easy to silently/secretly record. No consent seems to be the default UX.
That's the crux of the article.
I'm sorry, but this is another example of not checking the AI's work. The excessive recording is one thing, but blindly trusting the AI's output and then using it as a company document for a client is on you.
Ex: hop on a conference call with a group of people; Person A "leaves early" but doesn't hang up the phone; then the remaining group talks about sensitive info they didn't want Person A to hear.
I'm sorry, but any conference software will make it extremely clear who is still on the call. Again, I put a lot of this scenario down to user fault. But the fact that this software is "always on" instead of "activated/deactivated" feels like an incomplete software suite to me personally.
On internet/app-based systems, yes ... but on legacy telephone systems you have to remember all 16 of the '<Person> is joining the call' announcements and mentally check them off when you hear '<Person> is leaving the call' on the way out. And of course you have no idea who joined the meeting before you arrived.
You didn't even have to make the mistake once to know not to keep talking on a call anyone can dial into after you think everyone has left.
You could fix this by training people not to use booked meetings this way, but I'm not sure how realistic that is. I think it might be that services like Otter need to be adjusted to take into account that not every part of a meeting is of equal sensitivity.
E.g., my HOA's monthly meetings have a private period for the board only and a public period for all residents. If Otter were used in this configuration, it would broadcast the exact details of those private discussions to the whole building, which might include board members discussing details that shouldn't be shared with everyone.
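To sketch what sensitivity-aware distribution could look like (entirely hypothetical; nothing here reflects Otter's actual product or API): transcript segments get tagged with a sensitivity level, and each recipient only receives the segments they're cleared for.

```python
# Hypothetical sketch of sensitivity-aware transcript distribution;
# none of this reflects Otter's actual product or API.
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # open session, all residents
    BOARD_ONLY = 2  # private board period

@dataclass
class Segment:
    text: str
    sensitivity: Sensitivity

def transcript_for(recipient_is_board: bool, segments: list[Segment]) -> str:
    """Return only the segments this recipient is allowed to see."""
    allowed = {Sensitivity.PUBLIC} | (
        {Sensitivity.BOARD_ONLY} if recipient_is_board else set()
    )
    return "\n".join(s.text for s in segments if s.sensitivity in allowed)

# Example: the board period stays out of the all-residents summary.
meeting = [
    Segment("Board: legal matter discussion...", Sensitivity.BOARD_ONLY),
    Segment("Open session: budget vote results...", Sensitivity.PUBLIC),
]
print(transcript_for(recipient_is_board=False, segments=meeting))
```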
One would like to think that a company transcribing meetings of varying degrees of sensitivity would have the feature you're describing built in, if for no other reason than the auditing process that's usually involved in adopting new software.
Maybe companies in a rush to adopt the latest AI tooling just aren't fully considering what they're doing.
I know someone who is involved in a lawsuit regarding a child, and one of the lawyers used this service to record and transcribe a very confidential meeting. Their first awareness of the illegal wiretapping by this company was when a summary email showed up at the end of the meeting. Needless to say, they weren’t happy, not just about the surreptitious recording, but also about the discovery that the contents of that confidential call will live forever in Otter’s training set. When the company was asked about this, they dismissed any responsibility of their own and said it was the responsibility of their subscribers to use the product appropriately.
As I'm a European citizen, I filed a GDPR removal request with them to remove all images of me from their servers. The email address that they list in their privacy policy [1] for GDPR requests immediately bounces and tells you to reply from an Otter.ai account (which I don't have). I was able to fill in a contact form on their website and I did receive replies via email after that.
After a few emails back and forth, their position is that
> You will need to reach out to the conversation owner directly to request to have your information deleted/removed. Audio and screenshots created by the user are under the control of the user, not Otter.
> We are required by law to deny any request to delete personal information that may be contained within a recording or screenshot created by another user under the CCPA, Cal. Civil Code § 1798.145(k), which states in relevant part
> “The rights afforded to consumers and the obligations imposed on the business in this title shall not adversely affect the rights and freedoms of other natural persons. A verifiable consumer request…to delete a consumer’s personal information pursuant to Section 1798.105…shall not extend to personal information about the consumer that belongs to, or the business maintains on behalf of, another natural person…[A] business is under no legal obligation under this title or any other provision of law to take any action under this title in the event of a dispute between or among persons claiming rights to personal information in the business’ possession.”
Which is a ridiculous answer to give a European user, as the CCPA doesn't apply to me at all. Furthermore, I don't think the CCPA prohibits them from deleting my face from their servers; it merely stipulates that I can't compel them under the CCPA. Otter.ai can perfectly well decide to do this themselves, or be compelled under the GDPR to delete the data, and their Terms and Conditions make it clear they may delete any user or data if they wish to do so.
After these emails, and me threatening to file a lawsuit, "Andrew" from "Otter.ai Support Team" promised to escalate the matter to his manager, but I got ghosted after that: they simply stopped replying.
So I'm going to file that lawsuit (a "verzoekschriftprocedure", a petition procedure under Dutch law) this week. It's going to be a very short complaint.