EFF launches Age Verification Hub
286 points
1 day ago
| 37 comments
| eff.org
Also: We built a resource hub to fight back against age verification https://www.eff.org/deeplinks/2025/12/age-verification-comin...
pksebben
9 hours ago
[-]
This keeps coming up and we keep having the same debates about what Age Verification isn't.

For the folks in the back row:

Age Verification isn't about Kids or Censorship, It's about Surveillance

Age Verification isn't about Kids or Censorship, It's about Surveillance

Age Verification isn't about Kids or Censorship, It's about Surveillance

Without even reaching for my tinfoil hat, the strategy at work here is clear [0 1 2]. If we have to know that you're not a minor, then we also have to know who you are so we can make any techniques to obfuscate that illegal. By turning this from "keep an eye on your kids" to "prove you're not a kid" they've created the conditions to make privacy itself illegal.

VPNs are next. Then PGP. Then anything else that makes it hard for them to know who you are, what you say, and who you say it to.

Please, please don't fall into the trap and start discussing whether or not this is going to be effective to protect kids. It isn't, and that isn't the point.

0 https://www.eff.org/deeplinks/2025/11/lawmakers-want-ban-vpn...

1 https://www.techradar.com/vpn/vpn-privacy-security/vpn-usage...

2 https://hansard.parliament.uk/Lords/2025-09-15/debates/57714...

reply
zamadatix
2 hours ago
[-]
As much as you (and I as well) don't want age verification to involve discussion about kids' access to content because we're more concerned about the surveillance push riding the popularity of that, repeating "it isn't about kids" loudly 3 times doesn't make the (extremely large) group of people pushing age verification for kids disappear.

Telling that larger group their interest just isn't part of the conversation at all excludes _you_ from the conversation rather than changing the focus of the conversation to the other downsides instead of the primary interest others might have.

There are also, concerningly IMO, an extremely large number of people willing to accept severe surveillance or privacy downsides so long as it helps achieve the goal about kids. To them, the same argument in reverse would be "why are you talking about surveillance, the real issue is the kids. Say it 3 times loud, for those in the back!" and the conversation gets nowhere because it's just people saying they won't talk to anyone who disagrees about which concerns should be considered.

reply
edgineer
29 minutes ago
[-]
I'm sure those people exist, I just never happen to see anything they write online nor meet any of them in real life.
reply
thayne
3 hours ago
[-]
I would much rather have laws that require that certain kinds of websites return machine-readable headers describing what kind of content is on them, and then browsers, web proxies, etc. could be configured by parents, schools, etc. to block undesirable sites.
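
A minimal sketch of what the client/proxy side of that could look like, assuming a hypothetical Content-Rating response header and a parent-configured category blocklist (the header name and category labels are made up for illustration):

    import urllib.request

    # Hypothetical header name and category labels -- nothing here is standardized.
    RATING_HEADER = "Content-Rating"
    BLOCKED_CATEGORIES = {"adult", "gambling"}  # configured by a parent or school

    def is_blocked(url: str) -> bool:
        """Fetch only the headers and block if the site declares a blocked category."""
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            declared = resp.headers.get(RATING_HEADER, "")
        categories = {c.strip().lower() for c in declared.split(",") if c.strip()}
        return bool(categories & BLOCKED_CATEGORIES)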
reply
wkat4242
3 hours ago
[-]
And really, a locally running AI could make that assessment pretty easily even if it isn't declared. No need to destroy the whole world's privacy. Unless that was the goal to begin with, obviously.
reply
knallfrosch
8 hours ago
[-]
> If we have to know that you're not a minor, then we also have to know who you are

That is untrue

reply
phyzome
8 hours ago
[-]
Are you aware of any age verification systems that do not have this property?

(This includes being robust against law enforcement action, legal or otherwise.)

reply
pksebben
8 hours ago
[-]
Like many mention in other comments on this post, it's possible to implement using ZKPs. There are likely other methods that would be effective without compromising privacy. None of them are part of the Age Verification discussion because kids are not the actual point of Age Verification.

When I say "if we have to know you're not a kid, we have to know who you are" I'm not stating an actual truth, but the argument as it is playing out politically.

reply
magicalhippo
8 hours ago
[-]
> None of them are part of the Age Verification discussion because kids are not the actual point of Age Verification.

The EU age verification solution says implementations SHOULD implement[1] their ZKP protocol[2]. Not linking it to the user is stated as an explicit goal:

> Unlinkability: The goal of the solution is to prevent user profiling and tracking by avoiding linkable transactions. Initially, the solution will rely on batch issuance to protect users from colluding RPs. Zero-Knowledge Proof (ZKP) mechanisms will be considered to offer protection. More details are provided in Section 7.

[1]: https://ageverification.dev/av-doc-technical-specification/d...

[2]: https://ageverification.dev/av-doc-technical-specification/d...

reply
mzajc
7 hours ago
[-]
Is there a good explanation of how ZKPs prevent attestation providers (which presumably know your identity) from linking an issued proof back to you if, for example, the website elects to store it? I can wrap my head around RSA and ECC and PKI, but I haven't managed to make sense of this yet.

Assuming that's even a goal, of course. The cited paragraph mentions RPs (the websites, from what I understand), but makes no mention of attestation providers.

reply
MatteoFrigo
6 hours ago
[-]
This is, of course, very technical, but here is how it works at a high level.

In the non-ZKP presentation, the "holder" (phone) sends the credential to the relying party (website), and the RP executes some verification algorithm. In the ZK presentation, the holder executes the verification algorithm and sends to the RP a proof that the algorithm was executed correctly.

The "proof" has this magical property that it reveals nothing other than the check passed. (You will have to take on faith that such proofs exist.) In particular, if the check was the predicate "I have a signature by ISSUER on HASH, and SHA256(DOCUMENT)==HASH, and DOCUMENT["age_gt_18"]=TRUE", anybody looking at the proof cannot infer ISSUER, HASH, DOCUMENT, or HASH, or nothing else really. "Cannot infer" means that the proof is some random object and all HASH, DOCUMENT, ISSUER, etc. that satisfy the predicate are equally likely, assuming that the randomness used in the proof is private to the holder. Moreover, a generating a proof uses fresh randomness each time, so given two proofs of the same statement, you still cannot tell whether they come from the same ISSUER, HASH, DOCUMENT, ...

reply
pksebben
5 hours ago
[-]
the more I think about it, the more I feel like I need someone with deep knowledge to explain ZKPs to me.

So like, we've got this algorithm that gets sent our way and we run it and that provides kind of a cryptographic hash or whatever. But if we're running the algorithm ourselves what's to stop us from lying? Where does the 'proof' come from? What's the check that it's running and why do we inherently trust the source it's checking?

reply
kahnclusions
2 hours ago
[-]
I’m not exactly sure about ZKPs, but for age verification the “proof” can come from the government in such a way that the web service doesn’t know anything more than whether an assertion is true, and the government doesn’t know anything more than that you wanted to verify some assertion.

This is a simplified method for age verification:

I want to buy alcohol from my phone and need to prove I’m over 18. SickBooze.com asks me for proof by generating a request to assert “age >= 18”.

My phone signs this request with my own private key, and forwards it to the government server.

The government verifies my signature against a public key I previously submitted to them, checks my age data in their own register of residents, and finally signs the request with one of their private keys.

My phone receives the signed response and forwards it back to SickBooze.com, which can verify the government’s signature offline against a cached list of public keys. Now they can sell me alcohol.

- the “request” itself is anonymous and doesn’t contain any identifying information unless that is what you intended to verify

- the government doesn’t know what service I used, nor why I used it, they only know that I needed to verify an assertion about my age

- the web service I used doesn’t know my identity, they don’t even know my exact age, they just know that an assertion about being >= 18 is true.
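
A toy version of that flow with plain signatures (no ZKPs), using the third-party cryptography package and a made-up request format, just to show the message shape; real protocols like the EU one are more involved:

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Setup, out of band: my key lives on my phone, the government key signs assertions.
    user_key = ed25519.Ed25519PrivateKey.generate()
    gov_key = ed25519.Ed25519PrivateKey.generate()
    gov_pub = gov_key.public_key()                   # cached by SickBooze.com
    registered_user_pub = user_key.public_key()      # registered with the government earlier

    # 1. SickBooze.com generates an anonymous request to assert "age >= 18".
    request = b"assert:age>=18:nonce=8f3a"           # made-up format

    # 2. My phone signs the request and forwards it to the government.
    user_sig = user_key.sign(request)

    # 3. The government verifies my signature, checks its register, signs the assertion.
    registered_user_pub.verify(user_sig, request)    # raises InvalidSignature if forged
    gov_sig = gov_key.sign(request)                  # only issued if the age check passes

    # 4. SickBooze.com verifies the government's signature offline.
    gov_pub.verify(gov_sig, request)                 # raises InvalidSignature if forged
    print("assertion verified: age >= 18")

The request never names the site, which is what keeps the government out of the loop on where the assertion gets used.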

reply
notpushkin
1 hour ago
[-]
I would throw in Privacy Pass [1], just in case the government and SickBooze.com can exchange info.

Sadly, it's still hard to explain how exactly it works, but it's conceptually simpler than arbitrary ZKPs.

[1]: https://privacypass.github.io/

reply
shermanyo
1 hour ago
[-]
Excellent, clear example.
reply
MatteoFrigo
5 hours ago
[-]
I am someone with "deep knowledge", but HN is not the proper place for this discussion. See https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.htm... for the gory details.

Here is a hopefully simple example of how this ZKP thing may even be possible. Imagine that you give me a Sudoku puzzle. I solve it, and then I want to prove to you that I have solved it without telling you the solution. It sounds impossible, but here is one way to do it.

I compute the solution. I randomly scramble the digits 1-9 and I put the scrambled solution in a 9x9 array of lock boxes on a table. I have the keys to the 81 locks but I am not giving you the keys yet. You randomly ask me to open either 1) one random row chosen by you; 2) one random column chosen by you; 3) one random 3x3 block chosen by you; or 4) the cells corresponding to the original puzzle you posed to me. In total you have 28 possibilities, and assume that you choose them with equal probability. You tell me what you want and I open the corresponding lock boxes. You verify that the opened lock boxes are consistent with me knowing a solution, e.g. all numbers in a row are distinct, the 3x3 block consists of distinct numbers, etc.

If I am cheating, then at least one of your 28 choices will be inconsistent, and you catch me with probability at least 1/28. So if we repeat this game 1000 times and I don't know the solution, you will catch me with probability at least 1-(27/28)^1000, which is effectively 1. However, every time we repeat the game, I pick a different random scrambling of the integers 1-9, so you don't learn anything about the solution.

All of ZKP is a fancy way to 1) encode arbitrary computations in this sort of protocol, and 2) amplify the probability of success via clever error-correction tricks.

The other thing you need to know is that the protocol I described requires interaction (I lock the boxes and you tell me which ones to open), but there is a way to remove the interaction. Observe that in the Sudoku game above, all you are doing is flipping random coins and sending them to me. Of course you cannot let me pick the random coins, but if we agree that the random coins are just the SHA256 hash of what I told you, or something else similarly unpredictable, then you will be convinced of the proof even if the "coins" are something that I compute myself by using SHA256. This is called the "Fiat-Shamir transformation".

How do we implement the lock boxes? I tell you SHA256(NONCE, VALUE) where the NONCE is chosen by me. Given the hash you cannot compute VALUE. To open the lock box, I tell you NONCE and VALUE, which you believe under the assumption that I cannot find a collision in SHA256.
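
In code, those lock boxes are just hash commitments; a minimal sketch of the commit/open steps described above:

    import hashlib
    import secrets

    def commit(value: bytes) -> tuple[bytes, bytes]:
        """'Lock the box': publish the digest, keep the nonce as the key."""
        nonce = secrets.token_bytes(32)
        return hashlib.sha256(nonce + value).digest(), nonce

    def open_box(digest: bytes, nonce: bytes, value: bytes) -> bool:
        """'Open the box': reveal nonce and value; the verifier recomputes the hash."""
        return hashlib.sha256(nonce + value).digest() == digest

    box, key = commit(b"7")              # prover commits to a cell value
    assert open_box(box, key, b"7")      # opening with the committed value checks out
    assert not open_box(box, key, b"9")  # cannot later claim a different value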

reply
sdwr
1 hour ago
[-]
> How do we implement the lock boxes? I tell you SHA256(NONCE, VALUE) where the NONCE is chosen by me. Given the hash you cannot compute VALUE. To open the lock box, I tell you NONCE and VALUE, which you believe under the assumption that I cannot find a collision in SHA256.

That's the bit I was missing! The prover pre-registers the scrambled solution, so they can't cheat by making up values that fit the constraints.

reply
parineum
6 hours ago
[-]
If it's not linked to an identity, why can't a kid use a parent's key?
reply
jolmg
4 hours ago
[-]
I think a parent should be able to give their kid access if they deem their kid mature enough. If the kid can handle social media without it becoming an addiction or a self-esteem issue or similar, then it would generally be a net positive. For example, social media may include YouTube which has a lot of educational content. Why hold the kid back?
reply
MatteoFrigo
5 hours ago
[-]
Excellent question. More generally, what prevents me from copying the credential and giving it to somebody else?

The currently favored approach works like this. The DOCUMENT contains a device public key DPK. The corresponding secret key is stored in some secure hardware on the phone, designed so that I (or malware or whatever) cannot extract the secret key from the secure hardware. Think of it as a yubikey or something, but embedded in the phone. Every presentation flow will demand that the secure element produce a signature of a random challenge from the RP under the secret key of the secure hardware. In the ZKP presentation, the ZKP prover produces a proof that this signature verifies correctly, without disclosing the secret key of the secure hardware.

In your example, the parent could give the phone to the kid. However, in current incarnations, the secure hardware refuses to generate a signature unless unlocked by some kind of biometric identification, e.g. fingerprint. The fingerprint never leaves the secure hardware.

How does the issuer (e.g. the republic of France) know that DOCUMENT is bound to a given fingerprint? This is still under discussion, but as a first bid, a French citizen goes to city hall with his phone and obtains DOCUMENT after producing a fingerprint on the citizen's phone (as opposed to a device belonging to the republic of France). You can imagine other mechanisms based on physical tokens (yubikeys or embedded chips in credit cards, or whatever). Other proposals involve taking pictures compared against a picture stored in DOCUMENT. As always, one needs to be clear about the threat model.

In all these proposals the biometric identification unlocks the secure hardware into signing a nonce. The biometrics themselves are not part of the proof and are not sent to the relying party or to the issuer.

reply
donmcronald
3 hours ago
[-]
> How does the issuer (e.g. the republic of France) know that DOCUMENT is bound to a given fingerprint? This is still under discussion, but as a first bid, a French citizen goes to city hall with his phone and obtains DOCUMENT after producing a fingerprint on the citizen's phone (as opposed to a device belonging to the republic of France).

Are you saying that someone goes to city hall, shows ID, and gets a DOCUMENT that certifies age, but doesn't link back to the person's identity? And it's married to a fingerprint in front of the person checking ID?

Is there a limit on how many times someone can get a DOCUMENT? If not, it'll become a new variation of fake id and eventually there's going to be an effort to crack down on misuse. If yes, what happens if I get unlucky and lose / break my phone limit + 1 times? Do I get locked out of the world? The only way I can imagine limiting abuse and collateral damage at the same time is to link an identity to a DOCUMENT somehow which makes the whole ZKP thing moot.

I'd be more worried about the politics though. There's no way any government on the planet is going to keep a system like that limited to simple age verification. Eventually there's going to be enough pretense to expand the system and block "non-compliant" sites. Why not use the same DOCUMENT to prove age to buy beer? Sanity for guns? Loyalty for food?

What happens if the proof gets flipped to run the other direction and a DOCUMENT is needed to prove you're a certified journalist? Any sources without certification can be blocked and the ZKP aspect doesn't matter at that point because getting the DOCUMENT will be risky if you're a dissenter. Maybe there's an interview. Maybe there's a background check. Has your phone ever shown up near a protest?

It's just like the Android announcement that developers need to identify themselves to distribute apps, even via side loading. The ultimate goal is to force anyone publishing content to identify themselves because then it's possible to use the government and legal system to crush dissenting views.

Big tech caused most of the problems and now they're going to provide the solution with more technology, more cost, and less freedom which is basically what they've been doing for the last 2 decades so it's not a surprise.

reply
parineum
5 hours ago
[-]
So adults are required to own a phone to prove their age?

Can I log into an age gated service at a library without a phone?

reply
MatteoFrigo
5 hours ago
[-]
Another excellent question. The current answer in the EU seems to be "you need a phone". My preferred answer (despite being one of the Google guys who designed the ZKP mechanism) would be that the government sends you some sort of plastic card with a chip that does not tie you to a phone. Still fighting that battle.
reply
parineum
2 hours ago
[-]
Thanks for answering, I appreciate it.
reply
crote
8 hours ago
[-]
If privacy is an explicit goal, why isn't it a MUST? Why even bother with the initial batch issuance phase? And what's stopping them from silently adopting a batch size of 1?
reply
orblivion
7 hours ago
[-]
Okay but then if a ZKP solution is presented, that's calling their bluff. They now have one less excuse for surveillance.

EDIT: Actually do one better - tell them that for 16+ websites, you're actually protecting teenagers by keeping them anonymous.

reply
wcarss
7 hours ago
[-]
Yeah, getting into the car with the guy holding the gun doesn't become okay because you have a great argument you're waiting to use down the road. He's already got the gun out.

We should have started arguing when he just said he had a gun, indoors, in the crowd. We shouldn't have quietly walked outside at his demand. But that all happened. Here we are now, at the car, and he's got the gun out, and he's saying "get in", and we're probably not going to win from here -- but pal, it's time to start arguing. Or better yet, fighting back hard.

Because that car isn't going anywhere we want to be. We absolutely can not get in the car right now, and just plan to argue the point later. It doesn't matter how right the argument is at all.

reply
phyzome
3 hours ago
[-]
Sure it's possible, but are there implementations in use that meet this criterion?

Because if there aren't, then it matters substantially less whether they're possible.

reply
joe_the_user
6 hours ago
[-]
The thing is that, as far as I can tell, a ZKP of age involves a state or similar attestor issuing an ID/wallet that can be queried for age without revealing identity.

But the attestor has to have certainty about the age of the person it issues IDs to. That raises obvious questions.

What states are going to accept private attestors? What states are going to accept other states as attestors? What state won't start using its issued ID/wallet for any purpose it sees fit?

This system seems likely to devolve into national Internets populated only by holders of those IDs. That can all happen with ZKPs not being broken.

That is how states work.

reply
knallfrosch
8 hours ago
[-]
> the argument as it is playing out politically.

The law does not mandate identity, so your argument does not hold.

reply
magicalhippo
7 hours ago
[-]
> Are you aware of any age verification systems that do not have this property?

As I understand it, it's the goal of OpenID4VP[1][2]. Using it, a site can request to know if the user is over 18, say, and the user can return proof of just that one claim ("I'm over 18") without sharing identifying information.

The new EU age verification solution[3] builds on this for example.

[1]: https://openid.net/specs/openid-4-verifiable-presentations-1...

[2]: https://docs.walt.id/concepts/data-exchange-protocols/openid...

[3]: https://ageverification.dev/

reply
stvltvs
7 hours ago
[-]
Can't read the specs at the moment, but what prevents the age verification service and the age-gated website from colluding and de-anonymizing your porn use?
reply
magicalhippo
6 hours ago
[-]
I haven't had time to fully wrap my head around the details either.

At least in the EU solution they say there would be multiple attestation services the user could choose from. So that would be technically better than nothing.

reply
knallfrosch
8 hours ago
[-]
1) Large social media companies know you better than your friends. That has been known for 10 years and they're way better now: https://www.nytimes.com/2015/01/20/science/facebook-knows-yo...

2) Cigarette vending machines accept VISA cards and government IDs and they're offline.

3) A medium-sized social media network required photos (not scans) of GovIDs, where only the year of birth and validity date need to be visible. The rest could be blacked out physically.

4) You can guess users' ages and request solid proof only for those you are unsure about.

The problem is that we technical users think of a one-size-fits-all technical approach that works, without a single fail, for all global users. That is bound to fail.

It is only a law and you can break it big time or small time. Reddit's approach might prove way too weak; it'll be fined and given a year to improve. Others might leave the market. Others will be too strict and struggle to get users. Others might have weak enforcement and keep a low profile forever. Others will start small, below the radar, explode in popularity, and then enforcement will have to improve.

You can also request identity and then delete it. (Yes, some will fail to delete and get hacked.)

Giving Facebook a free pass is stupid. They're selling your age cohort "10-11" within 0.0037ms for $0.0003 to the highest bidder on their ad platform.

reply
triceratops
3 hours ago
[-]
reply
orblivion
8 hours ago
[-]
reply
aidenn0
4 hours ago
[-]
GNU Taler has an age-verification extension.
reply
delusional
8 hours ago
[-]
Cool trick to tie in the libertarian idea of protecting yourself from legally sanctioned government actions.
reply
phyzome
4 hours ago
[-]
To make this more concrete: There are a lot of "legally sanctioned" government actions happening in the US right now that are pretty dubious. That includes digging up old laws and giving them spicy new interpretations that legal experts agree are an abuse of power and not in the intent of the original law.

Some of these are getting batted down by judges, so right now the category of "legal" is especially vague. That's why I phrased it like that.

But also, we see cops just straight up stalking people using government tools. So that's another reason to be concerned about "legal" government actions.

Nothing to do with libertarianism.

reply
topkai22
7 hours ago
[-]
Age verification is absolutely about kids. It’s also being used as (or hijacked into) a vehicle by people who want increased surveillance.

There is a ton of evidence that there are harms to unrestricted online access for kids and teens (the book The Anxious Generation is a cultural touchstone for this topic at this point). There is a real, well-reasoned, and valid movement to do something about this problem.

The solutions proposed aren’t always well targeted and are often hijacked by the pro-surveillance movement, but it’s important to call out that these solutions aren’t well targeted instead of declaring the age verification push isn’t addressing a real problem and constituency.

reply
pksebben
7 hours ago
[-]
As many others have mentioned in this thread and others, there are ways - effective and straightforward ways - that we could be protecting our kids from the harms that come with the www.

The harms are real. The solution is a Surveillance Wolf wearing a dead Save The Kids Sheep(tm).

Solutions that might work - RTA headers [0]. More robust parental controls. Not this reimagining of the rules of the internet in service of a fairly vague and ineffective goal. It's like the whole AV concept was designed not to work in the current context at all - almost as if that was the point.

Perhaps I'm going a little out on a limb. I don't think I am - but quick, tell me you need to know where I'm dialing from without asking me where I'm dialing in from.

0 - https://www.rtalabel.org/index.php

reply
anon291
39 minutes ago
[-]
Yes, all those things are great, but you'll notice that instead of explaining this to the non-technical crowd, technology-focused, privacy-conscious individuals rarely attempt to educate about how these could work. Instead they simply seem to be against any sort of control on what children watch online.

Given that it's also coming from a bunch of tech males, it comes across as extraordinarily creepy. This is not hard to understand.

reply
ffuxlpff
2 hours ago
[-]
The thing is that when it starts being about the kids it means the bottom 90% has entered the internet and you should be away because it is already lost.

I wonder if there's something like internet accelerationism - push things like having friends or watching movies online off the cliff as soon as possible.

reply
pembrook
7 hours ago
[-]
Unfortunately The Anxious Generation is a very well-written house of cards built on questionable studies [1] and its success is simply a reflection of the fact it capitalizes on the trendiest moral panic of our times.

Social media is akin to violent video games in the 2000s, TV addiction in the 90s, satanic heavy metal in the 80s, and even 'bicycle face' in the 1890s bicycle craze.

Jonathan Haidt seems extremely earnest and thoughtful, but unfortunately being lovingly catapulted to fame for being the guy who affirms everyone's gut reaction to change (moral panic)...makes it extremely difficult financially, emotionally and socially for him to steelman the opposite side of that thing.

Even if he hadn't compiled a bunch of suspect research from pre-2010 to make his claims, the field of Psychology is at the center of the replication crisis and is objectively its worst offender. Psychology studies published in prestigious academic journals have been found to replicate only 36% of the time. [2]

1. https://reason.com/video/2024/04/02/the-bad-science-behind-j...

2. https://en.wikipedia.org/wiki/Reproducibility_Project

reply
SilverElfin
7 hours ago
[-]
Politicians in Washington State are proposing not just age verification but also health warnings on adult websites. How is either constitutional?

https://www.xbiz.com/news/294260/washington-av-bill-jumps-on...

reply
bigstrat2003
7 hours ago
[-]
It's been illegal to sell porn to minors since approximately forever. If that is constitutional (not saying it is, but I'd be surprised if it wasn't since it's such an established practice), then I don't see how requiring age verification on porn sites wouldn't be. Requiring health warnings might be another matter, though. Not sure about that.
reply
SilverElfin
7 hours ago
[-]
Does the “sell” part matter - like is it simply that the sale to minors can be regulated? If it is free isn’t it just transmission of information?
reply
mikeyouse
3 hours ago
[-]
No, it's still illegal in most places to knowingly distribute pornography to minors - the law in my state starts:

> A person who sells, gives away or in any way furnishes to a person under the age of 18 years a book, pamphlet, or other printed paper or other thing, containing obscene language, or obscene prints, pictures, figures or descriptions tending to corrupt the morals of youth

reply
TomatoCo
6 hours ago
[-]
I wonder how it relates to the health warnings on tobacco products?
reply
thinkingtoilet
9 hours ago
[-]
I am someone who is very privacy focused. I've literally never had a social media account on any platform and I'm 42. From day one of facebook, I never wanted my information online. Like many here, I'm deeply concerned about privacy and surveillance.

In real life, we think age verification is a good thing. Kids shouldn't buy porn. Teenagers shouldn't get into bars. etc... There has to be room somewhere for reasonable discussion about making sure children do not have access to things they shouldn't. I think it's important to note that complete dismissal of this idea only turns away your allies and hurts our cause in the long run.

reply
heavyset_go
7 hours ago
[-]
> In real life, we think age verification is a good thing. Kids shouldn't buy porn. Teenagers shouldn't get into bars. etc...

These are not equivalent, I don't have to scan my face, upload my ID and share my personal biometric data with various 3rd parties, who will sell and leak my data, every time I want to look at porn or sip a beer.

Also, there are countries where teenagers can drink and go to pubs, and society hasn't crumbled. We also have several generations of young adults with access to porn, and the sky didn't fall.

Maybe we shouldn't use the government to implement a "papers, please" process just to use and post on the internet, maybe we should instead legislate the root cause of the problem: algorithmic optimization and manipulation. That way everyone benefits, not just kids, and we won't have to scan our faces to look at memes on Reddit.

reply
raw_anon_1111
8 hours ago
[-]
In the online world you can’t make sure of anything. Florida for instance requires age verification for porn sites. Guess how many mainstream sites not based in the US are completely ignoring the law and guess how many others are easily accessible via a VPN? If you guessed the sum total of both is less than 100%, you would be wrong - and even that is tilted toward sites that just ignored it.

The one thing you can control is your child's access through their device using parental controls.

I can absolutely guarantee you that any teenager can easily get access to weed, cigarettes and alcohol despite the laws and definitely can use a VPN. It only takes one smart kid to show them how.

reply
delusional
8 hours ago
[-]
> I can absolutely guarantee you that any teenager can easily get access to weed, cigarettes and alcohol

Is your argument then that we shouldn't age gate those things in reality either? Would you suggest that teenagers would smoke and drink just as much as they would have had it been legal to sell to minors?

Laws don't just exist to stop you, they also exist to shape society. They exist as signals for what we deem appropriate behavior.

reply
raw_anon_1111
7 hours ago
[-]
So we make meaningless laws that are inconsistently enforced? What do you think happens when little Johnny is caught with weed in his car in a 95% White high income school district vs little Jerome in a 95% Black school district?

Also how much “shaping of society” do you expect to happen when you pass a law that no one respects?

How many kids do you think a law is going to stop from going to the porn sites that completely ignored the law?

How many kids say “I really want to smoke weed but it’s illegal so I won’t do it”?

reply
iamnothere
7 hours ago
[-]
Laws that nobody respects lead to lack of respect for the law as a whole.
reply
delusional
7 hours ago
[-]
> How many kids say “I really want to smoke weed but it’s illegally so I won’t do it”?

I think it's generally accepted that marijuana use increases after legalization. So yes.

reply
raw_anon_1111
7 hours ago
[-]
reply
pksebben
7 hours ago
[-]
You would think so, but DARE increased adolescent usage of some drugs while having little to no effect on others.

Turns out being illegal isn't as much of a disincentive as being uncool. If your parents are smoking it...

reply
raw_anon_1111
6 hours ago
[-]
Nancy Reagan: Don’t sniff glue to get high.

Kids: You can sniff glue and get high!!!

reply
reorder9695
8 hours ago
[-]
In real life the situation is different. When I buy alcohol, someone looks at my drivers licence, does not make a copy of it, forgets it quickly, and cannot tie it to other information about me. As soon as it's online and it's copied, I can't tell what happens on anyone else's servers. I don't want any company knowing my actual name and location, because then that can be tied to more data, which is what Google etc have been trying to do for years but this would just completely fast track that. I would in theory be fine with something where it never leaves my computer, but that is obviously impossible.
reply
mikeyouse
3 hours ago
[-]
Not sure if you've bought alcohol lately, but at most large grocers near me, they're scanning licenses now instead of just verifying the birth date - and I'm pretty confident those scans aren't just checking the birthdate and then deleting all record of the interaction..
reply
anon291
36 minutes ago
[-]
So then this is an easy problem. Issue liquor stores a terminal. Liquor guy checks licenses. If you're an adult, the clerk presses a button. A public key is generated and uploaded to a public list. You get a private key that shows you're an adult and is not tied to you. Regular laws that apply to liquor also apply to this private key QR code... You cannot give it to a minor or sell it without a license.

To view adult content, use the code to sign a thing. Content company sees the signed code, verifies against the public list and sends the content.

Privacy preserved, no adult content to kids... Easy.
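
A rough sketch of the verification side of that idea, with Ed25519 keys standing in for the QR code and a plain set standing in for the public list (all of this is illustrative, not any existing scheme):

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # At the clerk's terminal: adult verified in person, keypair generated on the spot.
    adult_key = ed25519.Ed25519PrivateKey.generate()
    public_list = {adult_key.public_key().public_bytes_raw()}  # published list of adult keys

    # Later, the content site issues a challenge and the holder signs it.
    challenge = b"site-nonce-1234"
    signature = adult_key.sign(challenge)
    presented_pub = adult_key.public_key().public_bytes_raw()

    # The site checks the key is on the public list and that the signature verifies.
    if presented_pub not in public_list:
        print("unknown credential")
    else:
        try:
            ed25519.Ed25519PublicKey.from_public_bytes(presented_pub).verify(signature, challenge)
            print("adult credential accepted")
        except InvalidSignature:
            print("bad signature")

The obvious catch is that presenting the same public key everywhere makes visits linkable across sites, which is the gap batch issuance and the ZKP-style schemes discussed upthread try to close.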

reply
delusional
8 hours ago
[-]
A lot of the proposals don't involve you sending your drivers license or "other information" to anyone. The site in question asks you to verify with a trusted third party (usually a government entity), and that trusted third party only provides them with the end result of the validation.

> which is what Google etc have been trying to do for years but this would just completely fast track that.

Excuse me? They have done that for years. There's nothing to "fast track" here. Big Tech already implemented surveillance.

reply
crote
7 hours ago
[-]
How many of those proposals do not have a government-mandated app as a spider in the middle of the web, which is aware of all the apps and websites you try to visit which ask for validation?
reply
pksebben
9 hours ago
[-]
I'm not dismissing that idea. It is a perfectly reasonable thing to think about, part of why we have age verification techniques that already work well in critical places like online vape shops.

I'm even willing to talk about the possibility that we could use more robust systems deployed more broadly. A lot of folks here are talking about ZKPs in this regard, and that's not a bad idea at all.

The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids. I think you could put together a working group on ZKPs and Age Verification, write up a paper and run experiments, and when you bring it to the lawmakers they're gonna say something to the tune of:

"yeah but that's not trustworthy enough and too technical for people to understand so we're just going to serve legal notices to VPN providers instead to tell them that they can't anymore"

...or something to that tune. I'm not a mind reader, I've just read the reports (by lawmakers) mentioning VPNs as an "area of concern".

This is a political gambit and not a new one. The more we treat the current issue as having anything to do with protecting kids the more we legitimize what is an obvious grift.

reply
tzs
8 hours ago
[-]
> The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids. I think you could put together a working group on ZKPs and Age Verification, write up a paper and run experiments, and when you bring it to the lawmakers they're gonna say something to the tune of:

The EU is currently doing large-scale field trials of the EU Digital Identity Wallet, which they have been working on for several years. It uses ZKPs for age verification. They expect to roll it out to the public near the end of 2026.

reply
pksebben
8 hours ago
[-]
I appreciate the mention - I had not yet heard of this EU DIW thing. That said, I can't find any resources on it that mention the use of ZKPs. Could you share a link?
reply
MatteoFrigo
8 hours ago
[-]
reply
bpt3
8 hours ago
[-]
How does age verification work for online vape shops?
reply
jajuuka
8 hours ago
[-]
I think equating online and real life is a massive mistake. When you go into a grocery store you are constantly on CCTV. Does that mean that when you shop on Amazon, them recording you via webcam should be considered? Obviously not. The restrictions in real life are temporary. If you try to buy porn, go into a bar, etc. you are asked for ID and they look at it and hand it back. They don't take your ID, your picture and store it forever and then sell information about you to other people.

The concern about children is aimed at the wrong target. Instead of targeting everyone it would make far more sense to target the platforms. With Roblox having a pedo problem the company should face punishment. That will actually get them to change their ways. However all these massive platforms are major donors to politicians so the chance of that happening is low to none.

reply
organsnyder
8 hours ago
[-]
> They don't take your ID, your picture and store it forever and then sell information about you to other people.

It would not surprise me in the least if there are brick-and-mortar businesses doing this, especially larger companies in jurisdictions (such as the majority of the United States) with weak/nonexistent privacy protections.

reply
pksebben
8 hours ago
[-]
They don't need to. If you bought something with a card they just store that - let the data brokerage handle connecting it with actual ID cards and other elements of your identity.

But yeah, walmart is for sure logging their transactions and selling the data. It's practically free money.

reply
techdmn
7 hours ago
[-]
Hate to break it to you, you're on social media right now.
reply
chriswarbo
7 hours ago
[-]
If HN is social media, then so are PHPBB, NNTP, BBS, etc. and the term loses its semantic relevance.

My heuristic is that social media focuses on particular people, regardless of what they're talking about. In contrast, forums (like HN) focus on a particular topic, regardless of who's talking about it.

reply
jolmg
7 hours ago
[-]
Doesn't matter what you want it to mean. What matters is what those in power want it to mean. It's very easy to stretch the definition to cover all sites where people can post content for strangers to see, or stretch it even wider to all digital media where people can interact with a social group.
reply
chriswarbo
7 hours ago
[-]
> Doesn't matter what you want it to mean. What matters is what those in power want it to mean.

I was replying to a discussion between two HN users, who were using conflicting definitions of the term. AFAIK they are not "those in power".

reply
jolmg
6 hours ago
[-]
> AFAIK they are not "those in power".

AFAIK nobody here is. The point is that with relevance to the current discussion on potential future age-verification laws, only the widest definition matters, because that's what's at risk.

reply
like_any_other
8 hours ago
[-]
> In real life, we think age verification is a good thing.

Ok. In real life, do we think having agents from the government and corporations following you everywhere, writing down your every move and word, is a good thing? Or rather, what kind of crime would one have to have committed, so that they would only be allowed out in public with surveillance agents trailing them everywhere?

reply
thinkingtoilet
8 hours ago
[-]
I don't, but society clearly does. We're already there.
reply
Razengan
3 hours ago
[-]
People are complacent. Even me, even you. We're not going to get off our 21st-century comforts asses and actually do anything to disrupt anything.

At best I may avoid using products from certain companies until I really have to, like Google and Microsoft's AIs, or clear cookies after signing into YouTube so it doesn't sign you into everything else, or write a comment here and there about how some Apple APIs like the iCloud Keychain allow Facebook etc to track you across devices and reinstalls, but I'm not ever going to bother doing anything more that would actually challenge all this dystopian fuckiness.

reply
like_any_other
8 hours ago
[-]
> Age Verification isn't about Kids or Censorship, It's about Surveillance

We know this because, instead of putting easy-to-use parental controls on new devices sold (and making it easy to install on old ones) with good defaults [1], they didn't even try that, and went directly for the most privacy-hostile solution.

[1] So lazy parents with whatever censorship the government thinks is appropriate for kids, while involved parents can alter the filtering, or remove the software entirely.

reply
jacobgkau
6 hours ago
[-]
Parental control software has existed for decades. It hasn't worked.

Over 70% of teenagers <18 today have watched porn [1]. We all know (many from experience) that kids easily get around whatever restrictions adults put on their computers. We all know the memes about "click here if you're 18" being far less effective than "click here if you're not a robot."

Yes, there were other ways of trying to solve the problem. Governments could've mandated that explicit websites (which include a lot of mainstream social media these days) include the RTA rating tag instead of it being voluntary, which social media companies still would've fought; and governments could've also mandated all devices come with parental control software to actually enforce that tag, which still would've been decried as overreach and possibly would've been easily circumventable for anyone who knows what they're doing (including kids).

But at the end of the day, there was a legitimate problem, and governments are trying to solve the problem, ulterior motives aside. It's not legal for people to have sex on the street in broad daylight (and even that would arguably be healthier for society than growing up on staged porn is). This argument is much more about whether it's healthy for generations to be raised on porn than many detractors want to admit.

[1] https://www.psychologytoday.com/us/blog/raising-kind-kids/20...

reply
wkat4242
3 hours ago
[-]
> Over 70% of teenagers <18 today have watched porn [1]. We all know (many from experience) that kids easily get around whatever restrictions adults put on their computers. We all know the memes about "click here if you're 18" being far less effective than "click here if you're not a robot."

And we all turned out fine I might add. In fact there's a lot more attention to consent and respect for women than 20 years ago.

Of course not counting the toxic masculine far right, but that doesn't have anything to do with porn and everything to do with hate.

reply
anon291
34 minutes ago
[-]
It's also already illegal to send porn to a minor. Porn companies that transmit porn to minors are already committing a sex crime.
reply
Gormo
4 hours ago
[-]
> Parental control software has existed for decades. It hasn't worked.

How would you know whether it has worked or not? Wouldn't the relevant criteria be up to parents themselves?

reply
pksebben
6 hours ago
[-]
Is porn the biggest problem here? What I've seen points the finger at social media as the worst offender for youth mental health.

Also, access to porn isn't new with the internet. When we cleared out my grandpa's house we had to pry open a desk that was chock full of hustlers.

reply
kcplate
4 hours ago
[-]
> access to porn isn't new with the internet

“Ease of access” and “easy access to the most depraved shit you can think of that’s out there” is what changed. That is what is wrong and why many people feel we need to find some way to control that access.

The Internet didn’t come along until I was well into adulthood. Think about what porn access looked like in the late ‘70s and ‘80s. As a teen we were “lucky” if by some rare miracle a friend stole their dad’s Playboy, Penthouse, or Hustler and stashed it in the woods (couldn’t risk your parents finding it under your mattress) for us dudes to learn the finer points of female anatomy. In a week it would be washed out from the elements with nary a nipple to be seen. Those magazines (even Hustler) were soft compared to what a few clicks can find today. Basically you got degrees of nudity back then, but we appreciated it.

Hardcore video was very rare to see as a horny teen kid in the ‘80s. Most porn movies were still pretty well confined to theaters, but the advent of VHS meant (again by sheer luck) you had to have a friend whose parents happened to be into it, who had rented or bought a video, it was in the house and accessible, and all the adults had to be gone from the house so you could hurry up and watch a few minutes on the family’s one TV with a VCR. You needed to build in viewing time along with rewind time to hide your tracks.

Now…parents just leave the room for a few minutes and a willing kid with a couple of clicks could be watching something far beyond the most hardcore thing I saw as a teen.

reply
Eisenstein
1 hour ago
[-]
I doubt that the porn in the 70s was less bad than the porn today. Legal CSAM was being sold openly so what makes you think that it was more tame than modern stuff?

The fact is that as difficult as it was to get, you got a hold of it and watched it. Why would 'ease of access' make any difference if you didn't have easy access and got it anyway?

reply
kcplate
53 minutes ago
[-]
Are you implying that perhaps 15-25 mins worth of porn video total throughout all of someone’s teenage years due to such rare access of the material would have a similar emotional and mental impact as having the ability to see that much daily for years as is possible now?

There could have been years between the opportunities we had. I don’t think you conceptualize just how infrequently the opportunity would present itself.

reply
Eisenstein
49 minutes ago
[-]
I'm not making any claims about mental or emotional impacts, you are. What are they?
reply
aidenn0
4 hours ago
[-]
But it's been illegal to peddle porn to minors for much longer than it's been illegal to peddle social media, so it's a good proxy for how effective our current efforts are.
reply
mbg721
4 hours ago
[-]
The approximate substitute-good for porn is actual sex, which parents generally stop teens from doing. The substitute-good for social media is talking to people in person, which parents are generally happy with.
reply
rlpb
1 day ago
[-]
I'd be OK with an "I am a child" header mandated by law to be respected by service providers (eg. "adult sites" must not permit a client setting the header to proceed). On the client side, mandate that consumer devices that might reasonably be expected to be used by children (every smartphone, tablet, smart TV, etc) have parental controls that set the header. Leave it to parents to set the controls. Perhaps even hold parents culpable for not doing so, as a minimum supervision requirement, just as one may hold parents culpable for neglecting their children in other ways.

Forcing providers to divine the age of the user, or requiring an adult's identity to verify that they are not a child, is backwards, for all the reasons pointed out. But that's not the only way to "protect the children". Relying on a very minimal level of parental supervision of device use should be fine; we already expect far more than that in non-technology areas.
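
For the server side, a minimal sketch assuming a hypothetical header name (a Flask before_request hook that an "adult site" would be required to honor):

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Hypothetical header, set by parental controls on the child's device.
    MINOR_HEADER = "X-Client-Is-Minor"

    @app.before_request
    def refuse_flagged_clients():
        # Under such a law, an adult site must not proceed when the flag is present.
        if request.headers.get(MINOR_HEADER, "").lower() in ("1", "true", "yes"):
            abort(403, description="This content is not available to minors")

    @app.route("/")
    def index():
        return "adults-only content"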

reply
Bender
9 hours ago
[-]
A server header exists to say something is adult and could be used for user-generated content as well. [1] It just needs legislation and an afternoon from interns at assorted companies. It's not perfect, nothing is, but it could easily trigger existing parental controls and parental controls that could be added back into user agents. I think I've beat this horse into dust [2] so I should just hire kvetchers to politely remind Congress at this point.

[1] - https://news.ycombinator.com/item?id=46152074

[2] - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
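
For reference, the RTA label is a fixed string; a toy check for whether a page self-labels, looking in both the response headers and the HTML body (where exactly sites put it varies, so treat this as a sketch):

    import urllib.request

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the standard RTA label string

    def self_labels_adult(url: str) -> bool:
        """Return True if the site carries the RTA label in a header or in its HTML."""
        with urllib.request.urlopen(url) as resp:
            # Some sites expose the label in a response header...
            if any(RTA_LABEL in value for value in resp.headers.values()):
                return True
            # ...most embed it in a <meta> tag in the page itself.
            body = resp.read(65536).decode("utf-8", errors="ignore")
        return RTA_LABEL in body

A user agent with parental controls enabled could run exactly this kind of check before rendering.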

reply
no_wizard
8 hours ago
[-]
I like the first part of the idea, which is the header. Heck, even enable it by default. As long as the tracking of the toggle isn't a thing, it's a perfect compromise. While we're at it, respecting Do Not Track headers would also be nice.

This completely leaves it up to the families / parents to control and gives some level of compliance to make the effort worthwhile.

There may even be a way to generate enough noise with the request to prevent any form of tracking. This sort of thing should really be isolated in that way to prevent potential abuse by data brokers through sale of the information.

reply
Bender
8 hours ago
[-]
> As long as the tracking of the toggle isn't a thing, it's a perfect compromise.

This concept does not involve any tracking if implemented as designed. The user agent detects the RTA header and triggers parental controls if enabled. Many sites already voluntarily self label. [1] Careful how far one drills down as these sites are NSFW and some may be malicious.

[1] - https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R...

reply
iamnothere
1 day ago
[-]
If we must do something like this, I think a good solution would be an optional server header that describes the types of objectionable content that may be present (including “none”). Browsers on child devices from mainstream vendors would refuse to display any “unrated” resources without the header, and would block any resources that parents deem age-inappropriate, with strict but fair default settings that can be overridden. Adult browsers would be unaffected. Legislatures could attempt to craft laws against intentionally miscategorized sites, as doing this would be intentionally targeting kids with adult content.

There is no perfect solution that avoids destroying the internet, but this would be a pretty good solution that shelters kids from accidentally entering adult areas, and it doesn’t harm adult internet users. It also avoids sending out information about the user’s age since filtering happens on the client device.

reply
ars
10 hours ago
[-]
This exists: https://en.wikipedia.org/wiki/Platform_for_Internet_Content_...

It was derided as a "system for mass censorship", and got shot down. In hindsight a mistake, and it should have been implemented - it was completely voluntary by the user.

reply
iamnothere
7 hours ago
[-]
It’s close, but I see why it failed. There’s no need to include licensing/rights management in there. Also this was before pervasive HTTPS, so it would have been possible for governments and ISPs to snoop the info and possibly block it. If it could be limited to just content ratings, and kept private behind SSL, this isn’t a bad approach.

But this also needs some kind of guarantee that lawmakers won’t try to force it on FOSS projects that want to operate outside the system. And that companies like Google won’t use EEE to gradually expand this header into other areas and eventually cut off consenting adults who want to operate outside this system. I’m not sure if it is possible to get those guarantees.

reply
ProjectArcturis
1 day ago
[-]
I'm not sure that making parents legally culpable for their kids being smart enough to download a new browser is LESS government intrusion.
reply
BobaFloutist
7 hours ago
[-]
I think the idea is that the manufacturers are culpable for making a parental restriction mode that's set-and-forget and not easily thwarted from inside the mode and parents are culpable for declining to set it.

Which I still don't love, but is at least more fair.

reply
e40
1 day ago
[-]
It could be added at the router? The child's computer could be identified and this header added, in a MITM situation... but, maybe that would be easy to defeat, by replacing the cert on the client? Not my area of expertise... really just asking...
reply
rlpb
1 day ago
[-]
There's no reason to hold the parents culpable. It would be up to the device manufacturer to ensure that this isn't possible on a system that has parental controls enabled. This is already a solved problem - see how MDM solutions do it, and see Apple's ban on alternative browsers.

It's not even necessary to block parents from giving their children Linux desktops or whatever. It'll largely solve the problem if parents are merely expected to enable parental controls on devices that have the capability.

reply
taeric
10 hours ago
[-]
My only gripe here is the idea of "perhaps hold the parents culpable." I'm not opposed to the idea, but what sucks is we are ultimately all paying the cost of it going wrong. The idea that we can shunt that away to a few irresponsible people is just demonstrably not the case.

Worse, it leads to situations where society seems to want to flat out be kid free in many ways. With families reportedly afraid to let their kids walk to and from school unsupervised.

I don't know an answer, mind. So this is where I have a gripe with no real answer. :(

reply
Gormo
4 hours ago
[-]
The presumption that it's not the parents' prerogative to decide whether the child's access should be restricted -- and treating the parents as accountable to someone else's standards of what is or is not appropriate for their own children -- is itself objectionable.

What content is appropriate for children is properly up to their parents themselves, not to the government or to some nebulous concept of "society". If parents choose not to set such a flag on their children's devices, then that means that they're choosing to allow their children to access content without restriction, and that's what defines what is OK for their children to access.

reply
awesome_dude
9 hours ago
[-]
Add to that, clearly those "bad parents" are the result of bad parenting in the first place, so really it's the grandparents that are to blame...

Wait, those grandparents also had bad models to work with, so really it's the great-grandparents that were to blame...

No, wait, it was the society that they grew up in that encouraged poor behaviour toward them, and forced them to react by taking on toxic behaviours. We all should pay because we all actively contribute to the world around us, and that includes being silent when we see bad things happening.

reply
no_wizard
9 hours ago
[-]
>Worse, it leads to situations where society seems to want to flat out be kid free in many ways. With families reportedly afraid to let their kids walk to and from school unsupervised.

I'm not seeing the correlation / causation here.

reply
saltcured
8 hours ago
[-]
Not sure, but I think the earlier post is implying a (false) dichotomy between:

A. "Your kid is not my problem"

B. "Your kid is everyone's problem"

reply
taeric
8 hours ago
[-]
Less the false dichotomy, and more the stickiness of each of those options. To your point (I think), those aren't the only options available, but people do seem to be attracted quite heavily to them.
reply
taeric
8 hours ago
[-]
I was referencing the towns that have called the cops because there were some unsupervised kids in a park. I comfort myself by saying this isn't nearly as common as the fear mongers online would have you think. That there are cases where it happens still worries me.

Note that I'm not even necessarily worried about cops getting called. Quite the contrary, I am fine with the idea of cops having a more constant presence around parks and such. I do worry about people that get up in arms about how things are too unsafe for kids to be let outside. If that is the case, what can we do to make it safe?

reply
pembrook
1 day ago
[-]
> Perhaps even hold parents culpable for not doing so, as a minimum supervision requirement

Even the idea of prosecuting parents for allowing their child to access 'information,' no matter what that information is, just sounds like asking for 1984-style insanity.

A good rule of thumb when creating laws: imagine someone with opposite political views from yours applying said law at their discretion (because it will happen at some point!).

Another good question to ask yourself: is this really a severe enough problem that government needs to apply authoritarian control via its monopoly on violence to try to solve? Or is it just something I'm abstractly worried about because some pseudo-intellectuals are doing media tours to try to sell books by inciting moral panic?

As with every generation who is constantly worried about what "kids these days" are up to, it's highly highly likely the kids will be fine.

The worrying is a good instinct, but when it becomes an irrational media hysteria (the phase we're in for the millennial generation who've had kids and are becoming their parents), it creates perverse incentives and leads to dumb outcomes.

The truth is the young are more adaptable than the old. It's the adults we need to worry about.

reply
rlpb
1 day ago
[-]
> Even the idea of prosecuting parents for allowing their child to access 'information,' no matter what that information is, just sounds like asking for 1984-style insanity.

This assumes an absolutist approach to enforcement, which I did not advocate and is not a fundamental part of my proposed solution. In any case, the law already has to make a subjective decision in non-technology areas. It would be no different here. Courts would be able to consider the surrounding context, and over time set precedents for what does and does not cross the bar in a way that society considers acceptable.

reply
raw_anon_1111
8 hours ago
[-]
And surprisingly when the law makes such decisions, it seems to affect little Jerome more than little Johnny.

You have way too much faith in the fairness of the court system.

reply
pembrook
1 day ago
[-]
But what if we didn't collectively spend $billions of dollars and hundreds of thousands of hours battling with money, lobbyists, lawyers, judges and political campaigns over what is largely a moral panic?

What could humanity do instead with all that time and resources?

I know the US is a nation built by lawyers, for lawyers, but this is both its best strength and worst weakness. Sometimes it's in everyone's best interest to accept the additional risks individually as opposed to bubble-wrapping everything in legislation and expanding the scope of the corrupt lawyer-industrial complex.

Maybe the lawyers could use the extra time fixing something actually important like healthcare or education instead.

reply
bena
1 day ago
[-]
I am a Russian proxy site, I make requests for you without the header. I serve you the content because I don't care about following American laws.

Alternatively, just use an older browser that doesn't serve the header.

If anything, you'd want the reverse. A header that serves as a disclaimer saying "I'm an adult, you can serve me anything" and then the host would only serve if the browser sends that header. And you'd have to turn it on through the settings/parental controls.

Now, this doesn't handle the proxy situation. You could still have a proxy site that served the request with the header for you, but there's not much you can do about that regardless.
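
A minimal sketch of that default-deny idea, assuming a hypothetical "Adult-Content-OK: 1" request header that a browser would only send once enabled through settings/parental controls (the header name and Flask usage are illustrative, not any existing standard):

    from flask import Flask, abort, request

    app = Flask(__name__)

    @app.route("/restricted")
    def restricted():
        # Default-deny: only serve age-restricted content when the browser has
        # explicitly opted in via the (hypothetical) adult header.
        if request.headers.get("Adult-Content-OK") != "1":
            abort(403)  # no opt-in header -> treat the client as a minor
        return "age-restricted content"

    if __name__ == "__main__":
        app.run()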

reply
rlpb
1 day ago
[-]
> I am a Russian proxy site, I make requests for you without the header. I serve you the content because I don't care about following American laws.

That's no different to a law mandating identification-based age verification though. A site in a different jurisdiction can ignore that just the same.

reply
bena
15 hours ago
[-]
Right. This isn't something we can completely solve with legislation or technology.
reply
hypeatei
1 day ago
[-]
Okay, so the HTTP header idea seems like it would have two issues:

1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?

2) It seems like it could be abused by fingerprinters, ad services, and even hostile websites that want to show inappropriate content to children.

reply
phantasmish
1 day ago
[-]
> 1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?

It's a client-side flag saying "treat this request as coming from a child (whatever that means to you)". I don't follow what the jurisdiction concern is.

[EDIT] Oooooh you mean if a child is legally 18 where the server is, but 16 where the client is. But the header could be un-set for a 5-year-old, too, so I don't think that much matters. The idea would be to empower parents to set a policy that flags requests from their kids as coming from a child. If they fail to do that, I suppose that'd be on them.

reply
hypeatei
1 day ago
[-]
The concern is that websites have no way to tell the actual age in this scenario, so you'd potentially be inconveniencing and/or blocking legitimate users (according to the server jurisdiction's rules).

It doesn't seem sufficient, and would probably lead to age verification laws anyway.

reply
embedding-shape
1 day ago
[-]
No, it doesn't seem like that would be a problem.

Say you're a parent with a child, living in country A, where someone becomes an adult at 18. Once the child is 18, they'll use their own devices/browsers/whatever, and the flag is no longer set. But before that, the flag is set.

Now it doesn't matter that in country B or country C the age of becoming an adult is 15 or 30. Because the flag is set locally on the client's device, all services need to do is block requests with the flag, and assume it's faithful. Then other parents in country B or country C set/unset the flag on their devices when it's appropriate.

No need to reveal actual ages, services get a way to say "this is not for children", and parents are still responsible for their own children. Sounds actually pretty OK to me.
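
A minimal sketch of the client side, assuming a hypothetical "Child: 1" header (not a real standard): a local intercepting proxy -- something a parent could run on a home gateway or the kid's device, written here as a mitmproxy addon -- tags every outgoing request, and participating sites simply refuse flagged requests.

    # child_flag.py -- run with `mitmdump -s child_flag.py` on the child's device/gateway.
    # "Child" is a hypothetical header name used for illustration only.
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Tag every outgoing request as coming from a child; sites that opt in
        # can then decline to serve age-restricted content to flagged requests.
        flow.request.headers["Child"] = "1"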

reply
addaon
1 day ago
[-]
Except that if you're in country B, which has a law that says "you may not make information available to children that discloses that Santa Claus is made up," and the age of becoming an adult in your country is 18 -- knowing that a person accessing your site from country A is an adult in country A (which means, say, ≥ 16) is not sufficient to comply with the law.
reply
quailfarmer
21 hours ago
[-]
I’m not sure why the age of majority in the region of the server would be relevant. The user is not traveling to that region, the laws protecting them should be the laws in their own region.
reply
addaon
11 hours ago
[-]
> why

> should

I don't know if "should" is intended as a moral statement or a regulatory statement, but it's not at all unusual for server operators to need to comply with laws in the country in which they are operating…

reply
rlpb
1 day ago
[-]
> 1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?

So namespace it then. "I'm a child as defined by the $country_code government". It's no more of a challenge than what identity-based age verification already needs to do.

> 2) It seems like it could be abused by fingerprinters, ad services, and even hostile websites that want to show inappropriate content to children.

This is still strictly better than identity-based age verification. Hostile or illegal sites can already do this anyway. Adding a single boolean flag which a large proportion of users are expected to have set isn't adding any significant fingerprinting information.

reply
Nevermark
1 hour ago
[-]
Why don't we have zero-knowledge age verification?

One actor verifies ages - and they only need to do so once. A site gives the user a key tied to their account; the user runs it by their verifier, who returns another key attesting to their verified age, encoded for that specific site, which the user hands back to the site.

The site doesn't learn anything about the user beyond their login info. The verifier doesn't know anything about what sites are being visited.

This would seem to address the issues, without creating the pervasive privacy and security problems of every age-verifying site building a database of people's government IDs, faces, and other personal information.

It also seems like a way out of the legal/legislative battle. Which otherwise is going to be an immortal hydra.

I would trust the EFF to run something like this. Open source. With only one-way encrypted/hashed personal info stored at their end.

(I am not a cryptographic expert. But I believe mechanisms like this are straightforward stuff at this point.)
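
A rough sketch of that flow under loose assumptions -- a signed token rather than a true zero-knowledge proof, with Ed25519 and every name purely illustrative: the site hands the user a random nonce, the user has the verifier sign it together with an "over 18" claim, and the site checks the signature against the verifier's public key. The site never sees identity documents; the verifier never sees which site minted the nonce.

    # Illustrative only -- not a vetted protocol. Uses the `cryptography` package.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Verifier (e.g. an open-source, EFF-style service): checked the user's age once.
    verifier_key = Ed25519PrivateKey.generate()
    verifier_pub = verifier_key.public_key()

    def verifier_attest(nonce: bytes) -> bytes:
        # Signs an opaque nonce; the verifier never learns which site produced it.
        return verifier_key.sign(nonce + b"|over_18")

    # Site: knows only the user's login, never their identity documents.
    nonce = os.urandom(32)                # fresh per user/session
    attestation = verifier_attest(nonce)  # user relays the nonce to the verifier
    verifier_pub.verify(attestation, nonce + b"|over_18")  # raises if forged
    print("age attested without identity disclosure")

The obvious gap, raised elsewhere in the thread, is collusion: if the verifier logs nonces and the site shares them, the two can link users to sites, which is what blind signatures or a genuine ZKP would be needed to prevent.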

reply
vbs_redlof
52 minutes ago
[-]
Because it's not about age verification, it's about setting up infrastructure to enable incremental encroachment on privacy.

Fun fact: many ZK identity solutions run centralized provers and can be subpoenaed. Need to use something that generates proofs client-side.

reply
zmmmmm
8 hours ago
[-]
I feel like the EFF has stretched a bit far on this one. They need to be advocating for good solutions, not portraying age verification as fundamentally about surveillance and censorship.

As many are pointing out, zero-knowledge proofs exist and resolve most of the issues they are referring to. And it doesn't have to be complex. A service provided by a government (or bank, or anybody that has an actual reason to know your identity) that mints a verifiable one-time code the user can plug into a web site is very simple and probably sufficient. Pretty standard PKI can do it.

The real battle to lose here is letting it become normalised to upload your actual identity to random web sites. Or worse, governments having to know what web sites you are going to. That's what needs to be fought against.

reply
quitit
8 hours ago
[-]
The overwhelmingly dichotomous portrayals in this debate give me pause, because there are entities who benefit from both sides of it, but neither would benefit from a sensible privacy-preserving solution.

So instead of advocating for those sensible and workable solutions, the discussions always centre on either blocking any attempt at reform while hyperventilating about vague authoritarianism, or on a similarly vague need to protect the innocent.

Meanwhile in the world of smartphone data providers, social media networks, and the meta/googles of the world: they all know your personal information and identity up to the wazoo - and have far more information on every one of you than what is possessed by your own governments (well except for the governments that are also buying up that data.)

So let me be clear, the gate is open, the horse has bolted - recapturing your privacy is where attention should be focused in this debate... even if it's bad for shareholders.

reply
Seattle3503
7 hours ago
[-]
> Meanwhile in the world of smartphone data providers, social media networks, and the meta/googles of the world: they all know your personal information and identity up to the wazoo - and have far more information on every one of you than what is possessed by your own governments (well except for the governments that are also buying up that data.)

This is where I'm concerned too. We are seeing a proliferation of third party verification services that I have to interact with and that have no real obligations to citizens, because their customer is the website.

I'd like to see governments step in as semi-trusted third parties to provide primitives that allow us to bootstrap some sort of anonymous verification system. By semi-trusted, I mean trusted to provide attestations like "This person is a US citizen over the age of 18" but not necessarily trusted with an access log of all our websites.

reply
akersten
7 hours ago
[-]
> They need to be advocating for good solutions, not

No, fighting back against horrible proposals does not require suggesting an alternative proposal to the alleged problem. That only serves to benefit the malicious actors proposing the bad thing in the first place, in the hope that we'll settle on something Not As Bad.

Thank god for the EFF and their everlasting fight to stop these nonsense internet laws. I'm glad they don't waste their time on "well how about this" solutions. The middle ground will never be enough for the proponents of surveillance, and will always be an incremental loss for the victims.

reply
stvltvs
7 hours ago
[-]
What good solutions are there that prevent the age verification service and the website from comparing notes (because Big Brother told them to) and figuring out who you are and what you're doing?
reply
zmmmmm
7 hours ago
[-]
If they voluntarily collude then yes, you can't avoid that. It's like third party cookies - once two parties collude it's game over. But that just outlines a situation where the user's chosen trusted service is hostile to their interests and they need to find one that isn't.

If Big Brother starts mandating the collusion - then yes, there's a hill to die on. But in some ways that's the point here. There are hills to die on - this just isn't it. And if you pick the wrong hill then you already died so you are losing the ones that really mattered. If the EFF pointed out to everyone that there is a privacy preserving answer to the core issue that is driving this, they could then mount a strong defense for the part that is truly problematic, since it isn't actually required to solve the problem.

reply
pseudalopex
6 hours ago
[-]
> If they voluntarily collude then yes, you can't avoid that.

You may accept this. Others will not.

> But that just outlines a situation where the user's chosen trusted service is hostile to their interests and they need to find one that isn't.

Just?

reply
Seattle3503
7 hours ago
[-]
This is only hypothetical for government IDs, but in theory they could provide pairwise pseudonymous identifiers to services. Your ID with a single service is stable, but it is different with each service.
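
A minimal sketch of how such pairwise identifiers could be derived, assuming a hypothetical secret held only by the ID issuer: the same citizen maps to a stable value at any one service, but the values are unlinkable across services.

    import hashlib, hmac

    ISSUER_SECRET = b"held only by the ID issuer"  # hypothetical key material

    def pairwise_id(citizen_id: str, service_domain: str) -> str:
        # Stable for a given (citizen, service) pair; two services cannot join
        # their records on this identifier alone.
        msg = f"{citizen_id}|{service_domain}".encode()
        return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()

    print(pairwise_id("alice", "example-forum.com"))
    print(pairwise_id("alice", "example-shop.com"))  # unrelated to the first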
reply
pseudalopex
6 hours ago
[-]
They imagined a scenario where the state ordered 2 companies to identify users. How would replacing 1 company with the state improve this?
reply
Seattle3503
28 minutes ago
[-]
What would the state force you to do in this case?
reply
raw_anon_1111
8 hours ago
[-]
Age verification is about government overreach, surveillance, and censorship. That's it.
reply
atonse
8 hours ago
[-]
Yep this is the first time I've disagreed with the EFF on anything civil liberties related.

My view is that there's no reason why we can't come together and come up with a rating system for websites (through HTTP headers, there are already a couple proposals, the RTA header and another W3C proposal).

Once a website just sends a header saying this is adult only content, what YOU as a user do with it is up to you. You could restrict it at the OS level (which is another thing we ALREADY have).

This would match the current system, which allows households to set their devices to block whatever they want, and the devices get metadata from the content producers.

No ID checks needed.
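
A minimal sketch of that split, assuming the widely used RTA self-label (commonly exposed as a "Rating" header or a meta tag) and a household-side filter the user opts into; the site only declares what it is, and the blocking decision stays local.

    # Household-side check: the site labels itself; blocking is a local choice.
    import urllib.request

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the common RTA self-label string

    def allowed_for_child(url: str) -> bool:
        with urllib.request.urlopen(url) as resp:
            rating_header = resp.headers.get("Rating", "")
            head = resp.read(4096).decode("utf-8", errors="ignore")
        # Block if the site declares itself adult-only via header or meta tag.
        return RTA_LABEL not in rating_header and RTA_LABEL not in head

    print(allowed_for_child("https://example.com"))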

reply
casey2
7 hours ago
[-]
The reality is that even countries that have digital IDs, like Belgium -- which would be one of the many prerequisites for implementing such a zero-knowledge system -- are pushing for surveillance-heavy legislation right now.

Once a system is in place that infringes on rights nobody will modify it to give citizens more rights.

reply
mikece
1 day ago
[-]
Any time law-makers claim that a law is meant to protect children you can guarantee that the safety of children had almost nothing to do with it. This is all a push to normalize digital ID (to protect the children!); once normalized it will become mandatory.
reply
no_wizard
1 day ago
[-]
I always ask myself who wins with these laws (well, any law really). So far, the only winners seem to be the government and data collectors. It seems these laws are intended to collect leverage in the long run.
reply
kagrenac
1 day ago
[-]
The internet, with verifiable identities, is the greatest system to collect kompromat that one could ask for.
reply
guilamu
1 day ago
[-]
Well, you just answered your own question brilliantly. You nailed it.
reply
no_wizard
1 day ago
[-]
Leaving room for someone to give me convincing evidence to the contrary. I didn't expect any, though.

It also leaves room for someone who knows more than I do to elaborate in more depth.

reply
guilamu
16 hours ago
[-]
Agreed, I just wanted to say I agree with your sentiment.
reply
AuthAuth
1 day ago
[-]
Parents? Children? Schools?

I'd argue that this is negligible for data collectors and governments. Governments already know who you are and what sites you visit for 99.99% of the population. Data collectors already know who you are and have a pretty good idea of the sites you visit.

What unique information is this going to give the government and data collectors to abuse? Let's establish one case that both affects average people and is "bad", and not waste time discussing things that only affect a tiny minority of privacy-minded people.

Keep in mind the law states a platform must provide multiple ways to reasonably verify a user is older than 16. No mention of giving the specific user age or requiring government ID.

reply
knallfrosch
8 hours ago
[-]
When they made smoke alarms mandatory in schools, it was only for selling smoke alarms! /s
reply
Aloisius
8 hours ago
[-]
I'm just waiting for governments to start requiring OS makers to verify identity on consumer phone/laptop/console devices before you can use them.

After all, they can legitimately claim it solves many of the issues with other verification schemes - no need to trust third party sites or apps, lower risk of phishing, easier to implement internationally and with foreign nationals, etc.

Of course, the downside (for individuals) is it would take just one legal tweak or pressure from the government to destroy anonymity for good.

reply
throwaway198846
9 hours ago
[-]
Why don't they use zero-knowledge proofs? Also, a question for the US constitution experts: is this considered a violation of free speech? The article is not clear on this.
reply
alistairSH
9 hours ago
[-]
"Free Speech" in the American legal sense (1st Amendment to the Constitution) applies to government prohibition on speech, with a particular emphasis on political speech.

It doesn't prevent one person from prohibiting speech... I can tell a pastor to stop preaching on my lawn. But, the government cannot tell a pastor not to preach in the publicly-owned town square (generally, there are exceptions).

There are arguments that certain online forums are effectively "town squares in the internet age" (Twitter in particular, at least pre-Musk). But, I always found that analogy to fall apart - twitter (or whatever online forum) is more like an op-ed section in a newspaper, IMO. And newspapers don't have to publish every op-ed that gets submitted.

Also, the 1st Amendment does not protect you from the consequences of your speech. I can call my boss an asshole to his face legally - and he can fire me (generally, there are labor protections and exceptions).

reply
davorak
9 hours ago
[-]
> Why don't they use zero-knowledge proofs?

Some proposed implementations do this. Without any verification requirement there is no chance of your ID or age being leaked; with a zero-knowledge proof there is a chance of a leak, but it can be made small, potentially arbitrarily so. Other implementations come with larger risks.

reply
perihelions
9 hours ago
[-]
> "is this considered a violation of free speech?"

There were major Supreme Court rulings on the topic recently, see

https://news.ycombinator.com/item?id=44397799 ("US Supreme Court Upholds Texas Porn ID Law (wired.com)"—5 months ago, 212 comments)

https://en.wikipedia.org/wiki/Free_Speech_Coalition_v._Paxto...

reply
rockskon
9 hours ago
[-]
Zero-knowledge proofs are either trivially defeated by re-using the same credentials or don't have useful privacy guarantees. There really isn't an in-between here for something like age verification.
reply
vilhelm_s
9 hours ago
[-]
The idea is that e.g. the government would give you an app that lives on your phone. When you apply for the app you provide some documents to prove your age, but you don't say anything about what sites you plan to visit. When you want to visit an age-restricted site you use the app to generate a proof that you're of age, but the site doesn't learn anything more than that, and the government doesn't learn that you used the app.
reply
raw_anon_1111
8 hours ago
[-]
> the government would give you an app that lives on your phone

And you don’t see a problem with this part?

reply
zmmmmm
8 hours ago
[-]
It's funny because the same "perfect is the enemy of good" argument is used both to criticize age verification in the first place (why bother if it isn't perfect) but then also to dismiss proposals to implement it better (why bother if they don't perfectly fix the problem).
reply
Aloisius
8 hours ago
[-]
No. It's mostly that the proposed age verification schemes have fundamental problems that disqualify them from being considered "good" and none of the "better" implementations fix those problems at all.
reply
rockskon
7 hours ago
[-]
The problem is that it isn't even good. It falls squarely in the realm of "we must do something. This is something. Therefore we must do it."
reply
nostrademons
9 hours ago
[-]
Age verification in general is not intended to defend against people lying or using stolen credentials. If you’re 13 but know the password to your dead grandpa’s account and the website in question has no idea he’s dead, there’s no way to defend against that, with or without a ZKP.

What the ZKP does is let you limit the information the site collects to the single fact of whether you are over 18, and nothing else. It's an application of the principle of least privilege. It lets you give the website that one fact without revealing your name, birthdate, address, browsing history, and all your other private data.

reply
rockskon
7 hours ago
[-]
What prevents one kid in a friend group or in a school from sharing the same identifier?

After all - if it doesn't share anything other than a guarantee of the "age" of someone who is authenticating with the website then how would the website know there's re-use of identifiers?

reply
Aloisius
9 hours ago
[-]
- If I can do a zero knowledge proof once per day against someone who is under age, I can eventually determine their birthday.

- If I can do a zero knowledge proof with an arbitrary age, I can eventually determine anyone's birthday (see the sketch at the end of this comment).

- If the only time people need to verify their age is to visit some site that they'd rather not anyone know they visit, and that verification requires showing identity - even if it's 100% secure, a good share of people will balk simply because they do not believe it is secure, creating a chilling effect on speech.

- If the site that verifies identity is only required for porn, then it has a list of every single person who views porn. If the site that verifies identity is contacted every time age has to be re-registered, then it knows how often people view porn.

- If the site that verifies identity is a simple website and the population has been trained that uploading identity documents is totally normal, then you open yourself up to phishing attacks.

- If the site that verifies identity is not secure or keeps records, then anyone can have the list (via subpoena or hacking).

- If the protocol ever exchanges any unique identifier from the site that verifies your identity and the site that verifies identity keeps records, then one may piece together, via subpoena (or government espionage, hacking) every site you visit.

Frankly, the fact that everyone promoting these systems hasn't admitted there are any potential security risks should be like an air raid siren going off in people's heads.

And at the end of all of this, none of it will prevent access to a child. Between VPNs, sharing accounts, getting older siblings/friends to do age verification for them, sites in jurisdictions that simply don't care, the darkweb, copying the token/cert/whatever from someone else, proxying age verification requests to an older sibling/rando, etc. there are way, way too many ways around it.

So one must ask, why does taking all this risk for so little reward make any sense?
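
To make the second bullet concrete, here's a sketch of how repeated "are you older than X?" answers collapse into an exact birthdate by binary search -- around sixteen queries if the verifier will answer for arbitrary thresholds (the oracle function below stands in for whatever verification endpoint is being abused):

    from datetime import date, timedelta

    def is_over(birthdate: date, threshold_days: int, today: date) -> bool:
        # Stand-in for the verifier: only ever answers "older than X days?"
        return (today - birthdate).days >= threshold_days

    def recover_birthdate(today: date, oracle) -> date:
        lo, hi = 0, 120 * 366          # search the age in days, 0..~120 years
        while lo < hi:                  # each iteration is one verification call
            mid = (lo + hi + 1) // 2
            if oracle(mid):
                lo = mid                # at least `mid` days old
            else:
                hi = mid - 1
        return today - timedelta(days=lo)

    today = date(2025, 12, 10)
    secret = date(2009, 4, 2)
    print(recover_birthdate(today, lambda d: is_over(secret, d, today)))
    # -> 2009-04-02, recovered from ~16 yes/no answers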

reply
raverbashing
9 hours ago
[-]
> is this considered a violation of free speech?

Not in principle

See the limits on curse words on TV. Or MPAA ratings for movies.

reply
perihelions
9 hours ago
[-]
> "MPAA ratings for movies"

(IANAL) That demonstrates the opposite: that's a voluntary system with no force of law behind it—the private sector "self-regulating" itself, if you will.

The film rating systems were created under threat of legislation in the first half of the 20th century (so, in lieu of actual legislation). The transformative 1st Amendment rulings of the Warren Court would have made such laws unconstitutional after the 1960's, but the dynamic that created these codes predates that—predates the modern judicial interpretation of the 1st Amendment.

https://en.wikipedia.org/wiki/Hays_Code (history background)

https://en.wikipedia.org/wiki/Motion_Picture_Association_fil... ("The MPA rating system is a voluntary scheme that is not enforced by law")

reply
raw_anon_1111
8 hours ago
[-]
There is only a limit on curse words on over-the-air TV, under the theory that the airwaves belong to the public.
reply
imiric
9 hours ago
[-]
Because safeguarding user privacy is not a goal. Scoring political points with "think of the children" agendas, while getting kickbacks from companies salivating at the opportunity to gather even more personal data, is.
reply
neuroelectron
9 hours ago
[-]
Onlyfans is legal prostitution so we need to protect that. Better to regulate the entire internet by taking your rights than to question why it's allowed.
reply
dragonwriter
9 hours ago
[-]
> Onlyfans is legal prostitution

No, its legal (in some jurisdictions) pornography. Prostitution on the platform, as well as whatever the legal status is in the set of jurisdictions involved, is also, from what I understand, explicitly against the platform ToS.

reply
BobaFloutist
7 hours ago
[-]
I will say that it's a weird legal distinction in many states that paying someone to have sex is illegal unlessss.... you record it and sell the recording. Then it's legal.
reply
imiric
9 hours ago
[-]
Way to split hairs. Something being against the ToS can still be legal.

Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.

reply
dragonwriter
9 hours ago
[-]
> Way to split hairs. Something being against the ToS can still be legal.

Well, no, violating a binding legal agreement is illegal.

> Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.

Which is explicitly prohibited by the law in many places OF operates, and judging from the number of people who are creators on the platform I've seen complaining about people jeopardizing their status with the platform by soliciting it on the platform, also by the actively-enforced terms of the platform. OF is simply not "legal prostitution", and it is ridiculous to describe it that way.

reply
jacobgkau
6 hours ago
[-]
> Well, no, violating a binding legal agreement is illegal.

Not touching the rest of this thread's arguments, but that isn't really true. Breaking ToS, or any other contract, is not "illegal"-- it's not a crime. It opens you up to civil (not criminal) penalties if the other party sues, but that's it.

reply
pseudalopex
5 hours ago
[-]
Illegal means not legal. Not criminal.
reply
paulvnickerson
8 hours ago
[-]
What they should do instead is invest in technology that can do age verification while protecting privacy. This is obviously a required piece of technology. It is not acceptable for children to grow up on the Internet and easily access pornography by simply going to a website. Imagine letting your children loose in a city where they can wander in and out of peep shows without friction.
reply
GuB-42
8 hours ago
[-]
While the "required piece of technology" aspect is debatable, there is certainly enough demand for it that it is going to happen in one way or another.

So I agree that instead of fighting some change that I think is inevitable, they should make it so that it works in the most privacy-conscious way possible. And I mean with real technical solutions, like an open-source app or browser extension you can download, a proof-of-concept server for age verification, etc... using the best crypto has to offer.

reply
maerF0x0
5 hours ago
[-]
It's funny to me that porn does not have more age verification than just "Yes, I promise I'm 18+" (or whatever).

If I go to the liquor store, I can't just promise "I'm 21+"

If I go to vote, I can't just promise I'm 16+

If I go to buy a lottery ticket I can't just promise...

I think I made my point.

reply
Spivak
2 hours ago
[-]
And yet this has been the system for the last 35 years and somehow the sky didn't fall.

Either you're old enough to understand what porn is and have desires to consume it, in which case you won't be scarred by it and don't need protection from it, or you're not, in which case you won't seek it out. You need ID checks for alcohol because people too young to consume it want it, and given how much teens drink and how little of a problem lower drinking ages are in other countries, even that claim is somewhat dubious.

reply
giancarlostoro
1 day ago
[-]
Not to mention people lose accounts because someone reported them as underage, and now they don't want to fully dox themselves over this. Who can blame them, considering Discord's own support ticket system was hacked, which exposed people who had to validate their age.
reply
maerF0x0
5 hours ago
[-]
Besides Porn, what sites are doing this? It's odd to me that the EFF page does not mention "porn" or "adult content" at all.
reply
wkat4242
3 hours ago
[-]
Here in Europe they want to do the same for chat services like WhatsApp.
reply
WCSTombs
4 hours ago
[-]
YouTube.
reply
taeric
9 hours ago
[-]
I would be happy if we just moved to a way we could more realistically enable audits of information flow in our lives. I don't necessarily want to restrict my kids' consumption. It does worry me that I don't know how to teach them to audit all of the information that is being exposed to them. Or worse, collected about them.
reply
squigz
9 hours ago
[-]
I'm not entirely sure what you mean by 'audit', but teach them critical thinking, and show them the strategies the media uses to manipulate them. Teach them there's often more than 1 side to a story.

Things like this will give them a huge advantage in not being manipulated and lied to.

reply
taeric
9 hours ago
[-]
To explain it like budgeting. You can forward plan what you will spend money on. But you also need to be able to see where all of your money went. This is nigh impossible with data flow, nowadays.

I'd be comfortable with it having large segments of "uncategorized." But right now, if I scan over to my ISP to see how much data I have used for the month, I have little to no help in saying how much of that was what.

reply
squigz
9 hours ago
[-]
Ah okay. I think this would probably be pretty tricky, security-wise, no? One of my first thoughts that might help would be writing a simple tool that parses history from your browsers to categorize it. Other than that, there are things like https://activitywatch.net/ (which seems to have a desktop and Android version)
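
For the browser-history idea, a minimal sketch against a copy of Chrome's History file (an SQLite database; the `urls` table layout used here is the commonly documented one, so treat the schema and path as assumptions):

    import sqlite3
    from collections import Counter
    from urllib.parse import urlparse

    # Work on a copy, e.g. of ~/.config/google-chrome/Default/History,
    # since Chrome keeps the live database locked while it's running.
    db = sqlite3.connect("History_copy")
    domains = Counter()
    for url, visits in db.execute("SELECT url, visit_count FROM urls"):
        domains[urlparse(url).netloc] += visits

    for domain, count in domains.most_common(20):
        print(f"{count:6d}  {domain}")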
reply
taeric
8 hours ago
[-]
Yeah, just writing out the idea, I would imagine I should be able to see a lot of this with my router?

Again, I get that there will be a lot I have to write off as "uncategorized." I'm not even trying to drive all telemetry down to zero. I'm comfortable knowing that my HVAC may send diagnostic stuff in, as an example. But it seems kind of crazy to me that this is not something that is often discussed? Do I just miss those discussions?

reply
rolph
10 hours ago
[-]
back in the day the worst thing you could do in a blog or channel was to self identify as female, as you would get flooded

i am a child header = i am verifying myself as valid target header

has anyone realized that whatever at all the "good" guys do, the "bad" guys will abuse it.

we need canaries [bots with child header], to get a metric on any increase of attempted crimes vs a child.

reply
RGamma
7 hours ago
[-]
ITT: HN discussing whether and how to pull up the ladder on free youth.
reply
socalgal2
9 hours ago
[-]
That is an extremely poor title. Reading it I'd expect the average person to be like "yea, it's about time" and skip the article.
reply
luckys
9 hours ago
[-]
The end goal of this line of thinking is tracking every molecule in the universe. Exaggerated, I know, but we're moving in that direction.
reply
alkindiffie
1 day ago
[-]
Would be great if EFF also sets up a phone verification hub.

https://news.ycombinator.com/item?id=45989890

reply
H1Supreme
8 hours ago
[-]
Generally speaking, I share the HN consensus on age verification laws. But there is a real problem with kids' unfettered internet access. Just think about all the adults who are hopelessly addicted to social media. The negative effects are amplified when it comes to developing minds.

My SO has been teaching for nearly 20 years now, and mental health in kids has fallen off a cliff in the last two decades. I could fill this page with online bullying stories, some of which are especially cruel. Half her students are on medication for anxiety. It's out of control, honestly.

That said, I don't know how to solve it. It's easy to put this on the parents, but that's not the answer. Otherwise, it would be solved already. Some don't care. Some don't have the time to care because they're trying to keep the lights on, and dinner on the table. And, some simply think it doesn't apply to them or their children. Parents on HN are hyper-aware of this sort of thing, but that's definitely the minority.

I know a family that would be most folks' least likely candidate for something bad to happen online. Single income, relatively well off, the parent at home has an eye on the kids 24/7. And if you met the kids, you would most likely describe them as "good kids". Without going into detail, their life was turned upside down because one of the kids was "joking around" online.

Again, I don't know what the answer to the problem is. Clearly, age verification laws are a veiled attempt to both collect and control data. And, EFF's emphasis on advertising restrictions as a solution, seems off the mark. There's more to it than that. Idk, this shit makes me want to log off permanently, and pretend it's 1992.

reply
WCSTombs
4 hours ago
[-]
> It's easy to put this on the parents, but that's not the answer. Otherwise, it would be solved already.

I would argue it must be part of the answer, if it isn't literally the answer. You even kind of hit the nail on the head later in that paragraph:

> Parents on HN are hyper-aware of this sort of thing, but that's definitely the minority.

I would start there. Spreading awareness and social pressure is a tractable problem.

reply
forshaper
8 hours ago
[-]
Whose fault is it when a child burns their hand on the stove?
reply
iamnothere
3 hours ago
[-]
We need common sense stove control. Verify your age before using the burner please.
reply
bobajeff
1 day ago
[-]
I wonder what the psychological effect of having little or no privacy would be on people. Are we all going to be paranoid schizophrenics? How would a world of paranoid schizophrenics work? How insane are world events going to be from that point on?
reply
pyuser583
1 day ago
[-]
You think you have privacy?

At best, you go back and forth between no privacy and a heavily conditioned privacy. At best.

Let’s take privacy back, but that’s a big process.

If you haven’t internalized surveillance, start working on it!

reply
technothrasher
1 day ago
[-]
> Are we all going to be paranoid schizophrenics?

Paranoid, maybe. Schizophrenics? No. Firstly, "paranoid schizophrenia" is an outdated diagnosis. Paranoia is a common symptom of schizophrenia, but schizophrenics exhibiting paranoia are not considered to have a separate mental illness from those who are not. Secondly, schizophrenia is not caused simply by psychological stress, and is associated with a large cluster of positive and negative symptoms, with paranoia being only one of them.

reply
burnt-resistor
9 hours ago
[-]
China is an example of this. Somewhere that, according to the UN's data, executed "undesirable" people with such gusto that it incidentally drove the organ donor waitlist time so low that it couldn't be explained by any other factor.

"Perfect" security is only attainable with zero dissent, zero individuality, zero privacy, and zero freedom.

reply
SapporoChris
9 hours ago
[-]
"Involuntary organ harvesting[3][4][5] was once legal on criminals, but outlawed in 2015"

https://en.wikipedia.org/wiki/Organ_transplantation_in_China

reply
EGreg
3 hours ago
[-]
Why is this so difficult?

Verifiable Credentials exist on the Web.

Drivers’ licenses exist.

Just show that you are over 18 in a zero knowledge way and be done with it. Why do they need to see your IDs?

reply
wkat4242
3 hours ago
[-]
Because I don't want to prove myself in everything I do online?
reply
k310
7 hours ago
[-]
As I have stated before, AI is freeing us to:

1. create our own porn at home and (soon)

2. have home orgasmatrons.

Parents have complete control of the Chat/Porn server and since the orgasmatron necessarily has all your desires stored in its LLM (Large Lust Model) it trivially knows your age and will lock you out.

And internet porn can be banned regardless of age. (that's only half sarcastically said).

Demand for home Large Lust Models and orgasmatrons will soar. You heard it here first. Opportunity for entrepreneurs. And these home-based products are the only way to keep porn away from kids (if parents don't care now, they never will) and to maintain privacy on the internet.

Every place where I've worked in I.T., the rule was "No porn downloading at work. Porn belongs in the home." (especially in the days of slow home modems)

And to be really enforceable, all offshore sites would have to agree to the scheme, including certain Russian ones who are glad to pollute our children's and adults' minds with porn, propaganda and conspiracy theories.

Lastly: There always was and will be media. Micro-SD cards now? If not phones, thrift store picture frames and RPi's. "Porn finds a way."

reply
dvh
1 day ago
[-]
This gives me Leisure Suit Larry flashbacks
reply
blitzar
1 day ago
[-]
Ken sent me
reply
kingforaday
1 day ago
[-]
LSL4 was my favorite.
reply
cvoss
9 hours ago
[-]
> we must fight back to protect the internet that we know and love.

This is not compelling. The internet I know and love has been dying for a long time for unrelated reasons. The new internet that is replacing that one is an internet that I very much do not love and would be totally ok to see lots of it get harder to access.

reply
futuraperdita
9 hours ago
[-]
What parts and content should be "harder to access" in your view?
reply
Avicebron
9 hours ago
[-]
The parts where traffic generates money for the kind of people who would think putting an advertisement on a screen on someone's home refrigerator is an acceptable thing to do (morally, not legally or whatever).

Extrapolate that how you will.

reply
2OEH8eoCRo0
6 hours ago
[-]
Is the EFF captured? This is a resource against misguided laws, but what's a law they'd actually approve of? This entire resource is a boring defense of the status quo.
reply
micromacrofoot
7 hours ago
[-]
Asking for a year of birth is the best solution and always will be. Once kids are old enough to figure that out you're not going to stop them from much.
reply
1vuio0pswjnm7
1 day ago
[-]
"SAN FRANCISCO-With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help"

The surveillance and censorship system is built, administered and maintained by Silicon Valley companies who have adopted this as their "business model": "monetising" surveillance of other people's noncommercial internet use

These Silicon Valley companies have been surveilling internet subscribers for over a decade, relentlessly connecting online identity to offline identity, hell bent on knowing who is accessing what webpage on what website, where they live, what they are interested in, and so on, building detailed advertising profiles (including the age of the ad target) tied to IP addresses, then selling the subscribers out to advertisers and collecting obscene profits (and killing media organisations that hire journalists in the process)

Now these companies are being forced to share some of the data they collect and store

Gosh, who would have foreseen such an outcome

These laws are targeting the Silicon Valley companies, not internet subscribers

But the companies want to spin it as an attack on subscribers

The truth is the companies have been attacking subscriber privacy and attempting to gatekeep internet publication^1 for over a decade, in the name of advertising and obscene profits

1. Discourage subscribers from publishing websites and encourage them to create pages on the company's website instead. Centralise internet publication, collect data, perform surveillance and serve advertisements

reply
rixed
1 day ago
[-]
It was bad already, so who cares if that gets worse? Is that the message?

Silicon Valley uses that information to sell ads, and sometimes votes. Not great, but I can imagine much worse from a State.

reply
josefritzishere
10 hours ago
[-]
We must destroy all freedom and forsake all right to free speech and privacy... for the children!
reply
hackingonempty
1 day ago
[-]
I am disappointed to find no mention of zero-knowledge proofs or any other indication that we won't have to trust anyone with this task.

We have the technology to do age verification without revealing any more information to the site and without the verification authority finding out what sites we are browsing. However, most people are ignorant of it.

If we don't push for the use of privacy-preserving technology we won't get it, and we will get more tracking. You cannot defeat age verification on the internet; age verification is already a feature of our culture. The only way out is to ensure that privacy-preserving technologies are mandated.

reply
wiredpancake
1 day ago
[-]
Everyone, including politicians, is intimately aware of Zero Knowledge Proofs.

Google even open-sourced technology to enable it: https://blog.google/technology/safety-security/opening-up-ze...

The politicians don't want Zero Knowledge Proof because it prevents the mass-surveillance of internet users. This is all deliberate.

reply
segmondy
10 hours ago
[-]
How are you going to verify the age of someone coming in from another country?
reply
advisedwang
9 hours ago
[-]
Realistically, all but the largest sites are going to contract out age verification to third parties. There will probably be verification companies offering a wide range of verification methods.
reply
Hizonner
9 hours ago
[-]
There already are, and have been for a while. And, yes, of course, they've been involved in lobbying for the requirements.
reply
Pxtl
9 hours ago
[-]
Infuriating that we get all the bad sides of digital ID without the good sides.

It's deanonymizing and intrusive and mandatory for sites to implement without protecting them from sockpuppets and foreign troll farms.

reply
orwin
1 day ago
[-]
I think sadly, this is a lost battle in public opinion. And the gambling of digital assets on Roblox and other casino-like websites is also starting to get public attention, and will turn public opinion further.

The CNIL gave up 3 years ago, and gave guidelines, you can read about it here [0]. At the time it read like "Oh well, we tried, we said it is incompatible with privacy and the GDPR multiple times, we insist one more time that giving tools to parents is the only privacy-safe solution despite obvious problems, but since your fucking law will pass, the best we can do is to draw guidelines, and present solutions and how to implement them correctly".

I think the EFF should do the same. That's just how it is. Define solutions you'll agree with. Fight the fight on chat control and other stuff where public opinion can be changed; this one is too late, and honestly, if it's done well, it might be fine.

If the first implementation is correct, we will have to fight to maintain the status quo, which in a conservative society is the easiest, especially when no other solution has been tested. If it's not, we will have to fight to make it correct, then fight to maintain it, and both are harder. The EFF should reluctantly agree and draft the technical solution themselves.

[0] https://www.cnil.fr/en/online-age-verification-balancing-pri...

reply
DeathArrow
10 hours ago
[-]
Like any wrong government initiative, mass surveillance is being justified by "think of the children" and "fighting the bad guys".
reply
devwastaken
15 hours ago
[-]
The net got too big, the 90% got in because of facebook and google, and automated bots took over from there.

Either we create the fix, or the feds take it over. we need to sever the idea of a global internet. per-country and allied nations only. anonymous cert-chain verified ID stored on device. problem fixed.

reply
motohagiography
9 hours ago
[-]
online age verification is disingenuous and a pretext to give governments the hard coded technical option to regulate speech and association.

there's a great game being played out by these users of force against the advocates of desire. everything about the bureaucracies pushing digital ID is unwanted. this isn't about age verification tech, it's about illegitimate power for unwanted people who are actuated by forcing their will on others.

we should treat these actions with the open disgust they deserve.

reply
fragmede
9 hours ago
[-]
* for the US internet. Internet access in India, even on coffee shop wifi, is already traceable to the ID of the user.
reply
Kozmik1
9 hours ago
[-]
How would internet access in a coffee shop be traced to the specific user?
reply
greenavocado
9 hours ago
[-]
In Switzerland you are forced to receive an SMS code to your phone on every portal in every public space everywhere to establish your identity on every network. No SMS = No public wifi anywhere in Switzerland.
reply
elashri
9 hours ago
[-]
The reason is that the law in Switzerland requires identification of the user of free internet services [1]. So it is not just common practice

[1] https://www.gva.ch/Site/Passagers/Shopping/Services/Business...

reply
Kozmik1
9 hours ago
[-]
That's a funny choice, I thought Europe was done with SMS. I can see this 1-to-1 mapping with other cellphone derived messaging like Whatsapp, etc being an issue for privacy but it's certainly possible to have multiple phones.
reply
pnw
9 hours ago
[-]
How would an SMS code sent to a phone number be traced to the specific user? Anonymous VOIP numbers are plentiful.
reply
afavour
9 hours ago
[-]
I imagine they would block anonymous VOIP numbers.
reply
withinrafael
9 hours ago
[-]
I believe cyber cafes in India must verify identity via ID before allowing internet access and maintain logs, browsing history, etc. for at least one year.
reply
stackedinserter
9 hours ago
[-]
I want this practice to remain in countries like India and Russia.
reply
ActorNightly
9 hours ago
[-]
Good. Let this version of internet be locked down and censored.

If people care enough, they will build a new internet.

reply
anthk
4 hours ago
[-]
Guifi.net and the rest of meshnets. Also, Yggdrasil. Not for anonymity, but availability.
reply
rich_sasha
10 hours ago
[-]
I understand this is a technology forum, frequented mostly by liberal adults who built a lot of their internet nous on the totally free internet of the 90s and 00s. I am one of them.

Equally, I think insisting that there must be no controls on internet access whatsoever is not right either. There is now plenty of evidence that e.g. social media are very harmful to teenagers - and frankly, at one point going on FB got me depressed every time, before I even noticed it. And as a parent, you realise how little control you have over your children's tech access. Case in point - my kids seem to have access to very poorly locked down iPads at school. I complained, but they frankly don't understand.

We all accept kids can't buy alcohol and cigarettes, even if that encroaches on their freedom. But of course flashing an ID when you're over 18 is not very privacy-invading.

Likewise, I think it is much better to discuss better means of effecting these access controls. As some comments here mention, there are e.g. zero knowledge proofs.

I'm sure I'll be told it's all a sham to collect data and it's not about kids. And maybe. But I care about kids not having access to TikTok and Pornhub. So I'd rather make the laws better than moan about how terrible it is to limit access to porn and dopamine shots.

reply
cwmoore
9 hours ago
[-]
That’s not the moan friend.
reply
vegadw
8 hours ago
[-]
You had me thinking "This is a reasonable argument even if I disagree" until the last line. That line is completely disingenuous toward the rest of the argument.
reply