For the folks in the back row:
Age Verification isn't about Kids or Censorship, It's about Surveillance
Age Verification isn't about Kids or Censorship, It's about Surveillance
Age Verification isn't about Kids or Censorship, It's about Surveillance
Without even reaching for my tinfoil hat, the strategy at work here is clear [0 1 2]. If we have to know that you're not a minor, then we also have to know who you are, so we can make any technique to obfuscate that illegal. By turning this from "keep an eye on your kids" into "prove you're not a kid," they've created the conditions to make privacy itself illegal.
VPNs are next. Then PGP. Then anything else that makes it hard for them to know who you are, what you say, and who you say it to.
Please, please don't fall into the trap of discussing whether or not this is going to be effective at protecting kids. It isn't, and that isn't the point.
0 https://www.eff.org/deeplinks/2025/11/lawmakers-want-ban-vpn...
1 https://www.techradar.com/vpn/vpn-privacy-security/vpn-usage...
2 https://hansard.parliament.uk/Lords/2025-09-15/debates/57714...
Telling that larger group that their interest simply isn't part of the conversation excludes _you_ from the conversation, rather than shifting its focus from their primary interest to the other downsides.
There are also, concerningly IMO, an extremely large number of people willing to accept severe surveillance or privacy downsides so long as it helps achieve the goal about kids. To them, the same argument in reverse would be "why are you talking about surveillance, the real issue is the kids. Say it 3 times loud, for those in the back!" and the conversation gets nowhere, because it's just people saying how they won't talk to anyone who disagrees about which concerns should be considered.
That is untrue
(This includes being robust against law enforcement action, legal or otherwise.)
When I say "if we have to know you're not a kid, we have to know who you are" I'm not stating an actual truth, but the argument as it is playing out politically.
The EU age verification solution says implementations SHOULD implement[1] their ZKP protocol[2]. Not linking it to the user is stated as an explicit goal:
Unlinkability: The goal of the solution is to prevent user profiling and tracking by avoiding linkable transactions. Initially, the solution will rely on batch issuance to protect users from colluding RPs. Zero-Knowledge Proof (ZKP) mechanisms will be considered to offer protection. More details are provided in Section 7.
[1]: https://ageverification.dev/av-doc-technical-specification/d...
[2]: https://ageverification.dev/av-doc-technical-specification/d...
Assuming that's even a goal, of course. The cited paragraph mentions RPs (the websites, from what I understand), but makes no mention of attestation providers.
In the non-ZKP presentation, the "holder" (phone) sends the credential to the relying party (website), and the RP executes some verification algorithm. In the ZK presentation, the holder executes the verification algorithm and sends to the RP a proof that the algorithm was executed correctly.
The "proof" has this magical property that it reveals nothing other than that the check passed. (You will have to take on faith that such proofs exist.) In particular, if the check was the predicate "I have a signature by ISSUER on HASH, and SHA256(DOCUMENT)==HASH, and DOCUMENT["age_gt_18"]=TRUE", anybody looking at the proof cannot infer ISSUER, HASH, DOCUMENT, or anything else, really. "Cannot infer" means that the proof is some random object and all HASH, DOCUMENT, ISSUER, etc. that satisfy the predicate are equally likely, assuming that the randomness used in the proof is private to the holder. Moreover, generating a proof uses fresh randomness each time, so given two proofs of the same statement, you still cannot tell whether they come from the same ISSUER, HASH, DOCUMENT, ...
So like, we've got this algorithm that gets sent our way and we run it and that provides kind of a cryptographic hash or whatever. But if we're running the algorithm ourselves what's to stop us from lying? Where does the 'proof' come from? What's the check that it's running and why do we inherently trust the source it's checking?
This is a simplified method for age verification:
I want to buy alcohol from my phone and need to prove I’m over 18. SickBooze.com asks me for proof by generating a request to assert “age >= 18”.
My phone signs this request with my own private key, and forwards it to the government server.
The government verifies my signature against a public key I previously submitted to them, checks my age data in their own register of residents, and finally signs the request with one of their private keys.
My phone receives the signed response and forwards it back to SickBooze.com, which can verify the government’s signature offline against a cached list of public keys. Now they can sell me alcohol.
- the “request” itself is anonymous and doesn’t contain any identifying information unless that is what you intended to verify
- the government doesn’t know what service I used, nor why I used it, they only know that I needed to verify an assertion about my age
- the web service I used doesn’t know my identity, they don’t even know my exact age, they just know that an assertion about being >= 18 is true.
Sadly, it's still hard to explain how exactly this works, but it's conceptually simpler than arbitrary ZKPs.
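The four steps above can be sketched in code. This is a toy model under loud assumptions, not any real protocol: HMAC with shared keys stands in for the asymmetric signatures a real deployment would use, and the names (SickBooze.com, the register contents) are purely illustrative.

```python
import hashlib
import hmac
import json
import os

# Toy model of the four-step flow above. HMAC with shared keys stands in
# for real asymmetric signatures, so "verify" just recomputes the tag.
USER_KEY = os.urandom(32)    # key material the user registered with the government
GOV_KEY = os.urandom(32)     # government signing key (public part cached by sites)
REGISTER = {"user-123": 42}  # government register of residents: id -> age

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# 1. SickBooze.com generates an anonymous request to assert "age >= 18".
request = json.dumps({"assert": "age >= 18", "nonce": os.urandom(16).hex()}).encode()

# 2. My phone signs the request and forwards it to the government server.
user_sig = sign(USER_KEY, request)

# 3. The government verifies my signature, checks its register, and signs
#    the request only if the assertion actually holds for me.
def government(user_id, req, sig):
    assert hmac.compare_digest(sig, sign(USER_KEY, req))
    return sign(GOV_KEY, req) if REGISTER[user_id] >= 18 else None

gov_sig = government("user-123", request, user_sig)

# 4. SickBooze.com verifies the government's signature against its cached key.
#    It learns only that "age >= 18" is true -- not who I am, or my exact age.
assert gov_sig is not None and hmac.compare_digest(gov_sig, sign(GOV_KEY, request))
print("assertion verified: age >= 18")
```

Note that the government sees the request but not where it came from or where the signed response goes, matching the privacy properties listed above.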
Here is a hopefully simple example of how this ZKP thing may even be possible. Imagine that you give me a Sudoku puzzle. I solve it, and then I want to prove to you that I have solved it without telling you the solution. It sounds impossible, but here is one way to do it.

I compute the solution. I randomly scramble the digits 1-9 and put the scrambled solution in a 9x9 array of lock boxes on a table. I have the keys to the 81 locks but I am not giving you the keys yet. You randomly ask me to open either 1) one random row chosen by you; 2) one random column chosen by you; 3) one random 3x3 block chosen by you; or 4) the cells corresponding to the original puzzle you posed to me. In total you have 28 possibilities, and assume that you choose among them with equal probability. You tell me what you want and I open the corresponding lock boxes. You verify that the opened lock boxes are consistent with me knowing a solution, e.g. all numbers in a row are distinct, the 3x3 block consists of distinct numbers, etc.

If I am cheating, then at least one of your 28 choices will be inconsistent, so you catch me with probability at least 1/28 per round. If we repeat this game 1000 times and I don't know the solution, you will catch me with probability at least 1-(27/28)^1000, which is effectively 1. However, every time we repeat the game, I pick a different random scrambling of the integers 1-9, so you don't learn anything about the solution.
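One round of this lockbox game can be written out in a few lines, shrunk to a 4x4 Sudoku variant so it stays short. SHA256 commitments play the role of the lock boxes; the solution grid, the permutation, and showing only the row-check branch of the challenge are all simplifications for illustration.

```python
import hashlib
import os
import random

# One round of the lockbox game on a 4x4 Sudoku variant (digits 1-4;
# rows, columns, and 2x2 blocks each distinct). SHA256 commitments play
# the role of the lock boxes. Only the row-check challenge is shown.
SOLUTION = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]

def commit(value):
    nonce = os.urandom(16)  # fresh key for this lock box
    return hashlib.sha256(nonce + bytes([value])).digest(), nonce

# Prover: pick a fresh scrambling of the digits, then lock every cell.
perm = {1: 3, 2: 1, 3: 4, 4: 2}  # new random relabeling each round
scrambled = [[perm[v] for v in row] for row in SOLUTION]
boxes = [[commit(v) for v in row] for row in scrambled]

# Verifier: sees only the digests, then asks to open one random row.
digests = [[digest for digest, _ in row] for row in boxes]
row = random.randrange(4)

# Prover: opens that row by revealing each cell's value and nonce.
opened = [(scrambled[row][c], boxes[row][c][1]) for c in range(4)]

# Verifier: the openings must match the commitments, and the row must
# hold distinct digits. The scrambling hides the real solution.
for c, (value, nonce) in enumerate(opened):
    assert hashlib.sha256(nonce + bytes([value])).digest() == digests[row][c]
assert sorted(value for value, _ in opened) == [1, 2, 3, 4]
print("row check passed; solution stays hidden")
```

Because the permutation is a bijection, any valid row stays distinct after scrambling, so an honest prover always passes, while the verifier sees only relabeled digits.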
All of ZKP is a fancy way to 1) encode arbitrary computations in this sort of protocol, and 2) amplify the probability of success via clever error-correction tricks.
The other thing you need to know is that the protocol I described requires interaction (I lock the boxes and you tell me which ones to open), but there is a way to remove the interaction. Observe that in the Sudoku game above, all you are doing is flipping random coins and sending them to me. Of course you cannot let me pick the random coins, but if we agree that the random coins are just the SHA256 hash of what I told you, or something else similarly unpredictable, then you will be convinced of the proof even if the "coins" are something that I compute myself by using SHA256. This is called the "Fiat-Shamir transformation".
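A minimal sketch of that transformation: derive the verifier's "coins" (a choice among the 28 checks in the Sudoku game) by hashing the prover's own commitments. The transcript bytes here are a stand-in for the concatenated lockbox digests.

```python
import hashlib

# Fiat-Shamir sketch: the verifier's random choice among 28 checks is
# replaced by a hash of the prover's commitments (stand-in bytes here).
transcript = b"lockbox-digest-1|lockbox-digest-2|..."

def challenge(t):
    # Anyone can recompute this, but the prover cannot steer it without
    # changing the very commitments it is derived from.
    return int.from_bytes(hashlib.sha256(t).digest(), "big") % 28

c = challenge(transcript)
assert 0 <= c < 28
assert challenge(transcript) == c  # deterministic: the verifier re-derives it
print("derived challenge:", c)
```

The proof then becomes non-interactive: the prover publishes the commitments, the hash-derived challenge, and the openings, and anyone can check all three offline.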
How do we implement the lock boxes? I tell you SHA256(NONCE, VALUE) where the NONCE is chosen by me. Given the hash you cannot compute VALUE. To open the lock box, I tell you NONCE and VALUE, which you believe under the assumption that I cannot find a collision in SHA256.
That's the bit I was missing! The prover pre-registers the scrambled solution, so they can't cheat by making up values that fit the constraints.
The currently favored approach works like this. The DOCUMENT contains a device public key DPK. The corresponding secret key is stored in some secure hardware on the phone, designed so that I (or malware or whatever) cannot extract the secret key from the secure hardware. Think of it as a yubikey or something, but embedded in the phone. Every presentation flow will demand that the secure element produce a signature of a random challenge from the RP under the secret key of the secure hardware. In the ZKP presentation, the ZKP prover produces a proof that this signature verifies correctly, without disclosing the secret key of the secure hardware.
In your example, the parent could give the phone to the kid. However, in current incarnations, the secure hardware refuses to generate a signature unless unlocked by some kind of biometric identification, e.g. fingerprint. The fingerprint never leaves the secure hardware.
How does the issuer (e.g. the republic of France) know that DOCUMENT is bound to a given fingerprint? This is still under discussion, but as a first bid, a French citizen goes to city hall with his phone and obtains DOCUMENT after producing a fingerprint on the citizen's phone (as opposed to a device belonging to the republic of France). You can imagine other mechanisms based on physical tokens (yubikeys or embedded chips in credit cards, or whatever). Other proposals involve taking pictures compared against a picture stored in DOCUMENT. As always, one needs to be clear about the threat model.
In all these proposals the biometric identification unlocks the secure hardware into signing a nonce. The biometrics themselves are not part of the proof and are not sent to the relying party or to the issuer.
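A hedged sketch of that unlock-then-sign step, with HMAC standing in for the secure element's real signature scheme and a boolean standing in for the biometric check; none of this mirrors any actual secure-element API.

```python
import hashlib
import hmac
import os

# Toy secure element: holds a key that "never leaves the hardware" and
# only signs a fresh relying-party challenge after a simulated unlock.
class SecureElement:
    def __init__(self):
        self._key = os.urandom(32)  # device secret key, inaccessible outside
        # Stand-in for the device public key (DPK) bound into DOCUMENT:
        self.public_ref = hashlib.sha256(self._key).hexdigest()

    def sign_challenge(self, challenge, fingerprint_ok):
        if not fingerprint_ok:
            raise PermissionError("biometric unlock required")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

se = SecureElement()
rp_challenge = os.urandom(16)  # fresh nonce from the relying party

sig = se.sign_challenge(rp_challenge, fingerprint_ok=True)
# In the ZKP presentation, the phone would then prove "this signature
# verifies under the DPK in DOCUMENT" without revealing either one.
assert len(sig) == 32
print("challenge signed after unlock")
```

The point of the structure: the biometric gates the signing operation locally, and only the signature (or a proof about it) ever leaves the device.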
Are you saying that someone goes to city hall, shows ID, and gets a DOCUMENT that certifies age, but doesn't link back to the person's identity? And it's married to a fingerprint in front of the person checking ID?
Is there a limit on how many times someone can get a DOCUMENT? If not, it'll become a new variation of fake id and eventually there's going to be an effort to crack down on misuse. If yes, what happens if I get unlucky and lose / break my phone limit + 1 times? Do I get locked out of the world? The only way I can imagine limiting abuse and collateral damage at the same time is to link an identity to a DOCUMENT somehow which makes the whole ZKP thing moot.
I'd be more worried about the politics though. There's no way any government on the planet is going to keep a system like that limited to simple age verification. Eventually there's going to be enough pretense to expand the system and block "non-compliant" sites. Why not use the same DOCUMENT to prove age to buy beer? Sanity for guns? Loyalty for food?
What happens if the proof gets flipped to run the other direction and a DOCUMENT is needed to prove you're a certified journalist? Any sources without certification can be blocked and the ZKP aspect doesn't matter at that point because getting the DOCUMENT will be risky if you're a dissenter. Maybe there's an interview. Maybe there's a background check. Has your phone ever shown up near a protest?
It's just like the Android announcement that developers need to identify themselves to distribute apps, even via side loading. The ultimate goal is to force anyone publishing content to identify themselves because then it's possible to use the government and legal system to crush dissenting views.
Big tech caused most of the problems, and now they're going to provide the solution with more technology, more cost, and less freedom, which is basically what they've been doing for the last two decades, so it's not a surprise.
Can I log into an age gated service at a library without a phone?
EDIT: Actually do one better - tell them that for 16+ websites, you're actually protecting teenagers by keeping them anonymous.
We should have started arguing when he just said he had a gun, indoors, in the crowd. We shouldn't have quietly walked outside at his demand. But that all happened. Here we are now, at the car, and he's got the gun out, and he's saying "get in", and we're probably not going to win from here -- but pal, it's time to start arguing. Or better yet, fighting back hard.
Because that car isn't going anywhere we want to be. We absolutely can not get in the car right now, and just plan to argue the point later. It doesn't matter how right the argument is at all.
Because if there aren't, then it matters substantially less whether they're possible.
But the attestor has to have certainty about the age of the person it issues IDs to. That raises obvious questions.
What states are going to accept private attestors? What states are going to accept other states as attestors? What state won't start using its issued ID/Wallet for any purpose it sees fit?
This system seems likely to devolve into national Internets populated only by those with IDs. That can all happen without ZKPs being broken.
That is how states work.
The law does not mandate identity, so your argument does not hold.
As I understand it, it's the goal of OpenID4VP[1][2]. Using it a site can request to know if the user is over 18 say, and the user can return proof of just that one claim, I'm over 18, without sharing identifying information.
The new EU age verification solution[3] builds on this for example.
[1]: https://openid.net/specs/openid-4-verifiable-presentations-1...
[2]: https://docs.walt.id/concepts/data-exchange-protocols/openid...
At least in the EU solution they say there would be multiple attestation services the user could choose from. So that would be technically better than nothing.
2) Cigarette vending machines accept VISA cards and government IDs and they're offline.
3) A medium-sized social media network required photos (not scans) of GovIDs, where only the year of birth and validity date need to be visible. The rest could be blacked out physically.
4) You can guess users' ages and request solid proof only for those you are unsure about.
The problem is that we technical users think of a one-size-fits-all technical approach that works, without a single fail, for all global users. That is bound to fail.
It is only a law, and you can break it big time or small time. Reddit's approach might prove way too weak; it'll be fined and given a year to improve. Others might leave the market. Others will be too strict and struggle to get users. Others might have weak enforcement and keep a low profile forever. Others will start small, below the radar, explode in popularity, and then enforcement will have to improve.
You can also request identity and then delete it. (Yes, some will fail to delete and get hacked.)
Giving Facebook a free pass is stupid. They're selling your age cohort "10-11" within 0.0037ms for $0.0003 to the highest bidder on their ad platform.
Some of these are getting batted down by judges, so right now the category of "legal" is especially vague. That's why I phrased it like that.
But also, we see cops just straight up stalking people using government tools. So that's another reason to be concerned about "legal" government actions.
Nothing to do with libertarianism.
There is a ton of evidence that there are harms to unrestricted online access for kids and teens (the book The Anxious Generation is a cultural touchstone for this topic at this point). There is a real, well-reasoned, and valid movement to do something about this problem.
The solutions proposed aren’t always well targeted and are often hijacked by the pro-surveillance movement, but it’s important to call out that these solutions aren’t well targeted instead of declaring the age verification push isn’t addressing a real problem and constituency.
The harms are real. The solution is a Surveillance Wolf wearing a dead Save The Kids Sheep(tm).
Solutions that might work - RTA headers [0]. More robust parental controls. Not this reimagining of the rules of the internet in service of a fairly vague and ineffective goal. It's like the whole AV concept was designed not to work in the current context at all - almost as if that was the point.
Perhaps I'm going a little out on a limb. I don't think I am - but quick, tell me you need to know where I'm dialing from without asking me where I'm dialing in from.
Given that it's also coming from a bunch of tech males, it comes across as extraordinarily creepy. This is not hard to understand.
I wonder if there's something like internet accelerationism - push things like having friends or watching movies online off the cliff as soon as possible.
Social media is akin to violent video games in the 2000s, TV addiction in the 90s, satanic heavy metal in the 80s, and even 'bicycle face' in the 1890s bicycle craze.
Jonathan Haidt seems extremely earnest and thoughtful, but unfortunately being lovingly catapulted to fame for being the guy who affirms everyone's gut reaction to change (moral panic) makes it extremely difficult financially, emotionally, and socially for him to steelman the opposite side of that thing.
Even if he hadn't compiled a bunch of suspect research from pre-2010 to make his claims, the field of Psychology is at the center of the replication crisis and is objectively its worst offender. Psychology studies published in prestigious academic journals have been found to replicate only 36% of the time. [2]
1. https://reason.com/video/2024/04/02/the-bad-science-behind-j...
https://www.xbiz.com/news/294260/washington-av-bill-jumps-on...
> A person who sells, gives away or in any way furnishes to a person under the age of 18 years a book, pamphlet, or other printed paper or other thing, containing obscene language, or obscene prints, pictures, figures or descriptions tending to corrupt the morals of youth
In real life, we think age verification is a good thing. Kids shouldn't buy porn. Teenagers shouldn't get into bars. etc... There has to be room somewhere for reasonable discussion about making sure children do not have access to things they shouldn't. I think it's important to note, that complete dismissal of this idea only turns away your allies and hurts our cause in the long run.
These are not equivalent. I don't have to scan my face, upload my ID, and share my personal biometric data with various 3rd parties, who will sell and leak my data, every time I want to look at porn or sip a beer.
Also, there are countries where teenagers can drink and go to pubs, and society hasn't crumbled. We also have several generations of young adults with access to porn, and the sky didn't fall.
Maybe we shouldn't use the government to implement a "papers, please" process just to use and post on the internet, maybe we should instead legislate the root cause of the problem: algorithmic optimization and manipulation. That way everyone benefits, not just kids, and we won't have to scan our faces to look at memes on Reddit.
The one thing you can control is your childs access through their device using parental controls.
I can absolutely guarantee you that any teenager can easily get access to weed, cigarettes and alcohol despite the laws and definitely can use a VPN. It only takes one smart kid to show them how.
Is your argument then that we shouldn't age gate those things in reality either? Would you suggest that teenagers would smoke and drink just as much as they do now had it been legal to sell to minors?
Laws don't just exist to stop you, they also exist to shape society. They exist as signals for what we deem appropriate behavior.
Also how much “shaping of society” do you expect to happen when you pass a law that no one respects?
How many kids do you think a law is going to stop from going to the porn sites that completely ignored the law?
How many kids say “I really want to smoke weed but it’s illegally so I won’t do it”?
I think it's generally accepted that marijuana use increases after legalization. So yes.
https://www.mpp.org/issues/legalization/adult-use-legalizati...
Turns out being illegal isn't as much of a disincentive as being uncool. If your parents are smoking it...
Kids: You can sniff glue and get high!!!
To view adult content, use the code to sign a thing. Content company sees the signed code, verifies against the public list and sends the content.
Privacy preserved, no adult content to kids... Easy.
> which is what Google etc have been trying to do for years but this would just completely fast track that.
Excuse me? They have done that for years. There's nothing to "fast track" here. Big Tech already implemented surveillance.
I'm even willing to talk about the possibility that we could use more robust systems deployed more broadly. A lot of folks here are talking about ZKPs in this regard, and that's not a bad idea at all.
The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids. I think you could put together a working group on ZKPs and age verification, write up a paper and run experiments, and when you bring it to the lawmakers they're gonna say something to the tune of:
"yeah but that's not trustworthy enough and too technical for people to understand so we're just going to serve legal notices to VPN providers instead to tell them that they can't anymore"
...or something to that tune. I'm not a mind reader, I've just read the reports (by lawmakers) mentioning VPNs as an "area of concern".
This is a political gambit and not a new one. The more we treat the current issue as having anything to do with protecting kids the more we legitimize what is an obvious grift.
The EU is currently doing large-scale field tries of the EU Digital Identity Wallet, which they have been working on for several years. It uses ZKPs for age verification. They expect to roll it out to the public near the end of 2026.
The concern about children is aimed at the wrong target. Instead of targeting everyone it would make far more sense to target the platforms. With Roblox having a pedo problem the company should face punishment. That will actually get them to change their ways. However all these massive platforms are major donors to politicians so the chance of that happening is low to none.
It would not surprise me in the least if there are brick-and-mortar businesses doing this, especially larger companies in jurisdictions (such as the majority of the United States) with weak/nonexistent privacy protections.
But yeah, walmart is for sure logging their transactions and selling the data. It's practically free money.
My heuristic is that social media focuses on particular people, regardless of what they're talking about. In contrast, forums (like HN) focus on a particular topic, regardless of who's talking about it.
I was replying to a discussion between two HN users, who were using conflicting definitions of the term. AFAIK they are not "those in power".
AFAIK nobody here is. The point is that with relevance to the current discussion on potential future age-verification laws, only the widest definition matters, because that's what's at risk.
Ok. In real life, do we think having agents from the government and corporations following you everywhere, writing down your every move and word, is a good thing? Or rather, what kind of crime would one have to have committed, so that they would only be allowed out in public with surveillance agents trailing them everywhere?
At best I may avoid using products from certain companies until I really have to, like Google and Microsoft's AIs, or clear cookies after signing into YouTube so it doesn't sign you into everything else, or write a comment here and there about how some Apple APIs like the iCloud Keychain allow Facebook etc to track you across devices and reinstalls, but I'm not ever going bother doing anything more that would actually challenge all this dystopian fuckiness.
We know this because, instead of putting easy-to-use parental controls on new devices sold (and making it easy to install on old ones) with good defaults [1], they didn't even try that, and went directly for the most privacy-hostile solution.
[1] So lazy parents with whatever censorship the government thinks is appropriate for kids, while involved parents can alter the filtering, or remove the software entirely.
Over 70% of teenagers <18 today have watched porn [1]. We all know (many from experience) that kids easily get around whatever restrictions adults put on their computers. We all know the memes about "click here if you're 18" being far less effective than "click here if you're not a robot."
Yes, there were other ways of trying to solve the problem. Governments could've mandated explicit websites (which includes a lot of mainstream social media these days) include the RTA rating tag instead of it being a voluntary thing, which social media companies still would've fought; and governments could've also mandated all devices come with parental control software to actually enforce that tag, which still would've been decried as overreach and possibly would've been easily circumventable for anyone who knows what they're doing (including kids).
But at the end of the day, there was a legitimate problem, and governments are trying to solve the problem, ulterior motives aside. It's not legal for people to have sex on the street in broad daylight (and even that would arguably be healthier for society than growing up on staged porn is). This argument is much more about whether it's healthy for generations to be raised on porn than many detractors want to admit.
[1] https://www.psychologytoday.com/us/blog/raising-kind-kids/20...
And we all turned out fine I might add. In fact there's a lot more attention to consent and respect for women than 20 years ago.
Of course not counting the toxic masculine far right but that doesn't have anything to do with porn but everything with hate.
How would you know whether it has worked or not? Wouldn't the relevant criteria be up to parents themselves?
Also, access to porn isn't new with the internet. When we cleared out my grandpa's house we had to pry open a desk that was chock full of Hustlers.
“Ease of access” and “easy access to the most depraved shit you can think of that’s out there” is what changed. That is what is wrong and why many people feel we need to find some way to control that access.
The Internet didn't come along until I was well into adulthood. Think about what porn access looked like in the late '70s and '80s. As a teen we were "lucky" if by some rare miracle a friend stole their dad's Playboy, Penthouse, or Hustler and stashed it in the woods (couldn't risk your parents finding it under your mattress) for us dudes to learn the finer points of female anatomy. In a week it would be washed out from the elements with nary a nipple to be seen. Those magazines (even Hustler) were soft compared to what a few clicks can find today. Basically you got degrees of nudity back then, but we appreciated it.
Hardcore video was very rare to see as a horny teen kid in the '80s. Most porn movies were still pretty well confined to theaters, but the advent of VHS meant (again by sheer luck) you had to have a friend whose parents happened to be into it, who had rented or bought a video, it was in the house and accessible, and all the adults had to be gone from the house so you could hurry up and watch a few minutes on the family's one TV with a VCR. You needed to build in viewing time along with rewind time to hide your tracks.
Now…parents just leave the room for a few minutes and a willing kid with a couple of clicks could be watching something far beyond the most hardcore thing I saw as a teen.
The fact is that as difficult as it was to get, you got a hold of it and watched it. Why would 'ease of access' make any difference if you didn't have easy access and got it anyway?
There could have been years between the opportunities we had. I don’t think you conceptualize just how infrequent the opportunity would present itself.
Forcing providers to divine the age of the user, or requiring an adult's identity to verify that they are not a child, is backwards, for all the reasons pointed out. But that's not the only way to "protect the children". Relying on a very minimal level of parental supervision of device use should be fine; we already expect far more than that in non-technology areas.
[1] - https://news.ycombinator.com/item?id=46152074
[2] - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
This completely leaves it up to the families / parents to control and gives some level of compliance to make the effort worthwhile.
There may even be a way to generate enough noise with the request to prevent any forms of tracking. This sort of thing should really be isolated in that way to prevent potential abuses via data brokers by way of sale of the information
This concept does not involve any tracking if implemented as designed. The user agent detects the RTA header and triggers parental controls if enabled. Many sites already voluntarily self label. [1] Careful how far one drills down as these sites are NSFW and some may be malicious.
[1] - https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R...
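A sketch of what that client-side check might look like. The label string is the well-known RTA tag quoted in the search above; the "Rating" header name and the body scan are assumptions about how a user agent could look for it, not any actual implementation.

```python
# Sketch of client-side filtering on the voluntary RTA self-label. The
# label string is the published RTA tag; the header name and detection
# logic are illustrative assumptions.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_adult_labeled(headers, html):
    # A parental-control user agent checks response headers and the page
    # markup for the label, then decides locally whether to render.
    if any(RTA_LABEL in value for value in headers.values()):
        return True
    return RTA_LABEL in html

assert is_adult_labeled({"Rating": RTA_LABEL}, "")
assert not is_adult_labeled({}, "<html><head></head></html>")
print("filtering happens entirely on the client device")
```

Since the check runs in the user agent, no age information ever leaves the device, and unlabeled sites are unaffected.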
There is no perfect solution that avoids destroying the internet, but this would be a pretty good solution that shelters kids from accidentally entering adult areas, and it doesn’t harm adult internet users. It also avoids sending out information about the user’s age since filtering happens on the client device.
It was derided as a "system for mass censorship", and got shot down. In hindsight a mistake, and it should have been implemented - it was completely voluntary by the user.
But this also needs some kind of guarantee that lawmakers won’t try to force it on FOSS projects that want to operate outside the system. And that companies like Google won’t use EEE to gradually expand this header into other areas and eventually cut off consenting adults who want to operate outside this system. I’m not sure if it is possible to get those guarantees.
Which I still don't love, but is at least more fair.
It's not even necessary to block parents from giving their children Linux desktops or whatever. It'll largely solve the problem if parents are merely expected to enable parental controls on devices that have the capability.
Worse, it leads to situations where society seems to want to be flat-out kid-free in many ways, with families reportedly afraid to let their kids walk to and from school unsupervised.
I don't know an answer, mind. So this is where I have a gripe with no real answer. :(
What content is appropriate for children is properly up to their parents themselves, not to the government or to some nebulous concept of "society". If parents choose not to set such a flag on their children's devices, then that means they're choosing to allow their children to access content without restriction, and that's what defines what is OK for their children to access.
Wait, those grand parents also had bad models to work with, so really it's the great grandparents that were to blame...
No, wait, it was the society that they grew up in that encouraged poor behaviour toward them, and forced them to react by taking on toxic behaviours. We all should pay because we all actively contribute to the world around us, and that includes being silent when we see bad things happening.
I'm not seeing the correlation / causation here.
A. "Your kid is not my problem"
B. "Your kid is everyone's problem"
Note that I'm not even necessarily worried about cops getting called. Quite the contrary, I am fine with the idea of cops having a more constant presence around parks and such. I do worry about people that get up in arms about how things are too unsafe for kids to be let outside. If that is the case, what can we do to make it safe?
Even the idea of prosecuting parents for allowing their child to access 'information,' no matter what that information is, just sounds like asking for 1984-style insanity.
A good rule of thumb when creating laws: imagine someone with opposite political views from yours applying said law at their discretion (because it will happen at some point!).
Another good question to ask yourself: is this really a severe enough problem that government needs to apply authoritarian control via its monopoly on violence to try to solve? Or is it just something I'm abstractly worried about because some pseudo-intellectuals are doing media tours to try to sell books by inciting moral panic?
As with every generation who is constantly worried about what "kids these days" are up to, it's highly highly likely the kids will be fine.
The worrying is a good instinct, but when it becomes an irrational media hysteria (the phase we're in for the millennial generation who've had kids and are becoming their parents), it creates perverse incentives and leads to dumb outcomes.
The truth is the young are more adaptable than the old. It's the adults we need to worry about.
This assumes an absolutist approach to enforcement, which I did not advocate and is not a fundamental part of my proposed solution. In any case, the law already has to make a subjective decision in non-technology areas. It would be no different here. Courts would be able to consider the surrounding context, and over time set precedents for what does and does not cross the bar in a way that society considers acceptable.
You have way too much faith in the fairness of the court system.
What could humanity do instead with all that time and resources?
I know the US is a nation built by lawyers, for lawyers, but this is both its greatest strength and worst weakness. Sometimes it's in everyone's best interest to accept the additional risks individually rather than bubble-wrapping everything in legislation and expanding the scope of the corrupt lawyer-industrial complex.
Maybe the lawyers could spend the extra time fixing something actually important, like healthcare or education.
Alternatively, just use an older browser that doesn't serve the header.
If anything, you'd want the reverse. A header that serves as a disclaimer saying "I'm an adult, you can serve me anything" and then the host would only serve if the browser sends that header. And you'd have to turn it on through the settings/parental controls.
Now, this doesn't handle the proxy situation. You could still have a proxy site that served the request with the header for you, but there's not much you can do about that regardless.
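The opt-in idea above can be sketched in a few lines. The header name is invented for illustration; nothing here reflects an actual standard:

```python
# Sketch of the "reverse" header idea: the browser sends an explicit
# "I'm an adult" opt-in header, and the host refuses to serve adult
# content unless it is present. Parental controls would simply never
# set the header, so the default state is the safe one.
ADULT_OPT_IN = "X-Adult-Opt-In"  # hypothetical header name

def should_serve_adult_content(request_headers: dict) -> bool:
    """Serve age-restricted content only if the client opted in."""
    return request_headers.get(ADULT_OPT_IN, "").strip() == "1"

# No header means no adult content -- the failure mode favours the child:
assert not should_serve_adult_content({})
assert should_serve_adult_content({"X-Adult-Opt-In": "1"})
```

The design choice worth noting is that the default (header absent, e.g. on an old browser or a locked-down device) is the restrictive state, which is the opposite of the "I am a child" flag.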
That's no different to a law mandating identification-based age verification though. A site in a different jurisdiction can ignore that just the same.
1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?
2) It seems like it could be abused by fingerprinters, ad services, and even hostile websites that want to show inappropriate content to children.
It's a client-side flag saying "treat this request as coming from a child (whatever that means to you)". I don't follow what the jurisdiction concern is.
[EDIT] Oooooh you mean if a child is legally 18 where the server is, but 16 where the client is. But the header could be un-set for a 5-year-old, too, so I don't think that much matters. The idea would be to empower parents to set a policy that flags requests from their kids as coming from a child. If they fail to do that, I suppose that'd be on them.
It doesn't seem sufficient, and would probably lead to age verification laws anyway.
Say you're a parent, with child, living in country A where someone becomes an adult when they're 18. Once the child is 18, they'll use their own devices/browsers/whatever, and the flag is no longer set. But before that, the flag is set.
Now it doesn't matter that the age of becoming an adult is 15 in country B and 30 in country C. Because the flag is set locally on the client's device, all a service needs to do is block requests with the flag and assume it's faithful. Then parents in country B or country C set/unset the flag on their devices when it's appropriate.
No need to tell actual ages, and a way for services to say "this is not for children", and parents are still responsible for their own children. Sounds actually pretty OK to me.
> should
I don't know if "should" is intended as a moral statement or a regulatory statement, but it's not at all unusual for server operators to need to comply with laws in the country in which they are operating…
So namespace it then. "I'm a child as defined by the $country_code government". It's no more of a challenge than what identity-based age verification already needs to do.
> 2) It seems like it could be abused by fingerprinters, ad services, and even hostile websites that want to show inappropriate content to children.
This is still strictly better than identity-based age verification. Hostile or illegal sites can already do this anyway. Adding a single boolean flag which a large proportion of users are expected to have set isn't adding any significant fingerprinting information.
One actor verifies ages - and they only need to do so once. Sites give users a key tied to their user account to run by their verifier, who returns another key that attests to their verified age encoded for that specific site, to give back to the site.
The site doesn't learn anything about the user beyond their login info. The verifier doesn't learn anything about what sites are being visited.
This would seem to address the issues without creating the pervasive privacy and security problems of every age-verifying site building a database of people's government IDs, faces, and other personal information.
It also seems like a way out of the legal/legislative battle, which otherwise is going to be an immortal hydra.
I would trust the EFF to run something like this. Open source. With only one-way encrypted/hashed personal info stored at their end.
(I am not a cryptographic expert. But I believe mechanisms like this are straightforward stuff at this point.)
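The "verifier never learns which site you're visiting" property usually comes from blind signatures. Here is a toy RSA blind-signature sketch under stated assumptions: a deliberately tiny key, hash-and-sign with no padding, and a made-up token format — educational only, never a real implementation:

```python
# Toy RSA blind signature: the verifier attests to a token (which could
# encode "over 18 for site X") WITHOUT ever seeing the token itself.
# Small demo key only; real systems use vetted libraries and large keys.
import hashlib
import math
import secrets

p, q = 1000003, 1000033          # small demo primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # verifier's private exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The site hands the user a token naming itself plus the claim.
token = h(b"example-site.test|user-nonce-123|over18")

# 2. The user blinds the token so the verifier can't read it.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# 3. The verifier (having checked the user's age once) signs blindly.
blind_sig = pow(blinded, d, n)

# 4. The user unblinds, yielding a valid signature on the real token.
sig = (blind_sig * pow(r, -1, n)) % n

# 5. The site verifies with the verifier's public key (e, n).
assert pow(sig, e, n) == token
```

The key step is 4: because `(token * r^e)^d = token^d * r (mod n)`, dividing out `r` leaves a signature the verifier produced without ever seeing `token`.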
Fun fact: many ZK identity solutions run centralized provers and can be subpoenaed. Need to use something that generates proofs client-side.
As many are pointing out, zero-knowledge proofs exist and resolve most of the issues they are referring to. And it doesn't have to be complex. A service provided by a government (or bank, or anybody with an actual reason to know your identity) that mints a verifiable one-time code the user can plug into a website is very simple and probably sufficient. Pretty standard PKI can do it.
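A hedged sketch of that "mint a one-time code" flow. To stay stdlib-only it uses an HMAC where a real deployment would use public-key signatures (issuer signs with a private key, sites verify with the public one), and the token format is invented:

```python
# One-time, expiring "over 18" code: an identity-holding issuer mints
# it, a site verifies it, and replay is rejected. HMAC stands in for a
# real PKI signature purely to keep this self-contained.
import hashlib
import hmac
import secrets
import time

ISSUER_KEY = secrets.token_bytes(32)  # issuer's key (shared with the
                                      # site only in this simplification)

def mint_code(ttl_seconds: int = 300) -> str:
    """Issuer mints a single-use, expiring 'over 18' code."""
    expiry = str(int(time.time()) + ttl_seconds)
    nonce = secrets.token_hex(8)
    payload = f"over18|{expiry}|{nonce}"
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_code(code: str, seen: set) -> bool:
    """Site checks the signature, the expiry, and single-use."""
    payload, _, tag = code.rpartition("|")
    good = hmac.compare_digest(
        hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest(),
        tag)
    claim, expiry, nonce = payload.split("|")
    fresh = int(expiry) > time.time() and nonce not in seen
    if good and fresh:
        seen.add(nonce)
    return good and claim == "over18" and fresh

seen = set()
code = mint_code()
assert verify_code(code, seen)       # first use accepted
assert not verify_code(code, seen)   # replay rejected
```

Note the privacy property the comment is after: the code carries only the claim "over18", an expiry, and a random nonce — no name, no birthdate, and nothing the site can link back to an identity.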
The real battle to be lost here is that uploading actual identity to random web sites becomes normalised. Or worse, governments have to know what web sites you are going to. That's what needs to be fought against.
So instead of advocating for those sensible and workable solutions, the discussions are always centred on either blocking any attempt at reform while hyperventilating about vague authoritarianism or a similarly vague need to protect the innocent.
Meanwhile in the world of smartphone data providers, social media networks, and the meta/googles of the world: they all know your personal information and identity up to the wazoo - and have far more information on every one of you than what is possessed by your own governments (well except for the governments that are also buying up that data.)
So let me be clear, the gate is open, the horse has bolted - recapturing your privacy is where attention should be focused in this debate... even if it's bad for shareholders.
This is where I'm concerned too. We are seeing a proliferation of third party verification services that I have to interact with and that have no real obligations to citizens, because their customer is the website.
I'd like to see governments step in as semi-trusted third parties to provide primitives that allow us to bootstrap some sort of anonymous verification system. By semi-trusted, I mean trusted to provide attestations like "This person is a US citizen over the age of 18" but not necessarily trusted with an access log of all our websites.
No, fighting back against horrible proposals does not require suggesting an alternative proposal for the alleged problem. That only serves to benefit the malicious actors proposing the bad thing in the first place, in the hope that we'll settle on something Not As Bad.
Thank god for the EFF and their everlasting fight to stop these nonsense internet laws. I'm glad they don't waste their time on "well how about this" solutions. The middle ground will never be enough for the proponents of surveillance, and will always be an incremental loss for the victims.
If Big Brother starts mandating the collusion - then yes, there's a hill to die on. But in some ways that's the point here. There are hills to die on - this just isn't it. And if you pick the wrong hill then you already died so you are losing the ones that really mattered. If the EFF pointed out to everyone that there is a privacy preserving answer to the core issue that is driving this, they could then mount a strong defense for the part that is truly problematic, since it isn't actually required to solve the problem.
You may accept this. Others will not.
> But that just outlines a situation where the user's chosen trusted service is hostile to their interests and they need to find one that isn't.
Just?
My view is that there's no reason why we can't come together and come up with a rating system for websites (through HTTP headers; there are already a couple of proposals, the RTA header and another W3C proposal).
Once a website sends a header saying this is adult-only content, what YOU as a user do with it is up to you. You could restrict it at the OS level (which is another thing we ALREADY have).
This would match the current system, which allows households to set their devices to block whatever they want, and the devices get metadata from the content producers.
No ID checks needed.
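As a concrete sketch, a parental-control filter could key off the existing RTA ("Restricted To Adults") label. The label string below is the published one; the exact header plumbing is indicative, since sites also embed it as a `<meta name="rating">` tag:

```python
# Client-side filtering on a content-rating response header: a
# parental-control proxy or OS-level filter drops responses that carry
# the RTA label. The site self-labels; the household enforces.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def blocked_for_child(response_headers: dict) -> bool:
    """Return True if this response should be blocked on a child's device."""
    return response_headers.get("Rating", "") == RTA_LABEL

# An RTA-labelled response is blocked; an unlabelled one passes through.
assert blocked_for_child({"Rating": RTA_LABEL})
assert not blocked_for_child({"Content-Type": "text/html"})
```

This matches the comment's point: enforcement lives on the device the parent controls, so no ID ever leaves the household.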
Once a system is in place that infringes on rights nobody will modify it to give citizens more rights.
It also lets someone who knows more than I to elaborate with more depth.
I'd argue that this is negligible for data collectors and governments. Governments already know who you are and what sites you visit for 99.99% of the population. Data collectors already know who you are and have a pretty good idea of the sites you visit.
What unique information is this going to give the government and data collectors to abuse? Let's establish one case that both affects average people and is "bad", and not waste time discussing things that only affect a tiny minority of privacy-minded people.
Keep in mind the law states a platform must provide multiple ways to reasonably verify a user is older than 16. There is no mention of providing a specific user age or requiring government ID.
After all, they can legitimately claim it solves much of the issues with other verification schemes - no need to trust third party sites or apps, lower risk of phishing, easier to implement internationally and with foreign nationals, etc.
Of course, the downside (for individuals) is it would take just one legal tweak or pressure from the government to destroy anonymity for good.
It doesn't prevent one person from prohibiting speech... I can tell a pastor to stop preaching on my lawn. But, the government cannot tell a pastor not to preach in the publicly-owned town square (generally, there are exceptions).
There are arguments that certain online forums are effectively "town squares in the internet age" (Twitter in particular, at least pre-Musk). But, I always found that analogy to fall apart - twitter (or whatever online forum) is more like an op-ed section in a newspaper, IMO. And newspapers don't have to publish every op-ed that gets submitted.
Also, the 1st Amendment does not protect you from the consequences of your speech. I can call my boss an asshole to his face legally - and he can fire me (generally, there are labor protections and exceptions).
Some proposed implementations do this. Without that requirement there is no chance of your ID or age being leaked; with a zero-knowledge proof, there is a chance they leak, but it can be made small, potentially arbitrarily so. Other implementations come with larger risks.
There were major Supreme Court rulings on the topic recently, see
https://news.ycombinator.com/item?id=44397799 ("US Supreme Court Upholds Texas Porn ID Law (wired.com)"—5 months ago, 212 comments)
https://en.wikipedia.org/wiki/Free_Speech_Coalition_v._Paxto...
And you don’t see a problem with this part?
What the ZKP does is let you limit the information the site collects to the fact that you are over 18, and nothing else. It’s an application of the principle of least privilege. It lets you give the website that one fact without revealing your name, birthdate, address, browsing history, and all your other private data.
After all - if it doesn't share anything other than a guarantee of the "age" of someone who is authenticating with the website then how would the website know there's re-use of identifiers?
- If I can do a zero knowledge proof with an arbitrary age, I can eventually determine anyone's birthday.
- If the only time people need to verify their age is to visit some site that they'd rather not anyone know they visit, and that requires showing identity, then even if it's 100% secure, a good share of people will balk simply because they do not believe it is secure, creating a chilling effect on speech.
- If the site that verifies identity is only required for porn, then it has a list of every single person who views porn. If the site that verifies identity is contacted every time age has to be re-registered, then it knows how often people view porn.
- If the site that verifies identity is a simple website and the population has been trained that uploading identity documents is totally normal, then you open yourself up to phishing attacks.
- If the site that verifies identity is not secure or keeps records, then anyone can have the list (via subpoena or hacking).
- If the protocol ever exchanges any unique identifier from the site that verifies your identity and the site that verifies identity keeps records, then one may piece together, via subpoena (or government espionage, hacking) every site you visit.
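The first bullet's concern can be made concrete: if a verifier will answer "is this person at least N years old?" for attacker-chosen thresholds, a hypothetical attacker recovers an exact birthdate by binary search in about 16 yes/no answers. The oracle below stands in for the ZKP service:

```python
# Binary search over threshold queries: ~log2(days in range) queries
# pin down a birthdate exactly. SECRET_BIRTHDAY is known only to the
# oracle; the "attacker" only sees yes/no answers.
from datetime import date, timedelta

SECRET_BIRTHDAY = date(1990, 6, 15)

def born_on_or_before(d: date) -> bool:
    """Stand-in for a ZKP query with an attacker-chosen threshold."""
    return SECRET_BIRTHDAY <= d

lo, hi = date(1900, 1, 1), date(2025, 1, 1)
queries = 0
while lo < hi:
    mid = lo + (hi - lo) // 2
    queries += 1
    if born_on_or_before(mid):
        hi = mid                       # birthday is on or before mid
    else:
        lo = mid + timedelta(days=1)   # birthday is after mid

assert lo == SECRET_BIRTHDAY
assert queries <= 17  # ~45k days in range, so about 16 queries
```

This is why serious proposals only attest a fixed threshold ("over 18"), never an arbitrary age the relying party gets to choose.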
Frankly, the fact that everyone promoting these systems hasn't admitted there are any potential security risks should be like an air raid siren going off in people's heads.
And at the end of all of this, none of it will prevent access to a child. Between VPNs, sharing accounts, getting older siblings/friends to do age verification for them, sites in jurisdictions that simply don't care, the darkweb, copying the token/cert/whatever from someone else, proxying age verification requests to an older sibling/rando, etc. there are way, way too many ways around it.
So one must ask, why does taking all this risk for so little reward make any sense?
Not in principle
See the limits on curse words on TV. Or MPAA ratings for movies.
(IANAL) That demonstrates the opposite: that's a voluntary system with no force of law behind it—the private sector "self-regulating" itself, if you will.
The film rating systems were created under threat of legislation in the first half of the 20th century (so, in lieu of actual legislation). The transformative 1st Amendment rulings of the Warren Court would have made such laws unconstitutional after the 1960's, but the dynamic that created these codes predates that—predates the modern judicial interpretation of the 1st Amendment.
https://en.wikipedia.org/wiki/Hays_Code (history background)
https://en.wikipedia.org/wiki/Motion_Picture_Association_fil... ("The MPA rating system is a voluntary scheme that is not enforced by law")
No, it's legal (in some jurisdictions) pornography. Prostitution on the platform, as well as whatever its legal status is in the set of jurisdictions involved, is also, from what I understand, explicitly against the platform's ToS.
Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.
Well, no, violating a binding legal agreement is illegal.
> Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.
Which is explicitly prohibited by the law in many places OF operates, and, judging from the number of creators on the platform I've seen complaining about people jeopardizing their status by soliciting it on the platform, also by the actively-enforced terms of the platform. OF is simply not “legal prostitution”, and it is ridiculous to describe it that way.
Not touching the rest of this thread's arguments, but that isn't really true. Breaking ToS, or any other contract, is not "illegal"-- it's not a crime. It opens you up to civil (not criminal) penalties if the other party sues, but that's it.
So I agree that instead of fighting some change that I think is inevitable, they should make it so that it works in the most privacy-conscious way possible. And I mean with real technical solutions, like an open-source app or browser extension you can download, a proof-of-concept server for age verification, etc... using the best crypto has to offer.
If I go to the liquor store, I can't just promise "I'm 21+"
If I go to vote, I can't just promise I'm 16+
If I go to buy a lottery ticket, I can't just promise...
I think I made my point.
Either you're old enough to understand what porn is and have desires to consume it, in which case you won't be scarred by it and don't need protection from it, or you're not, in which case you won't seek it out. You need ID checks for alcohol because people too young to consume it want it, and given how much teens drink, and how little of a problem lower drinking ages are in other countries, even that claim is somewhat dubious.
Things like this will give them a huge advantage in not being manipulated and lied to.
I'd be comfortable with it having large segments of "uncategorized." But right now, if I scan over to my ISP to see how much data I have used for the month, I have little to no help in saying how much of that was what.
Again, I get that that will be a lot I have to write off as "uncategorized." I'm not even trying to drive all telemetry down to zero. I'm comfortable knowing that my HVAC may send diagnostic stuff in, as an example. But it seems kind of crazy to me that this is not something that is often discussed? Do I just miss those discussions?
An "I am a child" header = an "I am a valid target" header.
Has anyone realized that whatever the "good" guys do, the "bad" guys will abuse it?
We'd need canaries (bots with the child header set) to get a metric on any increase in attempted crimes against children.
My SO has been teaching for nearly 20 years now, and mental health in kids has fallen off a cliff in the last two decades. I could fill this page with online bullying stories. Some of which, are especially cruel. Half her students are on medication for anxiety. It's out of control, honestly.
That said, I don't know how to solve it. It's easy to put this on the parents, but that's not the answer. Otherwise, it would be solved already. Some don't care. Some don't have the time to care because they're trying to keep the lights on, and dinner on the table. And, some simply think it doesn't apply to them or their children. Parents on HN are hyper-aware of this sort of thing, but that's definitely the minority.
I know a family that would be most folks' least likely candidate for something bad to happen online. Single income, relatively well off, the parent at home has an eye on the kids 24/7. And if you met the kids, you would most likely describe them as "good kids". Without going into detail, their life was turned upside down because one of the kids was "joking around" online.
Again, I don't know what the answer to the problem is. Clearly, age verification laws are a veiled attempt to both collect and control data. And, EFF's emphasis on advertising restrictions as a solution, seems off the mark. There's more to it than that. Idk, this shit makes me want to log off permanently, and pretend it's 1992.
I would argue it must be part of the answer, if it isn't literally the answer. You even kind of hit the nail on the head later in that paragraph:
> Parents on HN are hyper-aware of this sort of thing, but that's definitely the minority.
I would start there. Spreading awareness and social pressure is a tractable problem.
At best, you go back and forth between no privacy and a heavily conditioned privacy. At best.
Let’s take privacy back, but that’s a big process.
If you haven’t internalized surveillance, start working on it!
Paranoid, maybe. Schizophrenics? No. Firstly, "paranoid schizophrenia" is an outdated diagnosis. Paranoia is a common symptom of schizophrenia, but schizophrenics exhibiting paranoia are not considered to have separate mental illness from those who are not. Secondly, schizophrenia is not caused simply by psychological stress, and is associated with a large cluster of positive and negative symptoms, with paranoia being only one of them.
"Perfect" security is only attainable with zero dissent, zero individuality, zero privacy, and zero freedom.
https://en.wikipedia.org/wiki/Organ_transplantation_in_China
Verified Credentials exist on the Web.
Drivers’ licenses exist.
Just show that you are over 18 in a zero knowledge way and be done with it. Why do they need to see your IDs?
1. create our own porn at home and (soon)
2. have home orgasmatrons.
Parents have complete control of the Chat/Porn server and since the orgasmatron necessarily has all your desires stored in its LLM (Large Lust Model) it trivially knows your age and will lock you out.
And internet porn can be banned regardless of age. (that's only half sarcastically said).
Demand for home Large Lust Models and orgasmatrons will soar. You heard it here first. Opportunity for entrepreneurs. And these home-based products are the only way to keep porn away from kids (if parents don't care now, they never will) and to maintain privacy on the internet.
Every place where I've worked in I.T., the rule was "No porn downloading at work. Porn belongs in the home." (especially in the days of slow home modems)
And to be really enforceable, all offshore sites would have to agree to the scheme, including certain Russian ones who are glad to pollute our children's and adults' minds with porn, propaganda and conspiracy theories.
Lastly: There always was and will be media. Micro-SD cards now? If not phones, thrift store picture frames and RPi's. "Porn finds a way."
This is not compelling. The internet I know and love has been dying for a long time for unrelated reasons. The new internet that is replacing that one is an internet that I very much do not love and would be totally ok to see lots of it get harder to access.
Extrapolate that how you will.
The surveillance and censorship system is built, administered and maintained by Silicon Valley companies who have adopted this as their "business model". "Monetising" surveillance of other peoples' noncommercial internet use
These Silicon Valley companies have been surveilling internet subscribers for over a decade, relentlessly connecting online identity to offline identity, hell bent on knowing who is accessing what webpage on what website, where they live, what they are interested in, and so on, building detailed advertising profiles (including the age of the ad target) tied to IP addresses, then selling the subscribers out to advertisers and collecting obscene profits (and killing media organisations that hire journalists in the process)
Now these companies are being forced to share some of the data they collect and store
Gosh, who would have foreseen such an outcome
These laws are targeting the Silicon Valley companies, not internet subscribers
But the companies want to spin it as an attack on subscribers
The truth is the companies have been attacking subscriber privacy and attempting to gatekeep internet publication^1 for over a decade, in the name of advertising and obscene profits
1. Discourage subscribers from publishing websites and encourage them to create pages on the company's website instead. Centralise internet publication, collect data, perform surveillance and serve advertisements
Silicon Valley uses that information to sell ads, and sometimes votes. Not great, but I can imagine much worse from a State.
We have the technology to do age verification without revealing any more information to the site and without the verification authority finding out what sites we are browsing. However, most people are ignorant of it.
If we don't push for the use of privacy-preserving technology we won't get it, and we will get more tracking. You cannot defeat age verification on the internet; age verification is already a feature of our culture. The only way out is to ensure that privacy-preserving technologies are mandated.
Google even open-sourced technology to enable it: https://blog.google/technology/safety-security/opening-up-ze...
The politicians don't want Zero Knowledge Proof because it prevents the mass-surveillance of internet users. This is all deliberate.
It's deanonymizing and intrusive and mandatory for sites to implement without protecting them from sockpuppets and foreign troll farms.
The CNIL gave up 3 years ago and issued guidelines; you can read about it here [0]. At the time it read like: "Oh well, we tried. We said it is incompatible with privacy and the GDPR multiple times, and we insist one more time that giving tools to parents is the only privacy-safe solution despite its obvious problems, but since your fucking law will pass anyway, the best we can do is draw up guidelines and present solutions and how to implement them correctly."
I think the EFF should do the same. That's just how it is. Define solutions you'll agree with. Fight the fight on chat control and other issues where public opinion can be changed; this one is too late, and honestly, if it's done well, it might be fine.
If the first implementation is correct, we will only have to fight to maintain the status quo, which in a conservative society is the easiest, especially when no other solution has been tested. If it's not, we will have to fight to make it correct and then fight to maintain it, and both are harder. The EFF should reluctantly agree and draft the technical solution themselves.
[0] https://www.cnil.fr/en/online-age-verification-balancing-pri...
Either we create the fix, or the feds take it over. We need to sever the idea of a global internet: per-country and allied nations only, with an anonymous cert-chain-verified ID stored on device. Problem fixed.
There's a great game being played out by these users of force against the advocates of desire. Everything about the bureaucracies pushing digital ID is unwanted. This isn't about age-verification tech; it's about illegitimate power for unwanted people who are actuated by forcing their will on others.
We should treat these actions with the open disgust they deserve.
If people care enough, they will build a new internet.
Equally, I think insisting that there must be no controls on internet access whatsoever is not right either. There is now plenty of evidence that e.g. social media are very harmful to teenagers - and frankly, before I noticed, going on FB got me depressed each time I did it at one point. And as a parent, you realise how little control you have over your children's tech access. Case in point - my kids seem to have access to very poorly locked down iPads at school. I complained, but they frankly don't understand.
We all accept kids can't buy alcohol and cigarettes, even if that encroaches on their freedom. But of course, flashing an ID when you're over 18 is not very privacy-invading.
Likewise, I think it is much better to discuss better means of effecting these access controls. As some comments here mention, there are e.g. zero knowledge proofs.
I'm sure I'll be told it's all a sham to collect data and it's not about kids. And maybe. But I care about kids not having access to TikTok and Pornhub. So I'd rather make the laws better than moan about how terrible it is to limit access to porn and dopamine shots.