[1] https://xcancel.com/Paul_Reviews/status/2044502938563825820
[2] https://xcancel.com/paul_reviews/status/2044723123287666921
[3] https://csa-scientist-open-letter.org/ageverif-Feb2026
| "The saga is turning into a PR disaster for Brussels."
imo: mostly because the author wants it to be a disaster.
The app has not launched; they published the source code in order to invite external review. I don't have time to check every claim, but e.g. this [see quote below] seems blown out of proportion to me: the app fails to delete a temporary image, which results in a selfie being stored indefinitely(?) on the internal disk of your device. If an adversary has access to the internal disk of my phone, they can also just access the photo roll.
"For selfie pictures:
Different scenario. These images are written to external storage in lossless PNG format, but they're never deleted. Not a cache... long-term storage. These are protected with DE keys at the Android level, but again, the app makes no attempt to encrypt/protect them.
This is akin to taking a picture of your passport/government ID using the camera app and keeping it just in case. You can encrypt data taken from it until you're blue in the face... leaving the original image on disk is crazy & unnecessary."
The damage is limited because the selfie is only retained on the device, but failing at the most basic hurdle of disposing of the selfie once verification is complete still does not signal competence from the EU.
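For the selfie issue specifically, the fix is plain file hygiene: treat the capture as a temporary artifact that cannot outlive the verification step. A minimal sketch of the pattern in Python (the app itself is Android/Kotlin; every name here is illustrative, not the app's actual code):

```python
import os
import tempfile

def verify_selfie(path: str) -> bool:
    # Stand-in for the real liveness/face-match check.
    return os.path.getsize(path) > 0

# The temp file is guaranteed to be removed when the block exits,
# so no selfie lingers on disk after verification completes.
with tempfile.NamedTemporaryFile(suffix=".png") as f:
    f.write(b"\x89PNG...fake image bytes...")
    f.flush()
    result = verify_selfie(f.name)

assert result
assert not os.path.exists(f.name)   # nothing left behind
```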
The point of this is that you can use the credentials on your phone to prove that you are an adult to a website using zero-knowledge proofs to avoid disclosing your identity to anybody.
If somebody who has access to your unlocked phone can access the data in the app, then this is something that should be tightened up but it’s a substantial privacy improvement over the far more commonplace option of uploading your ID to every website that wants to know if you are an adult.
It’s an attempt to avoid things like this:
> Discord says 70k users may have had their government IDs leaked in breach (Oct 2025, 435 comments) - https://news.ycombinator.com/item?id=45521738
It is my understanding that this is not possible. I would be happy to be shown to be wrong, but to me it seems like you can either prevent people from lending out their credentials, or you can preserve the anonymity of the user, but not both.
You can use a ZKP to prove you have a signed certificate issued by your government that says you are an adult, but then anyone with such a certificate can use it to masquerade as however many sock puppets they like and act as a proxy for people who aren't adults. You can have the issuing government in the loop signing one-time tokens to stop Adults-Georg from creating 10k 18+ attestations per day, but then the issuing government and the service providers have a timing side-channel they can use to correlate identities to service users. Is there some other scheme I'm missing that solves this dilemma?
This is not designed to prevent adults from coöperating with minors; that makes no sense as a design goal because any technical measure can always be bypassed with “download this for me and give me the file”. This is designed to prevent minors from being able to access systems without an adult.
Nothing prevents an adult from buying alcohol on behalf of minors; that doesn’t mean laws that prevent minors from directly buying alcohol are useless.
If the proof of adulthood scheme is truly anonymous, one adult with some technical chops who thinks "kids should be allowed to watch porn if they want" would be able to, say, run an adult-o-matic-9000 TOR hidden service that anyone can use to pinky promise that they are an adult without fear of repercussions. If such a service comes with a meaningful risk of being identified and punished, it is by definition not anonymous.
I suppose I'm just not convinced giving up some basic liberties for a law that converts into sternly worded advice if just one adult chooses to break it is a great idea.
The certificates in question can use a few mitigations: be short-lived, be stored in hardware (a TPM, making distribution harder), be single-use, or carry a random ID that the service being accessed can use to check how many times it has been presented.
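One standard building block for issuing single-use tokens without the linkability problem is a blind signature: the issuer signs a blinded value, so the token it later sees presented can't be matched to any issuance event. A toy textbook-RSA sketch (tiny primes, no padding; insecure and purely illustrative, and not the scheme the EU spec actually uses):

```python
import hashlib
import math
import secrets

# Hypothetical issuer keypair (tiny textbook RSA; a real deployment
# would use a vetted scheme such as RFC 9474 RSA Blind Signatures).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    # Hash-to-integer; reduced mod n to keep the toy numbers small.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- user: blind a fresh one-time token ---
token = b"one-time 18+ token"
m = h(token)
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# --- issuer: signs without ever learning the token ---
blind_sig = pow(blinded, d, n)

# --- user: unblind; (token, sig) is unlinkable to `blinded` ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- relying party: verify against the issuer's public key ---
assert pow(sig, e, n) == h(token)
```

The issuer can still rate-limit how many blinded values it signs per person per day (addressing the sock-puppet concern) without seeing where the tokens are spent; the timing side-channel, though, remains unless tokens are fetched ahead of time.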
> but then the issuing government and the service providers have a timing side-channel they can use to correlate identities
That's not really a concern, IMO. That risk would always exist: most people's flow would be trying to do something, having to prove ID/age, doing that step, then continuing with the something, which means you'd probably be able to time-correlate the two sides quite often. The solution here is legal, with strong barriers, not technical.
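There is a technical complement to the legal barriers, though: if the wallet pre-fetches single-use tokens in batches at unrelated times, issuance never coincides with a site visit, which blunts the timing correlation. A hypothetical wallet-side sketch (all names made up):

```python
import secrets
from collections import deque

class TokenPool:
    """Hypothetical wallet-side pool: fetch tokens ahead of time so the
    issuer never sees an issuance request at the moment of a site visit."""

    def __init__(self, issue, batch: int = 10, low_water: int = 3):
        self.issue = issue          # callable returning one fresh token
        self.batch = batch
        self.low_water = low_water
        self.pool = deque()

    def refill(self) -> None:
        # In practice this would run on a jittered background schedule,
        # not inline with a request.
        while len(self.pool) < self.batch:
            self.pool.append(self.issue())

    def get(self) -> str:
        if len(self.pool) <= self.low_water:
            self.refill()
        return self.pool.popleft()

pool = TokenPool(lambda: secrets.token_hex(8))
first, second = pool.get(), pool.get()
assert first != second              # single-use: each token is fresh
```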
Many countries in the EU already have electronic identity documents and delegate authentication to mobile apps one way or another.
eID or mobile identity application operating over QR codes and used to log into websites and apps is a commodity here.
This has nothing to do with age verification.
The article links to the source code repository here:
https://github.com/eu-digital-identity-wallet/av-app-android...
That links to the tech spec:
> The solution leverages the existing eIDAS infrastructure, including eIDAS nodes and the trust framework for trusted services, to ensure a high level of security and reliability. By aligning with the technical architecture of the EU Digital Identity Wallet ARF, the solution delivers secure, reusable, and interoperable proofs of age.
> The solution enables users to present their Proof of Age attestation to Relying Parties, primarily for online use cases. The system is optimised for secure and privacy-preserving online presentation, allowing users to prove their eligibility without disclosing unnecessary personal information.
— https://github.com/eu-digital-identity-wallet/av-doc-technic...
Annex A includes details on the ZKP:
> AVI SHOULD support the generation of Zero-Knowledge Proofs using the solution detailed in: "Matteo Frigo and abhi shelat, Anonymous credentials from ECDSA, Cryptology ePrint Archive, Paper 2024/2010, 2024, available at https://eprint.iacr.org/2024/2010".
— https://github.com/eu-digital-identity-wallet/av-doc-technic...
And the linked paper:
> Anonymous digital credentials allow a user to prove possession of an attribute that has been asserted by an identity issuer without revealing any extra information about themselves. For example, a user who has received a digital passport credential can prove their “age is ” without revealing any other attributes such as their name or date of birth.
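For intuition about "prove one attribute without revealing the rest", here's a much simpler salted-hash selective-disclosure sketch (in the spirit of SD-JWT, not the ECDSA-based ZK construction in the paper; the HMAC stands in for a real public-key signature, and every name is made up):

```python
import hashlib
import hmac
import json
import secrets

SECRET_KEY = b"issuer-demo-key"   # hypothetical issuer MAC key

def commit(value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + value.encode()).hexdigest()

# --- issuance: issuer commits to each attribute, then MACs the digests ---
attrs = {"name": "Alice Example", "birth_year": "1990", "is_adult": "true"}
salts = {k: secrets.token_bytes(16) for k in attrs}
digests = {k: commit(v, salts[k]) for k, v in attrs.items()}
signature = hmac.new(SECRET_KEY, json.dumps(digests, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()

# --- presentation: wallet discloses ONLY is_adult (value + its salt) ---
disclosed = ("is_adult", attrs["is_adult"], salts["is_adult"])

# --- verifier: check the signature, then open the single commitment ---
assert hmac.compare_digest(
    signature,
    hmac.new(SECRET_KEY, json.dumps(digests, sort_keys=True).encode(),
             hashlib.sha256).hexdigest())
k, v, salt = disclosed
assert digests[k] == commit(v, salt)   # name and birth_year stay hidden
```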
Without exposing my citizenship, I was able to use my EU-nation-issued ID to confirm only my year of birth.
The website supported this country's national ID login method; in the login challenge it asked the server to provide my age. Before I signed in to confirm (by scanning a QR code with my mobile app), I was shown what data was requested, then I consented to it being confirmed.
Less sensitive things work without my physical ID present; sensitive ones have an additional step where I present my physical ID to the NFC reader and unlock my key (stored on the ID) with a PIN.
All in all it's really very sensible and fast.
Not necessarily the EU ID app we're talking about, but this is how some of the existing implementations work.
That's the theory. How is it in practice?
In my opinion, it just means there is a single government database to hack to get copies of all IDs...
By the way, have the "security experts" checking this app evaluated that part? Or are they just worried about the app's users cheating?
That doesn't make sense; all IDs are already in a single government database. Kind of by definition, in fact: for IDs to be useful they need to be issued by a central authority with associated security and revocability guarantees.
The implementations I've seen rely on an app reading your physical ID and its NFC chip, comparing that with a selfie to ensure it's the same person, and being able to provide anonymous proof you are of age based on that, or proof that you are indeed who you say you are.
Yes, and those databases are decently protected. However, for an "app", someone will build a web 4.0 or 6.0 bridge to access those databases. Maybe even vibe-code it. That's what I'm worried about.
No it isn't.
Literally, that is not in the scope document, and such a solution would not be permitted by the EU as compliant with the legislation.
The app isn't zero-knowledge. A prototype workflow has been designed for a one-way transfer to sites that is zero-knowledge, but it doesn't actually deliver zero knowledge: you have to verify your age with an external provider to get the credential (which is not zero-knowledge), the app has to be secured with either Apple's or Google's attestation services (which are not zero-knowledge), and the site has to be able to check with the original external provider that the credential hasn't been revoked (which is in no way zero-knowledge).
This open-source and transparent ZKP-based approach is extremely surprising to see. Publishing a draft in advance and inviting the public to break it so it can be improved? Are you kidding me? What about the billions in private investment in all the companies that offer centralized ID checks, like Persona, Socure, ID.me and more? That's a growing billion-dollar industry. They all counted on this as a future market opportunity that the EU now seems to have destroyed, at least in the EU.
People fighting against this age id app might be paradoxically useful idiots for billion dollar investments and lobbying efforts. The demos is once again dragged into the trenches to fight a war they don't understand.
- MUST use either Google or Apple account - must not be banned by the provider or sanctioned in the USA
These issues have been flagged to the devs working on the blueprint since the inception, only to be handwaved away.
Getting banned can happen randomly even if you're not doing anything illegal or wrong (it's enough for a robot to decide you're within the blast radius); getting sanctioned can happen if you're a UN lawyer investigating human rights abuses the USA actually likes.
So I do see a problem here.
Or just give parents easy to use parental controls. But that wouldn't grow the surveillance state.
2. "an attacker can simply remove the PinEnc/PinIV values from the shared_prefs file"... Any Android developer knows that to access the shared prefs file you need ROOT access on the phone, which is impossible on the stock OS. Rooting the phone requires advanced knowledge; it means deliberately nuking your phone's security, and it will most likely require factory-resetting the phone in the process. Otherwise a hacker would need a sophisticated exploit, maybe even a 0-day, to access an app that would let him log in to some adult sites. Sounds reasonable (no).
So, the guy found two very superficial problems in an early demo app. He doesn't even look at the important code with the actual implementation of the zero-knowledge-proof cryptography, as it is way above his skill level. He throws malicious allegations mixed with blatant lies, cries for attention to the whole internet, and it gets amplified by news outlets and people who understand security and technology even less than he does. He dares to call it "hacking" in under 2 minutes. That's just disgusting.
He even calls himself "Security Consultant". Lord have mercy on whoever is going to work with him.
The app still hasn’t launched. There’s only so long you can run on hype before you lose the readers you were trying to win over.
At least that's what the manufacturer's AI-generated article says: https://eidas-pro.com/blog/eu-age-verification-app-hack-expl...
I know a fair number of people, especially elderly ones, who want to disable the PIN and biometrics on their phone, because they view them as a pain to deal with.
PINs can also be guessed, or someone might look over your shoulder and steal it that way. Many phones still don't have biometrics, or people don't want to use it.
Our realities might be different, but in my reality a cell phone, which you almost by definition brings with you out in the world, should never be considered a secure device.
I don't think you can guess PINs, as the phone locks after a few failed attempts.
You can’t just leave every dangerous thing out in the open because you “view it as a pain to deal with” storing them safely and then blame everyone else for the situation that follows.
Our realities might be different but in my reality if you put 0 (zero) effort to keep some critical things safely away from your child because it’s too much of a hassle to do it, or they’ll get around that anyway, etc. then you’re failing your children.
What do you have on your phone that's dangerous? Phones aren't safety device, and they shouldn't be turned into one.
If you have anything on your phone that should be off limits to your child but make no effort to ensure that (give them the phone, no passwords, no supervision) because it’s too inconvenient you are failing the child. Can I put it in simpler words?
> What do you have on your phone that's dangerous?
I hope you were asking hypothetically.
For one, the phone itself since staring into a small screen at god knows what because supervising them is a chore is bad for anything you can imagine, from eyes, to posture, to brain development. But also a browser that can access anything on the internet (modern Goatse, Rotten, Ogrish, other wholesome sites like that). My credit card numbers. All my passwords. Hardcore porn. Facebook and TikTok. The app that delivers booze to my doorstep. 50 shades of grey (the book and the movie). X (Twitter), I left the worst for last. If you really think a completely open internet connected phone is perfectly safe for a kid at the very least you’re in the wrong conversation.
It doesn’t matter, the discussion is about age verification for things that a child should be kept away from, whatever that is. If you’re trying to protect the kids from anything, especially legitimate concerns, then you can’t expect some mechanism to magically do all that parenting for you. It can help but not be the parent when the parent thinks it’s too inconvenient to actually do some parenting.
Seeing something scary, disturbing, or sexual on the internet as a child does not result in a maladjusted adult. These laws are about one thing and one thing only - furthering the global surveillance network.
Everything else is a smokescreen. Pretending that a phone or any Internet-connected terminal is something that should be kept secured and away from children is a parenting decision, not a policy one, and any attempt to justify it as a policy decision is toxic nonsense at best and astroturfing for the surveillance state at worst.
Well, thank God this is about a double-blind way to verify your age and not that.
Maybe your argument is that it's not a surveillance state because it is implemented with a zero-knowledge proof. Sure, the age verification is, but that is only part of the system we are talking about. The rest of the system is the demand that every adult play keep-away with their verification, and that every host on the internet (that can be adequately threatened) play too.
The only way for this to be anything else is if every participant can individually decide what should and should not be kept away from children. Such a premise is fundamentally incompatible.
And yes, phones are something parents do "just" share with their kids because nobody is bizarre enough to look at a phone the same way as a gun or a car. It's the YouTube device that can talk to grandma. All you have to do to see proof that it's something people "just" share is to walk into a grocery store and look at parents pushing kids in carts while those kids watch videos. 25 years ago those phones were Game Boys. Nobody is seeing them as a gun. That's the most disconnected from reality take I've seen in my life.
If this is a concept that you can't grasp, then words will never convey it. It's simply a detachment from reality to think people are viewing their phones as a loaded gun and their child as someone hellbent on betraying them and causing massive societal damage.
The phone is the YouTube device. If they get a notification that their kid ordered from Amazon, they'll cancel the order and tell their kid not to do it again. It's seriously that simple. Just go and talk to a parent. They'll think viewing their phones as a WMD is insane.
Okay, so trust them not to access age-gated sites using your credentials then.
That's a solved problem and making an immense vulnerability out of it is silly.
This is a reference app implementation that uses a detailed framework which explicitly has as a core tenet double blindness. The place you prove your age to has no idea about anything other than you being of age, and the thing you use to prove your age has no idea about where you're using that proof.
> "Let’s say I downloaded the app, proved that I am over 18, then my nephew can take my phone, unlock my app and use it to prove he is over 18."
Love the magic step in the middle, "unlock my app". Ask for a passcode or Face ID to "unlock your app". That's a lot of legwork the adult has to do so the child can "trick" the system.
Some people will forever be shocked that if they leave on the table an open booze or medicine bottle, loaded gun, etc. a child can just take them and misuse them. The blame is unmistakably with bottle and gun manufacturers, right?
Put a modicum of effort to protect the sensitive apps or supervise the child when you share your device. They can do a lot of damage even with age appropriate apps. Wanna see how quickly your kid will tell everyone on the net how much money you have (via proxies), where you live, and when you go on vacation? Or tell someone the credit card number they swiped from your pocket if the other person makes it sound like a game?
The second premise you are avoiding is that the government can define, for every child, what constitutes misuse.
You are advocating thought crime. You do not have my support.
My government cannot adequately manage responsibility for my cupboards. It therefore shall not have authority over them.
While I appreciate that zero-knowledge proofs are being considered, how the hell did no one in charge of the app design think of this? It's literally the first question I asked when I first heard about this app. You go to a store to buy alcohol, you're asked to verify your age, but that's not what you're doing. You're simply showing the store that you have a phone, with an app, which was configured by someone over 18 (maybe).
Honestly I don't think it's possible to verify that you're over 18 without also providing something like a photo ID (and even that is error prone).
You can probably do something online, where the website or app does some back-channel communication to a server that verifies a token. Even that is going to have issues. You could add a "list of sites that have verified your age" option where you can revoke the verification, in case your nephew borrows your phone.
They are going to implement this and it will be "good enough", but I don't see this being 100% secure or correct.
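The revocation idea could be as simple as a per-site grant list kept in the wallet, so a borrowed phone's past verifications can be invalidated later. A hypothetical sketch (all names made up, nothing from the actual spec):

```python
class VerificationLedger:
    """Hypothetical wallet-side record of which sites hold an age grant,
    letting the user revoke any of them after the fact."""

    def __init__(self):
        self._grants: dict[str, str] = {}   # site -> opaque grant token

    def grant(self, site: str, token: str) -> None:
        self._grants[site] = token

    def revoke(self, site: str) -> None:
        self._grants.pop(site, None)

    def is_valid(self, site: str, token: str) -> bool:
        return self._grants.get(site) == token

ledger = VerificationLedger()
ledger.grant("example-site", "tok-123")
assert ledger.is_valid("example-site", "tok-123")
ledger.revoke("example-site")   # e.g. after the nephew borrowed the phone
assert not ledger.is_valid("example-site", "tok-123")
```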
The credit card doesn't work as age verification.
I think they "fixed" it. I think it has some effect now that only works if you tilt the phone.
Maybe bundling these under the same system is a mistake and they should be separate systems with different considerations; it would certainly help with arguments about it online ;P
To be honest I just overhead the bouncer talking about them liking the app. Maybe I misheard it.
Why is that even a scenario to discuss?