I don’t love the idea of giving every server I connect to via TLS the ability to fingerprint me by how recently (or not) I’ve fetched MTC treeheads. Even worse if this is in client hello, where anyone on the network path can view it either per connection or for my DoH requests to bootstrap encrypted client hello.
I think Merkle Tree Certificates are a promising option. I'll be participating in the standardization efforts.
Chrome has signalled in multiple venues that they anticipate this to be their preferred (or only) option for post-quantum certificates, so it seems fairly likely we will deploy this in the coming years.
I work for Let's Encrypt, but this is not an official statement or promise to implement anything yet. For that you can subscribe to our newsletter :)
Sure, it's nice and convenient if you're using an evergreen browser which is constantly getting updates from the mothership, but what is the rest supposed to do? How are we supposed to use this in Curl, or in whatever HTTP library your custom application code is using? How about email clients? Heck, is it even possible with embedded devices?
"The internet" is a hell of a lot bigger than "some website in Google Chrome", and we should be careful to not make all those other use cases impossible.
Linux should probably get one too, but I don’t know who will lead that effort.
In the meantime, browsers aren’t willing to wait on OSes to get their act together, and reasonably so. There’s regulation (and users, especially corporate/government) pushing for post-quantum solutions soon, so folks are trying to find solutions that can actually be deployed.
Browsers have always led in this space, all the way back to Netscape introducing SSL in the first place.
Shor's algorithm requires that a quantum Fourier transform is applied to the integer to be factored. The QFT essentially takes quantum data with a representation that mirrors ordinary binary, and maps it to a representation that encodes information in quantum phase (an angle).
The precision in phase needed to perform an accurate QFT scales EXPONENTIALLY with the number of qubits you're trying to transform. You manage to develop a quantum computer capable of factoring my keys? Fine, I'll add 11 bits to my key length, come back when you've developed a computer with 2000x the phase precision.
We know. They are not able yet to emulate an i4004, let alone be a treat to "computing".
We know that current quantum computers are very weak. We do not know what is physically possible, or even feasible. Quantum computers today struggle with decoherence, but we really genuinely don't know for sure if they always will or if there is a way to overcome it. We have not hit a point where we believe we are up against hard physical limitations that can never be overcome.
> They are not able yet to emulate an i4004, let alone be a treat [sic] to "computing".
I am skeptical this is a good benchmark, though. How many logical qubits do you reckon it would take to emulate an i4004? I don't have the answer, but I wouldn't be surprised if you need fewer to do something interesting that a classical computer can't reasonably do.
The really weird thing is that this isn't true. We already have quantum error correction schemes that can take a quantum computer with O(1) error and get O(exp(-k)) error with polylog(k) inaccurate qubits (and we have empirical evidence that these schemes already work to correct the errors of single-digit numbers of qubits). Adding 11 bits to the key adds ~12 logical qubits, or ~a hundred physical qubits, to the size of the QC.
https://www.nature.com/articles/s41586-024-08449-y is a paper that shows surface codes work on real computers.
https://arxiv.org/abs/2505.15917v1 is the most recent costing of factoring various sizes of N, using all the recent papers on optimizing logical qubit counts.
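For a rough sense of scale, here is a back-of-envelope sketch of the error-correction overhead. The formulas are the usual surface-code rules of thumb and the constants are illustrative assumptions, not numbers taken from the papers above:

  # Rule-of-thumb surface-code scaling (illustrative assumptions):
  #   logical error ~ 0.1 * (p/p_th)^((d+1)/2)
  #   physical qubits per logical qubit ~ 2 * d^2
  def distance_needed(p_phys, p_target, p_th=0.01):
      """Smallest odd code distance with estimated logical error below p_target."""
      d = 3
      while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
          d += 2
      return d

  d = distance_needed(p_phys=1e-3, p_target=1e-12)   # assumed error rate and budget
  per_logical = 2 * d * d
  print(f"distance {d}, ~{per_logical} physical qubits per logical qubit")
  # Adding ~12 logical qubits therefore costs ~12 * per_logical extra physical
  # qubits -- a polynomial overhead, not an exponential precision wall.

Under these made-up numbers that works out to a code distance in the low twenties and on the order of a thousand physical qubits per logical qubit, so a dozen extra logical qubits is a modest, polynomial cost.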
Nobody needs them. The Five Eyes already have access to root certs and internet nodes.
What _really_ matters is that you are secure, and terrorists and pedophiles stand no chance. At least in theory. /s
This proposal introduces PQ certificates into the WebPKI, e.g. for certificate authorities.
The problem is that PQ signatures are large. If the certificate chain is small that could be acceptable, but if the chain is large it gets expensive in bandwidth and computation during the TLS handshake: the exchange sends many certificates, each embedding a signature and a large (PQ) public key.
Merkle Tree Certificates ensure that an up-to-date client only needs 1 signature, 1 public key, and 1 Merkle tree witness.
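To put rough numbers on why the chain matters, here is a back-of-envelope comparison. The ML-DSA-44 sizes are the published ones (1312-byte public key, 2420-byte signature); the chain shape and the certs-per-landmark figure are assumptions:

  import math

  MLDSA44_PK, MLDSA44_SIG, HASH = 1312, 2420, 32   # bytes

  # Conventional PQ chain (leaf + one intermediate; root is pre-installed):
  # each cert carries a public key and the signature over it.
  pq_chain = 2 * MLDSA44_PK + 2 * MLDSA44_SIG

  # MTC signatureless path: leaf public key plus a Merkle inclusion proof,
  # no signatures on the wire (the signed tree head came out of band).
  proof = math.ceil(math.log2(4_400_000)) * HASH   # assumed certs per landmark
  mtc = MLDSA44_PK + proof

  print(f"PQ X.509-style chain: ~{pq_chain} bytes")   # ~7.5 kB
  print(f"MTC signatureless:    ~{mtc} bytes")        # ~2 kB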
Looking at an MTC-generated certificate, they've replaced the traditional signing algorithm and signature with a witness.
That means all a client needs is a signed Merkle root, which comes from an expanding Merkle tree signed by the MTCA (Merkle Tree CA) and delivered somehow out of band.
So basically the TLS client receives a certificate using a new signature algorithm that embeds a witness instead of a signature, plus a root (not sure if it's just a hash or a signed hash; I think the former). The client gets the signed roots out of band, where they can be pre-verified, which means validating the certificate is simply a check of the witness against a known root.
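To sketch that last step: the check is just a standard Merkle inclusion-proof walk up to a root the client already trusts. The leaf/node encodings and domain-separation bytes below are placeholders; the draft defines the real ones:

  import hashlib

  def leaf_hash(cert_bytes: bytes) -> bytes:
      return hashlib.sha256(b"\x00" + cert_bytes).digest()    # assumed encoding

  def node_hash(left: bytes, right: bytes) -> bytes:
      return hashlib.sha256(b"\x01" + left + right).digest()  # assumed encoding

  def verify_witness(cert_bytes, index, path, trusted_root) -> bool:
      """path: sibling hashes from leaf to root; trusted_root: fetched and
      signature-checked out of band, so no signature check happens here."""
      h = leaf_hash(cert_bytes)
      for sibling in path:
          h = node_hash(h, sibling) if index % 2 == 0 else node_hash(sibling, h)
          index //= 2
      return h == trusted_root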
Edit: My question: is this really a concern that needs to be addressed? PQ for TLS key exchange addresses the looming threat of HNDL (Harvest Now, Decrypt Later). I don't see why we need to make the WebPKI use PQ signatures, at least for a while yet.
PKI for everything else can go at its own pace.
If, on the first connection, the client doesn't know which root the server's certificate will chain to, it doesn't tell the server what treeheads it has, so it gets a full certificate and then caches that for later connections... that could work, though it's a slight metadata leak.
Alternatively the client could send the treeheads for all the roots it trusts. That's going to bloat the ClientHello, and it's going to leak a bit of metadata if the client does anything other than claim to trust all the roots blessed by the CA/Browser Forum or the Chrome Root Program.
The post didn't discuss it but naively this feels like it becomes a privacy issue?
You may want to pull landmarks from CAs outside of The Approved Set™ for inclusion in what your machine trusts, and this means you'll need data from somewhere else periodically. All the usual privacy concerns over how you get what from where apply; if you're doing a web transaction, a third party may be able to see your DNS lookup, your connection to port 443, and the amount of traffic you exchange, but they shouldn't be able to see what you asked for or what you got. Your OS or browser can snitch on you as normal, though.
I don't personally see any new privacy threats, but I may not have considered all angles.
I could see the list of client-supplied available roots being added to client fingerprinting code for passive monitoring (e.g. JA4) if it’s in the client hello, or for the benefit of just the server if it’s encrypted in transit.
There's "full certificates" defined in the draft which include signatures for clients who don't have landmarks pre-distributed, too.
For example, your IP + screen resolution + the treeheads in your TLS handshake might be enough of a fingerprint to disambiguate your specific device among the general population.
My point is that there really hasn't been a point where domain level traffic information has been truly anonymous. Whether this is an oversight or state actors have made the outcome a reality, I have no idea. Probably a bit of both.
If I understand this correctly, each CA publishes a signed list of landmarks at some cadence (weekly).
For the certs, you get the landmark (a 256-bit hash) and the hashes along the Merkle path to the leaf cert's hash. For a landmark that contains N certs, you need to include log2(N) * hash_len bytes and perform log2(N) hash computations.
For an MTC signature that uses a 256-bit hash and N = 1 million, that's about 20*32 = 640 bytes.
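A quick check of that arithmetic (ceil(log2 N) sibling hashes, 32 bytes each):

  import math

  def proof_bytes(n_certs, hash_len=32):
      return math.ceil(math.log2(n_certs)) * hash_len

  print(proof_bytes(1_000_000))   # 20 * 32 = 640 bytes
  print(proof_bytes(4_400_000))   # 23 * 32 = 736 bytes, the draft's figure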
Is this the gist of it?
I'm really curious about the math behind deciding the optimal landmark size and publishing cadence
> If a new landmark is allocated every hour, signatureless certificate subtrees will span around 4,400,000 certificates, leading to 23 hashes in the inclusion proof, giving an inclusion proof size of 736 bytes, with no signatures.
https://davidben.github.io/merkle-tree-certs/draft-davidben-...
That's assuming 4.4 million certs per landmark, a bit bigger than your estimate.
There's also a "full certificate" which includes signatures, for clients who don't have up-to-date landmarks. Those are big still, but if it's just for the occasional "curl" command, that's not the end of the world for many clients.
  MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, but these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast, and the exceptional case is slow, but at least it’ll work.
If the landmarks are generated infrequently (say, once every couple of days), then many clients will need to take the slow path.
If the landmarks are generated frequently (once per hour?), then clients will be continuously downloading them.
But you still need a public key for TLS? Well, just put it in DNS!
And assuming your DNS responses are validated by DNSSEC, it would be even more secure too. You'd be closing a whole lot of attack vectors: from IP hijacks and server-side AitM to CA compromises. In fact, you would no longer need to use CAs in the first place. The chain of trust goes directly from your registrar to your webserver with no third party in between anymore. (And if your registrar or webserver is hacked, you'd have bigger problems...)
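For what it's worth, this is more or less what DANE/TLSA (RFC 6698) already does: publish a hash of the server's key in DNS and lean on DNSSEC for integrity. A small sketch of producing the "3 1 1" record data, assuming the Python cryptography package and a local PEM cert (the filename is made up):

  import hashlib
  from cryptography import x509
  from cryptography.hazmat.primitives import serialization

  cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
  spki = cert.public_key().public_bytes(
      serialization.Encoding.DER,
      serialization.PublicFormat.SubjectPublicKeyInfo,
  )
  # Usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256)
  print("_443._tcp.example.com. IN TLSA 3 1 1", hashlib.sha256(spki).hexdigest())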
(I know its controversial what a blockchain even is, but this seems sufficiently close to how cryptocurrencies work to count)
Don’t we already just use the certificates to negotiate the final encryption keys? Wouldn’t a quantum computer still crack the agreed-upon keys without the exchange details?
But that's largely already true:
The key exchange is now typically done with X25519MLKEM768, a hybrid of the traditional x25519 and ML-KEM-768, which is post-quantum secure.
The exchanged keys are typically AES-128, AES-256, or ChaCha20 keys. These are likely to be much more secure against quantum computers as well: Grover's algorithm at best halves the effective key length, so while they may be weakened, we likely have plenty of security margin left.
Changing the key exchange or transport encryption protocols however is much, much easier, as it's negotiated and we can add new options right away.
Certificates are the trickiest piece to change and upgrade, so even though Q-day is likely years away still, we need to start working on this now.
Upgrading the key exchange has already happened because of the risk of capture-now, decrypt-later attacks, where you sniff traffic now and break it in the future.
How "typical" are you suggesting this is? Honestly, it's the first I'd heard of this being done at all in the wild (not that I'm an expert). Peeking around a smattering of random websites in my browser, I'm not seeing it mentioned at all.
So that’s a good point. We can quickly add new encryption protocols after the point things are negotiated in the connection, but adding something new or entirely replacing the certificate system or even just the underlying protocols is a big deal.
No. Since the key agreement is forward-secret, the certificate private key isn't involved at all in the secrecy of the session keys; the private key only proves the authenticity of the connection / the session keys.
Certificates are commonly used to negotiate a symmetric key, which I presumed would be vulnerable to quantum computing as well, but apparently AES has some more buffer, and it's also easier to add new negotiated protocols.
I could see government agencies with a big budget having access to it, but I don't see those computers becoming mainstream.
Although I could see China having access to it, which is a problem.
Chrome and Cloudflare are doing an MTC experiment this year. We'll work on standardizing over the next year. Let's Encrypt may start adding support the year after that. Downstream software might start deploying support for MTCs the year after that. People using LTS Linux distros might not upgrade software for another 5 years after that. People run out-of-date client devices for another 5 years too.
So even in that timeline, which is about as fast as any internet-scale migration goes, it may be 10-15 years from today for MTC support to be fully widespread.
But I still think it’s a good idea to start switching over to post-quantum encryption, because the lead time is so high. It could easily take a full 10 years to fully implement the transition and we don’t want to be scrambling to start after Q-day.
Moving from SHA-1 to SHA-2 took ~20 years - and that's the "happy path", because SHA-2 is a drop-in replacement.
The post-quantum transition is more complex: keys and signatures are larger; KEM is a cryptographic primitive with a different interface; stateful signature algorithms require special treatment for state handling. It can easily take more than 20 years.
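To illustrate the "different interface" point: (EC)DH is symmetric (each side combines its private key with the other's public key), while a KEM is keygen/encapsulate/decapsulate, so one side sends back a ciphertext rather than a public key. Here is the KEM shape sketched by wrapping X25519 as a KEM purely for illustration; ML-KEM has the same interface but different internals (assumes the Python cryptography package):

  from cryptography.hazmat.primitives.asymmetric.x25519 import (
      X25519PrivateKey, X25519PublicKey,
  )

  def kem_keygen():
      sk = X25519PrivateKey.generate()
      return sk.public_key(), sk

  def kem_encaps(pk: X25519PublicKey):
      eph = X25519PrivateKey.generate()
      return eph.public_key(), eph.exchange(pk)   # (ciphertext, shared secret)

  def kem_decaps(sk: X25519PrivateKey, ct: X25519PublicKey):
      return sk.exchange(ct)                      # receiver recovers the secret

  pk, sk = kem_keygen()
  ct, ss = kem_encaps(pk)
  assert kem_decaps(sk, ct) == ss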
I can see USA having access to it, which is also a problem. Or any other government.
> Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a certificate.
Are you kidding me? You don't know your audience on an article at the nexus of certificate transparency and post-quantum cryptography well enough to understand that this introduction to PKI isn't required?
Know your audience. Turning over your voice to an AI doesn't do that for you. It will waste everyone's time on thousands of words of vapid nonsense.
So, it's natural that some readers would find parts over-explanatory, but the hope was that they could read past those bits and the less educated reader would come away having learnt something new.
It's a privacy-violating proxy, after all.