Acme, a brief history of one of the protocols which has changed the Internet
120 points
12 hours ago
| 10 comments
| blog.brocas.org
| HN
stavros
10 hours ago
[-]
Let's Encrypt did more for privacy than any other organization. Before Let's Encrypt, we'd usually deploy TLS certificates, but as somewhat of an afterthought, leaving HTTP accessible. They were a pain to rotate (very manually) once a year, too.

It's hard to overstate just how much LE changed things. They made TLS the default, so much so that you no longer had to keep unencrypted HTTP around. Kudos.

reply
kragen
9 hours ago
[-]
I think it was Snowden who made TLS the default. Let's Encrypt did great work, but it was having the NSA's spying made common knowledge (including revelations of things that were worse than we expected, like the tapping of traffic between Google's data centers) that created a consensus that unencrypted HTTP had to go, despite the objections of people like Roy Fielding.
reply
inejge
4 hours ago
[-]
> I think it was Snowden who made TLS the default.

Snowden's revelations were a convincing argument, but I would place more weight on Google in its "we are become Evil" phase (realistically, ever since they attained escape velocity to megacorphood and search monopoly status), which strove to amass all that juicy user data and not let the ISPs or whoever else have a peek, retaining exclusivity. A competition-thwarting move with nice side benefits, that is. That's not to say that ISPs would've known to use that data effectively, but somebody might, and why not eliminate a potential threat systemically if possible?

reply
rusk
1 hour ago
[-]
Reading this, it seems to me that ISPs missed a trick by not offering privacy features. These features were already baked into mobile wireless, so it probably wouldn't have been a big deal for them to provide them. That's what happens when you treat your business as a source of rent.
reply
Lammy
8 hours ago
[-]
Ironically, the inability to cache TLS on the edge of my network makes the Internet more surveillable since everything has to pass through the Room 641As of the world and subjects us all to more network behavior analysis. The TLS-everything world leaks so much more metadata. It's more secure but less private.
reply
kragen
8 hours ago
[-]
Yes, that's a real problem. Probably moving to a content-centric networking or named-data networking system would help with it, while also creating difficulties for censorship, and IPFS and Filecoin seem to be deploying such a thing in real life as an overlay network over the internet.
reply
globular-toast
2 hours ago
[-]
You can do it if you're happy to deploy your CA to your network, can't you? Deploying CA certs sucks, though. I wish it was easier.
reply
trvz
4 hours ago
[-]
The article claims HTTP was kept around. My experience was that once you set up HTTPS, you just redirected HTTP, like today.

Snowden may have been a coincidence, too. We knew encryption was better, it was just too much of a hassle for most sites.

reply
tialaramex
2 hours ago
[-]
Redirection doesn't get the job done, without at least a mechanism so that browsers reliably stop visiting the HTTP site (HSTS) and ideally an HTTPS-everywhere feature which, in turn, was not deployable for ordinary people until almost every common site they visit is HTTPS enabled and works properly.

The problem is that there are active bad guys. Redirection means when there are no bad guys or only passive bad guys, the traffic is encrypted, but bad guys just ensure the redirect sends people to their site instead.

Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?
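
(A minimal sketch of the server-side half of the fix, with placeholder certificate file names: once the browser has seen this header over one good HTTPS connection, it stops making the plain-HTTP request at all, so there is no redirect left for the bad guy to rewrite.)

    # Minimal sketch: serve an HSTS header over HTTPS so returning browsers
    # never make the plain-HTTP request that the downgrade above depends on.
    # "fullchain.pem"/"privkey.pem" are placeholder file names.
    import http.server
    import ssl

    class HstsHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Remember "HTTPS only" for a year, for this host and its subdomains.
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello over TLS\n")

    server = http.server.HTTPServer(("", 443), HstsHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("fullchain.pem", "privkey.pem")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    # server.serve_forever()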

Snowden was definitely a coincidence in the sense that this wasn't a pull decision; users didn't demand this as a result of Snowden. However, Snowden is why BCP #188 (RFC 7258), aka "Pervasive Monitoring is an Attack", happened, and certainly BCP #188 helped because it was shorthand for why the arguments against encryption everywhere were bogus. One or another advocate for some group who supposedly "needs" to be able to snoop on you stands up and gives a twenty-minute presentation about why, although they think encryption is great, they do need to, er, not have encryption; the response in one sentence is "BCP 188 says don't do this". Case closed, go away.

There are always people who insist they have a legitimate need to snoop. Right now in Europe they're pulling on people's "protect the children" heartstrings, but we already know - also in Europe - that the very moment this narrative opens a tiny crack, in march the giant corporations who demand they must snoop to ensure they get their money, and the government espionage agencies that need to snoop on everybody to ensure they don't get out of line.

reply
jstanley
1 hour ago
[-]
> Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?

You can do better than this. You can have your mitm proxy follow the SSL redirect itself, but still present plain HTTP to the client. So the client still sees the true "mysite.example" domain in the URL bar (albeit on plain http), and the server has a good SSL session, but the attacker gets to see all of the traffic.

reply
gorgoiler
11 hours ago
[-]
Thank you Let’s Encrypt, you changed the world and made it better.

Sorry to everyone else who was listening in on the wire. Come back with a warrant, I guess?!

reply
simonw
10 hours ago
[-]
Seriously, talk about impact. That one non-profit has almost single-handedly encrypted most of the web, 700 million sites now! Amazing work.
reply
eimrine
51 minutes ago
[-]
Sorry to the basket of my old devices which I would still be using.
reply
gerdesj
10 hours ago
[-]
I remember deploying SSL on NetWare in the late 1990s and being given ... something that the US allowed to be exported as a munition!

I don't recall the exact details but it was basically buggered - short key length. Long enough to challenge an 80386 Beowulf cluster but no match for whatever was humming away in a very well-funded machine room.

You could still play with all the other exciting dials and knobs, SANs and so on but in the end it was pretty worthless.

reply
tiagod
10 hours ago
[-]
A few years ago a client of mine gave me a big-ish APC UPS. I recently got new batteries for it after the outage here in Portugal, and to turn on SSH I had to agree that I was not part of a terrorist organisation nor in a country to which encryption cannot be exported.
reply
GJim
1 hour ago
[-]
> I had to agree that I was not part of a terrorist organisation nor in a country to which encryption cannot be exported.

Don't forget when flying to the USA, ticking the box to say you won't try to overthrow the government.

I'm sure that clause has stopped many an invading army in their tracks.

reply
stavros
10 hours ago
[-]
I'm glad it had that. If you were, say, a member of ISIS and used the UPS, they'd be able to successfully sue you for breach.
reply
kragen
9 hours ago
[-]
Right, 40-bit export-grade SSL.
reply
throw0101a
9 hours ago
[-]
There are several other certificate provisioning protocols:

* https://en.wikipedia.org/wiki/Simple_Certificate_Enrollment_...

reply
tialaramex
2 hours ago
[-]
So, the crucial thing ACME has that the other protocols do not is a hole for the Proof of Control (plus some example ways to fill that hole for your purpose, though others are documented in newer RFCs).

See, SCEP assumes that Bob trusts Alice to make certificates. Alice uses the SCEP server provided by Bob, but she can make any certificate that Bob allows. If she wants to make a certificate claiming she's the US Department of Education, or Hacker News, or Tesco supermarkets, she can do that. For your private intranet that's probably fine: Alice is head of Cyber Security, she issues certificates according to local rules, OK.

But for the public web we have rules about who we should issue certificates to, and these ultimately boil down to this: we want to issue certificates only to the people who actually control the name they're getting a certificate for. Historically this was extremely hardcore (in the mid-1990s when SSL was new), but a race to the bottom ensued and it eventually became basically "Do you have working email for that domain?", and sometimes not even that.

So in parallel with Let's Encrypt, work happened to drag all the trusted certificate issuers to new rules called the "Ten Blessed Methods", which listed (initially ten) ways a CA could be sure that a particular subscriber is actually entitled to a certificate for news.ycombinator.com, and is therefore allowed to issue it.

Several ACME kinds of Proof of Control are actually directly reflected in the Ten Blessed Methods, and gradually the manual options have been deprecated and more stuff moves to ACME.

e.g. "3.2.2.4.19 Agreed‑Upon Change to Website ‑ ACME" is a specific method which is how your cheesiest "Let's Encrypt in a box" type software tends to work, where we prove we control www.some.example by literally just changing a page on www.some.example in a specific way when requested and that's part of the ACME specification so it can be done automatically without a human in the loop.

reply
abhashanand1501
9 hours ago
[-]
Can someone explain why Let's Encrypt certificates have to expire after 90 days? I know there is automation available, but what is the rationale for 90 days?
reply
figmert
3 hours ago
[-]
Others have already given you the answer, but heads up: LE is lowering the certificate lifetime to 45 days [0].

- [0] https://letsencrypt.org/2025/12/02/from-90-to-45

reply
eimrine
50 minutes ago
[-]
The best computer possible on Earth today would need 91 days to crack it, even in the best case.
reply
pastel8739
9 hours ago
[-]
I've heard one rationale: it's short enough to force you to set up the automation. But I don't know if that was actually a consideration or not.
reply
cortesoft
8 hours ago
[-]
You can just read their explanation: https://letsencrypt.org/2015/11/09/why-90-days

Tl;dr is to limit damage from leaked certs and to encourage automation.
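
To make "encourage automation" concrete, here's an illustrative sketch (not Let's Encrypt's own tooling) of the kind of check a renewal cron job runs, re-issuing once fewer than roughly 30 of the 90 days remain:

    # Illustrative sketch: check how many days are left on a site's certificate
    # and trigger renewal when the remaining lifetime drops below a threshold.
    import ssl, socket
    from datetime import datetime, timezone

    def days_remaining(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = datetime.fromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
        return (expires - datetime.now(timezone.utc)).days

    if days_remaining("example.com") < 30:
        print("time to renew")  # e.g. kick off the ACME client here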

reply
ChrisArchitect
8 hours ago
[-]
Related recently:

Decreasing Certificate Lifetimes to 45 Days

https://news.ycombinator.com/item?id=46117126

reply
Lammy
8 hours ago
[-]
It's so annoying. Eventually we will get to the point where every connection has its own unique certificate, and so any compromised CA can be "tapped" for a particular target without anybody else being able to compare certs and figure it out.
reply
kuil009
11 hours ago
[-]
Thank you for your service
reply
wakawaka28
10 hours ago
[-]
Has anyone considered the possibility that a CA such as Let's Encrypt could be compromised or even run entirely by intelligence operatives? Of course, there are many other CAs that could be compromised, and they're making money off customers on top of that. But who knows... What could defend against this possibility? Multiple signatures on a certificate?
reply
neilv
10 hours ago
[-]
Even funnier: imagine one SIGINT team built a centralized "encryption everywhere" effort (before sites got encryption elsewhere), but that asset had to be kept need-to-know secret, so another SIGINT team in the same org, not knowing the org already owned "encryption everywhere", responded to the challenge by building a "DoS defense" service that bypasses the encryption, and started using DoS to drive every site of interest to that service.

(Seriously: I strongly suspect that Let's Encrypt's ISRG are the good guys. But a security mindset should make you question everything, and recognize when you're taking something on faith, or taking a risk, so that it's a conscious decision, and you can re-evaluate it when priorities change.)

reply
wakawaka28
10 hours ago
[-]
Sounds like Cloudflare honestly. There are many issues with CA trust in the modern Internet. The most paranoid among us would do well to remove every trusted CA key from their OS and build a minimal set from scratch, I suppose. Browsers simply make it too easy to overlook CA-related issues, especially if you think a CA is compromised or malicious.
reply
dbt00
10 hours ago
[-]
A signature on a certificate doesn't allow the CA to snoop. They need access to the private key for that, which ACME (and other certificate signing protocols in general) doesn't share with the CA.
reply
throw0101a
9 hours ago
[-]
> They need access to the private key for that, which ACME (and other certificate signing protocols in general) doesn't share with the CA.

Modern TLS doesn't even rely on the secrecy of the private key 'as much' as it used to: nowadays, with (perfect) forward secrecy, it's mainly used to establish trust, after which the two parties generate transient session keys.

* https://en.wikipedia.org/wiki/Forward_secrecy

So even if the private key is compromised sometime in the future, past conversations cannot be decrypted.
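
As a toy illustration (using the third-party 'cryptography' package; real TLS does this inside the handshake rather than like this): each side makes a throwaway key pair, they swap the public halves, derive the same secret, and discard the private halves, so the certificate's long-lived key never protects the traffic itself.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    client_eph = X25519PrivateKey.generate()  # ephemeral, thrown away after the handshake
    server_eph = X25519PrivateKey.generate()

    # Each side combines its own private half with the peer's public half.
    client_shared = client_eph.exchange(server_eph.public_key())
    server_shared = server_eph.exchange(client_eph.public_key())
    assert client_shared == server_shared

    # Derive the actual session key from the shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"toy session key").derive(client_shared)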

reply
tialaramex
27 minutes ago
[-]
In fact, for the publicly trusted CAs, knowing the private key for a certificate you issue to other people is strictly forbidden. That's what happened years back when a "reseller" company named Trustico literally sent the private keys of all their customers to the issuing CA, apparently under the impression this would somehow result in refunding or re-issuing or something. The CA checked, went "These are real, WTF?", and revoked all the now-useless certificates.

It is called a private key for a reason. Don't tell anybody. It's not a secret that you're supposed to share with somebody; it's private, tell nobody. Which in this case means: don't let your "reseller" choose the key. That's now their key; your key should be private, which means you don't tell anybody what it is.

If you're thinking "But wait, if I don't tell anybody, how can that work?" then congratulations - this is tricky mathematics they didn't cover in school; it is called public key cryptography and it was only invented in the 20th century. You don't need to understand how it works, but if you want to know, the easiest kind still used today is the RSA digital signature, so you can watch videos or read a tutorial about that.

If you're just wondering about Let's Encrypt, well, Let's Encrypt doesn't know or want to know anybody else's private keys either. The ACME software you use will, in entirely automated cases, pick random keys, not tell anybody, store them for use by the server software, and obtain a suitable certificate for those keys, all without ever disclosing what the key is.
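
A minimal sketch of that local step (using the third-party 'cryptography' package; www.some.example and the file names are placeholders): generate a key that never leaves the machine and a signing request that contains only the public half, which is all the CA ever sees.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())  # stays on this machine, tell nobody

    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.some.example")]))
           .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.some.example")]),
                          critical=False)
           .sign(key, hashes.SHA256()))

    with open("privkey.pem", "wb") as f:  # private: readable only by the server
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open("request.csr", "wb") as f:  # public: this is what gets sent to the CA
        f.write(csr.public_bytes(serialization.Encoding.PEM))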

reply
kragen
9 hours ago
[-]
Even access to the private key doesn't permit a passive adversary to snoop on traffic that's using a ciphersuite that provides perfect forward secrecy, because the private key is only used to authenticate the session key negotiation protocol, which generates a session key that cannot be computed from the captured session traffic. Most SSL and TLS ciphersuites provide PFS nowadays.

An active adversary engaging in a man-in-the-middle attack on HTTPS can do it with the private key, as you suggest, but they can also do it with a completely separate private key that is signed by any CA the browser trusts. There are firewall vendors that openly do this to every single HTTPS connection through the firewall.

HPKP was a defense against this (https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning) but HPKP caused other, worse problems, and was deprecated in 02017 and later removed. CT logging is another, possibly weaker defense. (It only works for CAs that participate in CT, and it only detects attacks after the fact; it doesn't make them impossible.)

reply
zzo38computer
10 hours ago
[-]
If the CA is somehow able to control the communication (I think usually they can't, but if they are being run by intelligence operatives then maybe they have that capability, although they probably do not use it a lot if so, in order to reduce the chance of being detected), they could substitute a certificate with their own keys (and then communicate with the original server using the original keys in order to obtain the information required). However, this does not apply if both sides verify by an independent method that the key is correct (and if not, it would at least allow the substitution to be detected).

Adding multiple signatures to a certificate would be difficult because the extensions must be a part of the certificate that gets signed. (However, there are ways to do such things as web of trust, and I had thought of ways to do this with X.509, although it does not normally do that. Another way would be an extension that is filled with null bytes when calculating the extra signatures and then filled in with the extra signatures when calculating the normal signature.)

(Other X.509 extensions would also be helpful for various reasons, although the CAs might not allow that, due to various requirements (some of which are unnecessary).)

Another thing that helps is using X.509 client certificates for authentication in addition to server certificates. If you do this, then any MITM will not be able to authenticate (unless at least one side allows them to do so). X.509 client authentication has many other advantages as well.

In addition, it might be helpful to allow you to use those certificates to issue additional certificates (e.g. to subdomains); but, whoever verifies the certificate (usually the client, but it can also be the server in case of a client certificate) would then need to check the entire certificate chain to check the permissions allowed by the certificate.

There is also the possibility that certificate authorities will refuse to issue certificates to you for whatever reasons.

reply
wakawaka28
10 hours ago
[-]
I know that. But presumably, Let's Encrypt could participate in a MITM attack since they can sign another key, so that even a visitor who knows that you use them as a CA can't tell there is a MITM. Checking multiple signatures on the same key could raise the bar for a MITM attack, requiring multiple CAs to participate. I can't be the first person to think of this. I'm not even a web security guy.

It might be interesting for ACME to be updated to support signing the same key with multiple CAs. Three sounds like a good number. You ought to be able to trust CAs enough to believe that there won't be 3 of them conspiring against you, but you never really know.

reply
336611629
10 hours ago
[-]
This problem was solved in the mid-2010s by Certificate Transparency. Every issued certificate that browsers trust must be logged to a public, append-only Certificate Transparency log. As a result, you can scan the logs to see if any certs were issued for your domain with keys that you don't control (and many tools and companies exist to do this).
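
For example, here's a rough sketch of that kind of scan, assuming crt.sh's public JSON endpoint (the dedicated tools and companies do this more robustly):

    # Rough sketch: list certificates the CT logs have seen for a domain and
    # eyeball any issuer or name you don't expect.
    import json, urllib.parse, urllib.request

    def ct_entries(domain):
        url = "https://crt.sh/?output=json&q=" + urllib.parse.quote(domain)
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp)

    for entry in ct_entries("example.com"):
        print(entry.get("issuer_name"), entry.get("name_value"), entry.get("not_before"))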
reply
harrall
5 hours ago
[-]
I wouldn’t consider it “solved” because most organizations and people don’t actually check the log.

And a malicious actor can abuse this fact.

reply
ryandv
10 hours ago
[-]
The signing keys the Certificate Authority uses to assert, through cryptographic signing, that the client (leaf) certificate is authentic differ from the private keys used to secure communication with the host(s) referenced in the X.509 CN/SAN fields.
reply
wakawaka28
10 hours ago
[-]
I know that. At issue is the fact that the signing keys can be used to sign a MITM key. If there were multiple signatures on the original key, it would (or could) be a lot harder to MITM (presumably). Do you trust any CA enough to believe it would never be involved in this kind of scandal? Certainly government CAs and corporate CAs MITM people all the time.

Edit: I'm gonna be rate limited, but let me just say now that Certificate Transparency sounds interesting. I need to look into that more, but it amounts to a 3rd party certificate verification service. Now, we have to figure out how to connect to that service securely lol... Thanks, you've given me something to go read about.

reply
coffee--
10 hours ago
[-]
This is where Certificate Transparency -- and it being mandatory for browser trust -- comes in to save the day.
reply
venturecruelty
9 hours ago
[-]
I mean, it doesn't help that the browser duopoly is making it harder and harder to use self-signed certificates these days. Why, if I were more paranoid, I might come to a similar conclusion.
reply
eduction
5 hours ago
[-]
I’m sorry, who the heck wrote this and why should I trust them? Very poorly written, also.

It’s bizarre. There is a photo at the top, no name, no site title. No about page. Extremely untrustworthy.

reply
ThomasMidgley
4 hours ago
[-]
No! It's not bizarre.

Scroll down to the footer --> click on "Homepage".

Then you will get to his homepage: https://www.brocas.org/

reply
RagnarD
3 hours ago
[-]
It certainly affected Wile E. Coyote.
reply
donpdonp
7 hours ago
[-]
It seems like all this infrastructure could be replaced by a DNS TXT record with a public key that browsers could use to check the cert sent from the web server. A web server would load a self-signed cert (or whatever cert it wanted) and put the cert's public key into a DNS record for that hostname. Every visit to a website would need two lookups: one for the address and one for the key. It puts control back into the hands of the domain owners and eliminates the need for letsencrypt.
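
A very rough sketch of that idea (hypothetical record name and format, using the third-party dnspython and 'cryptography' packages; it's close in spirit to what DANE/TLSA already specifies): look up a pinned hash of the server's public key in DNS and compare it against what the server actually presents.

    import hashlib, ssl
    import dns.resolver
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def pinned_key_matches(host):
        # Hypothetical convention: _pubkey.<host> TXT "sha256:<hex digest of the SPKI>"
        answer = dns.resolver.resolve("_pubkey." + host, "TXT")
        pinned = answer[0].to_text().strip('"').removeprefix("sha256:")

        # Fetch whatever certificate the server presents and hash its public key.
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest() == pinned

    # print(pinned_key_matches("example.com"))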
reply
akovaski
7 hours ago
[-]
I'm not sure what that would solve. You would still need some central entity to sign the DNS TXT record, to ensure that the HTTPS client does not use a tampered DNS TXT record.
reply
tzs
7 hours ago
[-]
If someone can tamper with your DNS TXT records now they can get a certificate for your domain.
reply
franga2000
4 hours ago
[-]
Not tamper with the record directly, but MitM it on the way to a target.
reply
pennomi
5 hours ago
[-]
Ah but then how would nations spy on people by compromising the root certificate?
reply