I wonder if we can ever hope for CA/B to permit name-constrained, short-lived, automatically issued intermediate CAs, authenticated with something like a DNS-01 challenge. I've advocated for this before [1][2], but here's my pitch again:
I want to issue certificates from my own ICA for my homelab and office, to avoid rate limits and to keep the hostnames of private services out of public CT logs. I submit that issuing a 90-day ICA certificate with a name constraint that only lets it issue certificates for one specific domain is no more dangerous than issuing a wildcard certificate, and offers enough utility that it should be considered seriously.
Objection 1: "Just use a wildcard cert." Wildcard certs are not sufficient here: they don't cover nested subdomains (a *.example.com cert won't match a.b.example.com), and, more importantly, they don't let you isolate hosts, since any host holding the wildcard can serve all subdomains. I'd rather not give some rando vibecoded nodejs app the same certificate that I use to handle auth.
Objection 2: "Just install a self-signed CA on all your devices." Installing and managing self-signed CAs on every device is tedious, error-prone, and arguably more dangerous than issuing a 90-day name-constrained ICA, since a leaked unconstrained root can impersonate any site to the devices that trust it.
Objection 3: "Aren't name constraints unsupported by some clients?" On the contrary, they've had wide support for almost a decade, and for the stragglers you just set the critical bit, which makes clients that don't understand the constraint reject the chain outright instead of ignoring it.
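For concreteness, here's roughly what such an ICA cert could look like, sketched with Go's crypto/x509. It's self-signed purely to show the extensions involved (no public CA will sign this today), and example.com and the subject name are placeholders:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "example.com homelab ICA"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(90 * 24 * time.Hour), // 90-day lifespan

            IsCA:                  true,
            BasicConstraintsValid: true,
            MaxPathLen:            0, // may only issue end-entity certs
            MaxPathLenZero:        true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,

            // The name constraint, with the critical bit set so clients
            // that don't understand it reject the chain outright.
            PermittedDNSDomains:         []string{"example.com"},
            PermittedDNSDomainsCritical: true,
        }
        // Self-signed for illustration; a real deployment would pass the
        // parent CA's certificate and key here instead of tmpl and key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }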
I understand this is not a "just ship it lmao" kind of change, but if we want this by 2030, planning for it needs to start happening now.
https://isbgpsafeyet.com/
https://notes.valdikss.org.ru/jabber.ru-mitm/
1. There's no way to enforce what the issued end-entity certificates look like, beyond name constraints. X.509 is an overly flexible format, and a lot of the ecosystem depends on only a subset of it being used, which today is enforced by policy on CAs.
2. Hiding private domains wouldn't be any different from today. CT requirements are enforced by the clients, and presumably still would be. Some CAs can issue certs without CT logging now, but browsers won't accept them.
3. Allowing effectively unlimited issuance would likely overwhelm the CT logs, and then the whole ecosystem collapses.
How many levels of dots do you need?
>I'd rather not give some rando vibecoded nodejs app the same certificate that I use to handle auth.
Use a reverse proxy to handle TLS instead?
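Something like this is all it takes (a minimal sketch using Go's net/http/httputil; the backend address and cert file paths are made up). Only the proxy process ever holds the key, and the rando app just speaks plain HTTP on localhost:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The untrusted app listens on plain HTTP, locally only.
        backend, err := url.Parse("http://127.0.0.1:3000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // TLS terminates here; the key never leaves this process.
        log.Fatal(http.ListenAndServeTLS(":443", "fullchain.pem", "privkey.pem", proxy))
    }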
In all of these cases it would be idiotic to distribute the same wildcard cert to each host. And please don't say "you just shouldn't want to do that".
> This has been challenging for some subscribers who are unaccustomed to receiving any legitimate site traffic from foreign countries.
https://community.letsencrypt.org/t/multi-perspective-valida...
Now that it's a requirement for the whole web PKI, it will be interesting to see the pressure against blanket geoblocking increase. (Or maybe more web hosts will make it easier to use DNS challenge methods to get certificates.)
I find it constantly frustrating that geoblocking is so often discussed as "bad" when, if you aren't running a global service, it's an incredibly powerful tool. Even among global services, the hesitation to intelligently use risk-based authentication strategies remains deeply frustrating: there's no reason an account that has never been accessed outside the United States should be permitted to suddenly log in from Nigeria. Credit card companies figured this stuff out decades ago.
But the answer is no: self-signed certs don't have to follow CA/B requirements.
Does this mean corporations have to reveal all their internal DNS names and sites to the public (or at least to the CA), and let it perform domain validation, if they want certs for their wholly-internal domains that will be valid in normal browsers?
The blog post changes nothing here, because this has already been the case since certificate transparency. The solution is to use wildcard certificates: for instance, if you don't want secretproject.evil.corp to be visible to everyone, you can get a wildcard certificate for *.evil.corp instead.
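You can see for yourself how much CT already exposes by querying crt.sh, a public CT search frontend. A quick sketch in Go (example.com is a placeholder; swap in the domain you care about):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // %25 is a URL-encoded "%", i.e. the crt.sh wildcard %.example.com.
        resp, err := http.Get("https://crt.sh/?q=%25.example.com&output=json")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var entries []struct {
            NameValue string `json:"name_value"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
            panic(err)
        }
        for _, e := range entries {
            fmt.Println(e.NameValue) // every logged name, wildcards included
        }
    }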
Using an ACME DNS challenge would be the simplest option if it weren't such a pain to integrate with most DNS services; but even HTTP challenges don't actually need to expose the server that runs the service, just one that serves /.well-known/acme-challenge/* during the validation process. (For example, this could be the same server with access-control rules that check which interface the request came in on, a completely different server behind split-horizon DNS and/or routing, or a special service running on port 80 that's only used for challenges.)
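A sketch of that last option: a tiny Go service on port 80 that answers only challenge requests and 404s everything else. The webroot path is an assumption; point your ACME client's webroot mode at the same directory:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        const prefix = "/.well-known/acme-challenge/"

        // The ACME client (e.g. certbot in --webroot mode) writes token
        // files into this directory during validation.
        mux.Handle(prefix, http.StripPrefix(prefix,
            http.FileServer(http.Dir("/var/lib/acme-challenges"))))

        // Everything else gets a 404, so nothing internal is exposed.
        mux.HandleFunc("/", http.NotFound)

        log.Fatal(http.ListenAndServe(":80", mux))
    }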
(I was thinking about this a lot recently because I had a service that wanted to do HTTP challenges but I didn’t want to put the whole thing on the Internet. In the end my solution was to assign an IPv6 range which is routed by VPN internally but to a proxy server for public requests: https://search.feep.dev/blog/post/2025-03-18-private-acme)