Of course, EV certs aren't as attractive as they once were, since browser UI changes no longer call them out. But if we are going to have "extra-verified" certs, it might make sense to mandate a higher level of DNS security for them
https://educatedguesswork.org/posts/dns-security-dnssec/ https://educatedguesswork.org/posts/dns-security-dane/
From my perspective, the challenge with DNSSEC is that it just doesn't have a very good cost/benefit ratio. Once the WebPKI exists, "critical path" use of DNSSEC only offers modest value. Now, obviously, this article is about requiring CAs to check DNSSEC, which is out of the critical path and of some value, but it's not clear to me it's of enough value to get people to actually roll out DNSSEC.
WebPKI works without DNSSEC, whereas DANE (a more secure WebPKI replacement) depends on a robust DNSSEC deployment.
- Web PKI is inherently insecure and can't be fixed on its own. The root problem is that the CAs we "trust" can issue certificates without technical controls. The best we can do is ask them to be nice and force them to provide a degree of (certificate) transparency to enable monitoring. This is still being worked on. Further, certificates are issued without strong owner authentication, which can be subverted (and is subverted). [3]
- The (very, very) big advantage of Web PKI is that it operates online and supports handshake negotiation. As a result, iteration can happen quickly if people are motivated. A few large players can get together and effect a big change (e.g., X25519MLKEM768). DNSSEC was designed for offline operation and lacks negotiation, which means that everyone has to agree before changes can happen. Example: Kipp Hickman created SSL and Web PKI in 3 months, by himself [1]. DNSSEC took years and years.
- DNSSEC could have been fixed, but Web PKI was "good enough" and the remaining problem wasn't sufficiently critical.
- A few big corporations control this space, and they chose Web PKI.
- A humongous amount of resources has been spent on iterating and improving Web PKI in the last 30 years. So many people configuring certificates, breaking stuff, certificates expiring... we've wasted so much of our collective lives. There is a parallel universe in which encryption keys sit in DNS and, in it, no one has to care about certificate rotation.
- DNSSEC can't ever work end-to-end because of DNS ossification. End-user software (e.g., browsers) can't reliably obtain any new DNS resource records, be it DANE or SVCB/HTTPS.
- The one remaining realistic use for DNSSEC is to bootstrap Web PKI and, possibly, secure server-to-server communication. This is happening, now that CAs are required to validate DNSSEC. This change finally makes it possible to configure strong cryptographic validation before certificate issuance. [2]
[1] https://www.feistyduck.com/newsletter/issue_131_the_legend_o...
[2] https://www.feistyduck.com/newsletter/issue_126_internet_pki...
[3] https://redsift.com/guides/a-guide-to-high-assurance-certifi...
I've worked with small businesses and even small technical teams as a DNS consultant specifically.
For them, DNSSEC has only ever been a source of issues and confusion, never tied to any actual requirement; it's adopted as a form of checkbox implementation.
I do understand it's one of those technologies that were developed to meet legitimate requirements, but it flows downstream and people adopt it without really understanding the simpler alternatives or what exactly it's meant to do.
That said, if I had ever gotten a bigger client like a TLD registrar or a downstream registrar, then sure, I would have had to work with it. In practice, I've only ever had to learn how to uninstall it.
You have any cryptographers that are satisfied with unauthenticated name server checks?
You have a point: 1k keys aren't great, and of course mainstream cryptographers will advocate for stronger. That doesn't change the fact that they're still acceptable within the existing security model, or that better algorithms are already available. The cryptographic strength of DNSSEC isn't a limiting factor that fatally dooms the whole project. We have to upgrade the crypto used in large-scale infrastructure all the time!
And yes, uptake of better crypto is poor, but I find chicken-and-egg arguments disingenuous when they come from someone who zealously advocates making it worse. Furthermore, your alternative is no signing of DNS records at all. Find me a cryptographer who thinks no PKI is a better alternative. I know DJB griped about DNSSEC when proposing DNSCurve, which protects the privacy of the payload but not its integrity.
I made a mistake once and signed with the wrong keys, which then broke DANE. It's good to validate your DNSSEC (and DANE, CAA, etc.) setup through external monitoring.
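A minimal external check along these lines can be scripted. This is a sketch, not a full monitoring setup; the `DIG` variable is made overridable purely so the logic can be exercised offline, and `example.com` and the resolver `1.1.1.1` are placeholder choices:

```shell
#!/bin/sh
# Minimal external monitoring sketch: query a validating public resolver
# and require the AD (Authenticated Data) flag in the reply. Assumes
# BIND's dig is installed; DIG is overridable for testing, and
# example.com stands in for your own zone.
DIG="${DIG:-dig}"

check_ad() {
  # dig prints a ";; flags: qr rd ra ad;" header line when the resolver
  # successfully validated the DNSSEC chain for the answer.
  if $DIG @1.1.1.1 "$1" A +dnssec | grep -qE 'flags:[^;]* ad[ ;]'; then
    echo "ok: $1 validates"
  else
    echo "ALERT: $1 did not validate" >&2
    return 1
  fi
}

# Usage: check_ad example.com
```

Run it from a network outside your own (the point of external monitoring) so a broken chain shows up the way resolvers actually see it.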
If you own and host your own domain, it's probably very easy to have your DNS provider enable DNSSEC for you, maybe just a button click. They'd sure like you to do that, because DNSSEC is itself quite complicated, and once you press that button it's much less likely that you're going to leave your provider. DNSSEC mistakes take your entire domain off the Internet, as if it had never existed.
There's a research project, started at KU Leuven, that attempts an unbiased "top N" list of most popular domains; it's called the Tranco List. For the last year or so, I've monitored the top 1000 domains on the Tranco list to see which have DNSSEC enabled. You can see that here:
There are two tl;dr's to this:
First, DNSSEC penetration in the top 1000 is in the single digits, percentage-wise (dropping sharply, down to 2%, as you scope down to the top 100).
Second, in a year of monitoring and recording every change in DNSSEC state on every domain in this list, I've seen just three Tranco Top 1000 domains change their DNSSEC state, and one of those changes was Canva disabling DNSSEC. (I think, as of a few weeks ago, they've re-enabled it again). Think about that: 1000 very popular domains, and just 0.3% of them thought even a second about DNSSEC.
DNSSEC is moribund.
(I did a lot of the work of shipping that product in a past life. We had to fight the protocol and sometimes the implementers to beat it into something deployable. I am proud of that work from a technical point of view, but I agree DNSSEC adds little systemic value and haven’t thought about it since moving on from that project almost 10 years ago. It doesn’t look like DNSSEC itself has changed since, either.)
Then a few government sites, which have mandated it. The first hit after those is around #150.
| Last updated | 2026-03-16 05:04 -0700 |
|:--------------------------------------|:-----------------------|
| Total number of DS Records | 25,099,952 |
| Validatable DNSKEY record sets | 24,559,043 |
| Total DANE protected SMTP | 4,165,253 |
There's a graph of the growth of signed zones over the past 7 years [2]. I get that DNSSEC doesn't make a lot of sense for large organizations with complex networks that have been around for decades.
But if you're self-hosting a website for your personal use or for a small-ish organization and your registrar supports it (most do), there's no reason not to enable DNSSEC. I did it recently using Cloudflare and it was a single checkbox in the settings.
More than 90% of ICANN's ~1,400 top-level domains are estimated to be DNSSEC-enabled, so that shouldn't be a barrier.
Since most of us don't have a personal IT department at our disposal, for the small guy, DNSSEC prevents cache poisoning attacks, man-in-the-middle attacks and DNS spoofing. There are other ways to mitigate these attacks of course, but I've found DNSSEC to be pretty straightforward.
[1]: https://stats.dnssec-tools.org/#/top=tlds
[2]: https://stats.dnssec-tools.org/#/top=dnssec?top=dane&trend_t...
* The same reasons not to deploy DNSSEC that face large organizations apply to you: any mistake managing your DNSSEC configuration will take your domain off the Internet (in fact, you'll probably have a harder time recovering than large orgs, who can get Google and Cloudflare on the phone).
* Meanwhile, you get none of the theoretical upside, which in 2026 comes down to making it harder for an on-path attacker to MITM other readers of your site by tricking a CA into misissuing a DCV certificate for you --- an attack that has already gotten significantly harder over the last year due to multi-perspective validation. The reason you don't get this upside is that nobody is going to run this attack on you.
Even if the costs are lower for small orgs (I don't buy it but am willing to stipulate), the upside is practically nonexistent.
"Cache poisoning attacks, man-in-the-middle attacks and DNS spoofing" are all basically the same attack, for what it's worth. DNSSEC attempts to address just a subset of these; most especially MITM attacks, for which there are a huge variety of vectors, only one of which is contemplated by DNSSEC.
Finally, I have to tediously remind you: when you're counting signed domains, it's important to keep in mind that not all zones are equally meaningful. Especially in Europe, plenty of one-off unused domains are signed, because registrars enable it automatically. The figure of merit is how many important zones are signed. Use whichever metric you like, and run it through a bash loop around `dig ds +short`. You'll find it's a low single-digit percentage.
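That survey loop might look like the following sketch; `domains.txt`, the overridable `DIG` variable, and the counting helper are illustrative assumptions, not the original tooling:

```shell
#!/bin/sh
# Sketch of the "bash loop around dig ds +short" survey: a zone counts
# as signed when a DS query returns a non-empty answer, i.e. the parent
# zone publishes a delegation-signer record for it. DIG is overridable
# so the loop can be exercised without network access.
DIG="${DIG:-dig}"

count_signed() {
  signed=0 total=0
  while read -r domain; do
    [ -n "$domain" ] || continue
    total=$((total + 1))
    if [ -n "$($DIG ds +short "$domain")" ]; then
      signed=$((signed + 1))
    fi
  done
  echo "$signed/$total"
}

# Usage: count_signed < domains.txt
```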
Set your TTL to five minutes and/or hand over DNS management to a service provider.
> Meanwhile, you get none of the theoretical upside, which in 2026 comes down to making it harder for an on-path attacker to MITM other readers of your site by tricking a CA into misissuing a DCV certificate for you --- an attack that has already gotten significantly harder over the last year due to multi-perspective validation. The reason you don't get this upside is that nobody is going to run this attack on you.
Didn't save Cloudflare from a bad TLS certificate being issued. I still think that reducing the number of bad actors from 300 to the root servers and your registrar is a meaningful reduction in attack surface.
> DNSSEC attempts to address just a subset of these; most especially MITM attacks, for which there are a huge variety of vectors, only one of which is contemplated by DNSSEC.
How would authenticating DNS records cryptographically not address cache poisoning, MITM, and DNS spoofing in relation to DNS lookups? Also, DNSSEC doesn't have to solve all problems to make it worth doing.
> Finally, I have to tediously remind you: when you're counting signed domains, it's important to keep in mind that not all zones are equally meaningful. Especially in Europe, plenty of one-off unused domains are signed, because registrars enable it automatically. The figure of merit is how many important zones are signed. Use whichever metric you like, and run it through a bash loop around `dig ds +short`. You'll find it's a low single-digit percentage.
Yet you complain about DNSSEC being too hard to deploy and not getting enough deployment. Wouldn't it be nice if they could leverage that automatic signing to also generate TLS, SSH, and other certificates?
It seems to me like it actually solves a problem, what is the solution to "I want/need to be able to trust the DNS answer" without DNSSEC?
Resolvers have put in the effort to use most of the range of source ports and all of the range of request IDs, as well as mixed-case queries (DNS 0x20), so predicting queries is difficult and blind spoofing requires an unreasonable number of packets.
Additionally, commercial DNS services tend to be well-connected anycast. This means most queries can be served with a very low round-trip time, reducing the spoofing window. There's also less opportunity to observe requests, as they traverse fewer networks and less distance.
Generally, traffic has moved to certificate authenticated protocols. CAs are required to verify domain control from multiple locations, so an attacker asserting domain control would need to do so for the victim as well as multiple other locations in order to get a certificate issued.
Further, if we assume you plan to assert domain control by taking over or MITMing the IP of a DNS server, it seems likely you could do the same for the IP of an application server. DNSSEC doesn't help very much in that case. (DNSSEC with DANE could help, but to a first approximation nothing supports that, and there doesn't appear to be any movement towards it.)
The bigger thing here is DoH, which has very real penetration, and works for zones that don't do anything to opt-in. That's what a good design looks like: it just works without uninvolved people having to do extra stuff.
I think DNSSEC supporters, what few of them are left, are really deep into cope about what transport security is doing to the rationale for DNSSEC deployment. There's nothing about DoH that makes it complicated to speak it to an authority server. The only reason I can see that we're not going to get that is that multi-perspective validation kills the value proposition of even doing that much.
There's a problem with HTTPS, though. HTTPS URLs are tied to WebPKI certificate validation. That means you need WebPKI certificates, which in turn need DNS. Chicken, meet egg.
Maybe there could be a new URL scheme that doesn’t need WebPKI. It could be spelled like:
https_explicit:[key material]//host.name/path
or maybe something slightly crazy and even somewhat backwards compatible if the CA/browser people wouldn’t blow a fuse: https://1.2.3.4.ipv4.[key material].explicit_key.net
explicit_key.net would be some appropriate reserved domain, and some neutral party (ICANN?) could actually register it, expose the appropriate A records and, using a trusted and name-constrained intermediate CA, issue actual certificates that allow existing browsers to validate the key material in the domain name.

I agree with them.
The ACME standard recommends ACME-based CAs use DNSSEC for validation, section 11.2 [1]:
An ACME-based CA will often need to make DNS queries, e.g., to
validate control of DNS names. Because the security of such
validations ultimately depends on the authenticity of DNS data, every
possible precaution should be taken to secure DNS queries done by the
CA. Therefore, it is RECOMMENDED that ACME-based CAs make all DNS
queries via DNSSEC-validating stub or recursive resolvers. This
provides additional protection to domains that choose to make use of
DNSSEC.
An ACME-based CA must only use a resolver if it trusts the resolver
and every component of the network route by which it is accessed.
Therefore, it is RECOMMENDED that ACME-based CAs operate their own
DNSSEC-validating resolvers within their trusted network and use
these resolvers both for CAA record lookups and all record lookups in
furtherance of a challenge scheme (A, AAAA, TXT, etc.).
[1]: https://datatracker.ietf.org/doc/html/rfc8555/#section-11.2

(You edited your comment to include more detail about when LE started validating DNSSEC; all I know is that it's been many years that they've been doing it.)
If we're going to defer to industry, does only the opinion of website operators matter, or do browsers and CAs matter too? Browsers and CAs tend to be pretty important and staff big security teams too.
So do we wait for all the stragglers? Wait for the top 500 or top 2500 to make it mandatory? Who takes financial responsibility for those that fell through the cracks?
DNSSEC zone signing lets one resolve records without having to go directly through trusted (i.e., centralizing) nameservers. (If you run your own recursive resolver, this just changes the set of trusted servers to the zones' own servers.)
I've made this argument in the context of your poo-pooing DNSSEC before, and I don't expect you to be receptive to it this time. Rather I just really wish I could get around to writing code to demonstrate what I mean.
That is obviously not a claim you can make of the WebPKI. Your problem here is that the WebPKI is a very large superset of the security capabilities of DNSSEC. Unlike with DNSSEC, people --- millions of them --- actually rely on it.
It isn't that easy on AWS.
It also generally is not that easy if your domain registrar is not the same as your dns host, because it involves both parties. And some registrers don't have APIs for automatic certificate rotation, so you have to manually rotate the certs periodically.
The registrar only has public material.
The master is BIND 9, and any semi-trusted provider can be used as a slave/redundancy/CDN, getting zone transfers including the RRSIGs.
Well in cases where I have had to deal with DNSSEC, I've had to rotate the KSK annually for compliance reasons.
You’ve clearly put a lot of effort into limiting adoption. I’d really value your thoughts on this response to your anti-DNSSEC arguments:
And NTP, which is basically a dependency for DNSSEC too, due to signature validity intervals.
From https://news.ycombinator.com/item?id=47270665 :
> By assigning Decentralized Identifiers (like did:tdw or SSH-key DIDs) to individual time servers and managing their state with Key Event Receipt Infrastructure (KERI), we can completely bypass the TLS chicken-and-egg problem where a client needs the correct time to validate a server's certificate.
> To future-proof such a protocol, we can replace heavy certificate chains with stateless hash-based signatures (SPHINCS+, XMSS^MT) paired with lightweight zkSNARKs. If a node is compromised, its identity can be instantly revoked and globally broadcast via Merkle Tree Certificates and DID micro-ledgers, entirely removing DNS from the security dependency chain.
I think the system described there could replace NTP/NTS, DNS, DNSSEC, and maybe CA PKI revocation; PQ with Merkle Tree certificates.
At least you're consistent.
I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC. There's basically zero upside to it for them. I think there's basically never a good argument to enable it, but at least large, heavily targeted sites have a colorable argument.
I've struggled to think of an especially unexamined example, because such beliefs tend to sit outside conscious recall. The best I can do is probably that my favourite comic book character is Miracleman's daughter, Winter Moran. That's a belief I've held consistently for decades without spending much time thinking about it. But it's not entirely satisfactory, and there is probably some introduced nuance, particularly since, while writing an essay during lockdown, I re-examined the contrast between what Winter says about the humans to her father and what her step-sister Mist later says about them to her (human) mother.
This seems really odd, probably fundamentally incorrect. "Believing something over time means it is less likely that you are engaging in good faith"? Totally insane take.
"More secure" begs the question "against what?", which the blog post doesn't seem to want to go into. Maybe it's secure from hidden tigers.
My favourite DNSSEC "lolwut" is about how people argue that it's something "NIST recommends", whilst at the same time the most recent major DNSSEC outage was......... time.nist.gov! (https://ianix.com/pub/dnssec-outages.html)
Please don't stealth-edit your posts after I respond to them. If you need to edit, just leave a little note in your comment that you edited it.
Yes, it did hit HN, and you just said, "I stand by what I wrote," and then complained about buggy implementations and downtime connected to DNSSEC. As if that isn't true for all technologies, let alone /insecure/ DNS. DNS is connected to a lot of downtime because it undergirds the whole internet. Making the distributed database that delegates domain authority cryptographically secure makes everything above it more secure too.
I rebutted your arguments point-by-point. You don't update your blog post to reflect those arguments nor recent developments, like larger key sizes.
I write things people disagree with all the time. I can't recall ever having been mad that people didn't cite me for things we disagree about. Should I have expected all the people who hated coding agents to update their articles when I wrote "My AI Skeptic Friends Are All Nuts"? I didn't realize I was supposed to be complaining about that.
I'm frustrated that you seem to blow me off and insult me when I try to engage in good faith discussion, but I'm not angry at you. I just ran into this post while procrastinating at work and here we are, in the same loop.
I think we are both trying to make the internet a safer place. It's sad we can't seem to have a productive conversation on the matter.
Why? I can see this argument for large domains that might be using things like anycast and/or geography-specific replies. But for smaller domains?
> There's basically zero upside to it for them.
It can reduce susceptibility to automated wormable attacks. Or to BGP-mediated attacks.
So if the router between the web server and the Internet is compromised, it can just get trusted certs for all the HTTPS traffic going through it, enabling transparent MITM to inject its payload.
But there is no money in making that a solution, and a TON of money in selling you BS HTTPS certs. There are a lot of people spreading FUD about it. It's a shame.
Ah yes, because Let's Encrypt is rolling in the $$$$.
The sad thing is that Mozilla and others have to spend millions bankrolling Let's Encrypt instead of using the free, high assurance PKI that is native to the internet!
It's of course possible that the total costs are lower than those of the WebPKI -- I haven't run the numbers -- but I don't think free is the right word.
LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/
Anyway, I think there's a reasonable case that it would be better to have the costs distributed the way DNSSEC does, but my point is just that it's not free. Rather, you're moving the costs around. Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
I mean, Mozilla got the ball rolling and it's still run on donations (even if they come from private actors).
> Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.
Among others:
Let’s Encrypt was created through the merging of two simultaneous
efforts to build a fully automated certificate authority. In 2012, a
group led by Alex Halderman at the University of Michigan and
Peter Eckersley at EFF was developing a protocol for automatically
issuing and renewing certificates. Simultaneously, a team at Mozilla
led by Josh Aas and Eric Rescorla was working on creating a free
and automated certificate authority. The groups learned of each
other’s efforts and joined forces in May 2013.
...
Initially, ISRG was funded almost entirely through large
donations from technology companies. In late 2014, it secured financial
commitments from Akamai, Cisco, EFF, and Mozilla, allowing the
organization to purchase equipment, secure hosting contracts, and
pay initial staff. Today, ISRG has more diverse funding sources; in
2018 it received 83% of its funding from corporate sponsors, 14%
from grants and major gifts, and 3% from individual giving.
Except for the period before the launch when Mozilla and EFF
were paying people's salaries, including mine, it was
never really the case that Let's Encrypt was primarily funded
by non-profits.

> and it's still run on donations (even if they come from private actors).
I agree, but I think it's important to be precise about what's happening here, and like I said, it's never been the case that LE was really funded by non-profits.
> > Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
>
> The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.
I think this is a category error. The main operational cost for DNSSEC is not really the root, which is comparatively low load, but rather the distributed operations for every registry/registrar, and the servers to register keys, sign domains, etc.
One way to think about this is that running a TLD with DNSSEC is conceptually similar to operating a CA in that you have to take in everyone's keys and sign them. It's true you don't need to validate their domains, but that's not the expensive part. Operating this machinery isn't free, especially when you have to handle exceptional cases like people who screw up their domains and need manual help to recover. Now, it's possible that it's a marginal incremental cost, but I doubt it's zero. Upthread, you suggested that people are already paying for this in their domain registrations, but that just means that the TLD operator is going to have to absorb the incremental cost.
However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest, and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business, and ICANN is a quasi-commercial entity that should never have been a for-profit business.
I certainly think it is fair to ask them to pay for all this.
I actually agree with you that in an abstract architectural sense a DNSSEC-style solution for authenticating the keys for endpoints is better. The problem from my perspective is that, for a number of reasons that we've explored elsewhere in this thread, there is no practical way to get there from here.
To put this more sharply: in the world as it presently is with ubiquitous WebPKI deployment, the marginal benefit of DNSSEC strikes me as quite modest, even if it were universally deployed. Worse yet, the incremental benefit to any specific actor of deploying DNSSEC is even lower, which makes it very hard to get to universal deployment.
> However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest, and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business, and ICANN is a quasi-commercial entity that should never have been a for-profit business.
>
> I certainly think it is fair to ask them to pay for all this.
I also do not feel sorry for registrars. However, it's also not clear to me that if somehow they were forced to incur incremental cost X per domain name, they would not find a way to pass it onto us. With that said, I also don't think that's really why DNSSEC and DANE are stalled out; rather I think that it's the deployment incentives I mentioned above.
Note that despite the confusing naming and the fact that VeriSign was once a CA, they no longer are and have not been since 2010, as described in the second paragraph of their Wikipedia page. https://en.wikipedia.org/wiki/Verisign. In fact, in my experience VeriSign is very pro-DNSSEC.
Ah yes. Let's take something that's prone to causing service issues and strap more footguns to it.
It's not worth it, because the cost is extremely quantifiable and visible, whereas the benefits struggle to be coherent.
With DNSSEC, a host with control over a domain's DNS records could use that to issue verifiable public keys without having to contact a third party.
I ran into this while working on decentralized web technologies and building a parallel to WebPKI just wasn't feasible. Whereas we could totally feed clients DNSSEC validated certs, but it wasn't supported.
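As a sketch of what "issuing verifiable public keys from your own DNS" looks like in practice, here is one way to derive the data for a "3 1 1" DANE TLSA record (SHA-256 over the certificate's SubjectPublicKeyInfo). The self-signed certificate generated here and the name `host.example` are stand-ins for your real certificate and hostname:

```shell
#!/bin/sh
# Compute the "3 1 1" TLSA record data (usage DANE-EE, selector SPKI,
# matching-type SHA-256) for a certificate. Assumes openssl is
# installed; the throwaway self-signed cert stands in for a real one.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=host.example" \
  -keyout key.pem -out cert.pem 2>/dev/null

# Extract the public key, re-encode it as DER, and hash it.
digest=$(openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $NF}')

# Publish this in the (DNSSEC-signed) zone:
echo "_443._tcp.host.example. IN TLSA 3 1 1 $digest"
```

With that record signed under the zone's keys, a DANE-aware client can pin the server's key without any third-party CA in the loop.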
1. Things that use TLS and hence the WebPKI
2. Other things.
None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.
That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
There may be other applications where a global public PKI makes sense; presumably those applications will be characterized by the need to make frequent introductions between unrelated parties, which is distinctly not an attribute of the SSH problem.
DNSSEC lets you delegate a subtree in the namespace to a given public key. You can hardcode your DNSSEC signing key for clients too.
Don't get me started on how badly VPN PKI is handled....
The WebPKI and DNSSEC run global PKIs because they routinely introduce untrusting strangers to each other. That's precisely not the SSH problem. Anything you do to bring up a new physical (or virtual) machine involves installing trust anchors on it; if you're in that position already, it actually harms security to have it trust a global public PKI.
The arguments for things like SSHFP and SSH-via-DNSSEC are really telling. It's like arguing that code signing certificates should be in the DNS PKI.
Providing global PKI and enabling end-to-end authentication by default for all clients and protocols certainly would make the internet a safer place.
Do you hardcode Github and AWS keys in your SSH config? Do you think it would be beneficial to global security if that happened automatically?
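For concreteness, this is roughly what publishing SSH host keys in DNS looks like, via SSHFP records that DNSSEC-validating clients can trust automatically (OpenSSH's `VerifyHostKeyDNS` option). The throwaway key and the name `host.example` are placeholders:

```shell
#!/bin/sh
# Generate a stand-in host key, then print the SSHFP resource records
# a host operator would publish in a DNSSEC-signed zone. Assumes
# OpenSSH's ssh-keygen is installed.
ssh-keygen -q -t ed25519 -N '' -f ./demo_hostkey

# -r prints SSHFP records (SHA-1 and SHA-256 fingerprints) for the key.
ssh-keygen -r host.example -f ./demo_hostkey
```

A client with `VerifyHostKeyDNS yes` in its ssh_config can then check the host key against these records instead of prompting on first connect.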
Further, I haven't "moved on to another argument". Can you answer the question I just asked? If I have an existing internal PKI for my fleet, what security value is a trust relationship with DNSSEC adding? Please try to be specific, because I'm having trouble coming up with any value at all.
It would benefit the likes of Wikileaks. You could do all the crypto in your basement with an HSM without involving anyone else.
> That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
But do they? That requires adding support for another protocol.
I would like to live in a world where I don't have to copy/paste SSH keys from an AWS console just to have the peace of mind that my SSH connection hasn't been hijacked.
DNS mistakes take your entire domain off the Internet, as if it had never existed.
I'm preparing a proposal to add an advisory mode for DNSSEC. This will solve a lot of operational issues with its deployment. Enabling it will not have to be a leap of faith anymore.
This isn't so much a scary story I'm telling as an empirically observable fact; it's happened many times, to very important domains, over the last several years.
In particular, the long TTL of DNS records itself is a historic artifact and should be phased out. There's absolutely no reason to keep it above ~15 minutes for the leaf zones. The overhead of doing additional DNS lookups is completely negligible.
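The "~15 minutes for leaf zones" advice is easy to turn into a check. A sketch, with the 900-second threshold and the awk-based parsing as my own assumptions:

```shell
#!/bin/sh
# Flag any DNS answer whose TTL exceeds a limit (900s = 15 minutes).
# The awk filter is pure text processing, so it can be fed canned
# output as well as live `dig +noall +answer` output.
max_ttl() {
  # dig +noall +answer lines look like:
  # example.com.  3600  IN  A  93.184.216.34
  awk -v limit="$1" '$2 + 0 > limit + 0 { print $1 " TTL " $2 " exceeds " limit; bad = 1 }
                     END { exit bad }'
}

# Usage: dig +noall +answer example.com A | max_ttl 900
```

The exit status makes it usable in a cron job or CI step alongside the DNSSEC checks discussed elsewhere in the thread.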
> This isn't so much a scary story I'm telling as an empirically observable fact; it's happened many times, to very important domains, over the last several years.
So has the TLS cert expiration. And while you can (usually) click through it in browsers, it's not the case for mobile apps or for IoT/embedded devices. Or even for JS-rich webapps that use XMLHttpRequest/fetch.
And we keep making the Internet more fragile with the ".well-known" subtrees that are served over TLS. It's also now trivial for me to get a certificate for most domains if I can MITM their network.
Edit: BTW, what exactly is _expiring_ in DNSSEC? I've been using the same private key on my HSM for DNSSEC signing for the last decade. You can also set up signing once and then never touch it.
There's ample evidence that the cost/benefit math simply doesn't work out for DNSSEC.
You can design new DNSSECs with different cost profiles. I think a problem you'll run into is that the cost of the problem it solves is very low, so you won't have much headroom to maneuver in. But I'm not reflexively against ground-up retakes on DNSSEC.
Where you'll see viscerally negative takes from me is on attempts to take the current gravely flawed design --- offline signers+authenticated denial --- as a basis for those new solutions. The DNSSEC we're working with now has failed in the marketplace. In fact, it's failed more comprehensively than any IETF technology ever attempted: DNSSEC dates back into the early-mid 1990s. It's long past time to cut bait.
Now here is where I disagree. Just off the top of my head, how about HIP, IP multicast and PEM?
Multicast gets used (I think unwisely) in campus/datacenter scenarios. Interdomain multicast was a total failure, but it's also more recent than DNSSEC.
HIP is mid-aughts, isn't it?
S-HTTP was a bigger failure in absolute terms (I should know!) but it was eventually published as Experimental and the IETF never really pushed it, so I don't think you could argue it was a bigger failure overall.
(I hate to IETFsplain anything to you so think of this as me baiting you into correcting me.)
To really nerd out about it, it seems to me there are two metrics.
1. How much it failed (i.e., how low adoption was).

2. How much effort the IETF and others put into selling it.
From that perspective, I think DNSSEC is the clear winner. There are other IETF protocols that have less usage, but none that have had anywhere near the amount of thrust applied as DNSSEC.
Why? What is the real difference between DNSSEC and HTTPS?
I'd argue that the only difference is that browser vendors care about protecting against MITM on the client side. They're fine with MITM on the server side or with (potentially state-sponsored) BGP prefix hijacks. And I'm not fine with that personally.
> Where you'll see viscerally negative takes from me is on attempts to take the current gravely flawed design --- offline signers+authenticated denial --- as a basis for those new solutions.
Yes, I agree with that. In particular, NSEC3 was a huge mistake, along with the complexity it added.
I think that we should have stuck with NSEC for the cases where enumeration is OK or with a "black lies"-like approach and online signing. It's also ironic because now many companies proactively publish all their internal names in the CT logs, so attackers don't even need to interact with the target's DNS to find out all its internal names.
> In fact, it's failed more comprehensively than any IETF technology ever attempted: DNSSEC dates back into the early-mid 1990s. It's long past time to cut bait.
I would say that IPv6 failed even more. It's also unfair to say that DNSSEC dates back to the '90s; the root zone was only signed in 2010.
The good news is that DNSSEC can be improved a lot by just deprecating bad practices. And this will improve DNS robustness in general, regardless of DNSSEC use.
Speaking as someone who was formerly responsible for deciding what a browser vendor cared about in this area, I don't think this is quite accurate. What browser vendors care about is that the traffic is securely conveyed to and from the server that the origin wanted it to be conveyed to. So yes, they definitely do care about active attack between the client and the server, but that's not the only thing.
To take the two examples you cite, they do care about BGP prefix hijacks. It's not generally the browser's job to do something about it directly, but in general misissuance of all stripes is one of the motivations for Certificate Transparency, and of course the BRs now require multi-perspective validation.
I'm not sure precisely what you mean by "MITM on the server side". Perhaps you're referring to CDNs which TLS terminate and then connect to the origin? If so, you're right that browser vendors aren't trying to stop this, because it's not the business of the browser how the origin organizes its infrastructure. I would note that DNSSEC does nothing to stop this either because the whole concept is the origin wants it.
For the vast majority of Let's Encrypt certs, you only need to transiently MITM the plain HTTP traffic between the server and the rest of the net to obtain the certificate for its domain. There will be nothing wrong in the CT logs, just another routine certificate issuance.
It is possible to limit this with, yes, DNS. But then we're back to square one with DNS-based security. Without DNSSEC the attacker can just MITM the DNS traffic along with HTTP.
Google, other browser makers, and large services like Facebook don't really care about this scenario. They police their networks proactively, and it's hard to hijack them invisibly. They also have enough ops to properly push the CAA records that will likely be visible to at least one point-of-view for Let's Encrypt.
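For reference, the CAA setup in question is just a couple of records (values here are hypothetical); with multi-perspective validation, an attacker has to suppress them from several vantage points at once:

```
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```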
1. DNSSEC only protects the name lookup for a host, and TLS/HTTPS protects the entire session.
2. People actually rely on TLS/HTTPS, and nobody relies on DNSSEC, to the point where the root keys for DNSSEC could be posted on Pastebin tonight and almost nobody would have to be paged. If the private key for a CA in any mainstream browser root program got published that way, it would be all hands on deck across the whole industry.
It only provides privacy; it doesn't verify that the resolver didn't tamper with the record.
>to the point where the root keys for DNSSEC could be posted on Pastebin tonight and almost nobody would have to be paged.
This would very much be a major issue and lots of people would immediately scramble to address it. The root servers are very highly audited and there is an absurd amount of protocol and oversight of the process.
When you make arguments like this, or the weird SSH argument you're making across the thread, or the weird "this would be good for Wikileaks" thing you did elsewhere, you clarify how tenuous your argument is. Remember, you're in the position of arguing that 95%+ of large site operators are wrong about this, and have been for decades, and you're the one who's right. That can definitely happen! But it's an extraordinary claim and your evidence thus far is pretty terrible.
> 2. People actually rely on TLS/HTTPS, and nobody relies on DNSSEC
Sure. But I treat it as a failing of the overall ecosystem rather than just the technical failure of DNSSEC. It's not the _best_ technology, but it's also no worse than many others.
This is the outcome of browser vendors not caring at all about privacy and security. Step back and look at the current TLS infrastructure from the viewpoint of somebody in the '90s:
You're saying that to provide service for anything over the Web, you have to publish all your DNS names in a globally distributed immutable log that will be preserved for all eternity? And that you can't even have a purely static website anymore because you need to update the TLS cert every 7 days? This is just some crazy talk!
(yes, you technically can get a wildcard cert, but it requires ...drumroll... messing with the DNS)
The amount of just plain brokenness and centralization in TLS is mind-boggling, but we somehow just deal with it without even noticing it anymore. Because browser vendors were able to apply sufficient thrust to that pig.
100%. The reasons why are explained in some detail here: https://educatedguesswork.org/posts/dns-security-dane/. The TL;DR is that by the time DANE was created the WebPKI already existed and was universal and so adding DANE didn't buy you anything because you still were going to have to have a WebPKI certificate more or less in perpetuity.
> This is the outcome of browser vendors not caring at all about privacy and security.
This is false. The browser vendors care a great deal about privacy and security. Source: it was my job at Mozilla to care about this, amongst other things. It may be the case that they have different priorities than you.
> You're saying that to provide service for anything over the Web, you have to publish all your DNS names in a globally distributed immutable log that will be preserved for all eternity?
Well, back when people were taking DNSSEC and DANE more seriously, there was a lot of talk of doing DNSSEC Transparency.
> And that you can't even have a purely static website anymore because you need to update the TLS cert every 7 days? This is just some crazy talk!
This is hyperbole, because nobody is forcing you to update the TLS cert every 7 days. It's true that the lifetimes are going to go down to 45 days eventually and LE offers 6 day certificates, but those are both optional and non-default.
Moreover, the same basic situation applies to DNSSEC, because your zone also needs to be signed frequently, for the same underlying reason: disabling compromised or misissued credentials.
Yet somehow they managed to wrangle hundreds of CAs to use the CT logs and to change the mandated set of algorithms.
> Well, back when people were taking DNSSEC and DANE more seriously, there was a lot of talk of doing DNSSEC Transparency.
And this would have been great. But it only needs to make transparent the changes in delegation (actually, only DS records) from the TLD to my zone. Not anything _within_ my zone.
And tellingly, the efforts to enable delegation in the WebPKI are going nowhere, even though X.509 has supported it from the beginning (via name constraints, a critical extension).
> This is hyperbole, because nobody is forcing you to update the TLS cert every 7 days.
The eventual plan is to have shorter certs. 47 days will be mandated by 2029.
It also doesn't really change my point: I can't have a purely static server anymore and expect it to be accessible.
> Moreover, the same basic situation applies to DNSSEC, because your zone also needs to be signed frequently, for the same underlying reason: disabling compromised or mississued credentials.
That's incorrect. I've been using the same key (inside my HSM) since 2016. And I don't have to update the zone if it's unchanged. DNSSEC is actually _more_ secure than TLS, because zone signing can be done fully offline. With TLS, the key material is often a buggy memcpy() away from the corrosive anonymous Internet environment.
So you can rotate the DNSSEC keys, but it's neither mandated nor necessary. The need for short-lived certs for TLS is because there's no way to check their validity online during the request (OCSP is dead and CRLs are too bulky). But with DNSSEC if at any point my signing key is compromised, I can just change the DS records in the registrar to point to my updated key.
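To make the offline-signing point concrete, here's a toy Python sketch. HMAC stands in for a real DNSSEC signature algorithm and all the names are made up; the only point is that the signing key never has to live on the machines answering queries:

```python
import hashlib
import hmac

# Hypothetical zone-signing key; in practice this lives in an HSM on an
# offline machine and never touches the serving infrastructure.
ZSK = b"zone-signing-key-kept-offline"

def sign_rrset(name: str, rtype: str, rdata: str) -> str:
    # HMAC-SHA256 stands in for a real DNSSEC signature algorithm here.
    msg = f"{name} {rtype} {rdata}".encode()
    return hmac.new(ZSK, msg, hashlib.sha256).hexdigest()

# The offline signer emits records plus signatures; only these are
# uploaded to the (comparatively untrusted) authoritative servers.
records = [("www.example.com.", "A", "203.0.113.10")]
signed_zone = [(n, t, d, sign_rrset(n, t, d)) for n, t, d in records]
```

The serving side holds nothing secret: a compromised nameserver can drop answers, but it can't forge new ones.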
I'm not sure I see the connection here. What I'm saying is that the benefit for sites to adopt DANE is very low because as long as there are a lot of non-DANE-using clients out there they still need to have a WebPKI cert. This has nothing to do with CT and not much to do with the SHA-1 transition.
Re: your broader point about static sites, I don't think you're correct about the security requirements. Suppose for the sake of argument that your signing key is compromised: sure you can change the DS records but the attacker already has a valid DNSSEC record and that's sufficient to impersonate you for the lifetime of the record (recall that the Internet Threat Model is that the attacker controls the network so they can just send whatever DNS responses they want). What prevents this is that the records expire, so the duration of compromise is the duration of those records, just like with the WebPKI without revocation [0]. The same thing is true for the TLSA records signed by your ZSK.
In the DNSSEC/DANE paradigm, then, there are two signatures that have to happen regularly:
- The signature of the parent over the DS records, attesting to your ZSK.

- The signature of your ZSK over the TLSA records.
In the WebPKI paradigm, the server has to regularly contact the CA to get a new certificate. [1]
I agree with you that one advantage of DNSSEC is that that signing can all be done offline and then the data pushed up to the DNS servers, but it's still the case that something has to happen regularly. You've just pushed that off the TLS server and into the DNS infrastructure.
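The signature-lifetime arithmetic is easy to make concrete. This small sketch parses DNSSEC's YYYYMMDDHHMMSS timestamp format (the values are illustrative, not from a real zone):

```python
from datetime import datetime, timezone

def rrsig_window_days(inception: str, expiration: str) -> float:
    """Length of an RRSIG validity window in days.

    A stolen signature can be replayed until it expires, so this window
    bounds the duration of a compromise (absent a key rollover).
    """
    fmt = "%Y%m%d%H%M%S"  # RRSIG presentation-format timestamps
    start = datetime.strptime(inception, fmt).replace(tzinfo=timezone.utc)
    end = datetime.strptime(expiration, fmt).replace(tzinfo=timezone.utc)
    return (end - start).total_seconds() / 86400

# A typical two-week signing window:
print(rrsig_window_days("20240101000000", "20240115000000"))  # → 14.0
```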
More generally, I'm not sure what you mean by a "purely static server". TLS servers are inherently non-static because they need to do the TLS handshake and I think the available evidence is the ACME exchange isn't that big a deal.
[0] As an aside, all the major browsers now have some compressed online revocation system, but that's not necessarily a generalizable solution.
[1] When we first were designing LE and ACME, I advocated for the CA to proactively issue new certificates over the old key, but things didn't end up that way, and of course you'd still need to download it.
Everyone knows "WebPKI", i.e., self-appointed "cert authorities", generally relies on DNS
With an added DNSSEC step, perhaps this is now limited to ICANN DNS only
Self-appointed "cert authorities" checking with self-appointed domainname "authority". A closed system
I mean, now you've brought it up, I am concerned about it - but the level of concern is somewhere between "spontaneous combustion of myself leading to exploitation of my domain DNS because my bugger-i-ded.txt instructions are rubbish" and "cosmic rays hitting all the exact right bits at the exact right time to bugger my DNS deployment when I next do one which won't be for a while because even one a year is a fast pace for me to change something."
(Plus I'm perfectly capable of taking my sites and domains offline by incompetent flubbery as it is; I don't need -more- ways to fuck things up.)
There are also good reasons many serious admins don't trust signing authorities. If you know... you know why... =3
Yes, exactly.
On a busy site, the incurred additional load cost can bite hard.
A lot of people will leave it off for the same reasons as DoH or DoT. =3
[1]: https://sockpuppet.org/blog/2015/01/15/against-dnssec/

[2]: https://easydns.com/blog/2015/08/06/for-dnssec/
"Government Controlled PKI!"
- Governments own the domains, you just rent them. They can kick your site off and validate their HTTPS certs regardless of DNSSEC.
"Weak Crypto!"
- 1024-bit key sizes were fine given a threat model where cracking one took a year. They have since been increased.
"DNSSEC Doesn’t Protect Against MITM Attacks"
- DNSSEC protects against MITM attacks!
- It's just that most clients don't perform local validation due to low adoption.
- In reality, you are just making the circular argument to NOT adopt DNSSEC because adoption is low.
- There are LOTS more MITM opportunities with HTTPS. We spent a massive effort on Certificate Transparency, yet even Cloudflare missed a rogue cert being issued.
"There are Better Alternatives to DNSSEC"
- There is no alternative to signing domain name data, and you point to crypto systems that do something other than that.
- "There are better alternatives to HTTPS: E2E JS crypto with trust on first use"
- What about SSH? I guess we are doomed to run everything over HTTPS and pay dumb cert authorities for the privilege of doing so.
"Bloats record sizes"
- ECC sigs can be sent in a single packet.
- Caching makes first connect latency irrelevant.
On and on and on. These are trivially refutable, but you just shut the conversation down and point to instances of downtime... as if DNS doesn't cause a lot of downtime anyway.
> - ECC sigs can be sent in a single packet.
It's 2026. If you're deploying a cryptosystem and not considering post-quantum in your analysis, you'd best have a damn good reason.
ECC sigs might be small, but the world will be moving to ML-DSA-44 in the near future. That needs to be in your calculus.
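A quick back-of-the-envelope sketch of why that matters; the signature sizes are the published ones, and the 100-byte answer overhead is a rough guess:

```python
# Rough arithmetic: does one signed DNS answer fit in a single UDP packet?
# 1232 bytes is the commonly recommended EDNS buffer size (avoids
# fragmentation); the base answer size is an illustrative estimate.
EDNS_BUF = 1232
SIG_SIZES = {
    "Ed25519": 64,      # EdDSA signature
    "ECDSA-P256": 64,   # r and s, 32 bytes each
    "RSA-2048": 256,    # modulus-sized signature
    "ML-DSA-44": 2420,  # post-quantum lattice signature (FIPS 204)
}
BASE_ANSWER = 100  # rough header + question + one A RRset

for alg, sig in SIG_SIZES.items():
    total = BASE_ANSWER + sig
    print(f"{alg}: {total} bytes, single packet: {total <= EDNS_BUF}")
```

The classical signatures all fit comfortably; an ML-DSA-44 signature alone blows the single-packet budget.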
Also, I worked for a DNS company. People stopped caring about ultra-low-latency first-connect times back in the '90s.
You are clearly very proud of your work devaluing DNSSEC. But pointing to lack of adoption doesn't make your arguments valid.
They did? That's certainly going to be news to the people at Google, Mozilla, Cloudflare, etc. who put enormous amounts of effort into building 0-RTT into TLS 1.3 and QUIC.
It seems to me that you're saying here that (1) the hyperscalers do care but (2) it's under control. I'm not necessarily arguing with (2) but as far as the hyperscalers go: (1) they drive a lot of traffic on their own (2) in many cases they care so their users don't have to.
Hyperscalers go to crazy lengths because they can measure the monetary losses from milliseconds of lost view time, and it's much easier when they have distributed cloud infrastructure anyway. But it's not really solving a problem for their customers. At least when I worked in DNS land, latency micro-benchmarking was something of a joke. Like, sure, you can shave off a few tens of milliseconds, but it's super expensive. If you want to reduce latency, just up your TTL times and/or enable pre-fetching.
As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too. DoH also introduces latency, yet people aren't worried about that being a deal killer.
They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC. So I'm not sure this is as strong an argument as you seem to think it is.
> DoH also introduces latency, yet people aren't worried about that being a deal killer.
Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH. https://blog.mozilla.org/futurereleases/2019/04/02/dns-over-...
And here are some measurements from Hounsel et al. https://arxiv.org/abs/1907.08089
But if it's worth doing for HTTP, why not for DNS?
> Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH.
Oh fun!
I'm sorry I don't understand your question.
You're not going to find this answer satisfying, I suspect, but there are two main reasons browsers and big sites (that's what we're talking about) didn't bother to try to make DNSSEC faster:
1. They didn't think that DNSSEC did much in terms of security. I recognize you don't agree with this, but I'm just telling you what the thinking was.

2. Because there is substantial deployment of middleboxes which break DNSSEC, DNSSEC hard-fail by default is infeasible.
As a consequence, the easiest thing to do was just ignore DNSSEC.
You'll notice that they did think that encrypting DNS requests was important, as was protecting them from the local network, and so they put effort into DoH, which also had the benefit of being something you could do quickly and unilaterally.
I run a bunch of websites personally. I have ACME-issued TLS certificates from LetsEncrypt. I monitor the Certificate Transparency logs, and have CAA records set.
What's the threat model that should worry me, where DNSSEC is the right improvement?
It's just a very difficult statistic to get around! Whenever you make a claim like this, you're going to have to address the fact that basically ~every high-security organization on the Internet has chosen not to adopt the protocol, and there are basically zero stories about how this has bitten any of them.
From your link elsewhere, https://easydns.com/blog/2015/08/06/for-dnssec/
>We might see a day when HTTPS key pinning and the preload list is implemented across all major browsers, but we will never see these protections applied in a uniform fashion across all major runtime environments (Node.js, Java, .NET, etc.)[...]
Is this not the same flaw?
You really aren't going to respond to any of those points? You stand by your complaint about DNSSEC being "government controlled PKI" when TLDs are a government-controlled naming system? And your alternative is to advocate for privately owned PKI run by companies with no accountability that are also much more vulnerable to attack?
Campaigning against cryptographically signing DNS records is a weird life choice man.
If I've said something in this thread that you disagree with, say so and say why (you'll need something better than "I wrote about this 11 years ago and you weren't nice enough to me about it"). Right now, all you're doing is yelling about a post I wrote 11 years ago and haven't cited once on this thread.
Of course, as you know, I stand by that post. But it's not germane to the thread.
I'm upset that your incorrect arguments have gotten so much traction that the internet is a less safe place for it.
> wrote a post disagreeing with my post, and I didn't go back and revise my post to capture all the arguments you had that I disagreed with. Sorry, but not sorry.
You in a sibling thread:
> I feel pretty confident that the search bar refutes this claim you're making. What you're trying to argue is that I've avoided opportunities to argue about DNSSEC on HN. Seems... unlikely.
It seemed like you wanted to have this discussion but I guess not.
> yelling about a post I wrote 11 years ago and haven't cited once on this thread. ... Of course, as you know, I stand by that post. But it's not germane to the thread.
Do you know what comment thread you are in? I complained about FUD and cited your blogpost. This is what this thread is about.
(HTTPS really needs a way to make a URL where the URL itself encodes the keying information.)
Presumably the problem is that it just takes for-fucking-ever to make anything happen inside the CA/B Forum. Case in point: we were all today years old before the CA/B Forum required CAs to actually use DNSSEC if it's set up.