There are a number of other DNS servers, not written in C, which support transport-secured DNS like DoH (DNS-over-HTTPS), DoT, and DoQ; but do they correctly handle this malformed input?
From the mailing list disclosure, which FWIU doesn't yet have a CVE: https://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/20... :
> Dnsmasq forwards queries with special characters (e.g., ~, !, *, _) to upstream recursive resolvers.
> Some upstream recursive resolvers silently discard such malformed queries (no NXDomain/ServFail response).
> Dnsmasq does not validate or detect this situation, and waits silently, creating a large attack window.
> During this window, attackers can brute-force TxID (16-bit) and source port (16-bit) with a high probability of success (birthday paradox effect).
> Security Impact
> Attackers can poison any cached domain name in Dnsmasq.
> [...] We recommend adding:
> Detection mechanisms when upstream resolvers remain silent.
> Rate limiting and spoof-detection techniques, similar to those in PowerDNS
> PowerDNS Mitigation: https://docs.powerdns.com/recursor/settings.html#spoof-nearm... spoof-nearmiss-max
Which resolvers silently discard (or do anything else weird with) requests whose QNAMEs contain non-hostname characters (which aren't "malformed")?
The "special character" thing sounds like a red herring: IIUC, dnsmasq isn't dealing with lost responses correctly, creating a window for a birthday-collision attack?
However, Dnsmasq continues to wait for a response. The attacker only needs to brute force 32 bits (source port and TxID) to falsify a response and poison the cache.
The correct and expected behaviour of Dnsmasq would have been not to forward invalid requests to the resolver.
They aren't "invalid requests". You can put literally anything in a domain name (see RFC 2181, section 11) and the upstream should respond. I'm curious what resolvers are dropping these requests.
The correct behavior is for dnsmasq to forward requests to the upstream regardless of the content of the QNAME. If dnsmasq doesn't get a response back in some reasonable amount of time, it should (probably) return SERVFAIL to its client.
Further, DNS mostly uses UDP which is unreliable -- all DNS clients must deal with the query or response being lost. Dnsmasq's timeouts might be overly long (I didn't bother to check), but this is a minor configuration issue.
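To make that behaviour concrete, here's a minimal sketch (using the third-party dnspython library, not dnsmasq's actual code; the 8.8.8.8 upstream and 2-second timeout are arbitrary assumptions):

    # Sketch only: forward the query verbatim, and answer SERVFAIL if the
    # upstream stays silent, rather than waiting indefinitely.
    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode

    def forward(query, upstream="8.8.8.8", timeout=2.0):
        try:
            # Forward regardless of what the QNAME contains.
            return dns.query.udp(query, upstream, timeout=timeout)
        except dns.exception.Timeout:
            # No reply within a reasonable time: tell the client SERVFAIL.
            reply = dns.message.make_response(query)
            reply.set_rcode(dns.rcode.SERVFAIL)
            return reply

    print(forward(dns.message.make_query("~.www.example.com", "A")))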
This sounds like the (well known) birthday attack, the defense of which is precisely the point of DNSSEC. AFAIK, dnsmasq supports DNSSEC, so the right answer is to turn on validation.
--fast-dns-retry=[<initial retry delay in ms>[,<time to continue retries in ms>]]
Under normal circumstances, dnsmasq relies on DNS clients to do retries; it does not generate timeouts itself. Setting this option instructs dnsmasq to generate its own retries starting after a delay which defaults to 1000ms. If the second parameter is given this controls how long the retries will continue for otherwise this defaults to 10000ms. Retries are repeated with exponential backoff. Using this option increases memory usage and network bandwidth. If not otherwise configured, this option is activated with the default parameters when --dnssec is set.
--dnssec
Validate DNS replies and cache DNSSEC data. When forwarding DNS queries, dnsmasq requests the DNSSEC records needed to validate the replies. The replies are validated and the result returned as the Authenticated Data bit in the DNS packet. In addition the DNSSEC records are stored in the cache, making validation by clients more efficient. Note that validation by clients is the most secure DNSSEC mode, but for clients unable to do validation, use of the AD bit set by dnsmasq is useful, provided that the network between the dnsmasq server and the client is trusted. Dnsmasq must be compiled with HAVE_DNSSEC enabled, and DNSSEC trust anchors provided, see --trust-anchor. Because the DNSSEC validation process uses the cache, it is not permitted to reduce the cache size below the default when DNSSEC is enabled. The nameservers upstream of dnsmasq must be DNSSEC-capable, ie capable of returning DNSSEC records with data. If they are not, then dnsmasq will not be able to determine the trusted status of answers and this means that DNS service will be entirely broken.
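In dnsmasq.conf terms, enabling this looks roughly like the following sketch (option names taken from the man page excerpts above; the trust-anchors file path is distribution-specific, and fast-dns-retry only exists in recent dnsmasq releases):

    # Validate replies using the root trust anchor shipped with dnsmasq
    # (path varies by distribution; alternatively use trust-anchor=...)
    conf-file=/usr/share/dnsmasq/trust-anchors.conf
    dnssec
    # Optional: let dnsmasq generate its own retries (1000ms initial delay,
    # keep retrying for 10000ms); activated by default when dnssec is set.
    fast-dns-retry=1000,10000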
Do you deny DNSSEC's goal is to protect DNS data? Do you deny "Query ID prediction attacks" (or more generally, flooding attacks) aim to corrupt DNS data? Do you deny the 16-bit transaction ID allows for effective flooding attacks?
As for "almost nothing in the DNS is signed", while it's true the percentage of second-level domains aren't signed, the DNS root is signed, all generic top-level domains, and the vast majority of country code TLDs are signed. In some countries (e.g., The Netherlands) more than 50% of the zones in their ccTLD are signed. As we've seen empirically, with improved automation/tools and authoritative servers that turn on DNSSEC-signing by default, the percentage will go up.
This notion of DNSSEC signatures being widespread comes up in every thread about the protocol. Here's a little thingy I threw together because I got tired of typing out the bash "dig" loop to regenerate it in threads:
Note that the Tranco list is international, so captures popular zones in places that have automatic (and security-theatric) DNSSEC signatures, as well as amplifying the impact of vendors like Cloudflare who have several different zones in the top 1000. Even with all that included: single digits.
It's been over 30 years of tooling work on DNSSEC --- in recent time intervals, DNSSEC adoption in North America has gone down. Stick a fork in it.
In any event, that's a nice site that provides useful stats.
Source port randomization, BCP38, and then the 0x20 QNAME capitalization trick all turned out to be far more practical mitigations for query-ID concerns, and others prioritized them. "We really need this massive internet-wide jobs-program lift of the entire Internet, without even providing confidentiality, to solve this query-ID issue. Never mind the easier fixes."
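(For anyone unfamiliar with the 0x20 trick: the idea, sketched roughly below, is that the resolver randomizes the case of the outgoing QNAME and rejects any reply that doesn't echo the exact same mixed case, adding about one bit of entropy per letter on top of the TxID and source port.)

    import random

    def encode_0x20(qname):
        # Randomly flip the case of each letter in the outgoing QNAME.
        return "".join(c.upper() if random.random() < 0.5 else c.lower()
                       for c in qname)

    sent = encode_0x20("deb.debian.org")   # e.g. "dEb.DEbiAn.oRG"

    def reply_matches(reply_qname):
        # A blindly forged reply is unlikely to guess the exact casing.
        return reply_qname == sent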
I'll just leave this here: https://blog.netherlabs.nl/articles/2008/07/09/some-thoughts...
This whole episode reminds me of the story of the Citigroup Center in New York. Years after its completion, an architecture student uncovered that key supports for the building had been done incorrectly and unsafely. It was at risk of collapse in high winds.
The structural engineer worked with the building owner and city to repair the building in secret, before everything was eventually made public. It makes for a story of a folk hero, and it's a great narrative of recovery. Meanwhile the stories of the structural engineers and construction supervisors who weren't woefully negligent and who just quietly built safe buildings go uncelebrated.
* https://dns.cr.yp.narkive.com/fAkXdiM0/update-on-the-djb-bug...
It should tell you something that even I, who am and was on a different continent to all of these people, knew about this stuff well before it became an ISC press release. I'd like to say that it was Paul Jarc who went into the consequences of what one could do with response forgeries, on Bernstein's dns mailing list, but I might be remembering the wrong person. Certainly, list regulars had read Bernstein's discussion of DNS security and realized the implications.
The logical consequences of being able to forge whatever response one likes were readily apparent. Bert Hubert noted publicly at the time of the Kaminsky announcements that xe had been not only aware of this for years,
* https://mailman.powerdns.com/pipermail/pdns-users/2008-July/...
but had even been trying to get an IETF draft approved about port+ID randomization, and bailiwick checking, acknowledging the factors involved and promoting the adoption of the well-known mitigations as mandatory.
Amusingly for the instant case of researchers rediscovering the well-known, you can read M. Hubert's first draft from a year and a half before the ISC press release, and it lays out there exactly what I laid out here elsewhere in this very discussion, about a query to Google Public DNS taking a second from cold to answer for ~.www.example.com and that being more than enough time to send a tonne of forged responses at 2006 network speeds.
* https://datatracker.ietf.org/doc/html/draft-ietf-dnsext-forg...
Daniel Bernstein was right in the late 1990s about randomizing source ports, and randomization did effectively foreclose on Kaminsky's vulnerability. But I'm unaware of a cite in which he outlines Kaminsky's attack in any detail. His djbdns countermeasure was a sensible response to BIND's QID prediction problem, which Paul Vixie was reluctant to fix because the QID only gave him 16 bits of randomness to work with.
I'm not saying you're certainly wrong that other people had discovered the random-name / authority spoofing attack Kaminsky came up with, only that I'm intimately familiar with this whole line of security research and I'm unaware of a source laying it out --- I am thus skeptical of the claim.
Kaminsky figured out how to build a much more practical way to exploit what was known already. This was very significant, and it's one of the ultimate examples of PoC||GTFO finally triggering action. He deserves a lot of credit.
If you get to pick the specific time interval, you can prove anything.
Why don’t you link the full graph? <https://www.verisign.com/en_US/company-information/verisign-...>
(The graph shows that DNSSEC usage has instead been increasing since the end of last year, and at that time, its lowest point, it was only ever as low as it had been back in 2023.)
There is a 15% drop like you describe, but as the other commenter said, it doesn't show usage falling for the past year (as you had implied).
I have no dog in this race, I don't care about DNSSEC. If you can't access the page, that's your business. But it bothers me that you would assert this data agrees with your point without even looking at it. That's pretty uncharitable.
Note how he cleverly did not say that; he said “in recent time intervals”. And you can certainly count the time from 2023-2024 as being “recent”. He technically was not wrong, and technically did not lie.
All I'm saying is that I find it remarkable that DNSSEC adoption in North America sharply dropped over the course of 2023 --- that, and the fact that the graph tops out at 7MM zones, a big-looking number that is in fact very small.
I think it's funny that the graph serves my argument better than 'teddyh's. But really, I think it's ultimately meaningless. That's because the figure of merit for DNSSEC adoption isn't arbitrary signed zones but rather popular signed zones. And that in turn is because the distribution of DNS queries is overwhelmingly biased to popular zones --- if you can sample a random DNS query occurring somewhere in the US right now, it's much more likely to be for "googlevideo.com" than for "aelcargo.site" (a name I just pulled off the certificate transparency firehose).
The Verisign graph 'teddyh keeps posting is almost entirely "aelcargo.site"-like names†. The link I posted upthread substantiates that.
† And that in turn is because DNS providers push users into enabling managed DNSSEC features, because disabling DNSSEC is terrifying and so DNSSEC is an extremely effective lock-in vector --- that's not me making it up, it's what the security team at one of the few large tech companies that actually have it enabled told me when I asked why the hell they had it enabled.
Then why did you use the graph — or at least the information it displays — as the finishing slam dunk point of your post?
> The Verisign graph 'teddyh keeps posting
I “keep posting” it because it’s a good solid counterargument, and it’s also very funny; I originally got the link from you, but as time goes by, the graph keeps proving you wrong.
> why the hell they had it enabled.
Yes, why does a security team have a security feature enabled? It is truly a mystery.
But wait, your main argument, in this post, is that nobody “popular” uses DNSSEC, but do you mean that you actually personally pressure all the popular ones who do use it, to stop? Does not that severely skew your data into irrelevance?
Just disabled DNSSEC in my PiHole. Too many domains are incorrectly configured, leading to non-existent domain errors.
And at least as far as I could find, PiHole has no way to selectively disable DNSSEC validation for certain domains.
That's an interesting and somewhat surprising data point given the use of DNSSEC validation at public resolvers (e.g., 1.1.1.1, 8.8.8.8, etc.). Might be something that would be useful to track by those following DNSSEC deployment.
For selectively disabling DNSSEC validation, I gather PiHole+dnsmasq doesn't support Negative Trust Anchors (NTAs, RFC 7646). Unfortunate.
And to be clear: while there are 4.3 billion combinations, the birthday paradox means you only need to spam on the order of 2^16 UDP packets to succeed.
Seems like you need to send ~ 2^16 requests and then ~ 2^16 spoofed replies.
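(Back-of-the-envelope for that: with q outstanding queries, each carrying an independent random 16-bit TxID and 16-bit source port, and r blind forged replies, the chance of at least one match is roughly 1 - exp(-q*r/2^32). The numbers below are just that approximation, not a measurement.)

    import math

    def p_match(q_outstanding, r_forged):
        # Approximate birthday-style collision probability across the
        # 2^32 (TxID, source port) space.
        return 1 - math.exp(-q_outstanding * r_forged / 2**32)

    print(p_match(1, 2**16))       # one outstanding query: ~0.0015%
    print(p_match(2**16, 2**16))   # ~2^16 of each: ~63%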
That said, I should point out that there is nowadays a loophole for special-casing labels that begin with underscore, called out by the SVCB document. The loophole does not allow for dropping the client requests, though.
On the gripping hand, all that this report boils down to is a rediscovery that if the queried server does not answer immediately, there's a window for an attacker with access to the path between client and server (or at least the ability to blindly route packets into that path with forged source addresses) to inject forged responses; that the message ID and random port number system is woefully inadequate for making brute forcing hard at even late 1990s network speeds; and that most of the mitigation techniques for forgery (including the PowerDNS one called out in this report) are useless if the attacker can see the query packet go by in the first place. The right answer is proper cryptography, not nonsense about punctuation characters that are supposedly "malformed".
Something we have known since 2002.
* https://cr.yp.to/djbdns/forgery.html
The DNS protocol is a terrible protocol. This report is not some novel discovery.
If dnsmasq was only caching the ANSWER section, then the only thing which could be poisoned would be the qname. If cache poisoning for arbitrary domain names is being observed, then it would seem that information from the ADDITIONAL or AUTHORITY is being cached as well.
* https://github.com/Avunit/Dnsmasq-Cache-Poisoning/blob/main/...
They're also relying upon the random source port being allocated from a subset of the available port range, 32768 to 61000 in their default setting.
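(That range matches the common Linux ephemeral-port default, which shrinks the search space considerably; rough numbers below.)

    ports = 61000 - 32768     # ~28k candidate source ports in the default range
    txids = 2**16             # 65,536 possible transaction IDs
    print(ports * txids)      # ~1.85 billion, versus 2^32 ~= 4.29 billion
                              # if ports were drawn from the full 16-bit space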
The claim in the code is that it is Google Public DNS that is failing to respond to queries where the domain name has had an extra label prepended, and that label is 1 character long and the character is a tilde.
Google Public DNS has no such non-response problems with ~.www.example.com in my part of the world.
However, note that they are injecting the forged responses from the very same machine that sent the initial query to dnsmasq, with no delay whatsoever. Whereas it takes Google Public DNS a second or so to look up ~.www.example.com here. So really there's no methodologically sound evidence that Google Public DNS even has the fault with these punctuation characters as claimed.
edit: the only GitHub account of the reporter is github.com/idealeer; this avunit is something random
(It's on my list to try loading the Python 2 version of dnspython and see if that works. Yeah, Ignition's internal scripting layer is running Jython, at version 2.)
Edit: that's not to say that some middlebox isn't dropping them in the name of "securitah".
"Those [length] restrictions aside, any binary string whatever can be used as the label of any resource record.
Dnsmasq should (MUST in RFC 2119 language) forward requests -- it would be a bug not to. The upstream resolver shouldn't (MUST NOT in RFC 2119 language) silently drop them -- it would be a bug if they did.
Brute forcing transaction/port ID collisions to poison the cache is a long known flaw in the DNS protocol (it has been known since at least 1993) that led to the creation of DNSSEC.
It wasn't until Kaminsky combined transaction IDs with additional data poisoning in 2008 --- an entirely novel attack --- that BIND began randomizing and gave up on holding out for DNSSEC. You'll notice that since 2008 DNS cache poisoning has basically vanished as a real operational security concern. That's not because of DNSSEC.
I think the Kremlinology here is super interesting and I'm happy to keep digging with you, but again: DNS spoofing was a live issue for a couple months in 1995, when it was resolved by QID randomization in BIND (and then QID+port randomization in djbdns), and then a live issue again for about a month and a half in 2008, when it was finally resolved by QID+port randomization in BIND. DNSSEC had nothing at all to do with it.
My hostnames with emojis in them might disagree.
I always disable/remove dnsmasq when I can. Compared to the alternatives, I have never liked it. This is at least the second major dnsmasq coding mistake that has been published in recent memory.^1 Pi-Hole was based on dnsmasq, which turned me off that as well.
1.
https://www.jsof-tech.com/wp-content/uploads/2021/01/DNSpooq...
https://www.cisa.gov/news-events/ics-advisories/icsa-21-019-...
https://www.malwarebytes.com/blog/news/2021/01/dnspooq-the-b...
https://web.archive.org/web/20210119133618if_/https://www.js...
https://seclists.org/oss-sec/2021/q1/49
Anyway, never gonna happen. Just wishful thinking.
If you're using dnsmasq behind NAT or a stateful firewall, how would an attacker be able to access the service in the first place?
https://forum.openwrt.org/t/security-advisory-2021-01-19-1-d...
What was the first?
https://openwrt.org/advisory/2021-01-19-1
> Dnsmasq has two sets of vulnerabilities, one set of memory corruption issues handling DNSSEC and a second set of issues validating DNS responses. These vulnerabilities could allow an attacker to corrupt memory on the target device and perform cache poisoning attacks against the target environment.
> These vulnerabilities are also tracked as ICS-VU-668462 and referred to as DNSpooq.
https://web.archive.org/web/20250121143405/https://www.jsof-...
> DNSpooq - Kaminsky attack is back!
> 7 new vulnerabilities are being disclosed in common DNS software dnsmasq, reminiscent of 2008 weaknesses in Internet DNS Architecture
Some less breathless sourcing, though I can’t blame OP for being excited in the above post:
https://www.kb.cert.org/vuls/id/434904
https://www.cisa.gov/news-events/ics-advisories/icsa-21-019-...
"Some users of our service (NextDNS), discovered this issue since edgekey.net has been added to some anti-tracker blocklists, resulting in the blocking of large sites like apple.com, airbnb.com, ebay.com when used with unbound."
As Pi-Hole is a modified dnsmasq, NextDNS may be a modified unbound
You can use unbound
I do not use a cache
For HTTP I use a localhost-bound TLS forward proxy that has the DNS data in memory; I gather the DNS data in bulk from various sources using various methods; there are no remote DNS queries when I make HTTP requests
Unbound is overkill for how I use DNS on the local network
Those are different use cases.
https://raw.githubusercontent.com/NLnetLabs/unbound/master/d...
https://raw.githubusercontent.com/NLnetLabs/unbound/master/d...
Unbound can also answer queries from data in a text file read into memory at startup, like an authoritative nameserver would; no recursion
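(Roughly, in unbound.conf terms, something like the sketch below; the zone, names, and addresses are made up for illustration:)

    server:
        # Answer these names authoritatively from memory; no upstream queries.
        local-zone: "lan." static
        local-data: "printer.lan. 3600 IN A 192.0.2.10"
        local-data: "nas.lan. 3600 IN A 192.0.2.11"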
So assume it's a bit of an inaccurate phrasing.
EDIT: nope, the email itself seeks disclosure coordination etc. So yeah, oops.
Contact.
There is a dnsmasq mailing list at http://lists.thekelleys.org.uk/mailman/listinfo/dnsmasq-discuss which should be the first location for queries, bugreports, suggestions etc. The list is mirrored, with a search facility, at https://www.mail-archive.com/dnsmasq-discuss@lists.thekelleys.org.uk/. You can contact me at simon@thekelleys.org.uk.
Interesting assertion -- do you have anything to back this up?
While DNSSEC can prevent a name server operator from effectively modifying zone data (at least without the signing key, and for resolvers that bother to validate), protecting against an authoritative name server operator maliciously modifying the zone data in their server was not a significant consideration in any of the IETF/implementation discussions I was in (back in the late 90s, I ran ISC during the development of BIND 9.0 and participated quite heavily in DNS-related working groups).
Transport security obviously only protects the channel of communication. It does nothing for ensuring the authenticity of the data. In order to protect the data so it doesn't matter where the data comes from (authoritative, resolver, off the back of a truck, etc.), you have to take the steps DNSSEC takes. This was recognized as far back as 1993.
Oh, and it doesn't appear dnsmasq supports DoH/DoT forwarding (not positive as I don't use it and haven't looked at the code). It does support DNSSEC.
I happen to think DNSSEC, writ large, is also engineering malpractice, but when I used that term just upthread, I was referring to your insistence that people deploy a top-down data authentication mechanism in order to resolve a trivial transaction-spoofing attack, given the availability of multiple existing tools that decisively address those kinds of problems without any of the expense and coordination required for DNSSEC.
Last time you were arguing DNSSEC wouldn't solve BGP hijacks because whoever was hijacking the DNS server would just hijack the web server instead.
https://web.archive.org/web/20250729043725/http://sockpuppet...
They clearly seem to have strong feelings about the issue. I don’t, and I don’t know much about it either, so I will not comment further.
https://sockpuppet.org/blog/2015/01/15/against-dnssec/
You'll notice, I didn't bring DNSSEC into this conversation. I agree: it didn't belong here; DNSSEC has nothing to do with this dnsmasq thing.
I did not mean it as a slight by doing so. I meant it as an acknowledgment of the sensitive nature of these issues, and part of my work as an amateur journalist and archivist. I say amateur because I do not do this work for personal benefit or gain, but because I like researching and learning myself, and I don’t believe in putting a light I also read by under a bushel.
Here’s another archive:
While searching for your post on archive.today or whatever their name is, I saw that it had also been archived without a trailing “/”, which redirects to the one with a trailing slash. So depending on how I parse what you said, how you originally posted it, and how the redirect is implemented, I could see how that statement might be ambiguous, but that kind of thing might be handled by your webserver software. I don’t think that it’s worth mentioning, but you did say it’s always been at that URL, which I don’t dispute. I’ve never known it to be at any other URL.
Is it largely being replaced by EDNS, or what?
Now that we have that giant out of the way, what’s the next one you’re tilting at?
I think there's a lot of reasons why DNSSEC is moribund. It was a necessary accompaniment to IPSEC back in the mid-1990s when everybody assumed we'd be all v6 all IPSEC by 2000. Then Kashpureff's bailiwick attack happened, and we got this:
https://mailman.nanog.org/pipermail/nanog/1997-July/122606.h...
... but the bailiwick caching behavior was a straight-up bug, and rand(3) was enough to make QID spoofing more annoying to exploit than it was worth. Something like 5 years later we had the birthday attack, but I don't recall anybody taking it especially seriously --- maybe because at roughly the same time, DNSSEC was going through the "typecode roll" that took us from DNSSEC to DNSSECbis, and nobody was confident about pushing DNSSEC at that point; the TLDs weren't even signed.
Then 5 years after that we got Kaminsky. There's a spark of interest in DNSSEC after that... but all the vendors who hadn't already adopted DJB's randomization immediately did, and Kaminsky's attack stopped mattering.
By this point I think it was clear to everybody that protecting transactions wasn't going to be the motivating use case for DNSSEC, so people shifted to DANE: using DNSSEC as a global PKI to replace the X.509 certificate authorities. But DANE flat-out never worked; you couldn't deploy it in a way that was resilient against downgrades, so there was simply no point.
Then Google and Mozilla killed several of the largest CAs, and used their market power to force CT on the remaining (and thoroughly cowed) CAs. And LetsEncrypt happened. So modern concern over replacing the X.509 CAs registers somewhere in seriousness alongside Linux on the Desktop.
People try to come up with increasingly exotic reasons why we'll be forced to use DNSSEC with the WebPKI; it's not so much DANE anymore as it is resilience against BGP attacks and validation of ACME DNS challenges. It's all pretty unserious.
Meanwhile: unlike DNSSEC, which has seen only marginal adoption over 30 years, DoH has caught fire. Within the next 5 years, it's not unlikely that we'll come up with some deployment scenario whereby CAs can use DoH to secure lookups all the way to authority servers. We'll see. It's a lot more likely than a global deployment of DNSSEC.
There's just no reason for it to exist anymore.
I have a lot more reasons than this not to like DNSSEC --- I actively dislike it as a protocol and as a cryptosystem. But those are just my takes, and what I've related in this comment is I think pretty much objectively verifiable.
I contend that creating signatures over the data at the sending side and then validating those signatures at the receiving end is superior protection to encrypting the channel over which the data is transmitted. Protecting the data is end-to-end. Protecting the channel is hop-by-hop. If you could ensure that the client speaks directly to the authoritative(s), the protection would be equivalent. But that's not how the DNS is operationally deployed (dnsmasq is a perfect example: forwarders really shouldn't exist).
I would agree with you that operationally, DNSSEC deployment is lacking (i.e., water is wet) and the requirement for both the authoritative side and resolving side to simultaneously implement is a (very) heavy lift. However, even your Tranco stats show that there are pockets of non-trivial deployment (e.g., governments) and I believe the trend is increased deployment over the long term globally.
Fortunately, it's not either/or. I personally prefer DoT/DoH to a trusted (i.e., that I run) resolver that DNSSEC validates. Unfortunately, as dnsmasq doesn't appear to implement forwarding to DoT/DoH resolvers, you're left with DNSSEC or not using dnsmasq (which is what I gather you're recommending).
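(If it helps make the distinction concrete, here's a minimal dnspython sketch of "validation happens at a resolver I trust": send the query with the DO bit set and look at the AD flag the validating resolver returns. The 1.1.1.1 resolver and the signed zone queried are just example choices.)

    import dns.flags
    import dns.message
    import dns.query

    # want_dnssec=True sets the DO bit and requests DNSSEC records.
    q = dns.message.make_query("nlnetlabs.nl", "A", want_dnssec=True)
    r = dns.query.udp(q, "1.1.1.1", timeout=3.0)

    # AD set means the *resolver* validated; the path from it to us is only
    # as trustworthy as the transport (hence DoT/DoH to a resolver you run).
    print("AD flag:", bool(r.flags & dns.flags.AD))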
Honestly this statement is not at all surprising. I would love to hear your perspective on what problem they thought was being solved.
Ensuring you have received a correct copy of a zone is the one thing we did get out of DNSSEC.
> I ran ISC during the development of BIND 9.0 and participated quite heavily in DNS-related working groups
Then we must have crossed paths. I'll happily buy you a beer or three next time.
> you have to take the steps DNSSEC takes
Ehhh. We are 30 years on and the current state is: recursive resolvers can strip DNSSEC and nothing happens but if they turn on validation things occasionally break.
Transport security has solved most of the real world problems and is being rapidly adopted: https://stats.labs.apnic.net/edns
I have done this on my router, along with a couple firewall rules to prevent plaintext DNS queries leaking out of the WAN port. dnsmasq is configured to talk to dnsproxy, and dnsproxy is configured to use DNS over TLS with 1.1.1.1 [3]
[1] https://github.com/AdguardTeam/dnsproxy
[2] https://openwrt.org/docs/guide-user/services/dns/dot_dnsmasq...
Imagine that:
* I have an evil system at 192.0.2.1
* target at 198.51.100.1, which is an MTA and runs its own resolver with dnsmasq.
* foobar.com has a nameserver that silently drops any request with a ! in the first label
I first send a mail to 198.51.100.1 claiming to be from bob@"foo!bar.foobar.com"
198.51.100.1 sends a request to the auth NS for foobar.com, which gets dropped.
While this is happening, I spam the crud out of 198.51.100.1 from 192.0.2.1 with forged answers for foo!bar.foobar.com that contain ADDITIONAL records stating deb.debian.org is at 192.0.2.1 with a TTL of months.
If I am lucky, dnsmasq caches BOTH the foo!bar.foobar.com response and the deb.debian.org one, meaning that future accesses to deb.debian.org instead go to my attacker-controlled nastybox.
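(Sketched with dnspython, the forged reply in that scenario would look something like this --- an answer for the throwaway name plus a poisoned ADDITIONAL record with a huge TTL; purely illustrative, not a working exploit:)

    import dns.message
    import dns.rrset

    q = dns.message.make_query("foo!bar.foobar.com", "A")
    forged = dns.message.make_response(q)
    forged.answer.append(
        dns.rrset.from_text("foo!bar.foobar.com.", 60, "IN", "A", "192.0.2.1"))
    # The poison: an unrelated name in the ADDITIONAL section with a long TTL.
    forged.additional.append(
        dns.rrset.from_text("deb.debian.org.", 7776000, "IN", "A", "192.0.2.1"))
    # A cache that accepted out-of-bailiwick ADDITIONAL data would now serve
    # 192.0.2.1 for deb.debian.org for ~90 days.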
They're giving Google Public DNS as example of a failure here. Whereas what happens in my testing is that it's a cache miss for Google Public DNS, which takes a little over 1 second to look everything up from cold in my part of the world for ~.www.example.com .
And in that second they have more than enough time, at LAN speeds (since they are injecting the forged responses from the dnsmasq client machine), to send a tonne of forged DNS/UDP responses which are only around a hundred bytes long each.
I cannot believe this is true because that would be too dumb
edit: I don't see how the avunit GitHub is related to the report. I don't think it is?
dig 'test~!*_' @resolver-ip
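Or, the same probe with dnspython, distinguishing a real answer/NXDOMAIN from a silent drop (the upstream address and the name queried are just placeholders):

    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode

    q = dns.message.make_query("test~!*_.example.com", "A")
    try:
        r = dns.query.udp(q, "8.8.8.8", timeout=5.0)
        print(dns.rcode.to_text(r.rcode()))   # e.g. NXDOMAIN --- not a drop
    except dns.exception.Timeout:
        print("no response (silently dropped?)")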
We truly need a "right to repair" for IOT & consumer networking devices. Any device not receiving monthly security updates should have the firmware keys & source published so the community can take over.
The correct mitigation is turning on DNSSEC. This sort of attack, known since 1993, is why DNSSEC was created.
dnsmasq cannot do this and just forwards UDP to UDP and TCP to TCP. (and doesn't know DoT or DoH)
DoH (and DoT, DNS over TLS) doesn't have this problem, as the whole exchange is encrypted and authenticated. (Well, this concrete problem might actually be prevented just by using plain old DNS over TCP, but I am not sure about that.)