https://cloud.google.com/load-balancing/docs/url-map-concept...
Today, L4 systems like Google Cloud's and Cloudflare's load balancers handle hundreds of millions of requests per second. Traffic to a subdomain is almost certainly being routed through the same frontend infrastructure anyway, so there is no availability benefit to using a subdomain. That's why I describe subdomains as archaic in the context of building a highly available system like Google's.
If you're Google in 2010, maps.google.com is a necessity. If you're Google in 2025, maps.google.com is a choice. Subdomains are great for many reasons but are no longer a necessity for high availability.
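To make that concrete, here's a toy sketch of why the two styles look identical from the frontend's point of view (the backend names are made up, and this is nothing like Google's real config; GCP's URL maps linked above express the same host-rule/path-matcher idea declaratively):

    # Toy L7 dispatch: a subdomain and a path prefix are both just keys the shared
    # frontend looks up before forwarding. Backend names here are hypothetical.
    HOST_RULES = {
        "maps.google.com": "maps-backend",
        "mail.google.com": "mail-backend",
    }
    PATH_RULES = {
        "/maps": "maps-backend",
        "/mail": "mail-backend",
    }

    def route(host: str, path: str, default: str = "web-backend") -> str:
        """Pick a backend by host rule first, then by longest matching path prefix."""
        if host in HOST_RULES:
            return HOST_RULES[host]
        for prefix in sorted(PATH_RULES, key=len, reverse=True):
            if path.startswith(prefix):
                return PATH_RULES[prefix]
        return default

    # Same frontend, same lookup, same backend either way:
    assert route("maps.google.com", "/") == route("www.google.com", "/maps")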
>Subdomains are archaic
You presented a somewhat different argument. Also, I disagree: maps.google.com is a fundamentally different service, so why should it share a domain with google.com? The only reason it isn't googlemaps.com is that being a subdomain of google.com implies trust.
But I guess it's pretty subjective. Personally I always try to separate services by domain, because it makes sense to me, but maybe if the internet had gone down a different path I would swear path routing makes sense.
Yeah, this would definitely block that.
DNS-based (hostname) allowlisting is just starting to hit the market (see Microsoft's "Zero Trust DNS" [1]), and this would kill it. Even traditional proxy-based access control is neutered by this, and the nice thing about that approach is that it can be done without TLS interception.
If you're left with only path-based rules, you're back to TLS interception if you want to control network access.
[1] https://techcommunity.microsoft.com/blog/networkingblog/anno...
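To spell out the asymmetry: a hostname is visible on the wire (in the DNS query and usually in the TLS SNI), while the path only exists inside the encrypted request. A toy sketch of the two policy shapes (the allowlists are made up):

    # A hostname rule can be enforced from data already visible on the wire
    # (the DNS question or the TLS ClientHello SNI); no decryption needed.
    ALLOWED_HOSTS = {"maps.google.com", "mail.google.com"}   # hypothetical policy

    def allow_by_hostname(name_from_dns_or_sni: str) -> bool:
        return name_from_dns_or_sni in ALLOWED_HOSTS

    # A path rule can only be enforced after the HTTP request is decrypted,
    # i.e. by a TLS-intercepting proxy re-signing traffic with its own CA.
    ALLOWED_PATH_PREFIXES = ("/maps/", "/mail/")             # hypothetical policy

    def allow_by_path(decrypted_path: str) -> bool:
        return decrypted_path.startswith(ALLOWED_PATH_PREFIXES)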
But the vast majority of users don’t care about URL structure. If a company goes through the effort to change them, it’s because the company expects to benefit somehow.
Why doesn't Google have DNSSEC?
Google doesn't have DNSSEC because they've chosen not to implement it, FWIU.
/? DNSSEC deployment statistics: https://www.google.com/search?q=dnssec+deployment+statistics...
If not DNSSEC, then they should push another standard for signing DNS records, so that records are signed at rest and encrypted in motion.
Do DS records or multiple TLDs and x.509 certs prevent load balancing?
Were there multiple keys for a reason?
Containers, pip, and conda packages have TUF and now there's sigstore.dev and SLSA.dev. W3C Verifiable Credentials is the open web standard JSONLD RDF spec for signatures/attestations.
IDK how many reinventions of GPG there are.
Do all of these systems differ only in key distribution and key authorization, ceteris paribus?
It’s unreachable anyway
man resolv.conf, read up on search domains and the ndots option
[0]: https://jvns.ca/blog/2022/09/12/why-do-domain-names-end-with...
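If you don't want to dig through the man page, here's a rough Python model of the candidate ordering a glibc-style stub resolver uses (the search domains below are invented); it also shows why a trailing dot, as in "uz.", bypasses the search list entirely:

    def candidate_names(name: str, search: list[str], ndots: int = 1) -> list[str]:
        """Approximate the order a glibc-style stub resolver tries names.

        A trailing dot marks the name as fully qualified: no search-list expansion.
        Otherwise, a name with fewer than `ndots` dots gets the search domains
        appended and tried first; a "dotty enough" name is tried as-is first.
        """
        if name.endswith("."):
            return [name]
        expanded = [f"{name}.{domain}" for domain in search]
        if name.count(".") < ndots:
            return expanded + [name]
        return [name] + expanded

    # Hypothetical resolv.conf: "search corp.example.com example.com" + "options ndots:2"
    print(candidate_names("uz", ["corp.example.com", "example.com"], ndots=2))
    # ['uz.corp.example.com', 'uz.example.com', 'uz']
    print(candidate_names("uz.", ["corp.example.com", "example.com"], ndots=2))
    # ['uz.']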
http://uz./ serves a 500 error.
At Google scale, redirecting requests to ccTLD versions uses up plenty of resources and bandwidth:
GET request to .com (e.g. from URL-bar searches)
GeoIP lookup or cookie check
Redirect to ccTLD
Much of this is then repeated on the ccTLD.
This change should decrease latency for users (no redirect, no extra DNS lookups, no extra TLS handshake) and enhance caching of resources.
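A rough way to see that cost from the client side (the hosts and the redirect behavior here are illustrative; what you actually get depends on region and cookies):

    # Each new hostname in a redirect chain costs another DNS lookup, TCP connect,
    # and TLS handshake before the real response arrives. Rough client-side sketch.
    import http.client
    import time

    def timed_head(host: str, path: str = "/") -> None:
        t0 = time.monotonic()
        # HTTPSConnection connects lazily: DNS + TCP + TLS happen on the first request.
        conn = http.client.HTTPSConnection(host, timeout=10)
        conn.request("HEAD", path)
        resp = conn.getresponse()
        print(f"{host}{path}: {resp.status} {resp.reason} "
              f"Location={resp.getheader('Location')} in {time.monotonic() - t0:.3f}s")
        conn.close()

    timed_head("www.google.com")      # old flow step 1: hit .com, maybe get a redirect
    # timed_head("www.google.de")     # old flow step 2: repeat DNS/TCP/TLS on the ccTLD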