This seems a severe limitation for certain applications (e.g. VPNs) which tunnel traffic which already has an ordering mechanism (such as TCP) [2]. This is not a limitation of, e.g., Noise [3], one of the existing hybrid protocols HPKE seems intended to replace.
[1] https://www.rfc-editor.org/rfc/rfc9180.html#name-message-ord...
[2] https://openvpn.net/faq/what-is-tcp-meltdown/
[3] https://noiseprotocol.org/noise.html#out-of-order-transport-...
For example, in TLS Encrypted Client Hello (ECH) [0], the server publishes a public key which is then used by the client to encrypt the ClientHello message. That encryption is performed using HPKE. Similarly, MLS uses HPKE as part of its inner core.
And because people often misunderstand RFCs as being "official":
> This document is not an Internet Standards Track specification; it is published for informational purposes.
> This document is a product of the Internet Research Task Force (IRTF). The IRTF publishes the results of Internet-related research and development activities. These results might not be suitable for deployment. This RFC represents the consensus of the Crypto Forum Research Group of the Internet Research Task Force (IRTF). Documents approved for publication by the IRSG are not candidates for any level of Internet Standard; see Section 2 of RFC 7841.
RFCs produced by the IRTF are not IETF standards. However, as a practical matter, when the IETF wants cryptographic algorithms specified, it has the Crypto Forum Research Group (CFRG, part of the IRTF) do it. HPKE is one such case, but see also X25519 (RFC 7748), Verifiable Random Functions (RFC 9381), etc. The CFRG has its own consensus development process that terminates in RFCs, which are then treated more or less as standards even though they are formally not.
This doesn't apply to other IRTF RFCs, which generally aren't treated as having any special status.
>> This document is not an Internet Standards Track specification; it is published for informational purposes.
I guess we should stop using NAT then, because it is also 'only' informational and not standards-track:
> This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.
Yep, that sounds completely factual to me.
Is it a similar situation for RFC 9180? I haven't delved into it, so I don't know whether it's something concretely and compatibly implementable or just a broad description of an approach. But even if it is the latter, it probably could have been made concretely and compatibly implementable, so I'm reasonably sure it's still a different situation from NAT, where, by its nature, nothing directly connected to it is suitable for the Standards Track.
* https://datatracker.ietf.org/doc/html/rfc6296
I wonder if these people would consider informational better or worse than experimental.
(At the time I didn't know NA(P)T44 was 'only' informational, so was not able to bring it up as a 'counter-argument'.)
> A key transport protocol is similar to a key exchange algorithm in that the sender, Alice, generates a random symmetric key and then encrypts it under the receiver’s public key. Upon successful decryption, both parties then share this secret key.
Isn't the following just describing how everyday PKE works?
> The general paradigm here is called "hybrid public-key encryption" because it combines a non-interactive key exchange based on public-key cryptography for establishing a shared secret, and a symmetric encryption scheme for the actual encryption.
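For concreteness, the quoted paradigm can be sketched in a few lines of Python: an ephemeral Diffie-Hellman "encapsulation" establishes the shared secret, and a symmetric keystream encrypts the actual payload. Everything here is a deliberately insecure toy (tiny group, SHA-256 counter-mode XOR in place of a real AEAD) meant only to make the flow visible; the function names are made up, and this is not HPKE itself.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy Mersenne prime; a real scheme uses a vetted group or curve
G = 5

def keygen():
    sk = secrets.randbelow(P - 2) + 1  # private scalar in [1, P-2]
    return sk, pow(G, sk, P)           # (private key, public key)

def keystream(shared: bytes, n: int) -> bytes:
    # Expand the shared secret into n bytes via SHA-256 in counter mode.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(shared + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(pk_recipient: int, plaintext: bytes):
    # Sender: fresh ephemeral key pair per message; the ephemeral public
    # key plays the role of the "encapsulated key".
    esk, epk = keygen()
    shared = pow(pk_recipient, esk, P).to_bytes(16, "big")
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(shared, len(plaintext))))
    return epk, ct

def unseal(sk_recipient: int, epk: int, ct: bytes) -> bytes:
    # Recipient recomputes the same shared secret from the ephemeral key.
    shared = pow(epk, sk_recipient, P).to_bytes(16, "big")
    return bytes(c ^ k for c, k in zip(ct, keystream(shared, len(ct))))
```

The point is the shape of the flow (non-interactive key establishment, then symmetric encryption), not the primitives: swap in X25519 and an AEAD and you're close to what RFC 9180 actually specifies.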
It feels like the blog post is not principally concerned with explaining the benefits of HPKE (everyone is already using some form of it), but rather with proposing a standard for the many ways people are doing it.
The real benefit of this is the long tail of obscure applications whose weaknesses no one is looking at - until eventually, some bad actor does.
Replacing working cryptographic standards is expected from an NSA front.
> A paper by Martinez et al. provides a thorough and technical comparison of these different standards. The key points are that all these existing schemes have shortcomings. They either rely on outdated or not-commonly-used primitives such as RIPEMD and CMAC-AES, lack accommodations for moving to modern primitives (e.g., AEAD algorithms), lack proofs of IND-CCA2 security, or, importantly, fail to provide test vectors and interoperable implementations
For a more thorough analysis of one of its novelties, namely the authenticated mode, you can check this paper:
Analysing the HPKE Standard:
https://link.springer.com/chapter/10.1007/978-3-030-77870-5_...
From what I can gather, this work tries to unify and generalize several existing hybrid public-key encryption standards, which all apparently have various issues, mostly stemming from outdated primitives and a lack of extensibility.
So this work tries to introduce extensibility in a secure way, while also ensuring interoperability. The motivation seems to be enabling the use of HPKE in IETF standards.
Another benefit over previous standards seems to be the addition of authenticated modes, where the sender authenticates itself to the recipient.
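To make the authenticated-mode idea concrete, here is a toy sketch using the same kind of deliberately insecure Diffie-Hellman parameters a real group would replace: the sender mixes an ephemeral DH (for freshness) with a DH involving its *static* key, so if the recipient derives a matching key, the message must have come from the claimed sender. The names and structure are illustrative only, not the RFC 9180 key schedule.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime, not a real group
G = 5

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def sender_secret(sk_sender: int, pk_recipient: int):
    # Ephemeral DH provides freshness; static DH authenticates the sender.
    esk, epk = keygen()
    dh_eph = pow(pk_recipient, esk, P)
    dh_static = pow(pk_recipient, sk_sender, P)
    key = hashlib.sha256(
        dh_eph.to_bytes(16, "big") + dh_static.to_bytes(16, "big")
    ).digest()
    return epk, key  # epk is sent along with the ciphertext

def recipient_secret(sk_recipient: int, pk_sender: int, epk: int) -> bytes:
    # Recipient recomputes both DH values; only the true sender's static
    # private key yields the same dh_static, hence the same derived key.
    dh_eph = pow(epk, sk_recipient, P)
    dh_static = pow(pk_sender, sk_recipient, P)
    return hashlib.sha256(
        dh_eph.to_bytes(16, "big") + dh_static.to_bytes(16, "big")
    ).digest()
```

Note this gives the recipient assurance about the sender without a signature; that is exactly the trade-off HPKE's Auth mode offers over the base mode.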
It's a subset of what Noise defines, standardized and further parameterized.
The point of the RFC is to level up (and make consistent) future cryptographic designs from the IETF. It's not something you'd use directly.
https://neilmadden.blog/2021/01/22/hybrid-encryption-and-the...
tl;dr: resistance to padding attacks, better support for non-RSA cryptosystems
For some reason, this page doesn't have a 301 redirect set up when you access the plain HTTP page.
My opinion on the topic:
I think that's perfectly reasonable. Why would they want to force users to use TLS? I don't understand the appeal of HTTP->HTTPS 301 redirects on every single website.
A static document is served on the linked page, no passwords are transmitted. Plaintext is perfectly enough for fulfilling requests for this document. TLS would just use more processing power and time to deliver the document.
If the website visitor is interested in downloading the document encrypted, he is free to use the HTTPS protocol with a browser setting.
If the website publisher is interested in sending the document encrypted, he is free to use HSTS or the redirect hack you described.
But I don't see any reason why noiseprotocol.org would like all connections to them being encrypted; they are not an online bank. Enforcing TLS just prevents visitors without TLS support from being able to view documents on their website.
> A static document is served on the linked page, no passwords are transmitted. Plaintext is perfectly enough for fulfilling requests for this document. TLS would just use more processing power and time to deliver the document.
While I don't have hard figures for the traffic volume for this domain, I'd wager it's on the order of 100,000 requests a month -- maybe on the order of a million on the high end. That's something you can easily handle with a micro instance on AWS, as an example, and under almost every cloud hosting pricing scheme, the extra cost from the (honestly, quite minimal) difference in power needed to serve HTTPS rather than HTTP isn't billed. The additional time to deliver a document over HTTPS is imperceptible to humans on any remotely modern (last ~15 years) hardware, on almost any connection outside of the absolute worst network environments.
That it doesn't receive sensitive data is moot -- it sends what can easily be considered sensitive data. White papers about encryption and obfuscation protocols are often of interest to, for example, political dissidents, both in countries where they're at risk and abroad, and they may be targeted in either case. Modern extensions like ECH (which is beginning to be widely available[0]) offer a degree of protection against that, and prevent snooping in general.
Political dissidents aside, it's another channel advertisers can use to see your browsing habits and build up a targeted advertising profile. There have been isolated incidents where ISPs were caught injecting ads into users' pages; I think a more realistic issue would be ISPs auctioning off browsing histories.
> If the website visitor is interested in downloading the document encrypted, he is free to use the HTTPS protocol with a browser setting.
We live in a world where almost all content is delivered over HTTPS thanks to the efforts of orgs like Let's Encrypt. I wouldn't have noticed that that document is plain-HTTP if not for the fact that I decided to toggle Firefox's HTTPS-only mode earlier that day just to see what impact it has on day-to-day browsing.
Most people aren't careful about this, and most people probably prefer privacy if the option costs them nothing and takes no time.
> But I don't see any reason why noiseprotocol.org would like all connections to them being encrypted; they are not an online bank. Enforcing TLS just prevents visitors without TLS support from being able to view documents on their website.
"Visitors without TLS support" represents a vanishingly small group of people that I don't even know how to describe. Who would this even be, in practical terms?
[0] https://blog.cloudflare.com/announcing-encrypted-client-hello/
Adding that silent redirect, then, merely looks or feels safer, but, at best, seems more likely to encourage broken behavior among users, who now might always see secure locks in their browser but not realize all of the links they are clicking were insecure and so the chain of trust on their location is now tainted. In contrast, none of the alternatives -- 1) not listening on port 80, 2) replacing the http website with a stub which asks users to manually change the scheme to https, or 3) providing two copies of the website (one that is secure and one that isn't) -- seem anywhere near as dangerous.
(BTW, you are also ascribing magical protection properties to ECH: in addition to the obvious issue that the IP address of a server is usually sufficient to tell what website someone is using, TLS fails to protect users against content-fingerprinting attacks--think figuring out what part of Google Maps you are looking at, what YouTube video you are watching, or even what query you are typing into a search bar--and so isn't really a sufficient basis for protecting someone against persecution. You need some kind of Internet overlay network to even begin to approach that problem, converting it back into an integrity problem.)
b:~[0]# time curl --no-progress-meter --output /dev/null http://noiseprotocol.org
real 0m0,353s
user 0m0,004s
sys 0m0,003s
b:~[0]# time curl --no-progress-meter --output /dev/null https://noiseprotocol.org
real 0m0,705s
user 0m0,041s
sys 0m0,006s
b:~[0]#
It takes double the time to load the page with TLS. No matter the hardware and network (tested on high-bandwidth, low-latency, uncongested FTTH), you can't go faster than the speed of light, and the additional TLS handshake round trips don't make loading any faster. Notice also the increase in CPU time when using HTTPS (the "user" field in time's output).
> That it doesn't receive sensitive data is moot -- it sends what can easily be considered sensitive data. White papers about encryption and obfuscation protocols are often of interests to, for example, political dissidents both in countries where they're at risk and abroad, and they may be targeted in either case. Modern extensions like ECH (which is beginning to be widely available[0]) offer a degree of protection against that, and prevent snooping in general.
AFAIK ECH only works with a centralized MITM-style proxy like Cloudflare, not with a self-hosted page, and it's debatable what privacy implications hosting via Cloudflare has. You cannot prevent the hostname from being sent in plaintext to a self-hosted third-party website if you want SNI, which servers often expect clients to send. EDIT: ECH is currently disabled globally on Cloudflare: https://developers.cloudflare.com/ssl/edge-certificates/ech/
Whenever you connect via TLS, Firefox sends an unencrypted OCSP HTTP request to the CA's servers that identifies the certificate being checked (and hence the site you're visiting). That's yet another thing that slows down requests to TLS-protected websites and that wasn't measured in my curl benchmark (unless OCSP stapling is used, of course).
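For what it's worth, enabling stapling on the server side is typically just a couple of directives; a hypothetical nginx fragment (the paths and resolver address below are placeholders, not a real deployment) might look like:

```nginx
# With stapling, the server fetches and caches the OCSP response itself
# and attaches it to the TLS handshake, so clients never contact the CA.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/chain.pem;  # CA chain used to verify the stapled response
resolver 1.1.1.1;  # needed so nginx can reach the CA's OCSP responder
```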
> Political dissidents aside, it's another channel advertisers can use to see your browsing habits and build up a targeted advertising profile. There have been isolated incidents where ISPs were caught injecting ads into users' pages; I think a more realistic issue would be ISPs auctioning off browsing histories.
ISPs can only do that if you allow them to. Where I live, they can't; it's forbidden by law and network history is considered personal data. Injecting ads is also illegal where I live; ISPs tampering with anything above layer 3 is not allowed as per Internet neutrality laws here.
Just the IP address is enough to figure out someone is browsing the Noise Protocol website: Try visiting http://138.68.46.44
> I wouldn't have noticed that that document is plain-HTTP if not for the fact that I decided to toggle Firefox's HTTP-only mode earlier that day just to see what impact it has on day-to-day browsing.
Maybe set your browser up so that it automatically upgrades HTTP to HTTPS without failing? That way, you wouldn't have noticed any issue at all.
Bottom line: there is no reason to force users to use TLS on noiseprotocol.org, and doing so would be worse for the website's users without providing much benefit.
Sure, let's create another one and expect it to be used by all of those who use existing standards.
It would be great to understand the specific motive driving it, instead of converging on an existing standard.
The idea here is simply that the IETF is going to need to do asymmetrically encrypted message exchanges in future standards, and CFRG's job is to make sure there's a well-vetted, carefully designed construction they can pull off the shelf to do that. There is not, in 2024, an existing reference point for this construction that IETF WGs could use, if only because of PQC.
You have to wait for an IETF WG to actually use HPKE before you get to complain about the standards proliferation; it'll be that "customer" (of CFRG) standard that will need to be judged on whether further standardization is a good idea.
It doesn't make much sense to dunk on the actual cryptographers at IRTF, who are more or less just explaining to the IETF WGs how to do cryptography properly.
(I say this as someone who considers themself an opponent of all cryptographic standards bodies).
The idea here is to write a clean, easy-to-use spec for hybrid public-key encryption. (We're using the name "ECIES", but as the draft notes, the idea is clearly more general.) This primitive has come up in IETF work on MLS and ESNI [0][1], and in several other protocols, e.g., through the NaCl "box" API [2]. The hope here is to have a single spec that unifies these ideas and can be the target of formal verification.
I admit that there's a little bit of XKCD#927 here [3], but I think there's good work to do here in terms of addressing some more modern use cases (e.g., streaming / multiple encryptions from a single DH) and possibly enabling better post-quantum support by generalizing to KEM instead of DH.
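That last point -- generalizing from DH to a KEM -- amounts to hiding the DH step behind a keygen/encap/decap interface, so that a post-quantum KEM can later be slotted in without changing the rest of the scheme. A toy sketch of that interface, assuming only the Python standard library; the names are illustrative, not the RFC 9180 API, and the DH parameters are insecure toys.

```python
import hashlib
import secrets
from typing import Protocol, Tuple

class KEM(Protocol):
    # The abstraction HPKE builds on: any scheme offering these three
    # operations (DH-based or post-quantum) can be plugged in.
    def keygen(self) -> Tuple[int, int]: ...
    def encap(self, pk: int) -> Tuple[bytes, int]: ...   # (shared secret, encapsulated key)
    def decap(self, sk: int, enc: int) -> bytes: ...     # shared secret

class ToyDHKEM:
    # Diffie-Hellman viewed as a KEM: "encap" performs an ephemeral DH,
    # and the ephemeral public key is the encapsulated key.
    P, G = 2**127 - 1, 5  # toy parameters, not secure

    def keygen(self):
        sk = secrets.randbelow(self.P - 2) + 1
        return sk, pow(self.G, sk, self.P)

    def encap(self, pk):
        esk, epk = self.keygen()
        ss = hashlib.sha256(pow(pk, esk, self.P).to_bytes(16, "big")).digest()
        return ss, epk

    def decap(self, sk, epk):
        return hashlib.sha256(pow(epk, sk, self.P).to_bytes(16, "big")).digest()
```

The rest of the scheme (key schedule, AEAD) only ever sees the shared secret, which is exactly what makes the KEM swappable.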