Upcoming changes to Let's Encrypt and how they affect XMPP server operators
103 points
by zaik
10 hours ago
| 14 comments
| blog.prosody.im
thayne
1 hour ago
[-]
I can think of a few other ways that client certificates could work, but they have problems too:

1. Use DANE to verify the client certificate. But that requires DNSSEC, which isn't widely used. It would probably require new implementations of the handshake to check the client cert, and would add latency, since the server has to do a DNS lookup to verify the client's cert.

2. When the server receives a request, it makes an HTTPS request to a well-known endpoint on the domain in the client cert's subject that serves a CA certificate, then checks that the client cert is signed by that CA. The client generates its client cert with that CA (or even uses the same self-signed cert for both). This way the authenticity of the client CA is verified using the web PKI cert. But the implementation is kind of complicated, and has an even worse latency problem than 1.

3. The server has an endpoint where a client can request a client certificate from that server, probably with a fairly short expiration, for a domain, with a CSR or equivalent. The server then responds by making an HTTPS POST to a well-known endpoint on the requested domain containing a certificate signed by the server's own CA. But for that to work, the registration request needs to be unauthenticated, and could possibly be vulnerable to DoS attacks. It also requires state on the client side, to connect the secret key with the final cert (unless the server generated a new secret key for the client, which probably isn't ideal). And the client should probably cache the cert until it expires.

And AFAIK, all of these would require changes to how XMPP and other federated protocols work.
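Option 2 above can be sketched roughly as follows, using the pyca/cryptography library. The HTTPS fetch of the domain's published CA is out of scope; here the "published" CA is generated locally so the sketch is self-contained, and all names are illustrative.

```python
# Sketch of option 2: check that a presented client cert is signed by a CA
# that the claimed domain publishes at a well-known endpoint (the fetch of
# that CA is omitted; we generate it locally instead).
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID


def issue(subject_cn, issuer_name, issuer_key, subject_key):
    """Build a minimal certificate for subject_cn signed by issuer_key."""
    now = datetime.now(timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(issuer_name)
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=7))  # short-lived, as in option 3
        .sign(issuer_key, hashes.SHA256())
    )


def signed_by(cert, ca_cert):
    """True iff cert's signature verifies under ca_cert's public key."""
    try:
        ca_cert.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            ec.ECDSA(cert.signature_hash_algorithm),
        )
        return True
    except Exception:
        return False


# The domain's "published" CA, and the client cert it signed:
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "xmpp.example CA")])
ca_cert = issue("xmpp.example CA", ca_name, ca_key, ca_key)

client_key = ec.generate_private_key(ec.SECP256R1())
client_cert = issue("xmpp.example", ca_cert.subject, ca_key, client_key)
```

A real implementation would also need expiry checks and the latency-adding fetch of the CA over HTTPS, which is exactly the drawback noted above.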

reply
jammcq
10 hours ago
[-]
I like how the article describes how certificates work for both client and server. I know a little bit about it, but what I read helped reinforce what I already knew and taught me something new. I appreciate it when someone takes the time to explain things like this.
reply
MattJ100
7 hours ago
[-]
Thanks! I didn't intentionally write this for a broader audience (I didn't expect to see it while casually opening HN!). Our user base is quite diverse, so I try to find the balance between being too technical and over-explanatory. Glad it was helpful!
reply
agwa
7 hours ago
[-]
Is there a reason why dialback isn't the answer?

I would think it's more secure than clientAuth certs because if an attacker gets a misissued cert they'd have to actually execute a MitM attack to use it. In contrast, with a misissued clientAuth cert they can just connect to the server and present it.

Another fun fact: the Mozilla root store, which I'd guess the vast majority of XMPP servers are using as their trust store, has ZERO rules governing clientAuth issuance[1]. CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.

[1] https://www.mozilla.org/en-US/about/governance/policies/secu...

reply
MattJ100
7 hours ago
[-]
> Is there a reason why dialback isn't the answer?

There are some advantages to using TLS for authentication as well as encryption, which is already a standard across the internet.

For example, unlike an XMPP server, CAs typically perform checks from multiple vantage points ( https://letsencrypt.org/2020/02/19/multi-perspective-validat... ). There is also a lot of tooling around TLS, ACME, CT logs, and such, which we stand to gain from.

In comparison, dialback is a 20-year-old homegrown auth mechanism, which is more vulnerable to MITM.

Nevertheless, there are some experiments to combine dialback with TLS. For example, checking that you get the same cert (or at least public key) when connecting back. But this is not really standardized, and can pose problems for multi-server deployments.

> It has never been secure to accept the clientAuth EKU when using the Mozilla root store.

Good job we haven't been doing this for a very long time by now :)

reply
agwa
7 hours ago
[-]
Ah, I didn't know that dialback doesn't use TLS. That's too bad.
reply
MattJ100
7 hours ago
[-]
Sorry, it's late here and I guess I didn't word it well. Dialback (these days) always runs over a TLS-encrypted connection, as all servers enforce TLS.

The next question is how to authenticate the peer, and that can be done a few ways, usually either via the certificate PKI, via dialback, or something else (e.g. DNSSEC/DANE).

My comment about "combining dialback with TLS" was to say that we can use information from the TLS channel to help make the dialback authentication more secure (by adding extra constraints to the basic "present this magic string" that raw dialback authentication is based on).
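The "magic string" is roughly what XEP-0185 describes: a key derived from a server-local secret and the identities involved, which can be recomputed statelessly when the authoritative server is dialed back. A minimal stdlib sketch (field order and derivation details simplified from the XEP):

```python
import hashlib
import hmac


def dialback_key(secret: bytes, originating: str, receiving: str, stream_id: str) -> str:
    # HMAC over the two domains and the stream ID, keyed with a hash of a
    # server-local secret, so no per-stream state needs to be stored.
    key = hashlib.sha256(secret).hexdigest().encode()
    text = " ".join((receiving, originating, stream_id)).encode()
    return hmac.new(key, text, hashlib.sha256).hexdigest()


def verify_dialback_key(secret: bytes, originating: str, receiving: str,
                        stream_id: str, presented: str) -> bool:
    expected = dialback_key(secret, originating, receiving, stream_id)
    return hmac.compare_digest(expected, presented)
```

Binding this to the TLS channel, as described above, would mean additionally checking that the dialback connection sees the same certificate or public key.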

reply
nightpool
2 hours ago
[-]
How would dialback-over-TLS be "more vulnerable to MITM", though? I think that claim is what led to the confusion. I don't see how TLS-with-client-EKU is more secure than TLS-with-dialback.
reply
RobotToaster
9 hours ago
[-]
Why did LE make this change? It feels like a rather deliberate attack on the decentralised web.
reply
ameliaquining
9 hours ago
[-]
Google has recently imposed a rule that CA roots trusted by Chrome must be used solely for the core server-authentication use case, and can't also be used for other stuff. They laid out the rationale here: https://googlechrome.github.io/chromerootprogram/moving-forw...

It's a little vague, but my understanding reading between the lines is that sometimes, when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases with different cost-benefit profiles on security vs. availability, and the browser vendors want that to stop happening.

Let's Encrypt could of course continue offering client certificates if they wanted to, but they'd need to set up a separate root for those certificates to chain up to, and they don't think there's enough demand for that to be worth it.

reply
thayne
2 hours ago
[-]
So that argues against including CAs that don't issue server authentication certificates. That's somewhat reasonable, although it does put non-browser use cases in an awkward position, since there isn't currently a standard distribution channel for trusted CAs that is independent of browsers.

But prohibiting certs from being marked for client usage is mostly unrelated to that goal because:

1. There are many non-web use cases for certificates that are only used for server authentication. And

2. There are use cases where it makes sense to use the same certificate used for web PKI as a client with mTLS to another server using web PKI, especially for federated communication.

reply
xg15
8 hours ago
[-]
This sounds a lot like the "increasing hostility for non-web usecases" line in the OP.

In theory, Chrome's rule would split the CA system into a "for web browsers" half and a "for everything else" half - but in practice, there might not be a lot of resources to keep the latter half operational.

reply
notepad0x90
7 hours ago
[-]
Why can't Let's Encrypt push-back on this for their users' sake? What is Google going to do? distrust LE certs?
reply
agwa
7 hours ago
[-]
Google Chrome (along with Mozilla, and eventually the other root stores) distrusted Symantec, despite being the largest CA at the time and frequently called "too big to fail".
reply
notepad0x90
5 hours ago
[-]
Given how ubiquitous LE is, I think people will switch browsers first. There are plenty of non-Chrome browsers based on Chromium, and they can choose to trust LE despite Chrome's choices. Plus, with Symantec they had a good reason to distrust them. This is just them flexing; there is no real reason to distrust LE, and non-web PKI use does not reduce security.
reply
nightpool
2 hours ago
[-]
GP gave a very good reason that non-web-PKI use reduces security; you just refused to accept it. Anybody who has read any CA forum threads over the past two years is familiar with how big of a policy hole mixed-use certificates are when dealing with revocation timelines and misissuance.
reply
kej
9 hours ago
[-]
>when attempts were made to push through security-enhancing changes to the Web PKI, CAs would push back on the grounds that there'd be collateral damage to non-Web-PKI use cases

Do you (or anyone else) have an example of this happening?

reply
agwa
8 hours ago
[-]
After the WebPKI banned the issuance of new SHA-1 certificates due to the risk of collisions, several major payment processors (Worldpay[1], First Data[2], TSYS[3]) demanded to get more SHA-1 certificates because their customers had credit card terminals that did not support SHA-2 certificates.

They launched a gross pressure campaign, trotting out "small businesses" and charity events that would lose money unless SHA-1 certificates were allowed. Of course, these payment processors did billions in revenue per year and had years to ship out new credit card terminals. And small organizations could have and would have just gotten a $10 Square reader at the nearest UPS store if their credit card terminals stopped working, which is what the legacy payment processors were truly scared of.

The pressure was so strong that the browser vendors ended up allowing Symantec to intentionally violate the Baseline Requirements and issue SHA-1 certificates to these payment processors. Ever since, there has been a very strong desire to get use cases like this out of the WebPKI and onto private PKI where they belong.

A clientAuth EKU is the strongest indicator possible that a certificate is not intended for use by browsers, so allowing them is entirely downside for browser users. I feel bad for the clientAuth use cases where a public PKI is useful and which aren't causing any trouble (such as XMPP) but this is ultimately a very tiny use case, and a world where browsers prioritize the security of ordinary Web users is much better than the bad old days when the business interests of CAs and their large enterprise customers dominated.

[1] https://groups.google.com/g/mozilla.dev.security.policy/c/RH...

[2] https://groups.google.com/g/mozilla.dev.security.policy/c/yh...

[3] https://groups.google.com/g/mozilla.dev.security.policy/c/LM...

reply
ge0rg
8 hours ago
[-]
It is really great how they write "TLS use cases" and in fact mean HTTPS use cases.

CA/Browser Forum has disallowed the issuance of server certificates that make use of the SRVName [0] subjectAltName type, which obviously was a server use case, and I guess the only reason why we still are allowed to use the Web PKI for SMTP is that both operate on the server hostname and it's not technically possible to limit the protocol.

It would be perfectly fine to let CAs issue certificates for non-Web use-cases with a different set of requirements, without the hassle of maintaining and distributing multiple Roots, but CA/BF deliberately chose not to.

[0] https://community.letsencrypt.org/t/srvname-and-xmppaddr-sup...

reply
RobotToaster
9 hours ago
[-]
Isn't LE used for half the web at this point?

Calling Google's bluff and seeing if they would willingly cut their users off from half the web seems like an option here.

reply
bawolff
8 hours ago
[-]
That's not how this would work.

Based on previous history, where people actually did call Google's bluff to their regret, what happens is that Google keeps trusting all current certificates and just stops trusting new certs as they are issued.

Google has dragged PKI security into the 21st century kicking and screaming. Their reforms are the reason why PKI security is not a joke anymore. They are definitely not afraid to call CA companies' bluffs. They will win.

reply
xg15
8 hours ago
[-]
How is "client certificates forbidden" in any way an improvement?
reply
bawolff
1 hour ago
[-]
As a general rule in cryptography, a lot of vulnerabilities relate to confusing the system by using a correct thing in the wrong context. Making it a rule that you have to use separate chains for separate purposes is good from a general design standpoint.
reply
Avamander
7 hours ago
[-]
Not forbidden, just not going to be a part of WebPKI.

It's one of those things that has just piggybacked on top of WebPKI and things just piggybacking is a bad idea. There have been multiple cases in the past where this has caused a lot of pain for making meaningful improvements (some of those have been mentioned elsewhere in this thread).

reply
xg15
7 hours ago
[-]
What exactly do you mean with "WebPKI"?

The PKI system was designed independently of the web and the web used to be one usecase of it. You're kind of turning that around here.

reply
mcpherrinm
7 hours ago
[-]
The current PKI system was designed by Netscape as part of SSL to enable secure connections to websites. It was never independent of the web. Of course PKIs and TLS have expanded beyond that.

"WebPKI" is the term used to refer to the PKI used by the web, with root stores managed by the browsers. Let's Encrypt is a WebPKI CA.

reply
Avamander
7 hours ago
[-]
The idea of a PKI was of course designed independently, there are many very large PKIs beyond WebPKI. However the one used by browsers is what we call WebPKI and that has its own CAs and rules.

You're trying to make it sound like there has ever been some kind of a universal PKI that can be used for everything without any issues.

reply
bawolff
6 hours ago
[-]
> What exactly do you mean with "WebPKI"?

WebPKI is the name of a specific PKI, whereas PKI is a generic term for any public key infrastructure.

reply
detourdog
9 hours ago
[-]
I’m disappointed that a competitor doesn’t exist that uses longevity of IP routing as a reputation validator. I would think maintaining routing of DNS to a static IP is a better metric for reputation. Having unstable infrastructure to me is a flag for fly by night operations.
reply
ocdtrekkie
9 hours ago
[-]
Well, be prepared for certificates that change every 7 to 47 days, as the Internet formally moves to security being built entirely on sand.
reply
webstrand
8 hours ago
[-]
I wonder if this is a potential "off switch" for the internet. Just hit the root CA so they can't hand out renewed certificates; you'd only have to keep them down for a week or so.
reply
gus_massa
8 hours ago
[-]
People will learn to press all the buttons with scary messages to ignore the wrong certificates. It may be a problem for credit cards and online shopping.
reply
ocdtrekkie
7 hours ago
[-]
HSTS was specifically designed to block you from having any ignore buttons. (And Firefox refuses to implement a way to bypass it.)

But this is also why the current PKI mindset is insane. The warnings are never truly about a security problem, and users have correctly learned the warnings are useless. The CA/B is accomplishing absolutely nothing for security and absolutely everything for centralized control and platform instability.

reply
blibble
5 hours ago
[-]
> The CA/B is accomplishing absolutely nothing for security and absolutely everything for centralized control and platform instability.

is it their fault?

with the structure of the browser market today: you do what Google or Apple tell you to, or you're finished as a CA

the "forum" seems to be more of a puppet government

reply
ocdtrekkie
5 hours ago
[-]
The CA/B is basically some Apple and Google people plus a bunch of people who rubber stamp the Apple and Google positions. Everyone is culpable and it creates a self-fulfilling process. Everyone is the expert for their company's certificate policy so nobody can tell them it's dumb and everyone else can say they have no choice because the CA/B decided it.

Even Google and Apple from a corporate level likely have no idea what their CA/B reps are doing and would trust their expertise if asked, regardless of how many billions of dollars it is burning.

The CA/B has basically made itself accountable to nobody including itself, it has no incentives to balance practicality or measure effectiveness. It's basically a runaway train of ineffective policy and procedure.

reply
duskwuff
9 hours ago
[-]
Not precisely an answer, but there's some related discussion here:

https://cabforum.org/2025/06/11/minutes-of-the-f2f-65-meetin...

The real takeaway is that there's never been a lot of real thought put into supporting client authentication - e.g. there's no root CA program for client certificates. To use a term from that discussion, it's usually just "piggybacked" on server authentication.

reply
mhurron
9 hours ago
[-]
No, it feels like the standard 'group/engineer/PM' didn't think anyone did anything different from their own implementation.

Let's Encrypt is just used for, like, webservers, right? Why do this other stuff webservers never use?

Which does appear to be the thinking, though they blame Google, which also seems to have taken the 'webservers in general don't do this, it's not important' line - https://letsencrypt.org/2025/05/14/ending-tls-client-authent...

reply
pseudalopex
9 hours ago
[-]
Google forced separate client and server PKIs.[1]

[1] https://letsencrypt.org/2025/05/14/ending-tls-client-authent...

reply
tkel
6 hours ago
[-]
Prosody is also the base of Snikket[1], a popular recent XMPP server. Snikket is basically just a Prosody config.[2]

[1] https://snikket.org/service/quickstart/

[2] https://github.com/snikket-im/snikket-server/blob/master/ans...

reply
benjojo12
8 hours ago
[-]
For those wondering if ejabberd Debian systems will be impacted: it seems like for now there is no fix, and the issue is being tracked here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1127369
reply
Avamander
8 hours ago
[-]
Code can just ignore the EKU. Especially if the ecosystem consists of things that are already using certificates in odd ways, as it shouldn't be making outgoing connections without it in the first place.
reply
nickf
8 hours ago
[-]
Client authentication with publicly-trusted certificates (i.e. chaining to roots in one of the major 4 or 5 trust-store programs) is bad. It doesn't actually authenticate anything at all, and never has.

No-one that uses it is authenticating anything more than that the other party has an internet connection and, perhaps, the ability to read. No part of the Subject DN or SAN is checked. It's just that it's 'easy' to rely on an existing trust-store rather than implement something secure using a private PKI.

Some providers who 'require' public TLS certs for mTLS even specify specific products and CAs (OV, EV from specific CAs) not realising that both the CAs and the roots are going to rotate more frequently in future.

reply
nightpool
2 hours ago
[-]

> No part of the Subject DN or SAN is checked.

Is this true of XMPP? I thought it enforced that the SAN matched the XMPP identifier in question.
reply
ajross
8 hours ago
[-]
A client cert can be stored, so it provides at least a little bit of identification certainty. It's very hard to steal or impersonate a specific client cert, so the site has a high likelihood of knowing you're the same person you were when you connected before (even though the initial connection may very well not have ID'd the correct person!). That has value.

But it also doesn't involve any particular trust in the CA either. Lets Encrypt has nothing to offer here so there's no reason for them to try to make promises.

reply
nickf
8 hours ago
[-]
Eh, it's pretty easy to impersonate if the values in the certificate aren't checked, and you could get one from any of a list of public CAs.

If you're relying on a certificate for authentication - issue it yourself.

reply
ajross
7 hours ago
[-]
Point being that if you get a valid TLS connection from a client cert, and then you get another valid connection from the same cert tomorrow, you can be very certain that the entity connecting is either the same software environment that connected earlier, or an attacker that has compromised it. You can be cryptographically certain that it is not an attacker that hasn't effected a full compromise of your client.

And there's value there, if you're a server. It's why XMPP wants federated servers to authenticate themselves with certificates in the first place.
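That continuity property is essentially trust-on-first-use pinning. A minimal sketch (names hypothetical; in practice the bytes would be the DER certificate from ssl.SSLSocket.getpeercert(binary_form=True)):

```python
import hashlib

# peer -> sha256 fingerprint of the first certificate it presented
pins: dict = {}


def check_pin(peer: str, cert_der: bytes) -> bool:
    """Pin the cert on first sight; afterwards, flag any change of cert."""
    fp = hashlib.sha256(cert_der).hexdigest()
    known = pins.setdefault(peer, fp)
    return known == fp
```

A change of fingerprint means either a legitimate cert rotation or a compromise; a TOFU scheme on its own can't distinguish the two, which is where a CA helps.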

reply
ahmedtd
46 minutes ago
[-]
If that's all you want to accomplish, you don't need WebPKI. Just generate a private key and a self-signed certificate.

(This is basically how Let's Encrypt / ACME accounts work)

reply
bawolff
8 hours ago
[-]
I feel like using web PKI for client authentication doesn't really make sense in the first place. How do you verify the common name/subject alt name actually matches when using a client cert?

Using web PKI for client certs seems like a recipe for disaster: servers would just verify that the cert is signed, but since anyone can get one signed, anyone can spoof.

And this isn't just hypothetical. I remember xmlsec (a library for validating XML signatures, primarily SAML) used to use web PKI for signature validation in addition to the specified cert, which resulted in a lot of SAML bypasses where you could pass validation by signing the SAML response with any certificate from Let's Encrypt, including the attacker's.

reply
xg15
8 hours ago
[-]
> How do you verify the common name/subject alt name actually matches when using a client cert.

This seems exactly like a reason to use client certs with public CAs.

You (as in, the server) cannot verify this at all, but a public CA could.

reply
nickf
8 hours ago
[-]
A public CA checks it one-time, when it's being issued. Most/all mTLS use-cases don't do any checking of the client cert in any capacity. Worse still, some APIs (mainly for finance companies) require things like OV and EV, but of course they couldn't check the Subject DN if they wanted to.

If it's for auth, issue it yourself and don't rely on a third-party like a public CA.

reply
ge0rg
7 hours ago
[-]
A federated ecosystem of servers that need to verify each other based on their domain name as the identity is the prime use-case for a public CA to issue domain-verified client certificates. XMPP happens to be this ecosystem.

Rolling out a private PKI for XMPP, with a dedicated Root CA, would be a significant effort, essentially redoing all the hard work of LetsEncrypt, but without the major funding, thus ending up with an insecure solution.

We have made use of the public CAs, which have been issuing TLS certificates based on domain validation, for quite a few years now, since before the public TLS CAs were subverted into public HTTPS-only CAs by Google and the CA/Browser Forum.

reply
Avamander
7 hours ago
[-]
> Rolling out a private PKI for XMPP, with a dedicated Root CA, would be a significant effort

Rolling out a change that removes the EKU check would not be that much effort however.

reply
xg15
7 hours ago
[-]
That's exactly what Prosody is doing, but it's a weird solution. Essentially, they're just ignoring the missing EKU flag and pretending it is there, violating the spec.

It seems weird to first remove the flag and then tell everyone to update their servers to ignore the removal. Then why remove it in the first place?

reply
Avamander
7 hours ago
[-]
I think you're confusing different actors here. The change was made by the CA/B Forum, the recommendation is just how it is if you want to use a certificate not for the purposes intended.
reply
ge0rg
7 hours ago
[-]
Yes, this is what is happening. It isn't happening fast enough, so some implementations (especially servers that don't upgrade often enough, or running long-term-support OS flavors) will still be affected. This is the impact that the original article is warning about.

My point was that this is yet another change that makes TLS operations harder for non-Web use cases, with the "benefit" to the WebPKI being the removal of a hypothetical complexity, motivated by examples that indeed should have used a private PKI in the first place.

reply
xg15
7 hours ago
[-]
> A public CA checks it one-time, when it's being issued.

That's the same problem we have with server certs, and the general solution seems to be "shorter cert lifetimes".

> Worse still, some APIs (mainly for finance companies) require things like OV and EV, but of course they couldn't check the Subject DN if they wanted to.

Not an expert there, but isn't the point of EV that the CA verified the "real life entity" that requested the cert? So then it depends on what kind of access model the finance company was specifying for its API. "I don't care who is using my API as long as they are a company" is indeed a very stupid access model, but then I think the problem is deeper than just cert validation.

reply
bawolff
6 hours ago
[-]
> That's the same problem we have with server certs, and the general solution seems to be "shorter cert lifetimes".

No it isn't, and that's not the reason why cert lifetimes are getting smaller.

Cert lifetimes being smaller is to combat certs being stolen, not man in the middle attacks.

reply
bawolff
4 hours ago
[-]
Too late for an edit: I read a bit more about how XMPP works, and I guess the cert is not really about network access control or authenticating the connection, but about authenticating that the data is coming from the right server.

So I guess that could make sense.

reply
nickf
8 hours ago
[-]
You are correct, and the answer is - no-one using publicly-trusted TLS certs for client authentication is actually doing any authentication. At best, they're verifying the other party has an internet connection and perhaps the ability to read.

It was only ever used because other options are harder to implement.

reply
xg15
8 hours ago
[-]
It seems reasonable for server-to-server auth though? Suppose my server xmpp.foo.com already trusts the other server xmpp.bar.com. Now I get some random incoming connection. How would I verify that this connection indeed originates from xmpp.bar.com? LE-assigned client certs sound like a good solution to that problem.
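On the receiving side, the TLS layer can at least demand a client certificate; whether its SAN matches xmpp.bar.com is then an application-level check. A sketch with Python's stdlib ssl module (the server's own cert and key, loaded via load_cert_chain, are omitted):

```python
import ssl

# Server-side context that refuses the handshake unless the client
# presents a certificate chaining to a trusted root.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH)
# After wrapping a socket, conn.getpeercert() returns the client cert's
# fields, including subjectAltName, for the application to match against
# the claimed domain (e.g. xmpp.bar.com).
```

Note that ssl itself only validates the chain here; the SAN-versus-claimed-domain comparison is left to the XMPP server.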
reply
bawolff
5 hours ago
[-]
> It seems reasonable for server-to-server auth though? Suppose my server xmpp.foo.com already trusts the other server xmpp.bar.com.

If you already trust xmpp.bar.com, then you probably shouldn't be using PKI, as PKI is a complex system to solve the problem where you don't have preexisting trust. (I suppose maybe PKI could be used to help with rolling over certs.)

reply
Avamander
8 hours ago
[-]
Which is almost exactly why WebPKI doesn't want to support such use-cases. Just this EKU change alone demonstrates how it can hinder WebPKI changes.
reply
ge0rg
7 hours ago
[-]
Can you point out, at which point in time exactly, the public TLS PKI infrastructure has been reduced to WebPKI?
reply
Avamander
7 hours ago
[-]
Can you point out at which point in time exactly it was designed to serve every use-case?
reply
ge0rg
7 hours ago
[-]
The public TLS PKI was never supposed to serve every use case and you know it. But let me point out when it was possible to get a public CA certificate for an XMPP server with SRVname and xmppAddr:

  Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1096750 (0x10bc2e)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = IL, O = StartCom Ltd., OU = Secure Digital Certificate Signing, CN = StartCom Class 1 Primary Intermediate Server CA
        Validity
            Not Before: May 27 16:16:59 2015 GMT
            Not After : May 28 12:34:54 2016 GMT
        Subject: C = DE, CN = chat.yax.im, emailAddress = hostmaster@yax.im
        X509v3 extensions:
            X509v3 Subject Alternative Name: 
                DNS:chat.yax.im, DNS:yax.im, xmppAddr:chat.yax.im, dnsSRV:chat.yax.im, xmppAddr:yax.im, dnsSRV:yax.im
Ironically, this was the last server certificate I obtained pre-LetsEncrypt.
reply
Avamander
7 hours ago
[-]
So you understand that there are different purposes as well. Are you saying that you can't get a client auth certificate any more?
reply
xg15
7 hours ago
[-]
Huh? The entire purpose of that EKU change was to disallow that usecase. How did that demonstrate problems for WebPKI?
reply
Avamander
7 hours ago
[-]
This post here is the demonstration, that some non-WebPKI purpose is causing issues and complaints. This has happened before with SHA-1 deprecation. WebPKI does not want this burden and should not have this burden.
reply
xg15
7 hours ago
[-]
Ok, so this is an official split of "WebPKI" and "everything else PKI" then?

Last time I checked, Let's Encrypt was saying they provide free TLS certs, not free WebPKI certs. When did that change?

reply
bawolff
2 hours ago
[-]
You are being pedantic but also pedantically incorrect.

Let's Encrypt provides value by providing signed TLS certs that are enrolled in WebPKI (i.e. trusted by browsers).

If they just provided a (not necessarily trusted) TLS cert, like what anyone can generate from the command line, nobody would use them.

reply
Avamander
7 hours ago
[-]
That's being overly pedantic. PKIs for different purposes have been separate for a while, if not from the start. LE is still giving you a "TLS cert".
reply
denus
7 hours ago
[-]
I wonder if issues like this couldn't be a use case for DANE.
reply
MattJ100
6 hours ago
[-]
Yes, definitely. Prosody supports DANE, but DNSSEC deployment continues to be an issue when talking about the public XMPP network at large. Ironically the .im TLD our own site is on still doesn't support it at all.
reply
PunchyHamster
10 hours ago
[-]
Shame LE didn't give people the option to generate client and client+server auth certs.
reply
forty
9 hours ago
[-]
Yes, but then the lack of pragmatism shown by the XMPP community is a bit disconcerting
reply
SahAssar
8 hours ago
[-]
What is the lack of pragmatism you are talking about?
reply
forty
8 hours ago
[-]
The refusal to accept a server-only certificate as a client certificate for a server.
reply
MattJ100
7 hours ago
[-]
There might be some confusion here, as there is no refusal at all.

As stated in the blog post, we (Prosody) have been accepting (only) serverAuth certificates for a long time. However this is technically in violation of the relevant RFCs, and not the default behaviour of TLS libraries, so it's far from natural for software to be implementing this.

There was only one implementation discovered so far which was not accepting certificates unless they included the clientAuth purpose, and that was already updated 6+ months ago.

This blog post is intended to alert our users, and the broader XMPP community, about the issue that many were unaware of, and particularly to nudge server operators to upgrade their software if necessary, to avoid any federation issues on the network.
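The relaxed policy described above comes down to which Extended Key Usage OIDs a peer certificate may carry. A sketch of such a check (illustrative only, not Prosody's actual code), using the RFC 5280 EKU OIDs:

```python
# Extended Key Usage OIDs from RFC 5280
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"  # id-kp-clientAuth


def acceptable_for_s2s(eku_oids) -> bool:
    """Accept a peer cert on a server-to-server connection if it is usable
    in either TLS role. A cert with no EKU extension is unrestricted."""
    if eku_oids is None:  # EKU extension absent
        return True
    return SERVER_AUTH in eku_oids or CLIENT_AUTH in eku_oids
```

Default TLS library behaviour, by contrast, requires CLIENT_AUTH specifically when the peer is acting as a client, which is exactly what breaks with serverAuth-only certificates.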

reply
kokx
7 hours ago
[-]
The article literally talks about how one of the server implementations does exactly that:

> Does this affect Prosody?

> Not directly. Let’s Encrypt is not the first CA to issue server-only certificates. Many years ago, we incorporated changes into Prosody which allow server-only certificates to be used for server-to-server connections, regardless of which server started the connection. [...]

reply
superkuh
9 hours ago
[-]
It is not pragmatic to design your protocol for web use cases when it's not the web.
reply
bawolff
8 hours ago
[-]
Unless I'm missing something, this is a poor design, full stop. How are they validating the SAN on these client certificates?
reply
agwa
8 hours ago
[-]
XMPP identifiers have domain names, so the XMPP server can check that the DNS SAN matches the domain name of the identifiers in incoming XMPP messages.

I've seen non-XMPP systems where you configure the DNS name to require in the client certificate.

It's possible to do this securely, but I agree entirely with your other comment that using a public PKI with client certs is a recipe for disaster because it's so easy and common to screw up.
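The check described above can be sketched over the dict shape that Python's ssl.SSLSocket.getpeercert() returns (exact matching only; wildcard and SRVName handling omitted):

```python
def san_dns_names(peercert: dict) -> set:
    """Collect the DNS entries from a getpeercert()-style dict."""
    return {value for (kind, value) in peercert.get("subjectAltName", ())
            if kind == "DNS"}


def peer_authorized(peercert: dict, claimed_domain: str) -> bool:
    """Does the client cert cover the domain named in the incoming stanzas?"""
    return claimed_domain in san_dns_names(peercert)
```

The common screw-up is stopping after chain validation and never calling anything like peer_authorized at all, which is the "anyone with any cert can spoof" failure mode.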

reply
abnormalitydev
9 hours ago
[-]
Is there any reason why things gravitate towards being web-centric, especially Google-centric? Google's browser policies triggered the LE change, and most CAs really just focus on what websites need rather than what non-web services need. That isn't helpful, considering that browsers are now terribly inefficient (I mean, come on, 1GB of RAM for 3 tabs of Firefox while still buffering?!) while XMPP is significantly more lightweight and more featureful than, say, Discord.
reply
xg15
8 hours ago
[-]
> Is there any reason why things gravitate towards being web-centric, especially Google-centric?

Yes, the reason is called "Chrome" and "90% market share"...

reply
everfrustrated
9 hours ago
[-]
From https://letsencrypt.org/2025/05/14/ending-tls-client-authent...

"This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline."

TL;DR blame Google

reply
bawolff
9 hours ago
[-]
Google didn't force Let's Encrypt to totally get out of the client cert business; they just decided it wasn't worth the effort anymore.
reply
nickf
8 hours ago
[-]
Publicly-trusted client authentication does nothing. It's not a thing that should exist, or is needed.
reply
ahmedtd
43 minutes ago
[-]
I don't think this is true. It's something that could be useful, with some sort of ACME-like automated issuance, but should definitely be issued from a non-WebPKI certificate authority.
reply
everfrustrated
8 hours ago
[-]
Feel free to start your own non-profit to issue client certs signed by a public authority.

As LE says, most users of client certs are doing mTLS, and so self-signed is fine.

reply
josephcsible
8 hours ago
[-]
> they just decided it wasn't worth the effort anymore

That seems disingenuous. Doesn't being in the client cert business now require a lot of extra effort that it didn't before, due entirely to Google's new rule?

reply
Avamander
7 hours ago
[-]
No, not really. Unless you consider basic accountability "extra effort".
reply
jauntywundrkind
7 hours ago
[-]
I really fail to understand or sympathize with Let's Encrypt limiting their certs so. What is gained by slamming the door on other applications than servers being able to get certs?

In this case I do think it makes sense for servers to accept certs marked only for server use, since it's for a s2s use case. But this just feels like such an unnecessary clamping down. To have made certs finally plentiful, & available for use... Then to take that away? Bother!

reply