Occasionally it fails; almost always it's something unexpected happening, but occasionally we catch errors on their side (verified by connecting from various endpoints, DNS queries, etc.). We used to call them every time that happened. Now we just auto-retry on failure an hour later, and so far that fixes the issue every time. We only retry once and then fail with a ticket. Most of us like our paychecks, so we are pretty good about getting that ticket resolved quickly.
I would definitely however spend effort into verifying a host key that changes unexpectedly.
My attempts to convince them to use the same key came to naught, so instead I use one of the IP addresses.
I could alternately erase the known hosts entry on each transfer. That would probably have been preferable.
I also got a shell on it when I attempted ssh, so you can guess the care that is taken with it.
I was actually personally a victim of such an (unsuccessful) attack on the Tor network. SSH login to my hidden service complained about a wrong fingerprint. The same happened when I tried again. After a Tor restart, the problem disappeared. I assume this was an attempt at SSH MITM by one of the exit nodes?
Or something like that.
Good idea. That way when your CA private key leaks (the key which we never ever rotate, of course) the bad guys can compromise the whole fleet and not just one server. Bonus points if the same CA is also used for authenticating users.
As with X.509, any serious usage will involve a hardware security module, so that compromise of the CA host does not allow the key to be leaked. You'd still have a very bad day, but it can be mitigated.
I do think it's a fairly significant flaw that SSH CA doesn't support intermediate CAs (or at least didn't last time I looked into it) to enable an offline root CA.
>Bonus points if the same CA is also used for authenticating users.
The SSH CA mechanism can be used for both Host and User auth, yes.
Keeping in mind, in a real use case this would be tied to something like active directory / LDAP, so you can automate issuance of ssh keys to users and hosts.
Systems configured to trust the SSH CA can trust that the user logging in is who they say they are, because the principal has already been authenticated and vouched for by the identity provider. No more manually managing known_hosts and authorized_keys, or dealing with Trust On First Use or host-key-changed errors.
You can also set the CA's endorsement of the issued keys to fairly short lifetimes, so you can simplify your keymat lifecycle management a great deal: no worrying about old keys lying around forever if the CA only issues them as valid for an hour, a day, etc.
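As a concrete sketch of that short-lived issuance (the file names, principal "alice", and the one-hour validity below are arbitrary examples, not anything the parent comment prescribes):

```shell
# Minimal SSH-CA sketch: create a CA, then issue a user certificate
# valid for one hour for principal "alice". All names are examples.
ssh-keygen -t ed25519 -f ca_key -N '' -q -C 'example-ca'
ssh-keygen -t ed25519 -f user_key -N '' -q -C 'alice@example'
# -s: sign with the CA key; -I: certificate identity; -n: principal; -V: validity
ssh-keygen -s ca_key -I alice-cert -n alice -V +1h user_key.pub
ssh-keygen -L -f user_key-cert.pub   # inspect the issued certificate
```

Hosts then trust the CA (e.g. via TrustedUserCAKeys in sshd_config) instead of individual keys, which is what makes automated issuance against an identity provider workable.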
Overall I think you still come out ahead on security.
If you aren't bothering to verify then they do not need to trick you at all.
In DayJob we have a lot of clients send feeds and collect automated exports via SFTP, and a few to whom we operate the other way (us pulling data via SFTP or pushing it to their endpoint). HTTPS based APIs are very common and becoming more so, but SFTP is still big in this area (we offer some HTTPS APIs, few use them instead of SFTP).
One possible exploit route, for a malicious actor playing a long and targeted game, that could affect us:
1. Attacker somehow poisons our DNS, or that of a specific prospective client of ours, sending traffic for sftp.ourname.tld to their server, and has access to our mail.
2. Initially they just forward traffic so key verification works for existing users. They monitor for some time to record host addresses that already access the server, so when they start intervening they can keep just forwarding connections from those addresses, so those users see no warnings (and are unaffected by the hack).
3. When they do start intercepting connections from hosts not already on the list made above, instead of forwarding everything: existing users are unaffected¹, but new users coming in from entirely different addresses now go to their server, and if they are not verifying the key they will happily send information through it², authenticating with the initial user+pass we sent or PKI using the public key they sent, with the malicious server connecting through to ours to complete the transfers.
4. Now wait and collect data as no one realises there is a MitM, and later use any PII or other valuable information for ransom/extortion purposes.
Of course there are ways to mitigate this attack route. For one: source address whitelisting, supported by OpenSSH's key-based auth, as the acceptable source list can be included with the public key so only specific sources can use that key for auth. But the client would have to make the effort to do this, and if they aren't going to make the effort to verify the host key then they aren't going to make other efforts either.
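For reference, the source restriction mentioned above is a standard authorized_keys option (the network and key below are placeholders):

```
from="203.0.113.0/24" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... client-feed-key
```

With that prefix, sshd rejects authentication with this key from any address outside the listed network, even if the private key is stolen.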
We do have some clients who verify the host properly and/or give us source addresses to limit connections to when they provide a public key, we work with financial institutions who are appropriately paranoid about their data and the data of their customers, some even use PGP for data in transit (and in case it is ever stored where it shouldn't be) for an extra level of paranoia. But most do none of this. Most utterly ignore our strong suggestion that they use keys, or change passwords in case of email breach, instead using the password we mail them before first connection for eternity.
--------
[1] none of our clients are likely to be sending files from dynamic source addresses; at most the source might move around a v4 /24 or v6 /64. Currently I don't think all of them connect from a single IPv4 address; I've had one recently let us know (months in advance) that their source address will be changing.
[2] it can connect to us and send the data
Literally happens every single damn day and literally nobody on the face of this earth ever gives a shit.
Host keys are the stupidest idea in the history of computer so-called "security".
Mine change maybe once every couple of years, if I do a full reinstall without copying over the old host key. And then I know exactly why it changed.
Nobody knows how the hell the host keys are generated in the first place. Don't worry about it.
> And then I know exactly why it changed.
Really? What is a "full" reinstall as opposed to a "non-full" reinstall, and exactly how much of a reinstall do I need for my host keys to change?
If my host keys were changing regularly, I would worry about it. There's no legitimate reason for that to be happening, since I'm not regularly wiping the drive and reinstalling, nor am I regularly manually deleting the host keys (the other way they get regenerated).
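For what it's worth, the two regeneration paths mentioned are the same mechanism: host keys live in /etc/ssh, and `ssh-keygen -A` recreates any that are missing (typically run at first boot). A sketch against a scratch directory rather than the real /etc/ssh:

```shell
# -A generates any missing host key types; -f sets a path prefix so we
# don't touch the real /etc/ssh. The prefix's etc/ssh layout must exist.
mkdir -p scratch/etc/ssh
ssh-keygen -A -f scratch
ls scratch/etc/ssh   # ssh_host_ed25519_key, ssh_host_rsa_key, ...
```

Wiping the drive or deleting these files and rebooting is exactly what makes the fingerprint change.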
> is ordering via ssh secure?# you bet it is. arguably more secure than your browser. ssh incorporates encryption and authentication via a process called public key cryptography. if that doesn’t sound secure we don’t know what does. [1]
I think this is wrong though for exactly the reasons described in this post. TLS verifies that the URL matches the cert through the chain of trust, whereas SSH leaves this up to the user to do out-of-band, which of course no one does.
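The out-of-band step in question is just a fingerprint comparison; in this sketch a throwaway key stands in for the server's real /etc/ssh host key:

```shell
# Generate a stand-in host key and print its fingerprint: the value an
# administrator would publish out-of-band for users to compare against.
ssh-keygen -t ed25519 -f host_key -N '' -q
ssh-keygen -lf host_key.pub
# On first connect the client shows the same fingerprint and asks:
#   ssh -o StrictHostKeyChecking=ask user@server.example.com
```

TLS automates this comparison via the CA chain; SSH leaves the two commands above (run on server and client respectively) to the user.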
But then the author of this article goes on to say (emphasis mine):
> This result represents good news for both the SSL/TLS PKI camps and the SSH non-PKI camps, since SSH advocates can rejoice over the fact that the expensive PKI-based approach is no better than the SSH one, while PKI advocates can rest assured that their solution is no less secure than the SSH one.
Which feels like it comes out of left field. Certainly the chain of trust adds some security, even if it's imperfect. I know many people just click through the warning, but I certainly don't.
I think you need to point out that TLS relies on the browser's cert store for that chain of trust. If a bad actor acquires an entity that holds a trusted cert, or your cert store is compromised, that embedded trust is almost entirely useless; both have happened on more than one occasion (the Chinese government and Symantec most recently).
https://expeditedsecurity.com/blog/control-the-ssl-cas-your-...
This is typically caught pretty quickly but there's almost nothing a user can do to defend against a chain of trust attack. With SSH, while nobody does it, at least you have the ability to protect yourself.
In browser land, the client browser doesn't get a cert to prove their identity, it's one-way only.
Certainly TLS supports client certs, and browsers (at least some) technically even implement a version, but the UX is so horrible that nobody uses it. Some people have tried; the only ones that have ever seen any success with client-side authentication certificates in a web browser are webauthn/passkeys and the US military (their ID cards have a cert in them).
webauthn/passkeys are not fully baked yet, so time will tell if they will actually be a success, but so far their usage is growing.
(1) = Most browsers defer to the operating system for TLS support, meaning there's not just a layer boundary but a (major) organizational one. A lot of the relevant standards are also stuck in the 1990s and/or focused on narrow uses like the aforementioned U.S. military and so they ossified.
(2) = The granularity of TLS configuration in web servers varies widely among server software and TLS libraries. Requesting client credentials only when needed meant tight, brittle coupling between backend applications and their load balancer configuration, which was also tricky to secure properly.
I have 2 problems with webauthn/passkeys:
* You MUST run Javascript, meaning you are executing random code in the browser, which is arguably unsafe. You can do things to make it safer, but almost nobody does them (never running 3rd-party code, Subresource Integrity, etc.).
* The implementations throughout the stack are not robust. Troubleshooting webauthn/passkey issues is an exercise in wasted time. About the only useful troubleshooting step you can do is delete the user passkey(s) and have them try again, and hope whatever broke doesn't break again.
This is served over TLS, so it's no worse than TLS. You can also benefit from the paved road that LetsEncrypt has provided. It might not be as smooth as SSH CAs once they're set up, but setting those up and the Day 2 operations involved isn't nearly as straightforward.
For planned key rotations, you could sign the new key with the old key and send that in the handshake, so the client could change the known_hosts file on its own.
For unplanned rotations (server got nuked), you could instruct your users to use a secure connection and run "ssh-replace-key server.example.com b3f620", which would re-run TOFU, with the last param being an optional truncated hash of the key for extra security.
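A rough sketch of what such a hypothetical `ssh-replace-key` helper might do with existing OpenSSH tooling (the helper name, and passing the fetched key in as a file, are assumptions):

```shell
# Accept a replacement host key only if its fingerprint contains the
# fragment the user was given over a secure channel.
replace_host_key() {
  host="$1"; expected="$2"; newkey_file="$3"
  fp=$(ssh-keygen -lf "$newkey_file" | awk '{print $2}')
  case "$fp" in
    *"$expected"*)
      mkdir -p "$HOME/.ssh"
      ssh-keygen -R "$host" >/dev/null 2>&1 || true   # drop the stale entry
      printf '%s %s\n' "$host" "$(cat "$newkey_file")" >> "$HOME/.ssh/known_hosts"
      echo "accepted $fp for $host" ;;
    *)
      echo "fingerprint mismatch for $host" >&2; return 1 ;;
  esac
}
# In practice the new key would come from `ssh-keyscan "$host"` before
# being handed to the helper; this is a sketch, not the real tool.
```

With an empty fragment argument this degrades to plain TOFU, which matches the "optional truncated hash" behaviour described above.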
You could also do a prompt like "DANGER!! The host key has changed. If you know this is expected, or if your IT administrator told you to do so, type 'I know what I'm doing or the IT admin told me to do this'".
They do understand passwords, and most can manage an SMS code as a second factor. That's about the limit of what you can count on.
Your key has two parts: public and private. You give your public part to the server so it knows it's talking to you, because only you have the private part. The server has its own pair and it gives you its public part so you know you're not talking to an impostor server. The private key is never sent; it stays on your computer, but it does some fancy math so the server can know you have it.
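The "fancy math" is a digital signature: the private key produces a signature that anyone holding the public key can check. A self-contained sketch using OpenSSH's own signing support (`ssh-keygen -Y`, OpenSSH 8.0+; all file names here are arbitrary):

```shell
ssh-keygen -t ed25519 -f demo_key -N '' -q       # demo_key (private), demo_key.pub (public)
echo 'hello server' > msg.txt
ssh-keygen -Y sign -f demo_key -n file msg.txt   # sign with the PRIVATE key -> msg.txt.sig
# The verifier only needs the PUBLIC key, listed in an allowed_signers file.
printf 'alice@example %s\n' "$(cut -d' ' -f1,2 demo_key.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I alice@example -n file -s msg.txt.sig < msg.txt
```

The SSH protocol does the equivalent in both directions during the handshake: you prove possession of your private key to the server, and the server proves possession of its host key to you.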
It seems like it ought to matter, but if roughly nobody verifies and yet the sky has not fallen - does it?
On the other hand, we know of at least two suppliers of software that run with elevated access everywhere (including the dev side of every advanced military) that have been breached by unknown parties for years. The most likely explanation, by far, is that the sky only didn't fall yet because nobody wants it to. And that leaves us vulnerable to somebody suddenly wanting it.
it's so banal to check host keys.
In hindsight, it isn't a very usable DNS RCODE.
And yes, it is preferable to use SSH certificates (for as long as you are aware that private keys must be guarded jealously). We need PAKE capability in SSH, preferably Signal-like protocol for authentication.
And use DTLS (two sets of client and server PKI, one for each direction) to guard that link between SSH gateway(s) and the SSH authentication (certificate-based) server.
The starting point of all this:
https://jpmens.net/2019/03/02/sshd-and-authorizedkeyscommand...
SSH keys are amazing, portable, and in some ways easier to use than Passkeys. But for them to successfully replace passwords and account configuration, which works decently well for a service like pico.sh, the user experience needs to improve significantly. It's not impossible, but verification remains a continuous and ongoing problem.
SSL certificates are probably the best we can do for the "talk to a server you've never heard of" scenario, but we can do enormously better for the scenario where you're SSHing into a server you already have a pre-existing trust relationship with.
WebPKI only realistically serves a small portion of the SSH hosts out there. This is quite different from the situation with HTTPS. Even so, this would still be very convenient and useful. As I said elsewhere, I think this is sub-5% of SSH servers.
X.509 more broadly could replace SSH certificates. Many institutional settings already have trust stores set up to include their in-house CAs. Public clouds and major hosting providers could also set up their own CAs, but they would have trouble distributing them (cf. AWS RDS, for example). Now we're probably up to 25% or so of deployed SSH servers. In the case of clouds, though, this adds a massive new exploitation vector (IP reassignment) and thus puts pressure on expiration/revocation.
The rest are going to need self-signed certs.
Between the non-WebPKI CA distribution problem and the probable predominance of self-signed certs, trust-on-first-use would still be the norm, and so relying on pre-existing trust relationships would still be necessary. We could augment TOFU/known-hosts with some kind of certificate or CA pinning rather than just key pinning, though.
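OpenSSH's known_hosts format already supports a form of that CA pinning via @cert-authority lines (the domain pattern and key below are placeholders):

```
@cert-authority *.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... ops-host-ca
```

Any host presenting a certificate signed by that CA key is then accepted for matching names, with no per-host known_hosts entry needed.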
So, again, while I think adopting X.509 isn't a bad idea, and makes a lot more sense today than it did in 2010 (pre-Heartbleed!) when SSH added certificates, it's not really solving the problem that SSH has much better than today's solutions, no matter how well it solves the problem that HTTPS has.
This is backwards. Breaking SSH authentication permits subverting most websites; the converse is not true.
> "pre-existing trust relationship" prevents from rotation keys
This is also false. Things like Signal and OTR rotate keys frequently and automatically within pre-existing trust relationships.
In fact, many people who don't properly understand SSH's trust-on-first-use system (so don't actually verify host key fingerprints) argue for browsers to support it as an option alongside the current certificate signing & verification regimes.
So, even if SSH supported X.509 certificates, which isn't necessarily a bad idea, it would be completely detached from WebPKI, thus removing most of the benefit.
There are CAs that will issue certificates for public IP addresses, so any public ssh server can also use these certificates.
There's no reason to detach ssh PKI from Web PKI. They can use exactly the same certificates and keys.
FWIW, there is an RFC for X.509 certificates in SSH, but it has not achieved wide adoption: https://www.rfc-editor.org/rfc/rfc6187
I think that github alone makes a sizeable chunk of these connections. So if there was some better mechanism to establish trust before first handshake, that would benefit all of these connections.
One approach that I could envision is to simply host ssh public key on some well known path (github.com/.well-known/ssh.pub) and ssh client will grab it over https before first connection and when key suddenly changes.
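A sketch of that client-side check, assuming the hypothetical well-known path from above; the comparison just matches key type and base64 blob, ignoring the comment field:

```shell
# Compare two OpenSSH public-key lines by type and blob only.
keys_match() {
  [ "$(printf '%s\n' "$1" | awk '{print $1, $2}')" = \
    "$(printf '%s\n' "$2" | awk '{print $1, $2}')" ]
}
# In practice the published key would come from something like
#   published=$(curl -fsS "https://github.com/.well-known/ssh.pub")   # hypothetical path
# and the presented key from the SSH handshake itself.
keys_match 'ssh-ed25519 AAAATESTBLOB published' \
           'ssh-ed25519 AAAATESTBLOB presented' && echo match
```

This would move trust from TOFU to the WebPKI certificate guarding the HTTPS fetch, which is the trade-off the parent comment is proposing.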
I think we could also attack this problem from the angle of phasing out git+ssh protocol (or at least greatly reducing its use) by improving unattended/headless HTTPS user authentication.
[1] https://docs.redhat.com/en/documentation/red_hat_enterprise_...
For SSH this is fine, because very rarely is anyone connecting to a random SSH server on the internet without being able to talk to the operators (hi Github, we see you there, being the exception).