1) User goes to BAD website and signs up.
2) BAD website says “We’ve sent you an email, please enter the 6-digit code! The email will come from GOOD, as they are our sign-in partner.”
3) BAD’s bots start a “Sign in with email one-time code” flow on the GOOD website using the user’s email.
4) GOOD sends a one-time login code email to the user’s email address.
5) The user is very likely to trust this email, because it’s from GOOD, and why would GOOD send it if it’s not a proper login?
6) User enters code into BAD’s website.
7) BAD uses the code to log in to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
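The core weakness in the steps above fits in a few lines: the one-time code proves possession of the inbox, but says nothing about which site the user believes they are logging into. A toy simulation (all names here are hypothetical):

```python
import secrets

# GOOD's one-time code is bound only to an email address, not to the
# site the user types it into, so BAD can replay it verbatim.
issued = {}

def good_send_code(email):
    code = f"{secrets.randbelow(10**6):06d}"
    issued[email] = code
    return code                      # delivered to the user's inbox

def good_login(email, code):
    return issued.get(email) == code

# Step 3: BAD's bot starts a login flow at GOOD for the victim's email.
code_in_inbox = good_send_code("victim@example.com")
# Steps 5-7: the victim, trusting the email from GOOD, types the code
# into BAD's site, and BAD replays it at GOOD unchanged.
assert good_login("victim@example.com", code_in_inbox)
```

Nothing in the code binds it to an origin, which is exactly what WebAuthn-style challenges add.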
This is why “email me a one-time code” is one of the worst authentication flows for phishing. It’s just so hard to stop users from making this mistake.
“Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious. However, if some popular email service suddenly decides your login emails or the login link within should be blocked, then suddenly many of your users cannot login.
Passkeys are the way to go. Password manager support for passkeys is getting really good. And I assure you, losing all your passkeys when you lose your phone is far, far better than what’s been happening with passwords. I’d rather granny have to visit the bank to regain access to her account than have someone phish her and steal all her money.
I feel like I'm doing something stupidly wrong or missing a prompt somewhere, or maybe UX is just shitty everywhere, but if I, a millennial who grew up programming and building computers, struggle with this, then I don't expect my mom, who resets her password pretty much every time she needs to sign into her bank, to get it to work.
I haven't invested more time in this because if it's so unusable for me as an engineer, it's a non-starter for the general public.
It's too early for grandma to use, IMO.
The only relevant Bitwarden setting appears to be "Ask to save and use passkeys" under Notifications. I do turn off the browser's built-in password manager, though I believe anything else relevant I have at default. If you have those and they still aren't getting saved, then I'm at a loss, but wish I knew why they don't work for you. In Bitwarden, you can see if there's a passkey saved in an entry, as the creation timestamp is shown right under the password field and editing an entry also allows deleting a passkey.
Depends on the brand of the passkey, I think.
I use Yubikeys, and passkeys just work. On Chrome, Firefox and Safari, both on macOS and Linux (specifically Alpine). I also tried with iPhones (for my family), and it also just works.
I haven't tried using an Android device; is that what you are using?
The presence of other attestation types in the spec allows passkeys to replace the use of other classes of authentication that already exist (e.g. smartcard). For example, it's very reasonable for a company to want to ensure that all their employees are using hardware Yubikeys for authentication. Furthermore, sharing the bulk of implementation with the basic case is a huge win. Codepaths are better tested, the UIs are better supported on client computers, etc.
The presence of attestations in the spec does not impinge on user freedom in any meaningful way.
I always reject attestation requests and I don't recall ever having been refused, so if this was a real problem it seems like I ought to have noticed by now.
The protocol normally allows you to omit the attestation, but Microsoft added an extra call after a successful registration flow that sends you to an error page if your FIDO2 passkey isn't from one of their large approved vendors: https://learn.microsoft.com/en-us/entra/identity/authenticat...
I found out by trying to prototype my own FIDO2 passkey, and losing my mind trying to understand why a successful flow that worked fine on other websites failed with Microsoft. It turns out you are not allowed to do that.
For B2C, I would expect more latitude on requiring attestation.
I think it's reasonable to have attestation for the corporate use case. If they're buying security devices from a certain vendor, it's reasonable for their server to check that the person pretending to be you at the other end is using one of those devices. It's an extra bit of confidence that you're actually you.
I once knew a guy who refused to let his office computer go to sleep just to avoid having to enter his password to unlock his computer. He was a really senior guy too, so IT bent to allow him to do this. What finally made him lock his computer was a colleague sending an email to all staff from his open Outlook saying “Hi everyone, it’s my birthday today and I’m disappointed because hardly anyone has come by to wish me happy birthday”. The sheer mortification made him change his ways.
It started with OneNote web a couple years ago. Every day it gave a popup ("Your session needs to be refreshed") and it would reload all over again. Microsoft doesn't bother to make a OneNote desktop app for my platform, and the web version is really terrible anyway (you can only search in one tab, not a whole notebook). So I moved to self-hosted Obsidian, which I'm really happy with. Now I can basically see myself typing in a note from another client.
But replacing Microsoft for email is another topic.
I do remember explicitly telling them (because of course having agreed to do this they have no idea how and need our instructions) not to enable attestation because it's a bad idea, but you seem to be saying that it'll somehow be demanded (and then ignored) anyway and that was not my experience.
So, I guess what I'm saying here is: Are you really sure it's demanded and then ignored if you turn it off from the administrative controls? Because that was not my impression.
If you refused to provide make and model, IIRC you would fail the check whether enforcement was enabled or not. Then if enforcement was enabled and your AAGUID didn't match the list, you would see a different error code.
Either way, you're sending over an attestation. They understandably forbid attestation format "none" or self-signed attestations. It's possible that this has changed, but the doc page still seems to say they won't accept a device without a packed attestation, it's only that the AAGUID check can currently be skipped.
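To make that concrete, here's a hypothetical sketch of such a relying-party check, matching my reading of the doc page: the AAGUID allowlist can be toggled, but attestation format "none" (or self-attestation) is rejected either way. The AAGUID is a made-up placeholder, not a real vendor ID.

```python
# Hypothetical allowlist; real AAGUIDs identify an authenticator make/model.
APPROVED_AAGUIDS = {"00000000-0000-0000-0000-0000000000aa"}

def rp_accepts(attestation_fmt, aaguid, enforce_allowlist=True):
    if attestation_fmt in ("none", "self"):
        return False  # a packed, vendor-signed attestation is always required
    if enforce_allowlist and aaguid not in APPROVED_AAGUIDS:
        return False  # unknown make/model fails only when enforcement is on
    return True

assert not rp_accepts("none", "00000000-0000-0000-0000-0000000000aa")
assert rp_accepts("packed", "11111111-2222-3333-4444-555555555555",
                  enforce_allowlist=False)
assert not rp_accepts("packed", "11111111-2222-3333-4444-555555555555")
```

So a homemade authenticator fails at the first check regardless of whether AAGUID enforcement is enabled.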
I understand the value of attestations in a corporate environment when you want to lock down your employees' devices. But that could simply have been handled through a separate standard for that use case.
https://github.com/keepassxreboot/keepassxc/issues/10407
> To be very honest here, you risk having KeePassXC blocked by relying parties
But having a choice about how you store your credentials shouldn't depend on the good faith of service providers or the spec authors who are doing their bidding anyway. It's a bit similar to sideloading apps, and it should probably be treated similarly (ie, make it a right for users).
People forget that one of the purposes of authentication is to protect both the end user and the service operator.
Pretty much every service out there has "don't share credentials" in their ToU. You don't have to like it, but you also don't have to accept the ToU.
My point was that freedom is not an absolute, it's balanced against other freedoms. It's hard to tell whether you agree with that or not.
Attestation provides a guarantee that the credential is stored in a system controlled by a specific vendor. It’s not “more” or “less” secure, it’s just what it literally says. It provides guarantees of uniformity, not safe storage of credentials. An implementation from a different vendor is not necessarily flawed! And properties/guarantees don’t live on some universal (or majority-applicable) “good-to-bad” scale, no such thing exists.
This could make sense in a corporate setting, where corporate may have a meaningful reason to want predictability and uniformity. It doesn’t make sense in a free-for-all open world scenario where visitors are diverse.
I guess it’s the same nearsighted attitude that makes companies think they want to stifle competition, even though history has plenty of examples of how it leads to net negative effects in the long run despite all the short-term benefits. It’s as if the ’00s browser wars haven’t taught people anything (IE won back then, and where is it now?)
How is it a fallacy? The rate of account compromises is a real metric, and it is affected by how good the security around accounts is.
Yes, the rate of account compromises is a metric we can define. But attestation doesn't directly or invariably improve this metric. It may do so in some specific scenarios, but it's not universally true (unless proven otherwise, which I highly doubt). In other words, it's not an immediate consequence.
It could help to try to imagine a scenario where limited choice can actually degrade this metric. For example, bugs happen: remember the Infineon vulnerability affecting Yubikeys, or the Debian predictable RNG issue, or many other implementation flaws, or various master key leaks. The less diverse the landscape is, the worse the ripples are. And that's just what I can think of right away. (Once again, attestation does not guarantee that an implementation is secure, only that it was signed by keys that are supposed to be only in the possession of a specific vendor.)
Also, this is not the only metric that may possibly matter. If we think about it, we probably don't want to tunnel-vision ourselves into oversimplifying the system, heading into the infamous "lies, damned lies, and statistics" territory. It is dangerous to do so when the true scope is huge, and since we're talking about an Internet-wide standard, it's mindbogglingly so. All the side effects cannot be neglected, not even in the name of some arbitrarily selected "greater good".
All this said, please be aware that I'm not saying a lack of attestation has no possible negative effects. Not at all; I can imagine things working either way in different scenarios. All I'm saying is that it's not simple or straightforward, and that careful consideration must be taken. As with everything in our lives, I guess.
The meeting was about him being unable to test the APK of the new version of their mobile app. He felt embarrassed; his phone was enrolled in an MDM scheme that disallows side-loading of apps.
What I am trying to say is that assuming users are stupid carries a non-negligible risk that you will be that stupid user one day.
We have already been through this with many services suddenly demanding that you give them your phone number "for security".
This is far from very obvious, especially given that Apple have gone out of their way to not provide attestation data for keychain passkeys. Any service requiring attestation for passkeys will effectively lock out every iPhone user - not going to happen.
There are a bunch of service provider contexts where credential storage attestation is a really useful (and sometimes legally required!) feature.
Drop attestation from passkeys, and I become a promoter. Keep it, and I suggest people stay away.
If it's not something anyone intends to use on public services, this should be uncontroversial. Dropping attestation simplifies implementation, and makes adoption easier as a result.
> It seems like the requirements already diverged.
No, the requirements are _contextual_. This isn't a new idea.
What makes you think that the Webauthn standards are _only_ "targeted at running services for the general public"?
I would love to use public key cryptography to authenticate with websites, but enabling remote attestation is unacceptable. And pinky swears that attestation won't be used aren't good enough. I've seen enough promises broken. It needs to be systematic, by spec.
Passwords suck. It's depressing that otherwise good alternatives carry poisonous baggage.
If you make something possible, it will be used.
Sure, but that's not without tradeoffs. I come back to:
> Any service requiring attestation for passkeys will effectively lock out every iPhone user - not going to happen.
Because passkeys are designed to replace passwords across multiple different service contexts, that have different requirements. Just because there's no reason to use it for one use case doesn't mean it's not actually useful in a different one. See things like FIPS140 (which everyone ignores unless they're legally required not to).
Can you sketch out for me the benefit of a public-facing service deciding to require passkey attestation? What's the thought process? Why would they decide to wake up and say "I know, I'm going to require that all of my users authenticate with Yubikeys and nothing else"?
A misguided administrator is very likely to think "They can't use a malicious device to access our service".
What's the benefit for a private service?
I should need to install an enterprise authenticator app, which speaks webpki-enterprise, if you want to enable that shit.
That kind of thing can make a huge difference once this standard starts becoming e.g. required for government procurement.
Austria's governmental ID is linked to 5 approved tokens only.
That's not the right question. The right question is "what companies would be using passkeys if there were no attestation behind their security". To answer that question, you might look at the answer for a similar one about X.509: "would we be doing banking over HTTP if X.509 didn't have attestation?".
Here's a Fido2 member (Okta) employee saying "If keepass allows users to back up passkeys to paper, I think we'll have to allow providers to block keepass via attestation." https://github.com/keepassxreboot/keepassxc/issues/10407#iss...
All because passkeys backup is deemed "too unsafe and users should never be allowed that feature, so if you implement it we'll kick you out of the treehouse."
The authoritarian nature of passkeys is already on full display. I hope they never get adopted and die.
I'll post the same response I replied to other on a different thread:
Wild that you (and a few others) continue to make these accusations about me in these comments (and in other venues).
1) I've been one of the most vocal proponents of synced passkeys never being attested to ensure users can use the credential manager of their choice
2) What makes you think I have any say or control over the hundreds of millions of websites and services in the world?
3) There is no known synced passkey credential manager that attests passkeys.
tl;dr attestation does not exist in the consumer synced passkey ecosystem. Period.
You may have "been one of the most vocal proponents of synced passkeys never being attested to ensure users can use the credential manager of their choice", but as soon as one such credential manager allows export that becomes "something that I have previously rallied against but rethinking as of late because of these situations".
There may not currently be attestation in the consumer synced passkey ecosystem, but in the issue thread you say "you risk having KeePassXC blocked by relying parties".
The fact that that possibility exists, and that the feature of allowing passkeys to be exported is enough to bring it up, is a huge problem. Especially if it's coming from "one of the most vocal proponents of synced passkeys never being attested", because that says a lot about whoever else is involved in protocol development.
> The fact that that possibility exists,
The possibility does not exist in the consumer synced passkey ecosystem. The post is from a year and a half ago.
Until we have full E2E passkey implementations that are completely untethered from the major players, where you can do passkey auth with 3 raspberry pi's networked together and no broader internet connection, the security minded folks who have to adopt this stuff are going to remember when someone in the industry publicly said "if you don't use a YubiKey/iPhone/Android and connect to the internet, ~someone~ might ban you from using your authenticator of choice."
This is already possible today. And since it's a completely open ecosystem, you can even build your own credential manager if you choose!
>which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
This is exactly why we need truly open standards, so people who believe they are acting for the greater good can't close their grubby hands over the ecosystem.
Making staff look like idiots in front of clients is a resume-generating-event.
His reply does give one aspect of it: passkeys are fragile. To be secure, they can't be copied around or written down on a piece of paper in case you forget, so when the hardware they are stored on dies, or you lose your Yubikey, or the PC is re-imaged as he described, all your logins die. That will never fly, and it's why passkeys are having a hard time being adopted despite being better in every other way.
Passkeys' solution to that is to make them copyable, but not to let the user do the copying. Instead someone else owns them, someone like Google or Apple, and they will do the copy to devices they approve of. That will only be to devices they trust to keep them secure, I guess. But surprise, surprise, the only devices Apple will trust are ones sold to you by Apple. The situation is the same for everyone else; as far as I know, Bitwarden will not let you copy a Bitwarden key to anyone else. Bitwarden loudly proclaims they let you export all your data, including TOTP, but that doesn't apply to passkeys.
So, right now, having a passkey means locking yourself into a proprietary company's ecosystem. If the company goes belly up, or Google decides you've transgressed one of its many pages of terms, or you decide to move to the Apple ecosystem, you again lose all your logins. And again, that won't fly.
The problem is not technological, it's mostly social. It's not difficult to imagine an ecosystem that does allow limited and secure transfer and/or copying of passkeys. DNS has such a system, for example. Anyone can buy a DNS name, then securely move it between registrars. There could be a similar system for passkeys.
Passkeys have most of the bits in place. You need attestation, so whoever is relying on the key knows it's stored securely. The browsers could police attestation as they do now for CAs. We have secure devices that can be trusted not to leak passkeys, in the form of phones, smartwatches, and hardware tokens. But we don't have a certification system for such devices. And what we don't have is a commercial ecosystem of companies willing to sell you safe passkey storage that allows copying to other such companies. On the technological front, we need standards for such storage, standards that ensure the companies holding the passkeys for you couldn't leak the secrets in the passkeys even if they were malicious.
We are at a frustrating point of being 80% of the way there, but the remaining 20% looks to be harder than the first 80%.
Every time I’ve seen them actually attack user freedom, there was an embarrassingly obvious business angle. Like Chrome’s browser attestation that was definitely not to prevent Adblock, no sir.
But I fear it's worse. Based on how past open standards played out, I find it believable that they do care, specifically that there not be an open ecosystem of password managers.
> But they’ve shown no evidence of hating user freedom on principle.
Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
I recommend reading the MDN docs on Webauthn, they’re surprisingly accessible.
> Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
The whole point of the trial that term came from was that Microsoft explicitly saw Linux as a material threat to their business. What threat are Google quashing by preventing you from using passkeys they don’t control?
Because big tech loves control. Just because you can't see the angle yet, it doesn't mean there isn't one now, or won't be one later. It has been shown time and time again that they will take all the freedom away from you that they can.
Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
> Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
I think you’ll have to justify or qualify this a bit. If Google forces every website on Chrome to have a red background, how do they turn that control into a revenue stream later on?
Chrome has already started kicking off extensions; see uBlock Origin.
I can't divine the future about how they will further their income streams.
As for blocking things that block ads; if you can’t see the monetary incentive for Google there then I don’t know what to tell you.
I didn’t ask you to divine the future. I said “I’ve not seen them do X without trying to get Y” (a statement about the past), and you still haven’t given me a remotely credible example.
I agree, why would BigTech care about those dozens of users. Screw those guys, they can use our password manager or they can get lost, we don't need them!
Bots using a custom password manager to share logins.
Now the Yubikey is just an API you can call, websites cannot tell the difference. You can't export keys, but a bot can add new keys after using existing keys to log in.
but I don't think attestation per se is bad: if you are an employee of a company and they provide you the hardware and have special certification requirements for it, then attestation is totally fine
at the same time, normal "private" users should never be exposed to it, and in most situations where companies do expose users to it (directly or indirectly) it's often not much better than snake oil if you apply a proper threat analysis (like allowing banking apps to claim a single app can provide both the login and the second factor required by law for financial transactions; if you do a threat analysis, you notice the main threat to the app is a malicious privilege escalation, which tends to bypass the integrity checks anyway)
But a lot of the design around attestation looks to me like someone nudged it in a direction where "a nice enterprise feature" turns into "a system to suppress and hinder new competition". It also IMHO should never have been in the category of "supported by passkeys" but rather "supported by enterprise passkeys only".
Though let's also be realistic: the degree to which you can use IT standards to push consumer protection is limited, especially given that standards are made by companies which foremost act in their financial interest, hence why working consumer-protection legislation and enforcement are so important.
But anyway, it's not just the specific way attestation is done. The general design has dynamics that push toward consolidation on a few providers, and it also has elements which strongly push for "social login"/"SSO" instead of a login per service/app/etc., i.e. consolidation on the login side too.
And if you look at some of the largest contributors you find
- those which benefit a ton from a consolidation of login into a few SSO providers
- those which benefit from a different kind of consolidation (consolidation of password managers) and have made questionable blog entries to e.g. push people to store not just passwords but also 2FA in the same password manager, even though that removes one of the major benefits of 2FA (the password manager not being a single point of failure)
- those which benefit a ton if it's harder for new hardware security key companies, especially ones which have an alternative approach to doing HSKs
and somehow we ended up with a standard which "happened" to provide exactly that
eh, now I sound like a conspiracy theorist. I probably should clarify that I don't think there had to be some nefarious influence; different companies each having their own use case and overfitting the design to it would also happen to achieve the same result, and that could plausibly have happened by accident
Perhaps I'm missing something, but I do think hardware attestation per se is bad. Just look at the debacle of SafetyNet/Play Integrity, which disadvantages non-Google/non-OEM devices. Hardware attestation is that on steroids.
As for corporate/MDM managed environments, what's wrong with client certificates[0] for "attestation"? They've been used securely and successfully for decades.
As for the rest of your comment, I think you're spot on. Thanks for sharing your thoughts!
Passkeys could have been an overall boon to society. But with attestation restricted to a set of corporate-blessed providers it is a Faustian bargain at best.
There is no attestation in the consumer synced passkey ecosystem. Period.
How do you expect a single person to be able to make an authoritative statement like that?
> The problems of Passkeys are more nuanced than just losing access when a device is lost (which actually doesn't need to happen depending on your setup).
Those are the solutions I'm familiar with; there may be others. If Android and Windows don't already solve this problem in similar ways--which they might!--it sounds like an open opportunity for them.
Edit: sure enough, Android supports it: https://support.google.com/chrome/answer/13168025?hl=en&co=G...
As does Windows: https://blogs.windows.com/windowsdeveloper/2024/10/08/passke...
More like abuelita gets robbed at gunpoint and made to unlock and clear out her bank account, then has no recourse at home because her device was taken. I live in a third world country and even 2FA simply isn't viable for me due to how frequent phone robberies are. I've had to do the process once and it was a nightmare, whereas with passwords I can just log into Bitwarden wherever and I'm golden
Relying on Google/Apple is no better, with the stories of people losing access to their (Google in particular) account, and not being able to recover or let alone even reach a human at Google to begin with.
Why not have a public service for this, instead of relying on big tech that can just revoke your account for any number of ToS "violations" without recourse? The solution for "normies" should not be rely on and trust Google with your entire digital identity.
State involvement may be better used in policing, too. Public repositories of leaked passwords (without usernames, of course) would do wonders, for example
Google frequently warns me that one of my passwords has been compromised, but I don't really care about those sites.
The State is always more difficult and dangerous to deal with than a private company.
Ridiculous.
Google can ban me (really just one specific digital instance of me) from their services. The government can throw me in jail, take all my property, fine me whatever amount they want, etc.
Please stop right there. I want a password manager that I fully control, and lives on my own infrastructure (including sync between devices). Not reliance on someone else's cloud.
I haven't used phone 2FA in forever, but it was a much better system than this "email me a code" BS.
But you're right, it's not perfect but has gotten better. Just in time to be of no use thanks to email BS.
What's 2fa token? Is that an AI thing? AI uses tokens. Or a crypto thing? Do you need one of them "nonfungible" tokens? And what's an authenticator? I have MS authenticator for work, but it uses 2 digit numbers, are those tokens?
They exist so that if someone watches over your shoulder while you type your password, they don't gain access to anything.
And if I lose my phone, I only need to do the recovery flow with the printed codes for one account, rather than for all of my accounts.
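Those printed recovery codes are easy to generate with nothing but the standard library. A sketch; the format here (10 codes, two 5-character groups, an alphabet without look-alike characters) is an arbitrary choice, not any service's actual scheme:

```python
import secrets

# Alphabet drops ambiguous characters (i/l/o/0/1) so codes survive printing.
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"

def make_recovery_codes(n=10, groups=2, group_len=5):
    """Generate one-time recovery codes meant to be printed and stored offline."""
    def one_code():
        return "-".join(
            "".join(secrets.choice(ALPHABET) for _ in range(group_len))
            for _ in range(groups)
        )
    return [one_code() for _ in range(n)]

codes = make_recovery_codes()
# The service stores only hashes of these and invalidates each after one use.
```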
Grandma, and Uncle Rob, and your cousins, and anyone else you have a long standing relationship with, can use your VaultWarden instance if you let them.
But! You now get to maintain uptime (Rob travels and is frequently awake at 3am your time) and make sure that the backups are working... and remember that their access to their bank accounts is now in your hands, so be responsible. Have a second site and teach your niece how to sysadmin.
Good luck. For some arcane reason, Bitwarden turned on email-based 2FA for my account last night and all of a sudden I'm locked out of my account for half a day. …mostly because I have greylisting enabled on my mail server, so emails don't arrive right away, but as it so happens I also had all my hardware stolen from me last weekend. Bootstrap is a real bitch.
You are describing the current status quo, without passkeys. This is already possible.
Well, except maybe for the "without recourse" part, because there are some legal and policy avenues available for dealing with this situation.
Yes, and I'm saying that part isn't accurate either for the story you're portraying with passkeys or for the status quo. That's not how account recovery flows work.
Given how common mandatory SMS 2FA is for banks, if thieves stole your unlocked phone, they have stolen your account too.
Relying on only SMS sounds like 1FA?
I set up a passkey for github at some point, and apparently saved it in Chrome. When I try to "use passkey for auth" with github, I get a popup from Chrome asking me to enter my google password manager's pin. I don't know what that pin is. I have no way of resetting that pin - there's nothing about the pin in my google profile, password manager page, security settings, etc.
The way to go is an encrypted password manager + strong unique random passwords + TOTP 2FA. It's human-readable. Yes, that makes it susceptible to phishing, but it also provides very critical UX that makes it universal and simple.
Especially since Google doesn’t allow you to change your personal default which is what convinced me to go and switch all my accounts off of Google SSO
I wouldn't have minded if we moved to a scheme like SSH logins with public and private keys I own either, that I can store securely but load as I please and again would work well with a local password manager.
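The shape of that SSH-style flow is simple: the server stores only the public key and verifies a signature over a fresh challenge. A toy illustration using textbook RSA with tiny hardcoded primes (p=61, q=53), utterly insecure and for shape only; a real system would use Ed25519 or RSA via a vetted library:

```python
import hashlib, secrets

N, E = 3233, 17   # public key: the only thing the server stores
D = 2753          # private key: stays in the user's own password manager

def sign(challenge):
    # Only the private-key holder can produce this value.
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(m, D, N)

def server_verify(challenge, signature):
    # The server checks the signature without ever seeing the private key.
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == m

challenge = secrets.token_bytes(16)  # fresh per login, so replays fail
assert server_verify(challenge, sign(challenge))
```

Unlike a password, a server breach leaks nothing reusable: the database holds only public keys.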
With a password, I can write it down on a piece of paper and put it in my safe.
One of these systems is not like the other.
it's not quite new. as a dumb example, depending on where in Android Contacts you click on an address, it might always force-open Google Maps (2/3 of cases) or (1/3 of cases) properly go through the intent system and give the user a choice
stuff like that has been constantly getting worse with Google products, but it's not like Microsoft or Apple are foreign to it
On my Windows laptop that is Windows Hello PIN, not sure about other OSs. And it can be disabled.
The problem is that I can physically show up at my local bank branch or at my job's IT helpdesk to get my account back, but I can't show up at the Googleplex or at Facebook's or Xitter's HQ and do the same. Device bound passkeys are very error prone for the latter scenario, since users will fail to account for that case.
With Amazon's live chat, someone was able to get into my account by providing an address in the same city as the destination of my latest Amazon order.
You see this with 2FA since "sorry lol you've lost your account forever" isn't an option, and it's trivial for users to lose their 2FA key unlike, say, access to their email.
Even services that use login via emailed link need to do it because people do lose email access. Far too many people use the email provided by their ISP as their only email service, which can be very bad if they move to someplace that ISP does not serve or simply want to switch to another ISP in their current area.
And once you set up a customer service pipeline for it, you might accidentally create a backdoor that's far worse than forgot-my-password email verification: https://medium.com/@espringe/amazon-s-customer-service-backd...
Email account access is the closest thing we have to ubiquitous identity on the web. Users that truly lose access to their email account are in a catastrophic situation before they even think of whether they can access your service.
So, I guess you set up some "emergency users". And maybe if you lose access to your account, you get customer support to mark your account as lost which sends an email to the address that you have on file (in case it's an attack started by someone other than the user).
And I suppose if N days pass without any login, one of your emergency users can generate a credential that they can pass to you to recover your account?
That and Apple will give you a very long one-time password meant to be printed that can restore access as well. This one is in a third undisclosed location for me.
No, at least not on its own. Let's not repeat the mistakes.
Password managers are the way to go and ONLY FOR RARE EXCEPTIONS we should use dedicated MFA, such as for email-accounts and financial stuff. And the MFA should ask you to set up at least 3 factors and ask you to use 2 or more. And if it doesn't support more or less all factors like printed codes, OS-independent authenticator apps and hardware keys like yubikey, then it should not be used.
Such as finding a dinosaur fossil bearing your family's clan name.
Bitwarden the password manager includes a full passkey implementation, which doesn't involve any MFA.
No:

- I can always export and import all my passwords from/into my password manager
- My passwords always work independently of a password manager or any specific app/OS/hardware
That is not true for passkeys and makes them much more like tokens. Of course they don't have to be used in MFA, just like passwords.
This is only about your first paragraph, it doesn't affect your second.
Counter-example: I can write a password manager that will not allow you to export/import passwords.
There are cases where bitwarden doesn't work but chrome for example does. Easy to Google up.
For passwords however, I never heard of a case where a website only accepts passwords from a specific password manager - and how could they even do that right?
If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
For example: https://www.w3.org/TR/webauthn-2/#dictdef-authenticatorselec...
> If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
Unfortunately not...
Passkeys are an interface between your password manager and a website, without all the fuss of filling in or copy-pasting passwords.
I don't love them. I don't love passwords either.
But while I don't fear passwords, I fear passkeys. The reason is that they make the tech even more opaque. My password manager stops working, completely dies, or I can't use it anymore for some other reason? No problem, I can fall back to a paper list of passwords if I really have to. This transparency and compatibility are more important than people think.
Passkeys lack that. They can be an interface like you described, but only if everyone plays along and they can be exported. But since there is no guarantee (and in practice, they often cannot be exported either) they are not a replacement for passwords. They are a good addition though.
Unfortunately, many people don't understand that and push for passwords to begone.
And I would agree with that argument.
It's literally something like
hnkTKS7h2WCOBr3CxSKM51cSVKSkiKOSlQsMhtRZ0CU
stored in the password manager.

Also, they are too complicated for an ordinary user. A physical key is much simpler, doesn't require any setup or thinking, can be used on multiple devices without any configuration, and doesn't require a cloud account.
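As a sanity check on the shape of that string: 32 random bytes, base64url-encoded with the padding stripped, come out to exactly 43 characters, the same shape as the key quoted above. A purely illustrative Python sketch:

```python
import base64
import os

# A passkey's private key is, at rest, just bytes in your vault.
# 32 random bytes, base64url-encoded without padding, yield a
# 43-character string like the one quoted above.
raw = os.urandom(32)
encoded = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
print(len(encoded))  # 43
```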
Passwords have always been bad. The problem is that users can't remember them. So they rotate, like, 3 passwords.
Which means if fuckyou.com is breached then your bank account will be drained. Great.
On top of that, the three passwords they choose are usually super easy to guess or brute force.
With a password manager, users only need to remember one password, which means they can make said password not stupid. You can automatically log in too with your new super secure passwords you never need to see.
It's the perfect piece of software: faster, easier, more secure, with less mental load.
I think you're rejecting good solutions out of hand.
Meanwhile...millions of users trusted LastPass. Twice.
I’m not going to rely on myself never making a mistake. I want a solution that protects me even during stressful moments where I have a lapse of judgement and forget to check.
In general, opening a malicious URL exposes the user to unnecessary risk, so the correct solution is not to assume the user has visited a malicious site (since that would already be high-risk), but rather to prevent opening of malicious URLs. The most obvious solution is to treat any untrusted content as questionable. So I very carefully examine every domain I visit - as I say to my kids: have a model about who owns the computer you're talking to. Domains matter.
Now, this works for me. I'm not cognitively impaired, I have high conscientiousness, probably from working in military and classified defense contexts way back when, but I'm not really sure to be honest, could just be my personality. But it works for me.
I get that you want that extra safeguard, but it's just not a dealbreaker for me, especially since I'm highly suspicious of browser add-ons and the security implications they bring in. I guess I'm just extremely selective about what add-ons I'll use.
You might find the KeePassXC docs about the feature [0] to be informative.
If you're going to complain that all a phisher has to do to capture a password is create a website with the same title as the official one, then my reply would be something like "Duh. That's what the browser plugin is for.".
[0] <https://keepassxc.org/docs/KeePassXC_UserGuide#_auto_type>
[1] ...optionally, and on by default...
I’m absolutely looking for a browser plugin. I would refuse to use an auto-type feature that only checks the window title instead of, as a browser plugin would do, the site’s domain.
I was mentioning how auto-type worked because it's useful information for those who either are unwilling to use a browser plugin, or are like myself and simply have no need for one.
/me wonders if this is a "recommend me a nice open source, offline password manager" question in disguise.
That was years ago, so I’m going to check it out again. Thanks for the pointer.
Update: One thing that stands out immediately is a confusing mess of three different projects, two of them unmaintained, which all call themselves KeePassX or KeePassXC, sometimes linking to each other’s documentation. How do I even tell I’m facing the correct KeePass(X(C)?)? project?
Yes, I’ll figure it out eventually but until then, it’s confusing. Also, if a password manager project needs to be forked over and over and over again (how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?), then does that tell us something about how the project is governed?
Well, [0] lists a single project called KeePassXC, with [1] as its homepage. Search engines list [1] and [2] as the top results for the query KeePassXC, for whatever that's worth. [3]
> Also, if a password manager project needs to be forked over and over and over again ... then does that tell us something about how the project is governed?
No?
KeePass is Windows-only software. So, some folks decided to write KeePassX, which ran on Linux, OSX, and Windows. They got bored of that after a decade or so, called it quits, and one of the preexisting forks [4] became the widely-used one.
> how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?
In addition to the history I wrote above, you are aware that KeePass is still receiving stable releases? According to [5], it looks like 2.59 was released just last month.
EDIT: Actually, where are you getting this "confusing mess of three different projects" from? When I search for "keepass", I get the official home pages for KeePass and KeePassXC as the top two results, the Wikipedia page, and then the Keepass project's SourceForge downloads page. When I search for "keepassx", I get the official homepages for KeePassX and KeePassXC, the wikipedia page, the KeePassXC Github repo, and an unofficial SourceForge project page for KeePassX.
[0] <https://keepass.info/download.html>
[1] <https://keepassxc.org/>
[2] <https://github.com/keepassxreboot/keepassxc/releases>
[3] And -because I'm a Linux user- not only do I have KeePassXC in my package manager, I also know that [1] is listed as its project homepage.
[4] ...which started like four years before KeePassX's final stable release...
[5] <https://sourceforge.net/projects/keepass/files/KeePass%202.x...>
When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc [1], so the former was the first that I’d visit. GitHub says that eugenesan/keepassxc is 2693 commits ahead of keepassx/keepassx:master, so I assumed that eugenesan/keepassxc was a legitimate and meaningful fork of keepassx/keepassx. Maybe I’m entirely mistaken, and I was just tricked by a blunder of my search engine and eugenesan/keepassxc is just a random person’s fork? (But then again, if it’s just a random fork, then why does it show up at the top, and why so many commits ahead of keepassx?)
To add even more to the confusion, not only is eugenesan/keepassxc unmaintained, it also points to www.keepassx.org (why?), which in turn says it’s unmaintained, too.
If I was just mistaken and eugenesan/keepassxc is really just a random fork, then my earlier allegations are all moot. Thank you for clearing this up, and also for clarifying that the other (legitimate?) KeePassXC was a preexisting fork (so it would have been difficult for them and possibly even more confusing to users if they had taken over the abandoned KeePassX project).
I've tried DDG, Google, Bing, and Yandex. All of them rank official KeepassXC stuff in the top five results, and -with the exception of Bing- rank it above any other non-Wikipedia results. I didn't see this weird keepassx GitHub fork in the results from any of the search engines I tried.
> When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc...
With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News". I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
I can't explain why your search system is ranking this misleadingly-named GitHub repo so highly. AFAICT, no one with the repo owner's email address was ever involved in any public development on KeePassXC.
I’m using Kagi. They say they rely on several third-party search indexes. I can’t see which one they are using for which particular search request. What I do know is that the backends are of varying quality. However, after years and years of using Google (back when their search was still good), I got used to the fact that if they return a GitHub project as a top search result, then that project was usually meaningful.
> With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News".
Forks sometimes don’t update the About blurb that they inherit, and I think that that’s exactly what happened in the bogus repo.
> I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
In this case, however, the Releases section said “13 tags.” Some projects don’t use GitHub’s Releases feature at all, and rely only on Git tags. It’s sometimes difficult to spot.
cf. pass(1)[0][1]
[0] https://www.passwordstore.org/
[1] No, it's not hosted in the cloud (i.e., on someone else's servers) and that's a good thing. It's FOSS and can be compiled for Android/IOS (and has, see [2][3][4], least for Android). The DB (just a GPG store) can also be shared across multiple devices.
[2] https://f-droid.org/packages/app.passwordstore.agrahn/
[3] https://play.google.com/store/apps/details?id=dev.msfjarvis....
[4] Not sure about IOS versions, I don't have any Apple devices.
I wish there was a stronger differentiation between syncable and device-bound passkeys. It seems like we're now using the same word for two approaches which are very different when it comes to security and user-friendliness.
And yes, giving granny unsyncable passkeys is a really bad idea, for so many reasons.
But there is no difference. I'd prefer if services just let me generate a passkey and left it entirely up to me how I manage it. Whoever set up granny's device should have done so with a cloud-based manager.
I think Google tries to make some confused distinction, or maybe that has more to do with FIDO U2F vs FIDO2. There you can add either a "passkey" or a "security key", but iirc I added my passkey on my security key so... yeah
A common experience is Chrome telling me to scan a QR code. But I know this is not a legitimate method to sign in on any service _I_ use. I also never know WHY I'm being told to "scan this QR code". I scan it, and my phone also has no idea what to do with it! The site has decided, by not finding a passkey where it expects it, that it MUST be on my phone.
That's but one example of the horrible implementation, horrible usability, and horrible guidance that various sites/applications/browsers use.
- open website
- if not already logged in, log in to 1Password
- autofill password
- autofill TOTP
Now:
- open website
- if logged in to 1Password the Use Passkey usually shows up
- if not:
- log in to 1Password
- choose use passkey
- this almost always does nothing
- choose “use other method”
- choose “password”
- autofill that
- now there is another dialog to choose the 2fa method, choose Authenticator
- autofill that
Passkeys would be great if they actually made anything simpler on a computer. They work fine on the phone but that's not where I spend most of my time.

Now that I'm so paranoid about this, and not remembering which sites I have them for, I always dismiss the passkey prompt, then have to click several more times to get to the password login and fill it in with my password manager.
Apple Passwords is now sufficiently good to replace 1Password for me, and I'm slowly transitioning.
I don’t mind subscription models per se but there was something about subscription for your own passwords that made me refuse to jump the fence when 1Password switched to that model.
Would be a bit faffy if you’re a Chrome user.
Let me see if I can get it again
The implementation in Chromium browsers (I use Arc, so I can't speak to Chrome itself) is basically a chunkier-looking 1Password.
I also have a bunch of stuff in 1Password that doesn’t have a home in Apple Passwords, which would be a problem.
And yes, Chrome with Apple Passwords is annoying. At work I’m forced to use Chrome for some things, and I’ve been dabbling with Apple Passwords. Every time I launch the browser I have to put in a code to link the extension with Passwords. It’s very annoying.
2. If you get this often, why do you use 1Password, honest question.
1Password used to work decently well before 2020. Now I have ~ 2k items in 1Password, distributed among two accounts (work and personal). Additionally, my spouse and I have a shared 1Password vault via the Family plan.
There’s no way I’m going to migrate 2k items and two dozen devices to another vendor. If there were one that met my requirements to begin with.
For example, are my 2FA seeds going to migrate properly? How about the tags, attachments, sections, subsections, security questions and answers, inline Markdown notes, the HIBP integration, built-in overrides to fix known broken websites, workarounds I’ve learned for unfixed websites, shared vaults, recovering lost access to shared vaults, syncing, templates, custom integrations that I maintain [0], personal scripts, etc. etc.
Will it still be able to auto-fill into a web page? Into shitty, broken web pages? On Linux? On my Linux phone?
At the scale and depth at which 1Password is currently integrated into my spouse’s and my life, it’s difficult to consider migration anything less than a full weekend project.
I regret letting my spouse and myself lock into 1Password before it enshittified.
"Click a link in the email" is really bad because it's very difficult to know the mail and the link in it are legitimate. Trusting links in emails opens to door to phishing attacks.
The company doesn't care either, because fraud is just a cost of doing business: ease of ordering > security.
abc.com
the link in the email is abc.com
track.monkey.exe/sus/path/spyware?c=behhdywbsncocjdb&b=ndbejsudndbd&k=uehwbehsysjendbdhjdodj
or something 2x–3x longer
And the answer was, I can find out if the email is from abc.com by looking at the link, which should also be abc.com
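That check is mechanical enough to sketch: extract the hostname from the href and compare it against the domain you expect. This naive version (a hypothetical helper built on Python's urlparse) ignores lookalike domains and subdomain tricks, but it shows the principle:

```python
from urllib.parse import urlparse

def link_domain(url: str) -> str:
    """Return the hostname of a link; the displayed text can say anything."""
    return urlparse(url).hostname or ""

# Only the actual href counts, never the anchor text around it.
print(link_domain("https://abc.com/login?code=123"))        # abc.com
print(link_domain("https://track.monkey.exe/sus/path?c=x"))  # track.monkey.exe
```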
I don't click in "track.monkey.exe". I don't click tracking links. I pay a lot of money for my newsletter provider because I can turn off (most) tracking links.
Track someone else.
Visiting the bank is fine. But who do you visit to recover your Gmail password?
It can take months and it only guarantees a backup, not full access, but it's better than nothing.
In practice: in my case, anecdotally, they just did it. For some reason owning the backup email account was not enough for the automated workflow to unlock my account, but sending a letter threatening to sue under the GDPR somehow changed their minds.
To avoid getting locked out you could add 2-3 passkeys from different providers to each account. And/or use a passkey provider that allows backups, and back up your keys. But I doubt many people will have the discipline to do either of that.
For a lot of people, dealing with (now mostly digital) bureaucracies is a major stress in life. The biggest one, for some.
It's not just about inconvenience. It's sometimes about losing access to some accounts, and just not having it for a while.
In terms of practical effect, a performance metric for a login system could be "% of users that have access at a given point." There can be a real tradeoff, irl, between legitimate access and security.
On the vendor side, the one-time password fallback has become a primary login method for some, especially government websites.
Customer support is costly and limited in capacity. We are just worse at this than we used to be.
Digital identity is turning out to be a generational problem.
How many HN denizens are the de facto tech support for family members when they can’t login, can’t update, can’t get rid of some unwanted behavior, or just can’t figure stuff out?
I don’t blame them one bit. The tech world has presented them with hundreds of different interfaces, recovery processes, and policies dreamed up by engineers and executives who assume most of their user base is just like them.
I am waiting for the era when using passkeys does not depend on some big tech company.
You can choose any credential manager you want to store your passkeys.
My expectations for how long I intend to be alive and using the internet is much longer than my expectations for the continued operation and service of any particular passkey management software.
I already had to jump ship from LastPass after they were hacked. Imagine if they hadn't allowed me to migrate my passwords.
If you lose all your data and your entire life because you lost your phone, no company is responsible.
But if you get hacked they are.
So they’ve come up with a solution that can destroy your entire life, but reduces the risk of corporate liability.
But yeah, keep carrying water for the entities that won’t come up with actual user focused solutions because it may cost them 0.01% of their profits.
> Website: is this Jimbob' phone
> Hardware: yes
And
> Website: I'll give you a dollar if you tell me something juicy about this user
> Hardware: Give this token to Microsoft and ask them
> Microsoft: Jimbob is most likely to click ads involving fancy cheeses, is sympathetic to LGBTQ causes, and attended a protest last week
With passwords and TOTP codes, I am in control of what information is exchanged. Passkeys create a channel that I can't control and which will be used against me.
(I chose Microsoft here because in a few months they're using the windows 10->11 transition to force people into hardware that locks the user out of this conversation, though surely others will also be using passkeys for similarly shady things).
I don't think you understand the protocol. The attestation object does not mean there is an authenticator attestation.
There is no authenticator / credential manager attestation in the consumer synced passkey ecosystem. Period.
It seems pretty clear that "where possible" parties besides the user are provided with information about the user (ostensibly about their device, but who knows what implementers will use this channel for)... so they can make a trust decision.
It's going to end up being a root-of-trust play, and those create high-value targets which don't hold up against corruption, so you're going to end up with a cabal of auth providers who use their privileged position to mistreat users (which they already do, but what'll be different is that this time around nobody will trust that you're a real human unless you belong to at least one member of this cabal).
Folks seem to be hung up on the term "attestation" being in the response of a create call. If you look inside that object, there is another carve out for optional authenticator attestation, which is not used for consumer use cases.
I will keep repeating what I've said in the other comments. There is no credential manager attestation in the consumer synced passkey ecosystem. Period.
Suppose we hatch a conspiracy to take our users out of the "consumer synced passkey system". And into one where you can use the authentication ritual as a channel where you can pass me unique bits re: this user such that we can later compare notes about their behavior.
What about passkeys prevents us from doing this? How do we get caught, and by whom?
Btw this predates passkeys which should perhaps be the way to go from now on.
Point is, the damage will be likely local to a single or a handful of accounts.
If all the accounts are protected by two factor on my phone and I lose it or it bricks, then I'm done. It will be a total mess with no paths to recover, except restarting literally everything from scratch.
I have Google Auth app on my phone and every few months I consider using it, but then reconsider and stay with passwords.
And there is a significant benefit of not needing to worry about weak or repeated passwords, password leaks etc.
Overall that pattern feels significantly better to me than a normal password system, and MUCH better than the "we'll send you six digits to copy and paste" solution.
1) User goes to BAD website.
2) BAD website says “Please enter your email and password”.
3) BAD’s bots start a “Log in with email and password” on the GOOD website using the user’s email and password.
4) BAD now has full access to the user’s GOOD account.
In the OP's example, the user is logging in to BAD.com intentionally, but his GOOD.com account is still hacked into.
This is a lot harder for the user to catch on to.
- user has an account on GOOD.COM
- user has saved her password in her browser
- user navigates to BAD.COM
In this case autofilled passwords are safe and convenient, since they alert the user that she isn't at GOOD.COM.

A clickable link sent in email mostly works too; it ensures that the user arrives at GOOD.COM. (If BAD sends an email too, then there is a race condition, but it is very visible to the user.)
Pin code sent in email is not very good when the user tries to log in to BAD.COM.
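A minimal sketch of why autofill is an anti-phishing signal, with hypothetical names: the vault keys credentials by exact origin, so BAD.COM simply gets no match:

```python
# Illustrative vault entry; real managers match on full origin, not just host.
vault = {"good.com": ("alice", "correct-horse-battery-staple")}

def autofill(current_host: str):
    # Exact-origin match, nothing fuzzy: a lookalike domain returns nothing.
    return vault.get(current_host.lower())

assert autofill("GOOD.com") == ("alice", "correct-horse-battery-staple")
assert autofill("bad.com") is None  # silence here is the warning sign
```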
There is no password in these new flows. They just ask for email or phone and send you a code.
Bad website only needs to ask for an email. It logs into Good with a bot using that email. Good sends you the code. You put the code in bad. Bad finishes the login with that code.
At no point in time is a password involved in these new flows. It's all email/txt + code.
Many sites work like this now. Resy comes to mind.
It's just an email, and a six digit code they text you.
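A toy simulation of that relay, with all names hypothetical, makes the weakness concrete: GOOD's check verifies only the code, never who is typing it:

```python
import secrets

# Toy model of the relay attack described above. All names are illustrative.
class Good:
    def __init__(self):
        self.pending = {}

    def start_login(self, email):
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[email] = code
        return code  # in reality, emailed/texted to the user

    def finish_login(self, email, code):
        return self.pending.get(email) == code  # GOOD can't tell who typed it

good = Good()
victim = "user@example.com"

# BAD starts a genuine login at GOOD with the victim's email...
emailed_code = good.start_login(victim)
# ...the victim receives the real email and types the code into BAD...
code_typed_into_bad = emailed_code
# ...and BAD replays it. GOOD accepts: the code proves nothing about origin.
assert good.finish_login(victim, code_typed_into_bad)
```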
No, please, not as long as attestation is in the spec. I firmly believe that passkeys are intended to facilitate vendor lock-in and reduce the autonomy of end users.
Frankly, I do not trust any passkey implementation as much as I trust a GPG-encrypted text file.
1) BAD actor tries to create account at GOOD website posing as oblivious@example.com.
2) GOOD website requests public key from BAD.
3) BAD provides self-generated public key.
4) GOOD later asks BAD to prove that they control the private key.
5) BAD successfully proves they control the private key.
Unless you have step 3b where GOOD can independently confirm that the public key does indeed belong to oblivious. But even that is easily worked around.
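The steps above can be sketched with textbook RSA (tiny toy primes, purely illustrative, not real crypto): BAD registers a keypair it generated itself, so of course it can answer any later challenge:

```python
import secrets

# Textbook-RSA toy: p=61, q=53, so n=3233; e=17, d=2753 (d*e = 1 mod lcm(60,52)).
# BAD generated this keypair itself and registered the public half (n, e).
n, e, d = 3233, 17, 2753

def sign(m):          # BAD holds d, the private exponent
    return pow(m, d, n)

def verify(m, sig):   # GOOD holds only (n, e), the registered public key
    return pow(sig, e, n) == m

challenge = secrets.randbelow(n)
# Proof succeeds: it shows key ownership, and says nothing at all about
# whether the registrant really is oblivious@example.com.
assert verify(challenge, sign(challenge))
```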
passkeys use a unique keypair per account, there's no single public key that represents you.
But as DecoPerson points out, in the realm of account creation, your "verify the email address first" solution has its limits.
It is easy to conflate different aspects of trust and think they have the same solution.
Most of the time, re: granny, women are targeted much more often because of supposed weakness and vulnerability (2/3 of reports, 2/3 of victims), yet males send much larger amounts of money ($112 vs $205). [2] To be fair, though, old people do tend to lose more to scams: granny would probably lose $300 on average vs $113 for an 18-24-year-old. There are conflicting numbers on the money figures though, so some of that depends on which survey you ask.
Old people also tend to write each other a lot of cautionary warning stories such as the AARP article on Stan Lee's swindling in old age (security guard, "senior adviser", "protector", and daughter). [3]
Old people get a bunch of grief, yet old people are actually less likely to fall for the scams.
Also, if she's a retiree in Miami Beach, more likely to be targeted (Adak, AK; Deepwater, NJ; then Miami Beach, FL are the worst for scams.)
[1] https://www.pewresearch.org/internet/2025/07/31/online-scams...
[2] https://bbbmarketplacetrust.org/wp-content/uploads/2025/02/N...
[3] https://www.aarp.org/entertainment/celebrities/stan-lee-elde...
They’re too opaque for my taste and I don’t like them.
There are lots of attack patterns. That is one. I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would ever enter a code into the wrong website. I believe it can be possible, but...
> Passkeys is the way to go. ... I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
... I do not agree your story is justification for passkeys, or for letting banks trust passkeys for authentication purposes. I'd rather she not lose access to banking services in the first place: I don't think banks should be allowed to do that, and I do not think it should be possible for someone to "steal all her money" so quickly -- Right now you should have at least several days to fix such a thing with no serious inconvenience beyond a few hours on the phone. I think it is important to keep that, and for banking consumers to demand that from their bank.
A "granny" friend of mine got beekeeper'd last year[1], and her bank reversed/cancelled the transfers when she was able to call the next day and I (local techdude) helped backup/restore her laptop. I do not think passkeys would have helped, and they might have made things much worse.
But I don't just disagree with the idea that passkeys are useful, or even the premise of a decision here between losing all their money and choosing passkeys, I also disagree with your priors: Having to visit a bank branch is a huge inconvenience for me because I have to fly to my nearest. I don't know how many people around here keep the kind of cash they would need on-hand if they suddenly lost access to banking services and needed to fly to recover them.
I think passkeys are largely security-theatre and should not be adopted simply if only so it will be harder for banks to convince people that someone should be able to steal all their money/access with the passkey. This is just nonsense.
[1]: seriously: fake antivirus software invoice and everything, and her and her kid who is my age just saw the movie in theatres in like the previous week. bananas.
It looks almost the same as the log-in-with-big-tech flow that users are already used to.
> and (b) I don't understand why I would never enter a code into the wrong website. I believe it can be possible, but...
You enter it on the website you are trying to log into and where you initiated the action, which in this scenario is the BAD website.
You and I think they are bullshit, but ... the problem is that bullshit is sometimes genuine.
I have got tired of how many times in recent years I have seen things that looked like phishing or had obvious UX-security flaws and reported them only to have got a reply from customer service that the emails and sites were genuine and that they have no intention of improving.
If janky patterns are the norm, then regular users will not be able to tell the good-but-janky from the scams.
Now replace the email with a text message sent from a short-code.
That's what phishing is predicated on, and it seems to be successful enough.
Then the banks wanted you to use the dongle to verify yourself on phone and it all went downhill from there.
Nearly every website tries to offer Google or Microsoft based sign in, "sign in partners" are commonplace.
Somehow this makes me think of Pascal's Wager...
You just got through describing an attack where the victim was not aware that a bad actor can trigger a bona fide password reset code at an arbitrary time. For your little table of threats, you posit that at least clicking the link goes to the bona fide web site.
But there's a separate little table of threats for the case where an attacker controls the timing of sending a fake email. I believe realtors have this problem-- an attacker hacks their email and hangs back until the closing date approaches, then sends the fake email when the realtor tells the client to expect one with the wire transfer number/etc.
"Click a link in the email" isn't much more secure either, for the most part. You might end up following a link blindly, which can lure you into revealing even more information.
Passkeys aren't that great either, because almost everyone has to provide an account recovery flow, which uses these same phishable methods.
The language used in communication is probably the most important deterrent here, second to using signals in the flow to add friction for the abuser. A simple check, like presenting a captcha-like challenge when the user is not authenticating from their usual machine, can go a long way toward preventing these kinds of attacks at scale.
On the rare occasion that my password manager refuses to autofill, I take a step back and painstakingly try to understand why. This happens about once or twice a year.
But granny can't go to a bank because they closed down most of their offices. Since 99% of what you need a bank for can be done using their app it no longer made financial sense to have a physical presence in most smaller towns and villages.
Lots of elderly were complaining about this when it happened because they were too lazy to learn how to use the bank apps. Hell, they already started complaining when you could no longer withdraw money at the desk even before they closed down the offices. Apparently even learning to use something as simple as an ATM was too much effort for them.
> 2) BAD website says “We’ve sent you an email, please enter the 6-digit code! The email will come from GOOD, as they are our sign-in partner.”
Does that mean that GOOD must be a 3rd party identity provider like Facebook, Apple, Google etc?
When you insert the login code on BAD, BAD uses it to finish the login process on GOOD that they started “on your behalf”.
1. you got login credentials at GOOD
2. you're using the same email address there
They then tell you GOOD will send you a code that you have to enter on their website.
Then they enter your Email on GOOD and request a reset, which sends a mail with a code to you.
You then enter the code on their website.
Now that they have the code they can enter it on GOOD and they have your account.
My problem with passkeys is that there is no hardware attestation like there is with Yubikeys and similar.
This means for security conscious applications you have no way of knowing if the passkey you are dealing with is from an emulator or the real-deal.
Meanwhile with Yubikeys & Co you have that. And it means that, for example, Microsoft can (and does) offer you the option to protect your cloudy stuff with AAGUID filtering.
And similar if you're doing PIV e.g. as a basis for SSH keys, you can attest the PIV key was generated on the Yubikey.
You can't do any of that with passkeys.
Device-bound passkeys which are used in workforce / enterprise scenarios are typically attested.
Attestation does not exist for consumer synced passkeys by design. It is an open ecosystem.
I still don’t really understand what recovery looks like for a lost passkey… especially if I lose all of them. Not everything has a physical location where an identity can be validated, like a bank. Even my primary bank isn’t local. I’d have to drive about 6 hours to get to a branch office.
I don't know, some would say taking an attack from trivial to virtually impossible is a bit more than a "tiny bit".
A bad practice is to shorten the code's validity to a few minutes. This cannot really be justified, and it puts users under stress, which lessens security.
The discussion around passkeys, who is and isn't allowed to store them, almost killed them for me personally. I use them for very, very few services and I don't want to extend it.
Why would I put a secret code from GOOD.com into BAD.com? That's the core of the problem.
If you put a code you get from GOOD.com into BAD.com, it's like you put a password from GOOD.com into BAD.com - don't do that.
A password manager will protect me from doing the latter. There’s no way it can protect me from doing the former.
Any human can be tricked, no matter how smart they are. A bad actor just has to wait for the right moment. No amount of “don’t do that” can change that fact.
"Any human can be tricked, no matter how smart they are."
and
"A password manager will protect me from doing the latter."
Don't work together. Either everyone can be tricked or not.
It says "Everyone can be tricked" but I can't be tricked because I use a password manager.
There are many reasons why such lapses of judgements happen, even to people who don’t believe in authority. For example, the fact that any human can be tricked.
> Don't work together. Either everyone can be tricked or not.
The password manager protects me from filling my password into the wrong site.
The password manager will not protect me from BAD.com tricking me into handing them out a one-time code that GOOD.com sent me via email.
This point means the user is not paying attention: 1) User goes to BAD website and signs up. Steps 2-7 wouldn't be possible without step 1.
1) User goes to BAD website and enters credentials
2) BAD website uses GOOD website to check if the credentials are valid
3) Pwned
It is just a MITM attack. The moment you go to BAD and enter credentials (password or one-time code), you are done.
Isn't this the same thing as BAD asking, let us know the code i.e. password that GOOD gave you? Why would one be inclined to give BAD (i.e. someone else) this info?
You get an email, providing you with a phishing link for miсrosoft.com (where the apparent c is actually the cyrillic "s", so BAD). In the background, they initiate a login to microsoft.com (GOOD), who then send you a 6 digit code from the actual microsoft.com. If you were fooled by the original phishing website, you have no reason to doubt the code or not enter it.
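Mixed-script domains like that can at least be flagged mechanically. Here is a minimal sketch using Python's unicodedata; the heuristic and function name are illustrative, and real homograph defences in browsers are considerably more involved:

```python
import unicodedata

def mixed_script_suspicious(domain: str) -> bool:
    """Flag domains that mix letters from more than one script,
    a common homoglyph trick (e.g. Cyrillic letters among Latin)."""
    scripts = set()
    for ch in domain:
        if not ch.isalpha():
            continue  # skip dots, digits, hyphens
        # Unicode character names start with the script, e.g.
        # "LATIN SMALL LETTER A" or "CYRILLIC SMALL LETTER ES"
        name = unicodedata.name(ch, "")
        scripts.add(name.split(" ")[0])
    return len(scripts) > 1

print(mixed_script_suspicious("microsoft.com"))       # all Latin: False
print(mixed_script_suspicious("mi\u0441rosoft.com"))  # Cyrillic "s": True
```

A legitimate all-Latin domain yields a single script and passes; the look-alike domain with one Cyrillic letter trips the check.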
Imagine a "free porn, login here" website: when you put in your Gmail address, it triggers the one-time code from Gmail (assuming it used that type of login). Thousands would give it up for the free porn.
To be fair, can we blame them? There are so many legitimate flows that redirect like it’s a sport. Especially in payments & authn, which is where it’s most important. Just random domains and ping pong between different partner systems.
And how do you access your password manager when your computer is locked ?
[0] My daily driver OS is https://qubes-os.org
I'd imagine a convincing enough modal would do the trick though, in a lot of cases.
if the BAD site itself looks legit and has convinced a user to do the initial login in the first place, they won't hesitate to lie and say that this 2-factor code is part of their partnership with Google etc., and tell you to trust it.
A normal user doesn't understand what a 2-factor code is or how it works. They will easily trust the phisher's site if the phisher first breaks the user down and sets them up to trust the site from the beginning.
What Google does is send a notification to the user's phone telling them someone tried to access their account (or, generally, on any login from a new device you haven't used before). It's a warning that requires some attention, and depending on your state of mind and alertness, you might not suspect that your account is stolen even with this warning. But it is better than nothing, as the location of the login is shown to you, which should be _your own location_ (and not some weird place like Cyprus!).
the 2FA code in this case is in the email, not via an app. This email is triggered by BAD on their end, but it is sent by GOOD.
If the 2FA is _only_ via the authenticator app, then BAD will need to convince the user to type the 2FA code from the app into the BAD site (which is harder, as nobody else does this, so it should at least raise the user's suspicions).
I think this is what Raymond Chen calls the other side of the airtight hatch.
The game is already over. The user is already convinced the BAD website is the good website. The BAD website could just ask the user for the email and password already and the user would directly provide it. The email authenticaton flow doesn’t introduce any new vulnerability and in fact, may reduce it if the user actually signs in via a link in the email.
I see no reason not to use password + one of multiple 2FA methods so the user can regain control.
The prompts that show where the login is coming from are useless, too, because mapping from IP addresses to geographical locations is far from perfect. For example, my legit login attempts showed me all over my country map. If I’m in a corporate VPN already, its exit nodes may also be all over the map, and your legitimate login from, say, Germany may present itself as coming from Cyprus, which is shady as fuck.
If I seek to implement 2fa for my own service and have it be not theater and resistant to such phishing attacks, it gets difficult real fast.
Edit: See first reply, this is not a mitigation at all!
The problem being exploited by BAD is that your login account identifier (email in this case) is used in both GOOD (and BAD - accidentally or deliberately orchestrated), and 2-factor does not prevent this type of phishing.
People do read, if the email is short.
On what grounds do you say people don't read? Any evidence?
This premise seems flawed.
How can you possibly know from experience that something is “very understandable” if the only brain you have is your own?
How do you anticipate how other people with brains different from yours are going to behave in situations of cognitive impairment or extreme stress, things that happen in the real world?
But I am speaking of myself only, from my experience receiving well-designed messages compared to badly designed ones.
I am a data point of evidence supporting my view. The opinion that "people don't read" is pure speculation, without convincing evidence.
The real problem is that many services simply don't include the warning in the message.
It was that “[t]hey only read what they need to finish what they are currently trying to do.”
Those are two different claims.
Do not share the code 3456
and will read the words, because they read left to right. The code should be in the same font as the rest of the text.
I’m an avid reader. But there are limits to what I can process, and our world has become so full of noise that it has become a coping strategy for brains to selectively ignore stuff if they feel it’s not important at the moment. That effect becomes even more pronounced as the brain deteriorates with age.
And more so if you receive them constantly.
But of course, you are entitled to your opinion, even if it's wrong.
(The OP says one-time codes are worse than passwords. In the case of phishing, passwords fail the same way as one-time codes.)
I was also being sarcastic/provocative in the previous comment, saying the GOOD site always includes a warning with the code, making the attack impossible. A variation of the attack is very widely used by phone scammers: "Hello, we are updating the intercom in your apartment block. Please tell us your name and phone number. OK, you will receive a code now; tell it to us." Yet many online services and banks still send one-time codes without a warning to never share them!
The phishing point may also be used in defence of one-time codes: if the GOOD service were using passwords instead of one-time codes, BAD could just initiate a phishing attack, redirecting the user to a fake login page; people today are used to the "Login with" flow.
1) User goes to BAD website and signs up (with their user and password). BAD website captures the user and password
2) BAD website shows a fake authentication error and redirects to the GOOD website. The user is not very likely to notice.
3) BAD uses user and password to login to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
OK, with a password manager the user is more likely to notice they are on the BAD website. Is that the advantage?
Passkeys are stronger here because you can’t copy and paste a passkey into a bad website.
If the attacker's doing this to thousands of accounts - which I'm sure they are - they're going to be stealing accounts for free just by guessing.
I wrote up a security report and submitted it and they said that I hadn't sufficiently mathematically demonstrated that this is a security vulnerability. So your only option is to get spammed and hope your account doesn't get stolen, I guess.
You can enable it on account.microsoft.com > Account Info > Sign-in preferences > Add email > Add Alias and make it primary. Then click Change Sign-in Preferences, and only enable the alias.
I had to make my Outlook email primary again on my Microsoft account, unfortunately, because of how I use OneDrive. I send people share invitations and there are scenarios (or at least there were the last time I checked) where sending invitations from the primary account email is the only way to deliver the invite. If your external email alias is primary, they'll attempt to send an email from Outlook's servers that spoofs the alias email :/
...those will get "drive by" attacks no matter what.
Interesting that they're letting you alias it back to "coolkid5674321" again...
With the alias I no longer have this issue.
I guess the fix for this would be exponential backoff on failed attempts instead of a static quota of 4 a day?
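One way to sketch that backoff, with illustrative names and delay parameters (this is not any particular framework's API):

```python
import datetime

class CodeAttemptLimiter:
    """Exponential backoff per account: after each failed code
    attempt, double the wait before the next attempt is allowed."""

    def __init__(self, base_delay_s: int = 1, max_delay_s: int = 24 * 3600):
        self.base = base_delay_s
        self.max = max_delay_s
        self.failures = {}  # account -> (fail_count, time_of_last_failure)

    def allowed(self, account: str, now: datetime.datetime) -> bool:
        if account not in self.failures:
            return True
        count, last = self.failures[account]
        # 1s, 2s, 4s, ... capped at max_delay_s (a day by default)
        wait = min(self.base * 2 ** (count - 1), self.max)
        return (now - last).total_seconds() >= wait

    def record_failure(self, account: str, now: datetime.datetime) -> None:
        count, _ = self.failures.get(account, (0, now))
        self.failures[account] = (count + 1, now)

    def record_success(self, account: str) -> None:
        self.failures.pop(account, None)
```

After roughly 20 consecutive failures the delay hits the daily cap, so a guessing attacker gets far fewer than a flat 4 tries per day, while a legitimate user who fails once barely notices.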
A 6-digit code: 10^6 = 1,000,000 possibilities
125,000 accounts x 4 attempts per account per day = 500,000 attempts per day
---
1-(1-1/1,000,000)^500,000 ≈ 39%
So every day they have a roughly 39% chance of success at 125,000 accounts.
---
At a million accounts:
1-(1-1/1,000,000)^(4×1,000,000) ≈ 98%
Pretty close to 1 account per day
Off by a factor of 4 but the concept stands.
---
And 125k accounts will be close to guaranteed to getting you one each week:
1-(1-1/1,000,000)^(7×4×125,000) ≈ 97%
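The figures above can be checked directly; a quick sketch using the thread's stated assumptions (10^6 possible codes, 4 guesses per account per day):

```python
# Chance that a single guess at a 6-digit code misses
p_fail = 1 - 1 / 1_000_000

def p_at_least_one(attempts: int) -> float:
    """Probability that at least one of `attempts` independent
    guesses hits a valid code."""
    return 1 - p_fail ** attempts

print(f"{p_at_least_one(500_000):.0%}")          # 125k accounts x 4/day
print(f"{p_at_least_one(4_000_000):.0%}")        # 1M accounts x 4/day
print(f"{p_at_least_one(7 * 4 * 125_000):.0%}")  # 125k accounts over a week
```

This reproduces the 39%, 98%, and 97% figures quoted above.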
> Pretty close to 1 account per day
No, this means there is a 98% chance you get _at least_ 1 account.
`1-1/1,000,000` is the probability you fail 1 attempt. That probability to the 4millionth is the probability you fail 4 million times in a row. 1 minus _that_ probability is that the probability that you _don't_ fail 4 million times in a row, aka that you succeed at least once.
The expected number of accounts is still number of attempts times the probability of success for 1 try, or: 4 accounts.
Imagine the extreme case, where they pinged one million accounts and then tried the same code (123456) for each one. Statistically, 1 of those 1,000,000 six-digit TOTP codes will probably be 123456
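The distinction between "98% chance of at least one" and the expected haul can be checked numerically (same numbers as upthread):

```python
n = 4_000_000          # 1M accounts x 4 guesses per day
p = 1 / 1_000_000      # chance a single guess hits the code

expected_accounts = n * p             # linearity of expectation
p_at_least_one = 1 - (1 - p) ** n     # chance of >= 1 success

print(round(expected_accounts))       # 4 accounts expected per day
print(round(p_at_least_one, 2))       # 0.98
```

So the attacker expects about 4 compromised accounts per day, and is almost certain (98%) to get at least one.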
Adding 2FA was the solution
I couldn't find the method they were using in the first place, because for me it always asks for the password and then just logs me in (where were they finding this 6-digit email login option?!), but this apparently blocked that mechanism completely because I haven't seen another sign-in attempt from that moment onwards. The 2FA code is simply stored in the password manager, same as my password. I just wanted them to stop guessing that stupid 6-DIGIT (not even letters!) "password" that Microsoft assigns to the account automatically...
As an example: I've disabled the email and sms MFA methods because I have two hardware keys registered.
However, as soon as my account is added to an azure admin group (e.g. through PIM) an admin policy in azure forces those to 'enabled'.
It took me a long time debugging why the hell these methods got re-enabled every so often, it boils down to "because the azure admin controls for 'require MFA for admins' don't know about TOTP/U2F yet"
Imho it's maddening how bad it is.
Cheapest VPS is $5/month; residential proxies are $3/GB, which comes to ~$200 over 5 years.
$3 per hacked account: is that good unit economics?
Did I click “Yes” to the attack the fifth time, or was the sixth the attack? Or was it just a “hiccup” in the system?
Do I cancel the migration job and start from the beginning or roll the dice?
It’s beyond idiotic asking a Yes/No question with zero context, but that was the default MFA setup for a few hundred million Microsoft 365 and Azure users for years.
“Peck at this button like a trained parrot! Do it! Now you are ‘secure’ according to our third party audit and we are no longer responsible for your inevitable hack!”
All of the prompts users get these days in an effort to add "security" have trained users to mindlessly say "yes" to everything just so they can access the thing they're trying to do on their computer; we've never had less secure users. The cookie tracking prompts should probably take most of the blame.
I know with the last major macOS update, nearly every app is now repeatedly asking if it can connect to devices on my network. I don't know? I've been saying yes just so I don't have stuff mysteriously break, and I assume most people are too. They also make apps that take screenshots or screen record nag you with prompts to continue having access to that feature. But how many users are really gonna do a proper audit, as opposed to the amount that will just blindly click "sure, leave me alone"?
On my phone, it keeps asking if I want to let apps have access to my camera roll. Those stupid web notifications have every website asking if it can send notifications, so everyone's parents who use desktop Chrome or an Android have a bunch of scam lotto site ad notifications and don't know how to turn them off.
Using a modern password manager, like 1Password, is _easier_, safer, and faster than the stupid email-token flow. It takes a little bit of work and attention at first to set it up across a couple of devices and verify it works... but it's really about the same amount of effort as keeping track of a set of keys for your house, car, and maybe a workplace.
If you make a copy of a door key when you move into a new place, you test the key before assuming it works. Same thing with a password manager. Save a password on your phone, test it on a different device, and verify the magic sync works. Same as a key copier or some new locks a locksmith may install.
Humans can do this. You don't need to understand crypto or 2fa, but you can click 'create new password' and let the app save some insanely secure password for a new site. Same with a passkey, assuming you don't save to your builtin device storage that has some horrible, hidden user interface around backing that up for when your phone dies.
And the irony is the old flow just works better! You let the password manager do the autofill, and it takes a second or two, assuming there is an email _and_ a password input. Passkeys can be even faster.
I'm as frustrated about this as you are, but there is a large class of people who will not or can not understand and implement the password-manager workflow.
Of the people I know who are not in a tech career i'd say about 80% have nothing but contempt and ignorant fatalism toward security. The only success I've had is getting one older relative to start writing account credentials down in a little paper notebook and making sure there are numbers and letters in the passwords.
I don't know the general situation, but, at least in our small town, people would go to the phone service shop just for account setup and recovery, since it's just too complicated. Password managers and passkeys don't make things simpler for them either; I've never successfully conveyed the idea of a password manager to a non-tech person, and the passkey is somehow even harder to explain. From my perspective it's both the mental model and the extra, convoluted UX that's very hard for them to grasp.
Until one day we come up with something intuitive for general audience, passwords and the "worse" one-time code will likely continue to be prominent for their simplicity.
It’s actually worse, since now either the email account or the password gets you in, vs. just the email account.
I disagree. The problem with the magic code is that you've trained the user to automatically enter the code without much scrutiny. If one day you're attempting to access malicious.com and you get a google.com code in your email, well you've been trained to take the code and plug it in and if you're not a smarty then you're likely to do so.
In contrast, email password recovery is an exception to the normal user flow.
I've got a little generic login tool that bits I write myself use for login, using this method, but it is not for anything sensitive or otherwise important (I just want to identify the user, myself or a friend, so correct preferences and other saved information can be applied to the right person, and the information is not easily scraped) - I call it ICGAFAS, the “I Couldn't Give A Factor” Auth System to make it obvious how properly secure it isn't trying to be!
Another issue that email based “authentication” like this (though one for the site/app admins more than the end user) has is the standard set of deliverability issues inherent with modern handling of SMTP mail. You end up having to use a 3rd party relay service to reduce the amount of time you spend fighting blocklists as your source address gets incorrectly ignored as a potential spam source.
Was about to post just this. This is the flow they use for account recovery so it's the weakest link in the chain anyway.
If you use sentences instead of randomly generated characters, the entropy (in bits/character) is lower, so 100 characters might well make sense.
Also, only developers who have no idea what they're doing will feed plain-text passwords to their hasher. You should be peppering and pre-digesting the passwords, and at that point bcrypt's 72-character input limit doesn't matter.
It's easy for somebody who knows this to fix bcrypt, but silently truncating the input was an unforced error. The fact that it looks like and was often sold as the right tool for the job but isn't has led to real-world vulnerabilities.
It's a classic example of crypto people not anticipating how things actually get used.
(Otherwise, though, I agree)
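The bits-per-character comparison can be made concrete with a quick sketch; the 1.5 bits/char figure for English text is a commonly cited rough estimate, assumed here:

```python
import math

def random_string_bits(length: int, alphabet_size: int = 62) -> float:
    """Entropy of a uniformly random string: length * log2(alphabet)."""
    return length * math.log2(alphabet_size)

ENGLISH_BITS_PER_CHAR = 1.5  # rough estimate for natural-language text

# A 16-char random [a-zA-Z0-9] password vs. a 100-char English sentence:
print(round(random_string_bits(16)))         # ~95 bits
print(round(100 * ENGLISH_BITS_PER_CHAR))    # ~150 bits
```

So a 100-character sentence plausibly beats a 16-character random password on total entropy, even though its per-character entropy is far lower.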
You do not have to do any transformations on the input when using Argon2, while you must transform the input before using bcrypt. This was, again, an unnecessary and dangerous (careless) design choice.
KDF(password, salt XOR pepper, ...)
KDF(password + pepper, salt, ...)
KDF(password, AES128(salt, pepper), ...)
KDF(HMAC-SHA256(password, pepper), salt, ...)
...
And no, you cannot always pepper. To use a pepper effectively, you have to have a secure out-of-band channel to share the pepper. For a lot of webapps, this is as simple as setting some configuration variable. However, for certain kinds of distributed systems, the pepper would have to be shared either in the same way as the salt or completely publicly, defeating its purpose. Largely these are architectural/design issues too (and in many cases, bcrypt is also the wrong choice, because a KDF is the wrong choice). I already alluded to the Okta bcrypt exploit, though I admit I did not fully dig into the details.
The HMAC-SHA256 construction I showed above, and similar techniques, accomplishes both transforming the input and peppering the hash. However, the others don't transform the input at all or, in one case, transform it in a way even worse for bcrypt's use.
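As a sketch of the HMAC-SHA256 pre-digest construction, using the stdlib scrypt as a stand-in KDF (with bcrypt you would feed it the fixed-length hex digest, which also sidesteps the 72-byte truncation); the pepper value and function names are illustrative:

```python
import hashlib
import hmac
import os

# Server-side secret, kept out of the database that stores the hashes.
PEPPER = b"illustrative-server-side-secret"

def prehash(password: str) -> bytes:
    """Pre-digest the password with a keyed hash, so the KDF never
    sees the raw (possibly very long) input and the hash is peppered."""
    mac = hmac.new(PEPPER, password.encode(), hashlib.sha256)
    return mac.hexdigest().encode()  # always 64 ASCII bytes

def hash_password(password: str, salt: bytes) -> bytes:
    # scrypt stands in for bcrypt here; parameters are illustrative.
    return hashlib.scrypt(prehash(password), salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)
# Verification recomputes with the same salt and compares.
assert stored == hash_password("correct horse battery staple", salt)
```

Note that `prehash` maps any input, even a 200-character passphrase, to a fixed 64-byte string, so no KDF input limit is ever hit.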
If your password was 123lookatme, you could type 123lookaLITERALLYANYTHING and it would succeed.
Never mind that pretty much all services treat the second factor as more secure than my 20-character random password saved in a local password safe. And those second factors are, let's see, plain text over SMS, plain text over the internet to an email address, etc., etc., etc.
I'm in the rental market right now, and Zillow not only has a log-in for the app, but to read messages in your inbox, you have to MFA again each time, and the time-out period is about an hour.
We're being annoyed to death.
This is madness.
Nothing like this could happen with any mainstream mail service like Gmail, where it's officially advertised that the accounts could never be reused.
The worst part about SMS is that not only is there the potential to be locked out permanently, but also you never know whether or not the service would allow login or password reset via SMS, thus, you never know if you're opening yourself to account takeover.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
- You are in front of the attacker's site, which looks like a legitimate site where you have an account (you arrived there in any way: WhatsApp link, SMS, email, whatever). The address bar of your browser probably shows something like microsoft.minecraft-softwareupdate.com, but the random user can't tell it's fake. The page asks you to log in (in order to steal your account).
- You enter the email address to login. They enter your email address in the legitimate site where you actually have an account.
- Legitimate site (for example Microsoft) sends you an email with a six digit code, you read the code, it looks legit (it is legit) and you enter it in the attacker site. They can now login with your account.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
Replace "can simply send your email address" with "can simply input your email address". An attacker inputs your email at login.example.com, which sends a code to your email. The attacker then prompts you for that code (e.g. via a phishing SMS), so you pass them the code that lets them into the account.
The article is not advocating against e-mail-driven URL-based password reset/login, whereby the user doesn't enter any code, but must follow a URL.
The six digit code can be typed into a phony box put up by a malicious web site or application, which has inserted itself between the user and the legitimate site.
The malicious site presents a phony UI prompting the user to initiate a coded login. Behind the scenes, the malicious site does that by contacting the genuine site and provoking a coded login. The user goes to their inbox and copies the code into the malicious site's UI. The site then uses it to obtain a session with the genuine site, taking over the user's account.
An SSL-protected URL cannot be so easily intercepted. The user clicks on it, and it goes to the domain of the genuine site.
1. It's pretty phishable. I think this is mostly solved, or at least greatly mitigated, by using a Slack-style magic sign-in link instead of a code that you have the user manually enter into the trusted UI. A phisher would have to get the user to copy-paste the URL from the email into their UI, instead of clicking the link or copy-pasting it into the address bar. That's an unusual enough action that most users probably won't default to doing it (and you could improve this by not showing the URL in HTML email, instead having users click an image, but that might cause usability problems). It's not quite fully unphishable, but it seems about as close as you can get without completely hiding the authentication secret from the user, which is what passkeys, Yubikeys, etc., do. I'd love to see the future where passkeys are the only way to log into most websites, but I think websites are reluctant to go there as long as the ecosystem is relatively immature.
2. It's not true multi-factor authn because an attacker only needs to compromise one thing (your inbox) to hijack your account. I have two objections to this argument:
a. This is already the case as long as you have an email-based password reset flow, which most consumer-facing websites are unwilling to go without. (Password reset emails are a bit less vulnerable to phishing because a user who didn't request one is more likely to be suspicious when one shows up in their inbox, but see point 1.)
b. True multi-factor authn for ordinary consumer websites never really worked, and especially doesn't work in the age of password managers. As long as those exist, anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password. Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere, the latter case is far more common in practice, and only in the former case is it truly knowledge-based. Websites should instead authenticate only the device, and delegate to the device's own authentication system (which includes physical possession and likely also a lock secret and/or biometric) the task of authenticating the user in a secure multi-factor way.
* Mobile email clients that open links in an embedded browser. This confuses some people. From their perspective they never stay logged in, because every time they open their regular browser they don’t have a session (because it was created in the embedded browser) and have to request a login link again.
* Some people don’t have their email on the device they want to log in on.
Sending codes solves both of these problems (but then has the issues described in the article, and both share all the problems with sending emails)
The reason why magic links don't usually work across devices/browsers is to be sure that _whoever clicks the link_ is given access, and not necessarily whoever initiated the login process (who could be a bad actor)
If done naively with a simple magic link, yes.
> and if the user happens to click the link they've just given the attacker access to their account
Worse: if the user's UA “clicks the link” by making the GET request to generate a preview. The user might not even have opened the message for this to happen.
> Wouldn't that be incredibly insecure?
It can be mitigated somewhat by making the magic link go to a page that invites the user to click something that sends a post request. In theory the preview loophole might come into play here if the UA tries to be really clever, but I doubt this will happen.
Another option is to give the user the choice to transfer the session to the originating UA, or stay where they are, if you detect that a different UA is used to open the magic link, but you'd have to be careful wording this so as to not confuse many users.
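The confirm-on-POST mitigation can be sketched in a few lines: the GET handler only shows a confirmation page and never consumes the token, so a preview-fetching client can't burn the link; only an explicit POST does. All names here are illustrative, not a real framework's API:

```python
import secrets

pending_tokens = {}  # token -> email; single use

def issue_magic_link(email: str) -> str:
    token = secrets.token_urlsafe(32)
    pending_tokens[token] = email
    return f"https://example.com/login/confirm?token={token}"

def handle_get(token: str) -> str:
    """Safe for prefetchers/previewers: looks, but does not consume."""
    if token in pending_tokens:
        return "<form method=post><button>Finish signing in</button></form>"
    return "Link expired"

def handle_post(token: str):
    """Only an explicit user action (the POST) consumes the token."""
    email = pending_tokens.pop(token, None)
    return f"session-for-{email}" if email else None
```

Any number of GETs (from mail scanners, link previews, antivirus crawlers) leave the token intact; the first POST creates the session and invalidates the link.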
You mean something like a popover preview that appears when the user hovers over a link?
Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
That, or a background process that visits links to check for malware before the user even sees the message.
> Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
If sending just HTML, you could include rel="nofollow" in the a tag to discourage such things, but there is no way of enforcing that, and no way of including it at all if you are sending plain-text messages. This has been a problem for single-use links of various types as well. So yes, but not reliably, so effectively no.
Off the cuff suggestions for improving UX in secure flows just make things worse.
Magic links are better than codes, but they don't work well for cross-device sign-in. What Nintendo does is pretty great: If I buy something on my switch, it shows me a QR code I take a picture of with my phone and complete the purchase there.
I agree it is "mostly solved" in that there are good examples out there, but this is a long way from the solution being "best practices" that users can expect the website/company to take security seriously.
> a. This is already the case as long as you have an email-based password reset flow
I hard-disagree:
If I get an email saying "Hi you are resetting your password, follow these directions to continue" and I didn't try to reset my password I will ignore that email.
If I have to type in random numbers from my email every few days, I'm probably going to do that on autopilot.
These things are not the same.
> anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password.
I do not know what kind of mickey-mouse devices you are using, but this is just not true on any device in my house.
Accessing the saved-password list on my computer or phone requires an authentication step, even if I am logged-in.
I also require second-authentication for mail and a most other things (like banking, facebook, chats, etc) since I do like to let my friends just "use my phone" to change something on spotify or look up an address in maps.
> Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere
They can't know that anyway, and pretending they do puts people at risk of sophisticated attackers (who can recover the passkey) and unsophisticated incompetence on behalf of the website (who just send reset links without checking).
> Websites should instead authenticate only the device, and delegate to the device's own authentication system
I disagree: Websites have no hope of authenticating the device and are foolishly naive to try.
except I'm a user, not a device
>>I<< want to be authenticated, not my specific device that I'm going to switch at some point
If you enter your username, password, and totp, and the website tells you you've logged in from some device halfway across the planet you've never heard of, you probably have a problem.
If what you mean is that you have no online accounts then you are a few steps ahead of me. I will get there eventually but have some things to take care of first. Congrats on disconnecting from the internet though. I assume this site is your last holdout? I am envious if so. This site will also be my last online presence.
Consider this scenario: the user initiates login on your site.
1. You generate a code_verifier (random) and a code_challenge = SHA256(code_verifier) and store the code_verifier in the browser session (e.g., local/session storage, secure cookie, etc.).
2. You send the code_challenge to the server along with the email address.
3. Server sends the email with a login code to the user, recording the challenge (associated with the email).
4. User receives the email and enters the code on the same device/session.
5. Client sends the code + code_verifier to the server. Server verifies that the code is correct and that SHA256(code_verifier) == stored code_challenge.
The end result is that the code cannot be used from another device or browser unless that device/browser initiated the flow and has the code_verifier.
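The steps above can be sketched in a few lines of Python. This is a hedged sketch, not any particular framework's API: function names, the in-memory "storage", and the code format are all placeholders.

```python
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Browser side: create the verifier/challenge pair when the flow starts.
def start_flow():
    code_verifier = b64url(secrets.token_bytes(32))   # stays in the browser session
    code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())
    return code_verifier, code_challenge              # only the challenge goes to the server

# Server side: check both the emailed code and the challenge binding.
def verify(submitted_code, submitted_verifier, stored_code, stored_challenge):
    code_ok = secrets.compare_digest(submitted_code, stored_code)
    binding_ok = secrets.compare_digest(
        b64url(hashlib.sha256(submitted_verifier.encode()).digest()),
        stored_challenge,
    )
    return code_ok and binding_ok
```

A phished code alone fails verification, because the attacker's browser never held the code_verifier. (This is essentially PKCE from OAuth, RFC 7636, repurposed for email codes.)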
A combination of the above and a login link might help. But ultimately, the attacker is relying on the gullibility of the user: the user has to fail to check the URLs.
Assuming the bot knows to send a code_challenge up front and then submit the code_verifier together with the verification code.
But then again, GOOD can also just ensure that their OTP can only be completed from the GOOD domain/origin. That would shore things up at least.
An attacker can register a domain like office375.com, clone Microsoft's login page, and relay user input to the real site. This works even with various forms of MFA, because the victim willingly enters both their credentials and second factor into a fake site. Push-based MFA is starting to show IP and location data, but a non-technical user likely won't notice or understand the warning, and a sophisticated attacker will just use a VPN matching the user's location anyway.
Passkeys solve this problem through origin enforcement. Your browser will not let you use a passkey for an origin that the passkey was not created for. If they did, you could relay those challenges as well (still better than user + pass as the challenges are useless after first use).
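That origin binding also surfaces on the server side: during WebAuthn verification, the relying party checks the `origin` field inside the browser-produced `clientDataJSON`. Here is a simplified sketch of just that one check (real verification also covers the challenge, signature, RP ID hash, and more):

```python
import base64
import json

def origin_matches(client_data_json_b64url: str, expected_origin: str) -> bool:
    # clientDataJSON is assembled by the browser, not by the page's scripts,
    # so a phishing site cannot forge the origin the ceremony actually ran on.
    padded = client_data_json_b64url + "=" * (-len(client_data_json_b64url) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return client_data.get("origin") == expected_origin
```

A credential relayed through office375.com would carry that origin in clientDataJSON and fail this check at the real site.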
While the premise is correct -- it's easy to complain but the author also provides zero recommendations on what is a better form of MFA.
It's about email as single factor auth, which has become very trendy of late. You just enter your email address, no password, and the email you a code. Access to your email is the only authentication.
That's not MFA. MFA stands for multi-factor authentication. If the authentication only requires a code sent to an email OR phone number, that's just a single factor.
- Enter an email address or phone number
That's not just email, that's also SMS.
I must be in the wrong bubble, I have not encountered any site that does this since the 2000s. It was a minor trend around then IIRC.
Patreon can do that too, depending on how you sign up.
The entire email login flow is completely retarded. It’s not even secure.
Email link to reset is better, email link + another auth (usually sms) is even better.
It's super odd if you land on facebook.com-profilesadfg.info/login thinking it's just Facebook, try to log in, and get a "password reset" email. Most people would be confused, as they don't want to reset their password.
Having a code for every login means that, apart from the wrong website URL, everything else looks 100% legit.
It’s about single-factor, passwordless logins using a one-time token
The very first bullet point states: Enter an email address or phone number
That insinuates email OR SMS.
It doesn't just mention email only.
The authentication factors of a multi-factor authentication scheme may include:
1. Something the user has: Any physical object in the possession of the user, such as a security token (USB stick), a bank card, a key, a phone that can be reached at a certain number, etc.
2. Something the user knows: Certain knowledge only known to the user, such as a password, PIN, PUK, etc.
3. Something the user is: Some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.
Email and phone are both in category one, comprising only one unique factor.
If you have access to the phone, you can log in. OR if you have access to the email account, you can log in.
You don't need to know the user's password, you only need access to one of these inboxes and nothing else. One-factor authentication, but worse, because there are multiple attack surfaces.
I appreciate that most people log in and stay logged in, but I frequently switch Spotify accounts, and I use passwords to log in. Instead of letting me choose between a password and a 6-digit code, every time I try to change accounts a needless 6-digit code is generated and sent to a shared inbox: a huge waste of resources and storage, in addition to being a security concern, as flagged throughout this thread.
Try multi-account containers so no need to log out? (Or Island on Android?)
With email, pasting the number into a random website is the expected flow, and there is basically no protection (some phones have basic protections for SMS auth, but even that only works if you are signing in on the same device).
It's "terrible" because the author can describe exactly one phishing vector?..
Have you ever tried resetting a password before? Passwords have a similar phishing vector, plus many other problems that magic links and one-time login codes don't have.
If six-digit login codes are less secure than passwords, the reasons why are certainly not found in this article.
(I do agree with you about backups being essential, but my conclusion was "the idea is fundamentally flawed," rather than "it's one tweak away from greatness.")
https://github.com/keepassxreboot/keepassxc/issues/10407
Of course, they might just block you for not being on a whitelist of approved providers anyway.
> I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
> The reason we're having a conversation about providers being blocked is because the FIDO Alliance is considering extending attestation to cover roaming keys.
> From this conversation it sounds like the FIDO Alliance is leaning towards making it possible for services to block roaming keys from specific providers.
The entire issue is about doing the minimum possible of not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices. How is that unreasonable?
And by the way, if and when something like that does happen, what's the user supposed to do if they suddenly find their passkey provider has been blocked?
> To be very honest here, you risk having KeePassXC blocked by relying parties (similar to #10406).
From the linked https://github.com/keepassxreboot/keepassxc/issues/10406:
> | no signed stamp of approval from on high
> see above. Once certification and attestation goes live, there will be a minimum functional and security bar for providers.
> | RP's blocking arbitrary AAGUIDs doesn't seem like a thing that's going to happen or that can even make a difference
> It does happen and will continue to happen because of non-spec compliant implementations and authenticators with poor security posture.
Is your argument that despite being doused with gasoline I can't complain because I'm not currently on fire?
(You have every right to douse yourself in gasoline. No one is taking that away from you. Just stay away from everyone else.)
KeePassXC is at risk of being blocked for making it easy to back up the passkeys. I don't see where that's been disproven or explained, other than saying "well attestation isn't enforced yet" -- that is, the metaphorical gasoline (provider AAGUIDs) hasn't yet been ignited (blocking of provider AAGUIDs)
> The entire issue is about doing the minimum possible of not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices.
I don't disagree with this in principle, but it does warn you and realistically, what is the threat model here? It seems more like a defense-in-depth measure rather than a 5-alarm fire worthy of threatening to blacklist a provider. Maybe focus energy instead on this? (3+ year workstream now I guess?)
>> Sounds like the minimal export standard for portability needs to be defined as well.
> This is all part of the 2+ year workstream.
--
The more I get exposed to this topic, the less I'm convinced it was designed around people in the real world, e.g. https://news.ycombinator.com/item?id=44821601. Sure is convenient that it's so so easy to get locked into a particular provider, though!
Perhaps there ought to be a well-known, "ACME" password-changing API such that a password manager could hypothetically change every password for every service contained in an automated fashion.
It's copied over from FIDO hardware keys where each device type needed to be identifiable so higher tier ones could be required or unsecured development versions could be blocked.
What a crock, to not bother coming up with a way to make passkeys portable and then threaten to ban providers who actually thought about how humans might use them in the real world
Because these passkeys are stored in the Cloud and synced to your providers account (i.e. Google/Apple/1Password etc), they can't support attestation. It leads to a scenario where Relying Parties (the apps consuming the passkey), cannot react to incidents in passkey providers.
For example: If tomorrow, 1Password was breached and all their cloud-stored passkeys were leaked, RP's have no way to identify and revoke the passkeys associated with that leak. Additionally, if a passkey provider turns out to be malicious, there is no way to block them.
https://www.ledger.com/blog/strengthen-the-security-of-your-...
https://github.com/LedgerHQ/app-security-key/issues/6
https://github.com/LedgerHQ/app-security-key/issues/7
Is Trezor implementation more mature?
What's missing is a standardized format for the export.
With a username and password field, these are automatically correctly filled by Safari.
With sites that only offer an email field, I have to manually fill it.
(Note that I tend to use different emails for different sites; if you only ever use one email this might not be a problem).
Most leaked passwords online come initially from leaked hashes, which bad actors use tools like hashcat to crack.
If your user has a password like "password123" and the hash gets out, then the password is effectively out too, since people can easily lookup the hash of previous cracked passwords like "password123".
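A toy illustration of that lookup (unsalted SHA-256 for brevity; a real dump uses whatever hash the site used, and tools like hashcat run this kind of matching at billions of guesses per second):

```python
import hashlib

# A miniature "already cracked" table, of the kind attackers
# accumulate from past breaches and wordlists.
cracked = {hashlib.sha256(p.encode()).hexdigest(): p
           for p in ["password123", "letmein", "qwerty"]}

# A hash leaks from some site's database...
leaked_hash = hashlib.sha256(b"password123").hexdigest()

# ...and the weak password falls out instantly, no brute force needed.
print(cracked.get(leaked_hash))  # prints: password123
```

This is why salting and slow hashes matter: they prevent exactly this kind of precomputed, shared-across-breaches lookup.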
So even 7 years ago bcrypt was only the 3rd recommended option.
"But, seriously: you can throw a dart at a wall to pick one of these... In practice, it mostly matters that you use a real secure password hash, and not as much which one you use.
This is not good for the user.
Suggestions welcome if anyone has them.
FWIW when I was researching this for my own accounts I believe I saw in passing that someone had figured out a way to extricate the TOTP secret from VIP Access to use in a standard TOTP app. I didn't look into it much though since none of my current accounts require it and it just seemed something to avoid.
pip install python-vipaccess looks like it'll provision a new token, from which you can then use the secret in a regular TOTP app.
Wonder if that could be used to sidestep the proprietary app
looks like you can!
As for the method itself.. IMO they're certainly phishable, but I don't think they're any more phishable than a typical username/password prompt.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
An attacker can also simply present a login prompt, say a Google-looking one, and a user will just enter their credentials.
This is why phishing-resistant authentication is the one true path forward.
If anyone would be interested I could write it up? I was surprised what a nice user flow it is and how easy it was to achieve.
Email magic links are more phishing resistant - the email contains a link that authenticates the device where the link was clicked. To replicate the same attack, the user would have to send the entire link to the attacker, which is hopefully harder to socially engineer.
But magic links are annoying when I want to sign in from my desktop computer that doesn't have access to my email. In that case OTP is more convenient, since I can just read the code from my phone.
I think passkeys are a great option. I use a password manager for passkeys, but most people will use platform-provided keys that are stuck in one ecosystem (Google/Apple/MS). You probably need a way to register a new device, which brings you back again to email OTP or magic link (even if only as an account recovery option).
If they actually are passwords, yes, my password manager is a better UX than having to fetch my phone, open SMS, wait for the SMS, like good grief it's all so slow.
(In the 2FA form, I'd prefer TOTP over SMS-OTP, but the difference is less there.)
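For reference, TOTP itself is tiny, which is part of why it's so widely portable between apps. A stdlib-only sketch of RFC 6238 with the common defaults (SHA-1, 30-second steps, 6 digits):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"
```

The secret shown in the test is the RFC 6238 reference key ("12345678901234567890" in base32); any compliant authenticator app produces the same codes from the same secret, which is what makes vendor lock-in on TOTP mostly artificial.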
Passkeys would be so much easier, convenient and so much more secure. I really don't understand why they go for this.
I’m pretty sure we can prevent this by issuing some kind of proof of agreement (with sender and recipient info) through email services. Joining a service becomes submitting a proof to the service, and any attempt to contact the user from the service side must be sealed with the proof. Mix in some signing and HMAC and this should be doable. I mean, IF we really want to extend the email standard.
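The proposal is underspecified, but the MAC piece at least is simple to sketch. Everything here is assumed: the key name, how the key is established at sign-up, and how mail clients would ever learn to check the seal.

```python
import hashlib
import hmac

# Hypothetical per-signup secret, negotiated between the user's mail
# provider and the service when the user joins (the hand-wavy part).
AGREEMENT_KEY = b"negotiated-at-signup"

def seal(sender: str, recipient: str) -> str:
    # The "proof of agreement": a MAC binding this sender/recipient pair.
    msg = f"{sender}|{recipient}".encode()
    return hmac.new(AGREEMENT_KEY, msg, hashlib.sha256).hexdigest()

def mail_is_sealed(sender: str, recipient: str, tag: str) -> bool:
    return hmac.compare_digest(seal(sender, recipient), tag)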
How does this scheme stop you from putting a legitimate code from a legitimate sender into an illegitimate website?
https://support.microsoft.com/en-us/account-billing/how-to-g...
1. Mobile phone numbers are not secure. SIM jacking is a thing, and a 6 digit code is not impossible to guess (it's only 1 in a million).
2. Sending codes/links via email is problematic as described by the article.
3. Inconsistent "best practices" confuse users, and frustrate them.
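On point 1: how guessable a 6-digit code is depends entirely on how many attempts the server tolerates. Rough math, assuming uniformly random codes and no lockout within the attempt budget:

```python
def guess_probability(attempts: int, digits: int = 6) -> float:
    """Chance of hitting a uniformly random n-digit code within `attempts` tries."""
    space = 10 ** digits
    return 1 - (1 - 1 / space) ** attempts

# One try really is about 1-in-a-million...
print(f"{guess_probability(1):.0e}")
# ...but a lax rate limit changes the picture quickly.
print(f"{guess_probability(1000):.4f}")
```

Which is why strict attempt limits and short expiry windows are load-bearing for these codes, not optional hardening.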
1. authentication via password: accounts stolen by criminals and then inaccessible to the user.
2. authentication via passkey: accounts lost by users because passkeys have friction, to say the least, when devices are lost/stolen/transferred.
It seems that big providers would much rather scenario 2.
Just like sending large files over the internet.
They do provide Google sign-in, but I've had issues with Google sign-in during traveling too often to consider it a legitimate option.
It's also terrible UX. Emails don't always arrive right away. (Especially if you've enabled greylisting.)
A passphrase is basically like a password in the sense that I can lose it, but it's not like a password in the sense that I can actually memorise it. (Or rather, all of them)
I prefer my passwordstore workflow.
I remember two passwords; the rest is kept safe for me and unlocked when I need it.
It's not perfect, but it's by far the least bad solution of them all.
The "Simply" music apps use a four digit code, sent by mail. And that never changes.
Easy account sharing! It is a feature!
So I asked my friend Miss Chatty [1] about it. Hopefully this will help anyone who is as confused as I was.
https://chatgpt.com/share/68947f35-0a10-8012-9ae9-adadc3df8b...
[1] Siri and Alexa get to have cool names, so why can't ChatGPT?
Let’s be honest all forms of auth suck and have pros and cons.
The real solution is detect weird logins because users cannot be trusted. That’s why we build for them!
A better workflow is to send the user a link where they can set their initial password themselves.
Here is what I do when the user logs in and email verification is needed:
1. Generate a UUID on the server.
2. Save the UUID on the client using the Set-Cookie response header.
- The cookie value is symmetrically encrypted and authenticated via HMAC or AES-GCM by the server before it is set, such that it can only be decrypted by the server, and only if the cookie value has not been tampered with. This is very easy to do in hapi.js and any other framework that has encrypted cookies.
- Use all the tricks to safeguard the cookie from being intercepted and cloned. For example, use a name with the __Host- prefix and these attributes: Secure; HttpOnly; SameSite=Lax;
3. The server sends an email to the user with a link like https://site.com/verify?code=1234, where 1234 is the UUID.
4. The user clicks the link and has their email verified.
- When the link is clicked, the browser sends the Cookie header automatically, the server decrypts it and compares it to the UUID in the URL and if that succeeds, the email has been verified. Again, this is very easy in hapi.js, as it handles the decryption step.
- Including the UUID in the magic link signals that there is _supposed_ to be a cookie present, so if the cookie is missing or it doesn't match, we can alert the user. It also proves knowledge of the email, since only the email has access to the UUID in unencrypted form.
5. The server unsets the cookie, by responding with a Set-Cookie header that marks it as expired.
6. The server begins a session and logs the user in, either on the page that was opened via the link or the original page that triggered the verification flow (whichever you think is less likely to be an attacker, probably the former).
Note that there are some tradeoffs here. The upside is that the user doesn't need to remember or type anything, making it harder to make mistakes or be taken advantage of. The downside is that the friction of having to use the same device for email and login may be a problem in some situations. Also, some email software may open a different browser when the link is clicked, which will cause the cookie to be missing. I handle this by detecting the missing cookie and showing a message suggesting the user may need to copy-paste the link to their already open browser, which will work even if they open a new tab to do it (except for incognito mode, where some browsers use a per-tab cookie jar).
Lastly, no cookie is 100% safe from being stolen and cloned. For example, a social engineering attack could involve tricking the user into sharing their link and Set-Cookie header. But we've made it much more difficult. They need two pieces of information, each of which generally can't be intercepted, or used even if intercepted, by intermediary sites.
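A framework-agnostic sketch of steps 1 through 4 above. The comment uses hapi.js encrypted cookies; here plain HMAC signing from the Python stdlib stands in, which authenticates the cookie value but does not hide it, so treat it as a simplification:

```python
import base64
import hashlib
import hmac
import json
import secrets
import uuid

SERVER_KEY = secrets.token_bytes(32)  # in practice, loaded from server config

def make_cookie_value(pending_uuid: str) -> str:
    # Sign the UUID so the server can later confirm it minted this cookie
    # and that the value wasn't tampered with.
    payload = base64.urlsafe_b64encode(json.dumps({"uuid": pending_uuid}).encode())
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + tag).decode()

def read_cookie_value(cookie: str):
    payload, tag = cookie.encode().rsplit(b".", 1)
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted cookie
    return json.loads(base64.urlsafe_b64decode(payload))

def link_click_verifies(cookie: str, uuid_from_link: str) -> bool:
    # Step 4: compare the UUID proven by the cookie against the one in the link.
    data = read_cookie_value(cookie)
    return data is not None and hmac.compare_digest(data["uuid"], uuid_from_link)

pending = str(uuid.uuid4())           # step 1: server generates the UUID
cookie = make_cookie_value(pending)   # step 2: sent to the browser via Set-Cookie
assert link_click_verifies(cookie, pending)           # legitimate click
assert not link_click_verifies(cookie, "other-uuid")  # cookie/link mismatch
```

The cookie attributes from step 2 (__Host- prefix, Secure, HttpOnly, SameSite) still apply on top of this; signing only covers integrity, not theft.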
Roughly the same security as password login with email recovery. The only difference is that this makes the attack surface larger, because the user is frequently using email.
The only secure login is through 1. a hardware device and 2. a solution where both the user/service are "married" and can challenge each other during the login process. This way, your certificate of authentication will also check that the site you are connecting to is who it says it is.
1. There's very low consistency in implementation, so while I understand the problems passkeys solve, it seems like every vendor has chosen different subproblems of the problem space to actually implement. Does it replace your password? Does it replace MFA? Can I store multiple passkeys? Can I turn off other forms of MFA? Do I still need to provide my email address when I sign in (Github actually says No to this)?
2. The experience of passkeys was supposed to be easier and more secure than passwords for users who struggle to select good passwords, but all I've observed is: Laypeople whose passwords have never been compromised, in 20 years of computing, now deeply struggling to authenticate with services. Syncing passwords or passkeys between devices is something none of these people in my life have figured out. I still know two people in their late 20s who use a text file on their computer and Evernote to manage their passwords. What is their solution for passkeys? They don't know. They're definitely using them though. The average situation I've seen is: "What the heck is this how do I do this I guess I'll just click save passkey on this iOS prompt" and then they can never get back into that service. The QR code experience for authenticating on desktop using mobile barely works on every Windows machine I've seen.
3. There is still extremely low support among password managers for exporting passkeys. No password managers I've interacted with can do it. Instead its to my eyes become another user-hostile business decision; why should we prioritize a feature that enables our users to leave my product? "Oh FIDO has standardized the import/export system its coming" Yeah we've also standardized IPv6. Standards aren't useful until they're used. "Just create new passkeys instead of exporting" as someone who has recently tried to migrate from 1Password to custom-hosted Bit/Vaultwarden: This is the reason why I gave up. By the way, neither of these products support exporting passkeys.
It might end up being like USB-C where its horrible for the first ten years, but slowly things start getting better and the vision becomes clear. But I think if that's the case: We The Industry can't be pulling an Jony Ive Apple 2016 Macbook Pro and telling users "you have to use these things and you have no other option [1]". Apple learned that lesson. I'm also reasonably happy with how Apple has implemented Passkeys (putting aside all the lockin natural to using Apple products, at least its expected with them). But no one else learned that lesson from them.
[1] https://www.cnet.com/tech/your-microsoft-passwords-will-vani...
1. You go to evil.example.com, which uses this flow.
2. It prompts you to enter your email. You do so, and you receive a code.
3. You enter the code at evil.example.com.
4. But actually what the evil backend did was automated a login attempt to, like, Shopify or some other site that also uses this pattern. You entered their code on evil.example.com. Now the evil backend has authenticated to Shopify or whatever as you.
For the username + password hack to work, the evil.example.com would have to look like Shopify, which is definitely more suspicious than if it's just a random legitimate-looking website.
1. A password manager shouldn't be vulnerable to putting your password in a phishing site.
2. If your password is leaked, an attacker can't use it without the TOTP.
Someone who doesn't use a password manager won't get the benefits of #1, so they can be phished even with a TOTP. But they will get the benefits of #2 (a leaked password isn't enough)
Passkeys assume/require the use of a password manager (called a "passkey provider")
* Electronic mail (the technology)
* An email message
* An email address
* An email inbox
In this example they mean email address.
bank.com sends you a verification email, which you were expecting from foo.com as part of its sign-up verification process. For some batshit-crazy reason, you ignore that the email came from bank.com and not foo.com, and you type the secret code from the email into foo.com to complete the sign-up process.
And bam! foo.com got into your bank account.
Complete nonsense, but because it works 0.000000000000001% of the time in some crazy niche cases in the real world, let's talk about it.
It's similar to getting a yes/no "was this your login?" prompt in an authenticator app, but a bit less easy to social-engineer, though in turn also susceptible to brute-force attacks (similar to how TOTP is).
Though on the other hand:
- some stuff has so low need of security that it's fine (like configuration site for email news letters or similar where you have to have a mail only based unlock)
- if someone has your email they can do a password reset
- if you replace the email code with a login link you get some cross-device hurdles but fix some of the social-engineering vectors (i.e. it's like a password reset on every login)
- you still can combine it with 2FA which if combined with link instead of pin is basically the password reset flow => should be reasonable secure
=> either way, that login was designed for very low-security use cases where you also wouldn't ever bother with 2FA because losing the account doesn't matter. IMHO, don't use it for anything else :smh: