Okta – Username Above 52 Characters Security Advisory (trust.okta.com)
144 points | 2 months ago | 8 comments
fanf2
2 months ago
[-]
<< The Bcrypt algorithm was used to generate the cache key where we hash a combined string of userId + username + password. During specific conditions, this could allow users to authenticate by only providing the username with the stored cache key of a previous successful authentication. >>

https://man.openbsd.org/crypt

<< The maximum password length is 72. >>

So if the userid is 18 digits, the username is 52 characters, and the delimiters are 1 character each, then the total length of the non-secret prefix is 72, and bcrypt will drop the secret suffix.
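
A toy illustration of that arithmetic, in Python; the lengths and the single-byte delimiter are assumptions taken from the numbers above, not Okta's actual format:

    user_id  = "0" * 18    # hypothetical 18-digit userId
    username = "a" * 52    # 52-character username
    password = "hunter2"   # the secret suffix

    combined = user_id + ":" + username + ":" + password
    prefix_len = len(user_id) + 1 + len(username) + 1
    print(prefix_len)                   # 72: the non-secret prefix alone fills bcrypt's input window
    print(password in combined[:72])    # False: nothing of the password survives the 72-byte cut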

You aren’t supposed to put more than the salt and the password into trad unix password hashes.

reply
gzer0
2 months ago
[-]
Here's how I see it:

Core issue (okta's approach):

  * They concatenated userId + username + password for a cache key
  * Used BCrypt (which has a 72-byte limit)
  * The concatenation could exceed 72 bytes, causing the password portion to be truncated
Why this is problematic:

  * BCrypt is designed for password hashing, not cache key generation
  * Mixing identifiers (userId, username) with secrets (password) in the same hash
  * Truncation risk due to BCrypt's limits
Password storage should be separate from cache key generation: use a random salt plus an appropriate password hash for storage, and an HMAC or KDF with appropriate inputs for cache keys.
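
One way to read that, as a rough Python sketch; the secret, the cache shape, and the field names are all hypothetical, not Okta's actual design:

    import hmac, hashlib
    import bcrypt  # pyca/bcrypt, assumed here only for the stored password verifier

    CACHE_KEY_SECRET = b"server-side-secret"  # hypothetical key used solely for cache-key derivation

    def cache_key(user_id: str, username: str) -> str:
        # HMAC over public identifiers only; the password never touches the key
        msg = f"{len(user_id)}:{user_id}|{len(username)}:{username}".encode()
        return hmac.new(CACHE_KEY_SECRET, msg, hashlib.sha256).hexdigest()

    def check_cached(user_id: str, username: str, password: str, cache: dict):
        entry = cache.get(cache_key(user_id, username))
        if entry and bcrypt.checkpw(password.encode(), entry["password_hash"]):
            return entry["session"]
        return None  # fall back to the upstream auth server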
reply
3np
2 months ago
[-]
That should also mean that ca. 50-52 character usernames are likely easily bruteforceable, which makes the preconditions wider than those stated in the publication.
reply
fanf2
2 months ago
[-]
50 letters is 235 bits which is not at all bruteforceable.
reply
akerl_
2 months ago
[-]
They’re saying that if a username is 50 characters, only 2 characters of the password are used in the cache key. And a 2 character password is very bruteforceable.
reply
magicalhippo
2 months ago
[-]
Potentially ignorant question, why would they go for bcrypt over say HKDF[1], especially since they mix in public data like the username and potentially userid?

[1]: https://datatracker.ietf.org/doc/html/rfc5869

reply
ronsor
2 months ago
[-]
Why do we need a KDF for a cache key? Won't a normal cryptographic hash function (or its HMAC variant) suffice?
reply
fanf2
2 months ago
[-]
If the cache gets leaked, you don’t want any miscreants to be able to bruteforce passwords from the cache keys.
reply
ronsor
2 months ago
[-]
Do we need to put the password in the cache key?
reply
sebastialonso
2 months ago
[-]
Can't believe the answers you're getting. The answer's a big fat NO. If you find yourself in that situation, there's something very incorrect with your design.
reply
magicalhippo
2 months ago
[-]
So how would you design it instead?
reply
its-summertime
2 months ago
[-]

    import hashlib
    import bcrypt  # pyca/bcrypt assumed for checking the stored password hash

    key = hashlib.sha256((uuid + ":" + username).encode()).hexdigest()
    if (result := cache.get(key)):
        # verify the password against the stored hash, never against the key
        if bcrypt.checkpw(password.encode(), result.password_hash):
            return result.the_other_stuff
    # try login or else fail
reply
magicalhippo
2 months ago
[-]
Some insight into why this is good and why including the password as input in the derivation of the cache key is terrible would be appreciated.
reply
its-summertime
2 months ago
[-]
With no password in the key, it's mildly cleaner to drop entries on a password change: even if the cache never got the command to drop the key, the next login would overwrite the old key's value anyway, instead of potentially leaving one key per password that was valid in the short window around the change.

Of course, if old sessions or passwords remain valid at all around a password change, you are doing something wrong.

My personal wondering is: considering a KDF is meant to be expensive, why is IO more expensive to the point it needs a cache?

reply
magicalhippo
2 months ago
[-]
Thanks, good points.

> why is IO more expensive to the point it needs a cache

The advisory mentions it's only exploitable if the upstream auth server is unresponsive. So it seems to be mainly for resilience.

reply
ptcrash
2 months ago
[-]
If you want to validate a username/password authn attempt against a cache, then yes, the username and password have to be somewhere in the mix.
reply
magicalhippo
2 months ago
[-]
If the user changes their password, it invalidates the cache entries automatically, so you avoid stale credentials exploiting the cache.

At least that's my immediate thought; I could be wrong.

reply
paulddraper
2 months ago
[-]
Isn't the whole point of using bcrypt that you can't bruteforce the password?
reply
a-dub
2 months ago
[-]
why would someone in 2024 reach for bcrypt for building a secure hash key?
reply
heliosyne
2 months ago
[-]
Because bcrypt is still viable. Its cost factor is easily scaled to commodity performance, keeping the attack cost high.

The main attack vector these days is GPU-based compute. There, SHA* algorithms are particularly weak because they can be so efficiently computed. Unlike SHA algorithms, bcrypt generates high memory contention, even on modern GPUs.

Add in the constraints of broad support, low "honest use" cost, and maturity (extensive hostile cryptanalysis), and bcrypt remains one of the better choices even 25 years later.

That said, bcrypt's main limitation is that it has a low memory-size cost. There are some newer algorithms that improve on bcrypt by increasing the memory cost to more than is practical even for FPGA attacks.

More importantly, bcrypt didn't actually fail here. The vulnerability happened because Okta didn't use it correctly. All crypto is insecure if you use it wrong enough.
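
For reference, a minimal sketch of that cost-factor knob, assuming the Python pyca/bcrypt package; the rounds value is illustrative:

    import bcrypt

    # each +1 to rounds doubles the work factor; pick the largest value
    # your login-latency budget tolerates on your own hardware
    salt = bcrypt.gensalt(rounds=12)
    hashed = bcrypt.hashpw(b"hunter2", salt)
    assert bcrypt.checkpw(b"hunter2", hashed)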

reply
a-dub
2 months ago
[-]
after all of the headaches in the late 90s/early 2000s with truncating password hash functions, i'm just a little surprised that this sort of thing would still be an issue.

i understand performance concerns and design trade offs, but i would expect a secure hashing function in 2024 to do proper message scheduling and compression or return errors when truncations are happening.

i suppose 90s culture is hip again these days, so maybe this does make sense?

reply
mkj
2 months ago
[-]
It seemed the best option I could find for an RP2040 microcontroller when I went looking recently? Perhaps not for Okta...
reply
EasyMark
2 months ago
[-]
Okta has been around longer than a year, and momentum keeps a lot of companies from changing anything until catastrophe strikes.
reply
pquerna
2 months ago
[-]
per <https://trust.okta.com/security-advisories/okta-ad-ldap-dele...>

2024-07-23 - Vulnerability introduced as part of a standard Okta release

This is not an "Okta is old" issue. This was new code, written in 2024, that used a password hashing function from 1999 as a cache key.

reply
crest
2 months ago
[-]
Bcrypt is still perfectly usable for its original purpose. They just picked/wrote a bad implementation that silently truncated inputs longer than the maximum input length. Would you also ask why they picked AES (a cipher from 1998) when the error was with the user (e.g. picking a fixed or too-short key)?
reply
marginalia_nu
2 months ago
[-]
> You aren’t supposed to put more than the salt and the password into trad unix password hashes.

To be fair, they're basically salting with the userid and username. Still unorthodox to be sure.

reply
fanf2
2 months ago
[-]
The salt is a separate input to the algorithm that is used differently and usually more restricted than the password.
reply
njtransit
2 months ago
[-]
The goal of a salt is to prevent lookup attacks. Since the user id is unique to each user, it prevents the use of pre-computed lookup tables like a salt would.
reply
heliosyne
2 months ago
[-]
The security of salting is twofold. Yes, it defeats the common rainbow table. But if the salt is known, a rainbow table for that salt can be computed. The security of salting depends on the salt being unknown.

If the salt is externally known, which the username and userID necessarily are, then the rainbow table for that account can be computed entirely offline, defeating the point of salting.

reply
chiph
2 months ago
[-]
You're both right, but are coming at this from different directions. In the past a rainbow table was intended to reveal as many passwords on a system as possible once you got a copy of the passwords. If one of them happened to be a high-value account, great. But maybe access to an ordinary account is good enough for their (nefarious) purposes.

It's also possible to build a rainbow table when you already know an account is high-value and have the salt. You can't go download that rainbow table - you'll have to compute it yourself, so the cost to the attacker is higher. But if the account is valuable enough to justify targeting specifically, you'll do it.

reply
MattPalmer1086
2 months ago
[-]
The primary purpose of salting is to prevent precomputation being used to attack all users (e.g. rainbow tables). Even when specific salts are known they have already done this job.

Salts are not intended to be secrets.

If you want to treat a salt as if it was a private key, that would only provide additional protection for the very specific circumstance where the user hash is compromised, but the corresponding salt was not.

reply
notpushkin
2 months ago
[-]
> then the rainbow table for that account can be computed entirely offline

So you basically bruteforce the password for a specific account before you get the actual hash but after you know the hashing scheme? I don’t see how this helps with any sort of attack though.

reply
lmz
2 months ago
[-]
No. Such a rainbow table is per-username and non-reusable, which is the point of salting.
reply
heliosyne
2 months ago
[-]
No, rainbow tables are hash-input specific. They're user-specific only if the salt is user-unique. Usernames aren't normally part of the hash input because they're assumed-public knowledge.

You can test this for yourself by creating a user account, then editing the master password database and manually changing the username without recalculating the password hash. The password will still work. If the username was part of the hash input, the password would fail.

reply
lmz
2 months ago
[-]
You complained about them salting using the public username as salt. You asserted that this makes the rainbow table computable offline. I asserted that this didn't matter much for security since the table for H(username || secret) is username specific and not reusable for other usernames. Since precomputed rainbow tables consume quite a bit of space it is rather unlikely that anyone would have such a table stored for any random username.
reply
vlovich123
2 months ago
[-]
You can still build a rainbow table for a specific username if you want to do a targeted attack.
reply
heliosyne
2 months ago
[-]
I didn't complain about anyone. The Okta vulnerability isn't because of public salts.
reply
marginalia_nu
2 months ago
[-]
That's fair.

Though you can salt a hash using a function that does not take a distinct salt input by just concatenating the salt with the value. This is a relatively common practice, but of course only works if there is no truncation of the salted input.

reply
0x457
2 months ago
[-]
I mean yes overall, but why would you put delimiters into the hash? You just smash the bytes together.
reply
jrockway
2 months ago
[-]
    Username: x@example.xyz        Password: .com/!@#$%   Concatenated: x@example.xyz.com/!@#$%
    Username: x@example.xyz.com    Password: /!@#$%       Concatenated: x@example.xyz.com/!@#$%

reply
Dylan16807
2 months ago
[-]
A delimiter fixes the problem if you're sure the delimiter character can never be inside the username and password. Better would be to prefix the length of each field. Better still would be separately hashing each field and concatenating the results.
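
A rough sketch of that last option in Python, using only hashlib; the example fields reuse jrockway's colliding pair:

    import hashlib

    def field_hash(*fields: str) -> bytes:
        # hash each field on its own, then hash the fixed-length digests;
        # field boundaries can't shift, so delimiter collisions are impossible
        digests = b"".join(hashlib.sha256(f.encode()).digest() for f in fields)
        return hashlib.sha256(digests).digest()

    # the concatenation-colliding pair now hashes to different values
    assert field_hash("x@example.xyz", ".com/!@#$%") != field_hash("x@example.xyz.com", "/!@#$%")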
reply
jrockway
2 months ago
[-]
Personally, I like a fixed uint8 or uint16 representing the length of each segment. Then, there are no forbidden characters or quoting required. Maybe I want to have \0 in my password.
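
A sketch of that encoding in Python, assuming little-endian uint16 length prefixes; struct is the standard library:

    import struct

    def encode_fields(*fields: bytes) -> bytes:
        # each segment is prefixed with its length as a uint16, so any byte
        # value (including \0) is allowed and boundaries stay unambiguous
        return b"".join(struct.pack("<H", len(f)) + f for f in fields)

    # the colliding pair from above now encodes to different byte strings
    assert encode_fields(b"x@example.xyz", b".com/!@#$%") != encode_fields(b"x@example.xyz.com", b"/!@#$%")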
reply
akira2501
2 months ago
[-]
I tend to use '\0' as a delimiter for this reason.
reply
Dylan16807
2 months ago
[-]
You still need to make sure nulls can't show up, and you need to consider possible truncation scenarios caused by those nulls and make sure they won't cause silent failures at any point.
reply
akira2501
2 months ago
[-]
> You still need to make sure nulls can't show up

Which is very easy to do without losing any desired functionality, as opposed to delimiters in the printable ASCII range.

> and you need to consider possible truncation scenarios

In particular, hashing libraries worth using never have this problem.

> and make sure they won't cause silent failures at any point.

They literally only need to exist in the data passed to one function call. Afterwards they are not needed or significant.

reply
brianshaler
2 months ago
[-]
Re the second quote and response:

One pattern I bump up against from time to time is the gap between using a perfectly defensible technique for a given use case (safe delimiters when constructing the input to one specific function) and the desire to have every decision driven by some universal law (e.g. "if you're streaming data between services, null bytes as delimiters might not be safe because a consuming service may truncate at them, so NEVER use null bytes as delimiters").

It's not even a matter of one "side" being right or wrong. You can simultaneously be right that this is perfectly safe in this use-case, while someone else can be right to be concerned ("need to consider possible") because the code will forever be one refactor or copy/paste away from this concatenated string being re-used somewhere else.

reply
Dylan16807
2 months ago
[-]
> In particular hashing libraries worth using never have this problem.

I'll note that the reason we're here in the first place is that they were using a password hash library with a completely unacceptable API.

reply
0x457
2 months ago
[-]
But why does it matter? It's a one-way hash, isn't it?

Also, we're talking about user_id, not user_email, so it should always be the same length. Well, unless you're silly and use database sequences for IDs.

reply
jadengis
2 months ago
[-]
This is obviously a huge mistake by Okta (for the love of God understand how your crypto functions work before you apply them) but at the same time, a crypto function with a maximum input length that also auto-truncates the data sounds like bad API design. You are basically asking for someone to goof up and make a mistake. It's much better to implement these things defensively so that the caller doesn't inadvertently make a mistake. Especially with a hashing algorithm, because there is no way to verify that the result is correct.
reply
sebastialonso
2 months ago
[-]
Agree with the spirit of the argument, but I disagree about the bad design. BCrypt has its trade-offs; you are expected to know how to use it when you use it, especially if you chose it.

It's like complaining about how dangerous an axe is because it's super sharp. You don't complain, you just don't grab it by the blade; you grab it by the handle.

reply
thiht
2 months ago
[-]
If passing more than 72 bytes to a function makes it silently fail, it IS bad design, especially for a sensitive, security-related function. The first condition in the function should be `if len(input) > 72 then explicitly fail`

Not letting people use your API incorrectly is API design 101.

To be clear, this is not the fault of the bcrypt algorithm; all algorithms have their limitations. This is the fault of bcrypt libs everywhere: when they implement bcrypt, they should add this check, and maybe offer an unsafe_ alternative without it.
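
A minimal sketch of that guard, assuming the Python pyca/bcrypt package; the wrapper name is made up:

    import bcrypt

    def hashpw_strict(password: bytes, salt: bytes) -> bytes:
        # refuse anything bcrypt would otherwise silently truncate
        if len(password) > 72:
            raise ValueError("bcrypt input longer than 72 bytes would be truncated")
        return bcrypt.hashpw(password, salt)

    # usage: hashpw_strict(b"hunter2", bcrypt.gensalt())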

reply
appplication
2 months ago
[-]
There is no other answer than this. Silent failures are never acceptable, even if documented. Because despite what we want to believe about the world, people don’t read the docs, or read them and forget, or read them and misunderstand.
reply
echoangle
2 months ago
[-]
If your crypto library works like an axe and the methods aren’t prefixed with “unsafe_”, the library is bad. I would expect an exception when a hashing function gets an argument that’s too long, not just silent dropping of the excess input. Who thinks that’s the best choice?
reply
mplewis
2 months ago
[-]
Passing something that isn’t a password + salt into bcrypt is the mistake here.
reply
ytpete
2 months ago
[-]
Even that sounds potentially dangerous to me now, since it means that some extra-long "correct horse battery stapler"-style passwords could be left effectively unsalted. I mean yeah, 72 chars is an awfully long password, but for some famous book or movie quotes maybe not outside the realm of possibility. And if multi-byte characters effectively halve that cutoff for some languages, it really becomes an issue...
reply
djbusby
2 months ago
[-]
Concise write-up; not surprising that the cache played a part.

Can't tell if it's an issue with BCrypt, with the state data going into the key, or with the combined cache lookup, though.

reply
ptcrash
2 months ago
[-]
I think it's more of a logic problem. I suspect the engineers made the false assumption that bcrypt can hash an arbitrary amount of data like some other hashing algos.
reply
Forbo
2 months ago
[-]
I'm really sick of companies disclosing this shit late Friday afternoon.

Go fuck yourselves.

Sincerely, Everyone in the industry

reply
pluc
2 months ago
[-]
It's the second time they've done that in a few weeks, too, _and_ it's not on their security page [1], which promises transparency.

[1] https://trust.okta.com/

reply
slama
2 months ago
[-]
It is listed on their security advisories page, which you can navigate to from that link:

https://trust.okta.com/security-advisories/

reply
cyberax
2 months ago
[-]
Thank you for your feedback. Next time, we'll disclose it on Saturday evening.

-- With Love, Okta.

reply
chanux
2 months ago
[-]
Also, are there any repercussions for this kind of stuff? I don't know, fines from the organizations they get compliance certifications from or something.
reply
_hyn3
2 months ago
[-]
No repercussions, sadly.

Those compliance companies are (mostly) all just checking a box. It's (mostly) security theater from people who wouldn't know security if it bit them in the nether regions.

Even if that wasn't true, there's probably no box in any compliance regime that says "Yes, we loudly promulgate our security failures from the nearest rooftop at 10am on a weekday" (and it's always five o'clock somewhere, right?).

If it helps (I know it doesn't), the Executive Branch likes to do this with poor job number revisions, too, lol

reply
hoffs
2 months ago
[-]
geee, nobody is targeting you
reply
Forbo
1 month ago
[-]
Doesn't have to be targeted for it to be shitty behavior.
reply
err4nt
2 months ago
[-]
They had literally 1 job: secure authentication. This isn't the first time Okta has had a gaffe on a level that should cause any customer to reconsider. What's that saying, "Fool me once, shame on thee, fool me twice, shame on me". Don't get fooled by Okta a second, third, or fourth time.
reply
_hyn3
2 months ago
[-]
Why is anyone actually using Okta for anything these days?

IMO, better to choose point solutions and combine them.

reply
demarq
2 months ago
[-]
Wasn’t there a project posted here that can spot these things automatically?

It was a fuzzer of some sort.

reply
Animats
2 months ago
[-]
This is written in C, right?
reply
tedunangst
2 months ago
[-]
Your rewrite of bcrypt in not C is unlikely to support longer passwords.
reply
_hyn3
2 months ago
[-]
NotC, the language that entire operating systems haven't been written in.
reply
lanstin
2 months ago
[-]
Probably Java. This is not a memory vulnerability but a protocol vulnerability.
reply
lelanthran
2 months ago
[-]
> This is written in C, right?

What's your point? That rewriting `bcrypt` in something else magically fixes this?

AIUI, the issue is that `bcrypt` only uses the first 72 bytes of the input to create a hash.

reply
drbig
2 months ago
[-]
The issue is with the user mistaking bcrypt for a general-purpose digest hashing tool.

It's like using a flat-head screwdriver as a hardwood chisel and then the handle breaks off after the third strike.

reply