https://news.ycombinator.com/item?id=48129789
Tuesday, 12 May 2026 - "Here are the links, yes, two vulnerabilities this time [YellowKey] [GreenPlasma] [...] Next patch tuesday will have a big surprise for you Microsoft"
Wednesday, 13 May 2026 - "I can't wait when I will be allowed to disclose the full story, I think people will find my crashout very reasonable and it definitely won't be a good look for Microsoft."
Author's blog: https://deadeclipse666.blogspot.com/
First post in March 2026 is "[...] someone violated our agreement and left me homeless with nothing. They knew this will happen and they still stabbed me in the back anyways, this is their decision not mine."
I'm not sure what to make of it, is this someone essentially "leaking" things from the inside? Sure sounds like it, and others are able to reproduce the results.
I've watched genius-level IQ people get fired time and again because they don't know how to work with others at a basic kindergarten level.
Most large companies — including Microsoft [1] — have an internal affairs call center where you can anonymously report issues of malfeasance — assuming that's what happened here.
[1] https://www.microsoft.com/en-us/legal/compliance/sbc/report-...
The secret here seems to be that Microsoft caches the key somewhere even when it's supposed to be only in the TPM! That's a pretty big revelation IMO.
Not what happened here (I reserve my judgment wrt the promised TPM+PIN exploit).
In the default TPM-only mode of BitLocker, the secret is in fact in the TPM, which will (as instructed by Windows upon key creation) release it to the correct OS running on the correct computer. Notably not in the picture is any user-provided data: measured boot is the only protection. It is only the correct programming of the OS that makes it request an account password (completely unrelated to the disk-encryption cryptography) before letting the user poke at the disk, which the OS can at that point already decrypt.
Well, turns out the programming is such that if you ask politely it’ll just pop an Administrator(?) shell.
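To make the measured-boot point concrete, here's a toy Python sketch. Everything here is made up for illustration (real TPMs use PCR banks, sealing policies, and hardware-enforced release, not a Python function): the idea is just that the key is released only when the hash chain of boot measurements matches what was recorded at seal time, with no user secret involved anywhere.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new_pcr = H(old_pcr || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    # Fold each boot component into the PCR, starting from all zeros.
    pcr = b"\x00" * 32
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# "Seal" the disk key against the PCR value of the expected boot chain.
expected_chain = [b"firmware-v1", b"bootloader-v2", b"windows-kernel"]
sealed_policy = measure_boot(expected_chain)
disk_key = b"supersecret-vmk"

def unseal(observed_chain):
    # The simulated TPM releases the key only if measurements match exactly.
    if measure_boot(observed_chain) == sealed_policy:
        return disk_key
    return None

print(unseal(expected_chain))  # key released: correct OS, no PIN asked
print(unseal([b"firmware-v1", b"evil-bootloader", b"windows-kernel"]))  # None
```

Note what's absent: any user input. If the "correct" OS then has an auth-bypass bug, the attacker is already on the decrypted side of the wall.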
Yes this is the one I'm referring to.
I have noticed it myself: my system has rebooted to install updates without passing through the blue TPM PIN entry screen. That was a big red flag for me. A normal reboot always shows it, even a 'hot' reboot.
In TPM-only mode, I only see the screen—which asks for a recovery key that serves as an alternative to the TPM-borne secret, not for whatever you are calling the “TPM PIN” here—whenever I update the firmware or the bootloader (the latter from the other side of the dual-boot setup). Otherwise it boots straight to the login screen, which meshes with the measured-boot-only theory of operation I’ve described above. There’s nothing nefarious in this part, even if I think it exposes an unwisely large attack surface (e.g. the USB stack). I suspect you simply reboot so rarely you’re never hitting the happy path.
I can relate and empathize. And also provide this suggestion based on my own similar experience: if you can't provide evidence (e.g. doctor's diagnosis) that you are "special" or "not capable of that", then they don't have to care and will take steps to force you out. I wish you all the best.
You might be a 100x rockstar developer. You might even be the best software engineer in the world.
But the vast majority of good software is built by teams of people. It doesn't matter how good you are if you can't play nice with others.
I'd rather have a team of "merely" good engineers than one "rockstar" creating a toxic work culture. Fuck that noise.
This generally means the person might not leave their cubicle much or give feedback frequently enough, but it doesn't mean they aren't motivated to help others or share knowledge. One can approach and ask a question and get tons of help immediately.
How do I know? That's me. I look like a cave dweller from a distance, but I'm not. The only difference is that human interaction sometimes drains me a lot, so I just concentrate and work, yet everybody gets help immediately if they need it.
Also, no, I don't bite or belittle people. On the contrary.
Assuming the worst in others is bad. If I worked with you, I'd be looking for somewhere else the moment I found out how you think about me.
Remember: people don't leave bad jobs, they leave bad managers.
Yes, if you put someone who can't work on a team onto a team and expect teamwork, that will not work. But that's obvious, so don't do that. Expecting a homogeneous workforce isn't realistic or optimal.
And I didn't say I'm not capable of being part of a team. Just that I need to have my own responsibilities within a team. I can't deal with micromanagement or excessive coordination like 'standups' every day.
The guy saying that he has been accused of "not being a team player" isn't literally quoting his management here. He's summarizing that his immediate supervisors don't like him because he's unwilling to enter into some patronage-like relationship with them.
The fact that you gave the benefit of the doubt to some faceless employer here instead of an actual person recounting his experiences is really sad, and maybe ought to be a reason for you to rethink the biases that led you to jump to the conclusion that this guy is a toxic loner. Sounds like you're projecting hard here from some other experience.
And I can see others already blaming them for relying on the vulnerability for living expenses, but if we can hold the hyper-rationalization for a second, we shouldn't be against the person who expected an organization with more money than God to uphold a deal for relative peanuts, right?
Like yes we all get that large orgs make spending $5 very hard, many claps for being the in-group, but their frustration would be understandable.
It's like suggesting someone was relying on a lottery ticket to payout to survive.
Acknowledged how orgs work, separated blaming the org from sympathizing with their reaction, tried to separate the prudence of their actions from the sticky situation they'd still be left in by the org's actions...
But it was for naught: people are really ingrained in a weird "might-makes-right" model of corporate operations. "Larry Ellison is a lawnmower" was supposed to be a jeremiad but now it's more like a guiding principle that we browbeat anyone for questioning.
You're assuming that there was a deal that wasn't upheld. I don't think we have enough information to assess that. This person's blog posts do read as being somewhat unstable. There's even someone in the comments seemingly genuinely trying to be helpful: "Just wondering if you’re BiPolar (like me) and see a different reality than what is real. Been there."
Someone with a vulnerability worth as much as a two bedroom apartment?
Or that entire holy trinity.
Had a job at MSFT once, but is now struggling to earn money at all and is posting heartbreaking stuff on Twitter. https://x.com/WeirdQuadratic
Hope she finds a way out and a more stable and fun job in the future.
Me not hiring someone doesn't mean the skills aren't valuable.
This could be rewritten as “because they aren’t you”, which is true but not a meaningful or educational answer.
Sure. And that’s a meaningful answer to the question.
“people with values different from yours, presumably” is a condescending nonanswer.
If someone has this kind of exploit, can't get a bug bounty for it, and desperately needs the money, they can sell it for $100k+ on the shady black market.
I've been pretty convinced this is SandboxEscaper for a while now.
Whether this is a backdoor or not boils down to whatever your usual proclivities about "bug or backdoor" are; it's not like "if microsoft = 1 hack bitlocker" like the tech press seem to love to report.
This is a bug in the NTFS transaction log replay functionality in the Windows Recovery Environment WinRE, where it will read NTFS transaction logs from an external volume and apply them to the mounted filesystem. This allows the attacker to perform an authentication bypass against WinRE. With BitLocker without PIN or Password, _any_ authentication bypass becomes a disk encryption bypass, since the disk is unsealed by the bootloader (this architectural "flaw" is true for Linux with the same configuration, as well, like Ubuntu installed with their newish Hardware Disk Encryption checkbox in the installer).
In lieu of additional evidence, whether you think the NTFS transaction log issue is a planted backdoor or a simple enumeration bug depends on your conspiracy theory level, like most things in exploit development. To me, it seems like a plausible bug. The weaknesses in boot-time unseal are well known and obvious and this is just one of many, so I don't see it as an earth-shattering revelation, although it is a fun bug.
A fun next step would be to look at different fstx versions to see if it’s just something that was patched or refactored out at some point. At that point it could be a patch-door (ie an organic bug where the patch was held back by interference), but again, that would be a crappy setup due to the propensity for Windows vulnerability engineers to use binary diffing - if you had the exploit and the power to hold back the patch, it would be way better to hold it back everywhere.
1. A bug was introduced that affects both, and the fix never made it back into the 11 branch
2. There's conditional logic in RE that triggers the issue
3. 11 introduced new behavior that never made it into RE, causing the bug
The fact that 10 is seemingly unaffected is telling. #2 seems very unlikely, because it suggests new conditional logic was added and not tested. #3 seems unlikely because I can't understand why the binaries would be different anyway. #1 seems unusual because it suggests there's no canonical source of truth for the code, which feels very unlikely for bitlocker of all things (where you want everything speaking the same language).
If there's any benign explanation, I suspect it's likely due to incompetence. This feels like such a strange problem to have. I suspect the follow-ups you suggest are going to happen very soon and we'll know more.
We know how PIN-locked BitLocker works, and it requires unwrapping using a key sealed behind a TPM PIN policy and stretching it using the PIN itself. So we can deduce that this would require that:
* The attacker was able to bypass the TPM PIN sealing policy _and_ brute-force the stretching applied to the decrypted key. Brute-forcing the stretch is plausible on a "lots of expensive stuff" timeline but not an easy attack. Bypassing TPM PIN policy across multiple platforms would be something quite incredible. Given that TPMs are implemented by multiple vendors across multiple fundamental architectural approaches, and aren't based on a universal reference implementation, it would be rather bizarre to find a mistake in many or all of them.
* There is a secret volume key stored on a volume which can be decrypted by another mechanism. This would be a backdoor, but seems vanishingly unlikely given the amount of research which has been applied against BitLocker historically.
* The attacker is at some point able to inject something which allows them to observe the victim applying the PIN. There could be an attack here but it isn't nearly as interesting.
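For a sense of what "brute-force the stretching" costs, here's a hedged sketch using PBKDF2 as a stand-in KDF — BitLocker's actual derivation differs, and the iteration count and PIN space below are illustrative assumptions, not its real parameters:

```python
import hashlib, secrets

def stretch_pin(pin: str, salt: bytes, iterations: int = 1_000_000) -> bytes:
    # PBKDF2-HMAC-SHA256 as a stand-in for the real KDF: each candidate PIN
    # costs `iterations` hash computations, which is the whole point.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

salt = secrets.token_bytes(16)
wrap_key = stretch_pin("482913", salt)  # key that (hypothetically) wraps the volume key

# An attacker who somehow extracted the sealed blob still pays the
# stretching cost for every candidate PIN:
guesses = 10 ** 6            # 6-digit PIN space (assumption)
hashes_per_guess = 1_000_000  # iteration count (assumption)
total = guesses * hashes_per_guess
print(f"~{total:.1e} hash operations to exhaust the PIN space offline")
```

That's "lots of expensive stuff" territory for a short PIN, and it compounds with any TPM policy bypass being needed first.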
If Microsoft wanted a backdoor they don't need to put it in the WinRE environment. They can sign payloads that will pass the TPM and unlock bitlocker, without needing to store anything on your disk.
The published exploit doesn’t affect Bitlocker with a PIN, without which Bitlocker isn’t secure anyway. The original author claims they have an exploit that also works with a PIN, but hasn’t provided any proof of that.
I have never seen a company that requires the PIN for BitLocker.
>In a normal WinRE session, you have a X:\Windows\System32 directory that has a winpeshl.ini file in it
>However, with the YellowKey exploit, it looks like Transactional NTFS bits on a USB Drive are able to delete the winpeshl.ini file on ANOTHER DRIVE
Interesting. I don't know about this environment - some kind of naive file handle constructing/passing? But then, why require a key press during the WinRE reboot?
I wonder how patchable this is. The thousands of WinRE thumb drives are certainly out of reach; maybe the BitLocker side could update the access permissions? Would it require unenc/reenc?
Seems like lots more to follow.
The part that isn't mentioned is that WinRE is privileged because Windows stores a decryption key in the TPM that allows WinRE to decrypt the disk even without the recovery key. That's why the attack requires WinRE in the first place, rather than booting into an Ubuntu live CD or whatever. This also means you don't have to patch all the WinRE thumb drives out there, because their Secure Boot signatures can simply be revoked, meaning they can't pass TPM validation anymore and therefore won't be able to decrypt any disks.
WinRE runs internally, not from a thumb drive, which is why the bootloader will unseal the disk for it (just like if you have a systemd recovery set up on a Linux distribution). It doesn't have a separate key or anything, it's just allowed to use the "main" one, by design. Microsoft just need to patch the WinRE partition in a normal Windows Update to fix the NTFS transaction log driver; no Secure Boot revocation or TPM-related changes are necessary (which is good for them, because _that_ would be a disaster).
By and large this whole thing is orthogonal to BitLocker overall; boot-time unsealed BitLocker is vulnerable to any post-bootloader auth bypass by design, and this is a goofy post-bootloader auth bypass bug.
The researcher claims a way to bypass PIN too but hasn't revealed it.
If they put a backdoor into FDE it would make more sense to advise people to stop using windows at all and using Linux instead. If they put a backdoor in FDE you can be sure there is not just one backdoor in the operating system itself. You shouldn't trust proprietary software at all. You shouldn't even trust open source if it isn't properly audited.
Anything in particular that makes you wary? I'm aware of the 2016 and 2020 audits (https://ostif.org/the-veracrypt-audit-results/ is the 2016 one, I believe), but those seemed to suggest things were getting better over time. Curious what other signals to look for.
This has got to be the most surprising encryption-related comment I've ever read from you. Please tell us what you're thinking about VeraCrypt. What would you say about TrueCrypt v7.1a, the last known good release?
They can't use age or any other "right answer" tools. I'm talking about people who don't know their own username, people who don't know that their Windows password is the one they use to log into Windows. "Is that for my email?" Just getting them to use a password manager is like arm wrestling an alligator. If VeraCrypt isn't the best option for them, then what is?
Generally I’d say this is what Sharepoint or Box or a more workflow-specific platform is for. You generally don’t want sensitive data living on individual people’s workstations in an enterprise context, you want it somewhere that you can enforce security settings.
If I trust them to provide my FDE software, I certainly trust them when they say I shouldn’t use it.
An independent audit of the last version of TrueCrypt was published about a year after the discontinuation. It did not find any significant security issues or backdoors.
I wouldn't have used the "citation needed" meme here personally, but I think it's clear the poster is just asking why it should not be trusted.
Securing Microsoft products is busy work while waiting to have it undercut by the next wave of MS’s insane tech debt and greed. And now backdoors!
You can enable ADP for E2E encrypted backups, but it's probably not going to help you much, because the people you are communicating with likely didn't.
This is not to defend Microsoft, more to say that all these companies were part of PRISM.
That just sounds like a fundamental issue with security in general, not specific to Apple/Microsoft.
I have found that even many tech people have incorrect beliefs about these things, like assuming that iCloud Backups are E2E encrypted by default or that disabling Allow Apps to Request to Track disables trackers inside apps.
But you are defending MS, conflating a bunch of things, mainly full disk encryption and cloud backups.
There's a big difference between Apple's cloud backup, which has documented behavior, and a backdoor. I'm also fairly confident in Apple's full disk encryption; they've gone to court to defend it. There are also a lot more data points we can use to judge Apple vs Microsoft on privacy and security, and MS comes out looking bad.
Another example is WhatsApp on Android, by default when backups are enabled, they are stored unencrypted in Google Drive. A good counter-example is Signal, which opts out of backups on iOS and Android and the only option is to do E2E backups to their own servers.
> I'm also fairly confident in Apple's full disk encryption, they've gone to court to defend it.
FWIW, in the last leaked report, iPhones in the AFU (after first unlock) state were not an issue for Cellebrite (macOS is most likely even easier due to looser security):
https://discuss.grapheneos.org/d/14344-cellebrite-premium-ju...
Though I suppose then I have to give a negative % to all the systems that have insecure online backups. This whole area is a train wreck really.
Signal is slowly, very slowly, moving toward providing real backups and cross-device transfers
I understand why you’d believe Signal still can’t deliver that, because they had been ignoring the user demands for years.
But there is real progress now
https://support.signal.org/hc/en-us/articles/9708267671322-S...
Obviously Signal don't owe me anything. I'm not paying for the product and I appreciate what it does offer and makes available for free. But it would be much better if it also supported local backups under the user's control.
"now"?
Shall we have a discussion about the excuse Microsoft gave, back then, for why keys they claimed were "secondary keys" belonging to Microsoft were named ..._NSAKEY, when a version of Windows NT shipped, by mistake, with debug symbols on?
One time, just freaking one time, a version of Windows shipped with debug symbols on and, by chance, there had to be cryptographic keys named "NSAKEY" in there.
Yeah.
Naturally, people who constantly turn a blind eye to the wrongdoings of the state are of course going to say that it's totally normal and just repeat the carefully crafted excuses Microsoft gave back then, that it was totally not a backdoor, etc.
This won't work because the TPM will only give you the keys if you're booting an "approved" OS, specifically the PCR states that the encryption keys are bound to.
>or if you have to buy a $5 microcontroller and solder it to certain pins on the main board to sniff the TPM keys.
That only works with dTPMs. fTPMs aren't vulnerable to this, and are far more popular than dTPMs.
Also can recover data without my mainboard.
Maybe a hybrid (Secure Boot/TPM + passphrase) slot for day-to-day use, to also protect against evil maid attacks, and another slot with a backup passphrase would be acceptable.
It's not an either-or. You can combine TPM with passwords which makes it far more secure than password alone. A TPM can enforce password guessing limits, otherwise a password needs to be absurdly long to be secure against GPU bruteforcing attacks. It also prevents someone from swapping out the bootloader with a backdoored version that steals your passwords.
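Back-of-the-envelope numbers for why the TPM guess limit matters — the rates below are illustrative assumptions, not measurements of any particular GPU or TPM:

```python
# Offline GPU guessing vs. TPM-enforced rate limiting (all rates assumed).
password_space = 26 ** 8      # 8 random lowercase letters
gpu_rate = 10 ** 10           # guesses/second for a fast offline cracking rig
tpm_rate = 1 / 2              # TPM anti-hammering throttles to ~1 guess per 2 s

offline_seconds = password_space / gpu_rate
tpm_years = password_space / tpm_rate / (3600 * 24 * 365)

print(f"offline exhaustive search: ~{offline_seconds:.0f} s")
print(f"TPM-throttled search:      ~{tpm_years:.0f} years")
```

Same password, wildly different outcomes: seconds offline versus millennia behind a TPM that refuses to be hammered. That's the whole argument for TPM+password over password alone.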
>Also can recover data without my mainboard.
You're supposed to keep a backup of the encryption key when using TPM, in case it fails.
No. I have already explained it here: https://news.ycombinator.com/item?id=48133491
But you can configure Linux LUKS in the exact same way.
This doesn't seem an attack on BitLocker so much as it is an attack on the secure boot chain.
The value of PIN-less unlock is if your threat model is limited to the disk being disposed of or removed from the machine or otherwise separated from the TPM.
Entering a PIN is inconvenient or impossible if more than one user regularly uses the device. Hence, control to validate access is transferred to a trusted OS component.
https://deadeclipse666.blogspot.com/2026/05/were-doing-silen...
The author claims to be able to bypass TPM + PIN protection, but I seriously doubt it because that would require breaking or exploiting the TPM itself. Perhaps the author was referring to existing fTPM flaws but even then, brute-forcing the PIN would still be required because on BitLocker, the wrapped VMEK depends on the PIN, which brings me to the "backdoor" topic. As I have already mentioned, exploits have been found in AMD fTPMs in the past (https://arxiv.org/abs/2304.14717). This flaw is particularly severe on Linux/cryptenroll because the TPM returns the actual FVEK, unlike BitLocker, where the VMEK itself depends on the PIN. This cryptenroll flaw has been known for years and remains unfixed on cryptenroll (https://github.com/systemd/systemd/pull/27502). Yet, I see no one yelling and crying "backdoor", or accusing Lennart of being compromised. Cryptography, especially when combined with hardware security, is inherently not easy — and people make mistakes.
Prevailing theory is they were pressured to put in a backdoor and couldn't disclose it, so they had to make a seemingly ridiculous statement (because who in their right mind would trust bitlocker) to call attention that "something is very wrong"
Alternately, they don't want people to rely on abandonware for security.
Also, despite the conspiracy theories of backdoors I'm not aware of any bitlocker exploits that work on TPM + pin, which is the intended "secure" configuration[1]. All exploits rely on TPM-only (ie. ez-mode), which is basically the security equivalent of running https/ssh without certificates and blindly accepting whatever keys shows up.
[1] https://learn.microsoft.com/en-us/windows/security/operating...
The way one would backdoor something like Bitlocker is to encrypt the disk encryption key with a (post-quantum) public key for which only the backdoor owner has the private key for, and then put it on a place on disk that is unused by the filesystem.
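As a toy illustration of that escrow shape, here's textbook RSA with tiny primes and no padding — nothing like a real implementation (a real one would use a proper KEM, padding, and per the comment above probably a post-quantum scheme), just the structure: only the public key touches the disk, so nothing on the machine can reveal the backdoor owner's ability to decrypt.

```python
# Toy textbook RSA (tiny primes, no padding) sketching the hypothetical
# escrow backdoor: the disk stores a copy of its key encrypted to a public
# key whose private half only the backdoor owner holds.
p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent (on every disk)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (backdoor owner only)

disk_key = 42                        # stand-in for the numeric volume key

# Written to an unused on-disk location at format time, public key only:
escrow_blob = pow(disk_key, e, n)

# Later, the backdoor owner recovers the key from the blob alone:
recovered = pow(escrow_blob, d, n)
assert recovered == disk_key
```

The detection angle: such a blob is just high-entropy bytes in unallocated space, which is why this design is hard to distinguish from slack-space garbage without the private key.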
I do not want someone stealing my laptop on a train ride potentially being able to have all of that data.
With a proper backup strategy, I have everything safe. I do not need easy access to a hard drive from a broken computer.
But hey you do you :)
Everyone's threat model is different, but some are better than others, and maybe we shouldn't equate taking time to explain why with throwing stones.
The Snowden leaks revealed that the NSA is flummoxed on how to tackle variable character lengths. However, they've cracked rot26 using custom ASIC supercomputers, so it should be considered insecure even though it's twice as good as rot13.
I want the crypto-shredding retirement of each storage device. I don't assume I can delete/scrub/overwrite at the time a device goes out of service. I have a box of older HDDs that I still have to get around to destroying properly, because they exist from before the days of practical FDE.
Every machine is encrypted, unlocked per login.
Encryption is basically free so.
An advantage of encryption is that it makes it easier to give away or resell devices. With recent encryption schemes (well the ones on Linux, given this article), I feel confident that overwriting the encryption keys gets me close enough to not leaking my data once I get rid of an old hard drive.
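A toy sketch of why destroying the key suffices — the "cipher" here is a throwaway SHA-256 counter-mode stream, not real crypto, and the key-in-header detail is a simplification of how LUKS/BitLocker actually store wrapped keys:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Throwaway SHA-256 counter-mode stream cipher, for illustration only:
    # XOR the data against H(key || counter) blocks.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        chunk = data[i:i + 32]
        out += bytes(c ^ k for c, k in zip(chunk, block))
    return bytes(out)

secret = b"tax records and family photos"
media_key = secrets.token_bytes(32)   # lives only in a tiny key-slot area
ciphertext = keystream_xor(media_key, secret)

# Crypto-shredding: overwrite just the key-slot area, not the whole disk.
media_key = secrets.token_bytes(32)   # the old key is irrecoverably gone

# The terabytes of ciphertext left on the platters are now just noise:
assert keystream_xor(media_key, ciphertext) != secret
```

Overwriting a few kilobytes of header beats hours of full-disk wiping, which is exactly the "retire the device by forgetting the key" workflow described above.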
* Your preferred memorized passphrase, which will never be written down anywhere.
* A random key you can print and store in a box somewhere.
Then if your backup paper gets lost, you can revoke/replace it without having to abandon your memorized favorite.
Just choose a good quality one....
* Split the recovery key in two, store each half with a different friend. (If you're feeling fancy, XOR the halves and store that with a third friend, then any two out of three will work.)
* Sneak the key into something you know friends/family won't throw away while you're still alive, like stuck to the back of a sentimental photo in a frame.
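The XOR trick from the list above, sketched in Python (toy code; names and sizes are made up for the example):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(32)      # the recovery key to protect
a, b = key[:16], key[16:]          # halves for friends 1 and 2
c = xor(a, b)                      # "fancy" third share for friend 3

# Any two of the three shares reconstruct the full key:
assert a + b == key                # friends 1 + 2: just concatenate
assert a + xor(a, c) == key        # friends 1 + 3: b = a XOR c
assert xor(b, c) + b == key        # friends 2 + 3: a = b XOR c
```

Caveat worth knowing: each share here leaks real information (a friend holding half the key cuts the brute-force space in half, bitwise). A proper 2-of-3 scheme like Shamir's secret sharing gives the same any-two property with shares that individually reveal nothing.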
____
That said, I think I'm wandering from the original "accumulating dusty old drives in a box" scenario, which has a simpler solution: Keep a growing old_drives_keys.txt file on your current (encrypted) main device.
If you keep it in a dark environment that's not super humid the ink should last a really long time. Even in non-optimal conditions (NY summers with high humidity, etc.) I've had regular pen ink last for decades with no signs of fading away.
But I'm sure that some of the millions of things that I've missed as windows has become what it has become makes this simplicity seem like a scifi absurdity. I don't think that they can even log into their own computers without asking Microsoft for permission over the network. I'm sure the idea of encryption must have been overcomplicated to the point of absurdity in order to trap customers too, I just don't know about it.
I suppose you should just count your blessings (of ignorance) and be available to help your friends with cryptsetup if they decide to flee windows.
I suppose this makes some sense for home computers (burglars and police raids are rare) but for a laptop, you really don't want thieves getting all your details.
Ironically -- this probably was paranoid a few years ago, but now -- "ChatGPT, use this prepared prompt to extract all useful info from this hard drive"
No one should have their data encrypted and kept from them without consent unless they do something. Microsoft does that now. They may not be requiring a monetary ransom like others, but it is a ransom nevertheless.
I know this is controversial. Bitlocker helps protect one's property and information when used intentionally. And that being impacted is a shame.
I recently (last week) had to drive over to a parent's house and "fix" their (pre-online accounts) win 11 computer used for sewing because it had become a blue screen saying aka.ms was required. They did not know how it happened and are not very technical users so I imagine they were tricked by some click-through dialog. It is not something they would ever do intentionally. All that computer ever does is run sewing pattern/control software.
https://support.microsoft.com/en-us/windows/find-your-bitloc...
The other two options, enterprise or online account (the very thing we're talking about), don't apply in this context.
If it's a backdoor, that's a serious fraud against their customers.
If you really think this will be prosecuted as fraud, then you'll be shocked by how American courts handle these sorts of things.
The ONLY control that mitigates this risk is disk encryption, and it is perniciously misleading to ship a sabotaged product on which these legally consequential decisions get made around the world- based on the specific assurance the product is designed and marketed to provide.
If true, it is a specific outrage against the laws of several countries, medical and other research ethics, public health, and the social contracts people have with their institutions. If MS is given impunity for this, a lot of regulation is not worth the paper it is written on.
before arguing further, I recommend looking at the breach notification sections of the laws in these major economies: https://www.dlapiperdataprotection.com/
At the point where you're able to mount the EFI partition and effectively modify the bootloader, it's game over anyway - just run `manage-bde -unlock`; you already have to be root to mount the EFI partition.
What does this even mean? Nobody is using multiple encryption schemes on top of each other, are they?
If you want to encrypt some data that gets stored persistently somewhere on your machine, rather than invent an application-specific encryption scheme for that data alone, instead use a mainstream full-partition encryption mechanism, then store the data as plaintext within said partition.