Tinycolor supply chain attack post-mortem
156 points
15 hours ago
| 9 comments
| sigh.dev
| HN
darkamaul
13 hours ago
[-]
> That repo still contained a GitHub Actions secret — a npm token with broad publish rights.

One of the advantages of Trusted Publishing [0] is that we no longer need long-lived tokens with publish rights. Instead, tokens are generated on the CI VM and are valid for only 15 minutes.
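
As a rough sketch (assuming the trusted publisher has already been registered on the registry side and the workflow job has been granted the `id-token: write` permission), the publish step itself shrinks to something like:

    # no NPM_TOKEN secret anywhere in the repository; a recent npm CLI
    # exchanges the runner's OIDC identity for a short-lived credential
    npm ci
    npm publish

If that exchange fails, the publish fails; there is no long-lived secret left in the repository to steal.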

This has already been implemented in several ecosystems (PyPI, npm, Cargo, Homebrew), and I encourage everyone to use it; it actually makes publishing a bit _easier_.

More importantly, if the documentation around this still feels unclear, don’t hesitate to ask for help. Ecosystem maintainers are usually eager to see wider adoption of this feature.

[0] https://docs.pypi.org/trusted-publishers/

reply
realityking
56 minutes ago
[-]
I had missed the announcement that this is now available for npm: https://github.blog/changelog/2025-07-31-npm-trusted-publish...

Guess I know what I’ll be doing this weekend.

reply
procaryote
2 hours ago
[-]
What we need now is a flag in repos for projects that use this kind of thing, so you can easily prohibit dependencies that don't
reply
mnahkies
13 hours ago
[-]
I think the point around incorporating MFA into the automated publishing flow isn't getting enough attention.

I've got no problem with doing an MFA prompt to confirm a publish by a CI workflow - but last I looked this was a convoluted process of opening an https tunnel out (using a third-party solution) so that you could provide the code.

I'd love to see either npm or GitHub provide an easy, out-of-the-box way for me to provide/confirm a code during CI.

reply
tcoff91
13 hours ago
[-]
Publishing a package involves 2 phases: uploading the package to npmjs, and making it available to users. Right now these 2 phases are bundled together into 1 operation.

I think the right way to approach this is to unbundle uploading packages from publishing them so that they're available to end-users.

CI systems should be able to build & upload packages in a fully automated manner.

Publishing the uploaded packages should require a human to log into npmjs's website & manually publish the package and go through MFA.

reply
mnahkies
12 hours ago
[-]
Completely agree tbh, and that would be one of my preferred approaches should npm be the actor to implement a solution.

I also think it makes sense for GitHub to implement the ability to mark a workflow as sensitive and requiring "sudo mode" (MFA prompt) to run. It's not miles away from what they already do around requiring maintainer approval to run workflows on PRs.

Ideally both of these would exist, as not every npm package is published via GitHub actions (or any CI system), and not every GitHub workflow taking a sensitive action is publishing an npm package.

reply
klysm
4 hours ago
[-]
npm should require this for packages that have a large enough blast radius
reply
arp242
12 hours ago
[-]
I'm feeling that maybe the entire concept of "publishing packages" is something that's not really needed? Instead, the VCS can be used as a "source of truth", with no extra publishing step required.

This is how Go works: you import by URL, e.g. "example.com/whatever/pkgname", which is presumed to be a VCS repo (git, mercurial, subversion, etc.). Versioning is done by VCS tags and branches. You "publish" by adding a tag.
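
Roughly, the entire release process in that model is just (reusing the example path above):

    # "publishing" v1.2.3 is nothing more than pushing a VCS tag...
    git tag v1.2.3
    git push origin v1.2.3

    # ...and consumers fetch straight from the repo (or a proxy/mirror of it):
    go get example.com/whatever/pkgname@v1.2.3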

While VCS repos can and have been compromised, this removes an entire attack surface from the equation. If you read every commit or a diff between two tags, then you've seen it all. No need to also diff the .tar.gz packages. I believe this would have prevented this entire incident, and I believe also the one from a few weeks ago (AFAIK that also only relied on compromised npm accounts, and not VCS?)

The main downside is that moving a repo is a bit harder, since the import path will change from "host1.com/pkgname" to "otherhost.com/pkgname", or "github.com/oneuser/repo" to "github.com/otheruser/repo". Arguably, this is a feature – opinions are divided.

Other than that, I can't really think of any advantages a "publish package" step adds? Maybe I'm missing something? But to me it seems like a relic from the old "upload tar archive to FTP" days before VCS became ubiquitous (or nigh-ubiquitous anyway).

reply
acdha
12 hours ago
[-]
There’s also a cost: installs take much longer, you need the full toolchain installed, and builds are no longer reproducible due to variations in the local build environment. If everything you do is a first-party CI build of a binary image you deploy, that’s okay, but for tools you’re installing outside of that kind of environment it adds friction.
reply
procaryote
2 hours ago
[-]
As a lot of these npm "packages" are glorified code snippets that should never have been individual libraries, perhaps this would drive people to standardise and improve the build tooling, or even move towards having sensibly sized libraries?
reply
gedy
11 hours ago
[-]
Agreed. In the JS world? Hell no. Ironically, doing a local build would itself pull in a bunch of dependencies, whereas now you can at least technically have just one built dependency.
reply
drdrey
15 hours ago
[-]
> A while ago, I collaborated on angulartics2, a shared repository where multiple people still had admin rights. That repo still contained a GitHub Actions secret — a npm token with broad publish rights. This collaborator had access to projects with other people which I believe explains some of the other 40 initial packages that were affected.

> A new Shai-Hulud branch was force pushed to angulartics2 with a malicious github action workflow by a collaborator. The workflow ran immediately on push (did not need review since the collaborator is an admin) and stole the npm token. With the stolen token, the attacker published malicious versions of 20 packages. Many of which are not widely used, however the @ctrl/tinycolor package is downloaded about 2 million times a week.

I still don't get it. An admin on angulartics2 gets hacked, his Github access is used to push a malicious workflow that extracts an npm token. But why would an npm token in angulartics2 have publication rights to tinycolor?

reply
hinkley
13 hours ago
[-]
I have admin rights on someone else’s npm repo and I’ve done most of the recent releases. Becoming admin lit a fire under me to fix all of the annoying things and shitty design decisions that have been stuck in the backlog for years so most of the commits are also mine. I don’t want my name on broken code that “works”.

I had just about convinced myself that we should be using a GitHub action to publish packages, because with publishing directly via 2FA there was always the possibility that one (or specifically I) could fuck up and publish something that wasn’t a snapshot of trunk.

But I worried about stuff like this and procrastinated on forcing the issue with the other admins. And it looks like the universe has again rewarded my procrastination. I don’t know what the answer is but giving your credentials to a third party clearly isn’t it.

reply
baobun
9 hours ago
[-]
npm has had support for package-scoped publish tokens (with optional 2FA enforcement) for a few years now. So in case of compromise, the blast radius would be a single package.

The OP gave the GH repo overly broad permissions. There is no good reason for the repo's CI workflow to have full access to everything under their account.

reply
tetha
14 hours ago
[-]
> But why would an npm token in angulartics2 have publication rights to tinycolor?

Imo, this is one of the classic ways organizations get pwned: that one sin from years ago comes back to bite you in the butt.

We also had one of these years ago. It wasn't the modern stack everyone was working to scan and optimize and keep secure that allowed someone to upload stuff to our servers. It was the editor that had been replaced years and years ago (and its replacement had also been replaced); the way it was packaged wasn't seen by the build-time security scans, but eventually someone found it with a URL scan. Whoopsie.

reply
Terr_
14 hours ago
[-]
Thinking of biology, the reason often given for the disappearance of "unused" genes/base-pairs is that there's a metabolic cost to keeping them around and copying them on every cell division, so they vanish from a form of passive attrition.

I wonder if someday we'll find there's also a more active process, which resembles "remove old shit because it may contain security vulnerabilities."

reply
STRiDEX
15 hours ago
[-]
Sorry if that wasn't clear. This was a token with global publish rights to my npm packages.
reply
Scaevolus
14 hours ago
[-]
I was confused too. Was it your npm token stored in angulartics2 as a Github Actions secret, so it could publish new angulartics2 versions?
reply
STRiDEX
14 hours ago
[-]
Yes, exactly.
reply
rectang
14 hours ago
[-]
Two-factor auth for publishing is helpful, but requiring cryptographically signed approval by multiple authors would be more helpful. Then compromising a single author wouldn't be enough.
reply
tcoff91
14 hours ago
[-]
Many packages have only 1 author.
reply
KronisLV
13 hours ago
[-]
I'm not sure why we never got around to more human-in-the-loop 2FA for this sort of stuff: "Oh, you want to publish a new package? Okay, confirm it on this app on your device/phone to make sure." Surely a button press on a pre-approved device wouldn't be too hard; it's pretty much how every user-initiated online banking payment works over here.

I once heard from a sysadmin who didn't want to automate certificate renewal and other things, because he believed that doing so would take away useful skills or some inner knowledge of how the system works. Because of the human-error risk, I thought that was stupid, but when it comes to approval processes, I think it makes sense. Especially because pushing code doesn't necessarily mean the same thing as such an approval, and the main device that you push code from could also get compromised; using your phone as 2FA could save you.

Then again, maybe I'm also stupid and the way we build our software is messed up on a fundamental level, with all of the dependencies and nobody being able to practically audit all of the code they import, given deadlines, limited skills and resources, and so on. Maybe it's all just tilting at windmills.

reply
rectang
12 hours ago
[-]
I don’t think the current state of software development is irredeemable.

Ongoing downstream review of all dependency code is practical for only a tiny fraction of projects; for most projects using publisher reputation as a proxy for package safety is reasonable.

What’s not working is the low-standards package managers where inconveniencing authors is never acceptable because the whole enterprise is built on popularity with authors — you can’t trust that what those package managers give you reflects author intent.

reply
chrisweekly
14 hours ago
[-]
and (as in this case) that 1 author may use a single token to authorize publishing many packages
reply
rectang
14 hours ago
[-]
The conclusion I'm coming to is that depending on packages which only have a single author is problematic. There are too many ways that packages published by one person can be compromised.

Packages which don't have approval and review by a reliable third party shouldn't be visible by default in a package manager.

reply
bigiain
9 hours ago
[-]
How many of your dependencies have 2nd level dependencies which have even deeper dependencies on ZX Utils, or NX (or left_pad.js)?

(right now I don't know the answer to that for the stuff I'm responsible for, but I'm in the process of researching and setting up and configuring the sort of tools needed to automate that.)
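
A rough starting point for the manual version of that check, for npm at least (using the package from this incident as the example):

    # every path by which the project ends up depending on @ctrl/tinycolor
    npm ls @ctrl/tinycolor --all

    # same question phrased as "why is this package in my tree at all?"
    npm explain @ctrl/tinycolor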

reply
Hackbraten
14 hours ago
[-]
How are you supposed to gain collaborators for a project that no one can possibly find?
reply
rectang
14 hours ago
[-]
There are ways, but at a high level, I don't care. I hate how modern package managers have come to value author convenience over downstream user security.
reply
Hackbraten
13 hours ago
[-]
Fair enough.

In the meantime, I'm trying to do my part through occasional random spot inspections when there's an update to a package, and I encourage others to do the same for swarm coverage.

reply
x0x0
13 hours ago
[-]
That's a lot of entitlement for things you haven't paid a cent for: not just multiple authors, but trusted 3rd parties, approval and review, etc.
reply
rectang
12 hours ago
[-]
I’ve done all those things myself (as a past ASF member, where all that and more was SOP), so I realize what I’m asking for. It’s not crazy for authors of small packages to form small collectives and serve as each other’s trusted third parties.

In any case, if the choice is “frequent supply chain compromise, take it or leave it”, the answer is of course “leave it”.

If we need to pay for curated packages because the problems with NPM are endemic, that’s not unreasonable.

reply
x0x0
8 hours ago
[-]
> It’s not crazy for authors of small packages to form small collectives and serve as each others’ trusted third parties.

Yeah, there's that insane entitlement. More demands for others' time and labor, plus the conflation of demanding that labor with the idea that if people don't agree to your free-labor demands, they're pro supply chain compromise.

reply
rectang
5 hours ago
[-]
In a general discussion forum, I have floated some approaches for hardening distribution which have proven effective in other communities. If NPM can harden their systems using other mechanisms, then more power to them.
reply
cyphar
3 hours ago
[-]
While multiple authors' signatures would be nice, a lot of these kinds of attacks would be solved if there was any signature verification being done on the commits, tags, or generated artefacts at all.

People like to complain about distribution packaging being obtuse, but most distributions have rich support for verifying that package sources were signed by a key in a keyring that is maintained by the distribution. My (somewhat biased) view is that language package managers still do not provide the same set of features for validation that (for instance) rpmbuild does.

The release process for runc has the following safeguards:

    * As upstream maintainers of runc, we sign all of our releases with one of a set of keys that are maintained in our repo[1]. Our tags are also signed by one of the same keys. In my case, my key is stored in a Yubikey and so cannot easily be exfiltrated.
    * Our release scripts include a step which validates that all of the keys in that keyring file are valid (sub)keys registered to the GitHub account of a maintainer[2]. They also prompt the person doing the signing to check that the list looks reasonable before signing anything[3].
    * Distributions such as openSUSE have a copy of the keyring file[4] and the build system will automatically reject the build if the source code archive is not signed. Our official binary releases are also signed and so can be validated in a similar manner.
Maybe there are still gaps in this setup, and I would love to hear them. But I think this setup would have blocked this kind of attack at several stages. I personally don't like the idea of signing releases in CI -- if you really want to build your binaries in CI, that's fine, but you should always require a maintainer to personally sign the binaries at the end of the process.
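
For reference, the consumer side of that verification is plain git/gpg tooling; a rough sketch (file names illustrative):

    # import the maintainer keyring shipped in the repo, then check the tag
    # signature and the detached signature on a release artefact
    gpg --import runc.keyring
    git verify-tag v1.4.0-rc.1
    gpg --verify runc.amd64.asc runc.amd64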

For language package managers that do not support such a workflow, trusted publishing is a less awful setup than having long-lived publishing keys that may be incorrectly scoped (as happened in this case), but it still allows someone who gains access to your GitHub account (such as by stealing your cookies) to publish updated versions of your package with very little resistance. GitHub supports setting a mandatory timeout for trusted publishing but the attacker could easily disable that. If someone got access to my GitHub account, it would be a very bad day, but distributions would not accept the new releases because their copy of our keyring would not include the attacker's keys (even if they added them to my account).

Disclaimer: I work at SUSE, though I will say that I would like for OBS to have nice support for validating checksums of artefacts like Arch and Gentoo do (you can /theoretically/ do it with OBS services or emulate it with forcelocal -- and most packages actually store the archive in OBS rather than pulling it at build time -- but it would be nice to do both).

[1]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/runc...

[2]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/scri...

[3]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/scri...

[4]: https://build.opensuse.org/projects/openSUSE:Factory/package...

reply
rectang
1 hour ago
[-]
Very well said — I agree with all points. But is NPM culturally averse to such mechanisms, and will they reject them as an imposition on authors even as the pace of successful supply chain attacks accelerates?

I think that one hole is that even if you require signatures, not all authors will adhere to best practices and some will still be compromised.

Also, five-dollar-wrench attacks remain feasible, although I’m uncertain if we’ve seen them in the real world.

reply
cyphar
1 hour ago
[-]
I think the five-dollar-wrench attack has a similar risk profile to a maintainer introducing a security flaw (intentionally or not) -- unless you are actively auditing the code of your dependencies you are ultimately trusting the upstream maintainer (as well as their personal security) at some level. Except in the most extreme scenarios I think that this kind of trust is irreducible -- if you don't trust the upstream maintainer you shouldn't use their software.

The main issue I have is that these ecosystems add so many other layers of trust you need to have that are unnecessary (trust that source forges like GitHub won't ever be compromised, trust that the access control of said source forges won't ever be compromised, trust that the per-language package repos won't ever be compromised, trust that API keys won't be leaked without being discovered quickly, etc etc).

reply
indigodaddy
14 hours ago
[-]
Anyone know of a published tool/script to check for the existence of any of the vulnerable npm packages? I don't see anything like that on the stepsecurity page.
reply
retlehs
13 hours ago
[-]
This won’t protect against everything, but it still seems like a good idea to implement:

https://github.com/danielroe/provenance-action

reply
indigodaddy
13 hours ago
[-]
Yep, I did see that, but I'm not planning on pushing anything; I just want a tool to scan for any of the offending packages. I could make my own, but I feel like somebody must have already made something (and probably better than I could).
reply
dflock
12 hours ago
[-]
- [supply-chain-security · GitHub Topics · GitHub](https://github.com/topics/supply-chain-security)

- [GitHub - safedep/vet: Protect against malicious open source packages](https://github.com/safedep/vet)

- [GitHub - AikidoSec/safe-chain](https://github.com/AikidoSec/safe-chain)

- npm audit
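
For a one-off "is a bad version anywhere in my tree" check, the lockfile can also be queried directly; a rough sketch (package name illustrative, package-lock.json v2/v3 assumed):

    # print every resolved version of the package anywhere in the tree
    jq -r '.packages | to_entries[]
           | select(.key | endswith("node_modules/@ctrl/tinycolor"))
           | "\(.key): \(.value.version)"' package-lock.json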

reply
indigodaddy
12 hours ago
[-]
vet and safe-chain look good, thanks! I'm only dabbling with Node (no real experience), so I haven't used npm audit, but I'll see how that works too. Appreciate the links.
reply
sibeliuss
13 hours ago
[-]
`npm audit` for known issues
reply
bikeshaving
14 hours ago
[-]
> Local 2FA based publishing isn’t sustainable...

Why is local 2FA unsustainable?! The real problem here is automated publishing workflows. The overwhelming majority of NPM packages do not publish often enough or have complicated enough release steps to justify tokens with the power to publish without human intervention.

What is so fucking difficult about running `npm publish` manually with 2FA? If maintainers are unwilling to do this for their packages, they should reconsider the number of packages they maintain.

reply
STRiDEX
14 hours ago
[-]
That's fair. I'm referring to the number of mistakes that happen with local publishing: publishing the wrong branch, not building from latest, etc.
reply
skydhash
14 hours ago
[-]
So add a wrapper for that, a quick script that checks which branch and revision you are publishing from. The issue here is publishing from a CI you don't fully control, driven by automated events.
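
A minimal sketch of such a wrapper (branch name, test command, and script name are assumptions about the project):

    #!/usr/bin/env sh
    set -eu
    # refuse to publish anything that isn't exactly the tip of origin/main
    [ "$(git rev-parse --abbrev-ref HEAD)" = "main" ] || { echo "not on main"; exit 1; }
    [ -z "$(git status --porcelain)" ] || { echo "working tree is dirty"; exit 1; }
    git fetch origin main
    [ "$(git rev-parse HEAD)" = "$(git rev-parse origin/main)" ] || { echo "HEAD is not origin/main"; exit 1; }
    # clean install and tests, then an interactive 2FA-gated publish
    npm ci && npm test
    npm publish --otp="${1:?usage: ./release.sh <otp>}"
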
reply
paxys
14 hours ago
[-]
You can run the exact same script locally as you do in CI, with the only difference being the addition of a 2FA prompt.
reply
STRiDEX
13 hours ago
[-]
That's a good point. I would lose package provenance that way, but I guess that's fine since it didn't prevent anything here.

I can look into that.

reply
cyberax
14 hours ago
[-]
> exfiltrated a npm token with broad publish rights

I freaking HATE tokens. I hate them.

There should be a better way to do authentication than a glorified static password.

An example of how to do it correctly: GitHub as a token provider for AWS: https://aws.amazon.com/blogs/security/use-iam-roles-to-conne... But this is the exception rather than the rule.

reply
chatmasta
14 hours ago
[-]
These machine-to-machine OIDC flows seem secure, and maybe they are when they’re implemented properly, but they’re really difficult to configure. And I can’t shake the feeling that they’re basically just “tokens with more moving parts,” at least for a big chunk of exploitation paths. Without a human in the loop, there’s still some “thing” that gets compromised, whether it’s a token or something that generates time-limited tokens.

In the case of this worm, the OIDC flow wouldn’t even help. The GitHub workflow was compromised. If the workflow was using an OIDC credential like this to publish to npm, the only difference would be the npm publish command wouldn’t use any credential because the GitHub workflow would inject some temporary identity into the environment. But the root problem would remain: an untrusted user shouldn’t be able to execute a workflow with secret parameters. Maybe OIDC would limit the impact to be more fine-grained, but so would changing the token permissions.

reply
jerf
12 hours ago
[-]
"Without a human in the loop, there’s still some “thing” that gets compromised, whether it’s a token or something that generates time-limited tokens."

Speaking knowingly reductionistically and with an indeterminate amount of sarcasm, one of the hardest problems in security is how to know something without knowing something. The first "knowing something" is being able to convince a security system to let you do something, and the second is the kind that an attacker can steal.

We do a lot of work trying to separate those two but it's a really, really hard problem, right down at its very deepest core.

I know I was amused 5-10 years ago as we went through a lot of gymnastics. "We have an SSH password here that we use to log in to this system over there and run this process." "That's not secure, because an attacker can get the password. Move that to an SSH key." "That's not secure, an attacker can get the key. Move the key into this secret manager." "That's not secure, an attacker can get into the secret manager. Move it to this 2FA system." "That's not secure, an attacker can get the 2FA token material, move it to...."

There are improvements you can make; if nothing else a well-done 2FA system means an attacker has to compromise 2 systems to get in, and if they are non-correlated that's a legit step up. But I don't think there's a full solution to "the attacker could" in the end. Just improvements.

reply
groggler
10 hours ago
[-]
An sk- key with no user-presence test to use and a PIN to update is pretty perfect in my book. Anything less, and the authentication can too easily be permanently stolen out from behind pointless soft protections; anything more, and it's overly complicated hoops for whatever they were supposed to deliver.
reply
tetha
13 hours ago
[-]
Hence you need to start thinking about threat models and levels of compromise, even in your build system.

If I control the issuing and governance of these short-lived secrets, they very much help against many attacks. Go ahead and extract an upload token for one project which lives for 60 seconds, be my guest. Once I lose control of how these tokens are created, most of these advantages go away - you can just create a token every minute, for any project this infrastructure might be responsible for.

If I maintain control of my pipeline definition, I can again do a lot of work to limit damage. For example, if I am in control, I can make sure the stages running untrusted code have as little access to secrets as possible, and possibly isolate them in bubblewrap, VMs, ..., minimizing the code with access to publishing rights. Once I lose control of the pipeline structure, all that goes away. Just add a build step to push all information and secrets to mastodon in individual toots, yey.
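
As a rough illustration of the kind of isolation I mean (paths and build command are assumptions, not a hardened recipe):

    # run the untrusted build step with an empty environment and no network,
    # with only the source tree writable - nothing secret is there to steal
    env -i PATH=/usr/bin bwrap \
      --unshare-all \
      --ro-bind /usr /usr \
      --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --proc /proc --dev /dev \
      --bind "$PWD" /build --chdir /build \
      npm run build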

To me, this has very much raised questions about keeping pipeline definitions and code in one repository, or at least about keeping a publishing/release process in there. I don't have a simple solution, especially for OSS software with little infrastructure - it's not an easy topic. But with these supply chain attacks coming hot and fast every 2 weeks, it's something to think about.

reply
chatmasta
13 hours ago
[-]
But the token wasn’t the primary source of compromise here. It was the GitHub workflow which had the token embedded into it. There was no need for the actor to exfiltrate the token from the workflow to somewhere else, because they could simply run arbitrary code within the workflow.

It would have made little difference if the environment variable was NPM_WEBIDENTITY instead of NPM_TOKEN. The workflow was still compromised.

reply
cyberax
13 hours ago
[-]
Universal OIDC tokens would slow down the lateral expansion and make it more difficult.

You won't be able to exfiltrate a token that allows you to publish an NPM package outside of a workflow; the infection has to happen during a build on GH.

reply
er4hn
14 hours ago
[-]
Well, the idea behind tokens is that they should be time- and authZ-limited. In most cases they are not, so they degrade to a glorified static password.

Solutions all exist: generating them live with a short lifetime, OAuth with proper scopes, biscuits that limit in detail what they can do, and so on. They are just rarely used.

reply
undecidabot
14 hours ago
[-]
Trusted publishing is a thing now for many package registries, including npm: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
reply
pabs3
3 hours ago
[-]
mTLS aka TLS client certs seems like the way to go.
reply
skydhash
14 hours ago
[-]
As another sibling comment has put it, it should probably be short-lived or behind manual verification (passphrase, 2FA, …)
reply
1oooqooq
11 hours ago
[-]
What if the CI job force pushed something deep in the git history tree?
reply
waterTanuki
7 hours ago
[-]
Something somewhere needs to change because the status quo just isn't working. Yes, we can cheer on the benefits of OIDC tokens and zero-trust solutions in CI pipelines on HN all we want, but the fact is there's a significant number of library developers out there, with millions of package downloads per week, who will refuse to do anything about security until they're compromised or npm blocks them from publishing until they do.

And then there are other nonsensical proposals, like spelunking deep into projects (some of which could be over a decade old) and just ripping out all the dependencies until nothing but a standard library is left. Look, I'm all for a better std lib, and I think reducing the number of dependencies we have is good. But just saying "you should reduce dependencies" will do nothing concrete to fix the problem which already exists, because it's much easier said than done.

So either tens of thousands or hundreds of thousands of developers stop using npm, and everyone refactors their projects to add more code and strip dependencies, or npm starts enforcing things like 2FA and OIDC for package developers with over X number of weekly downloads, and blocks publishing for those that don't follow the new security rules. I think it's clear which solution is more practical to implement. The only other option is for npm to completely lose its reputation and then we wind up with XKCD 927 again.

reply