https://github.com/santrancisco/pmw
It has a few "features" which allowed me to go through a repository quickly:
- It prompts the user and recommends the hash; it also gives the user the URL for the current tag/action so they can double-check the hash value matches and review the code if needed
- Once you accept a change, it keeps that in a JSON file so the same version of the action will be pinned in the future as well and won't be reprompted.
- It also lets you ignore version tags for GitHub Actions coming from well-known, reputable organisations (like "actions", which belongs to GitHub) - as you may want to keep updating those so you receive hotfixes for backward-incompatible changes or security fixes.
This way I have full control over what to pin and what not, and the config file is stored in the .github folder so I can go back, rerun it, and repin everything.
action@commit # semantic version
Makes it easy to quickly determine what version the hash corresponds to. Thanks.
Example:
uses: ncipollo/release-action@440c8c1cb0ed28b9f43e4d1d670870f059653174 #v1.16.0
And anything that previously had @master becomes the following, with the hash from the day it was pinned and "master-{date}" as the comment:
uses: ravsamhq/notify-slack-action@b69ef6dd56ba780991d8d48b61d94682c5b92d45 #master-2025-04-04
Trying to get the same behavior with renovate :)
You don't need to audit every line of code in your dependencies and their subdependencies if your dependencies are restricted to only doing the thing they are designed to do and nothing more.
There's essentially nothing nefarious changed-files could do if it were limited to merely reading a git diff provided to it on stdin.
Github provides no mechanism to do this, probably because posts like this one never even call out the glaring omission of a sandboxing feature.
Also, the secrets are accessible only when a workflow is invoked from a trusted trigger, i.e. not from a forked repo. Not sure what else can be done here to protect against a compromised 3rd party action.
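One thing that does help is shrinking the blast radius with an explicit permissions block, so a compromised step at least can't use the job's GITHUB_TOKEN to write anything. A minimal sketch, assuming a hypothetical third-party action and secret name:

permissions:
  contents: read   # job token is read-only; no write scopes granted

jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - uses: some-org/notify-action@<pinned-sha>       # hypothetical third-party action
        with:
          webhook-url: ${{ secrets.SLACK_WEBHOOK_URL }} # the only secret exposed to this step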
You may do a "docker build" in a pipeline which does need root access and network access, but when you publish a package on pypi, you certainly don't need root access and you also don't need access to the entire internet, just the pypi API endpoint(s) necessary for publishing.
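As a concrete sketch of that least-privilege shape on the credentials side (it doesn't solve the network-egress part): with PyPI's trusted publishing, the publish job needs no long-lived token at all, only the OIDC id-token scope. Roughly, assuming pypa/gh-action-pypi-publish and a trusted publisher configured on the PyPI side; pins are placeholders:

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write    # the only extra scope OIDC trusted publishing needs
    steps:
      - uses: actions/download-artifact@<pinned-sha>
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@<pinned-sha>   # no API token stored in repo secrets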
Likewise, similar to how modern smartphones ask whether they should remove excess unused privileges granted to certain apps, GHA should detect these super common overprovisionings and make it easy for maintainers to flip those configs, e.g. with a "yes" button.
Linux processes have tons of default permissions that they don't really need.
The problem with these "microprograms" has always been that once you delegate that much, once you're only willing to put in that little effort, you can't guarantee anything.
If you are willing to pull in a third party dependency to run git diff, you will never research which permissions it needs. Doing that research would be more difficult than writing the program yourself.
Many setup Actions don’t support pinning binaries by checksum either, even though binaries uploaded to GitHub Releases can be replaced at will.
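One workaround until that improves is to skip the setup Action for the download step and verify a checksum you recorded when you reviewed the release; if the artifact is later replaced, the job fails. A rough sketch (URL, version, and hash are placeholders):

- name: Fetch and verify tool
  run: |
    curl -sSfL -o tool.tar.gz "https://github.com/example-org/tool/releases/download/v1.2.3/tool-linux-amd64.tar.gz"
    echo "<expected-sha256>  tool.tar.gz" | sha256sum -c -   # two spaces between hash and filename; fails if the artifact changed
    tar -xzf tool.tar.gz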
I’ve started building in house alternatives for basically every third party Action we use (not including official GitHub ones) because almost none of them can be trusted not to do stupid shit.
GitHub Actions is a security nightmare.
https://github.com/testifysec/witness-run-action/tree/featur...
GitHub Actions is particularly vulnerable to a lot of different vectors, and I think a lot of folks reach for the self-hosted option and believe that closes up the majority of them, but it really doesn't. If anything, it might open more vectors and potentially scarier ones (i.e., a persistent runner could be compromised, and if you got your IAM roles wrong, they now have access to your AWS infrastructure).
When we first started building Depot GitHub Actions Runners [0], we designed our entire system to never trust the actual EC2 instance backing the runner. The same way we treat our Docker image builders. Why? They're executing untrusted code that we don't control.
So we launch a GitHub Actions runner for a Depot user in 2-5 seconds, let it run its job with zero permissions at the EC2 level, and then kill the instance from orbit to never be seen again. We explicitly avoid the persistent runner, and the IAM role of the instance is effectively {}.
For folks reading the Wiz article, this is the line that folks should be thinking about when going the self-hosted route:
> Self-hosted runners execute Jobs directly on machines you manage and control. While this flexibility is useful, it introduces significant security risks, as GitHub explicitly warns in their documentation. Runners are non-ephemeral by default, meaning the environment persists between Jobs. If a workflow is compromised, attackers may install background processes, tamper with the environment, or leave behind persistent malware.
> To reduce the attack surface, organizations should isolate runners by trust level, using runner groups to prevent public repositories from sharing infrastructure with private ones. Self-hosted runners should never be used with public repositories. Doing so exposes the runner to untrusted code, including Workflows from forks or pull requests. An attacker could submit a malicious workflow that executes arbitrary code on your infrastructure.
One nitpick:
> Self-hosted runners should never be used with public repositories.
Public repositories themselves aren't the issue, pull requests are. Any infrastructure or data mutable by a workflow involving pull requests should be burned to the ground after that workflow completes. You can achieve this with ephemeral runners with JIT tokens, where the complete VM is disposed of after the job completes.
As always the principle of least-privilege is your friend.
If you stick to that, ephemeral self-hosted runners on disposable infrastructure are a solid, high-performance, cost-effective choice.
We built exactly this at Sprinters [0] for your own AWS account, but there are many other good solutions out there too if you keep this in mind.
(Self-hosted runners are great for many other reasons, not least of which is that they're a lot cheaper. But I've seen a lot of people confuse GitHub Actions' latent security issues with something that self-hosted runners can fix, which is not per se the case.)
[1]: https://docs.github.com/en/actions/security-for-github-actio...
If you are looking for ways to identify common (and uncommon) vulnerabilities in Action workflows, last month GitHub shipped support for workflow security analysis in CodeQL and GitHub Code Scanning (free for public repos): https://github.blog/changelog/2025-04-22-github-actions-work....
The GitHub Security Lab also shared a technical deep dive and details of vulnerabilities that they found while helping develop and test this new static analysis capability: https://github.blog/security/application-security/how-to-sec...
> Double-check to ensure this permission is set correctly to read-only in your repository settings.
It sounds to me like the most secure GH Action is one that doesn't need to exist in the first place. Any time the security model gets this complicated you can rest assured that it is going to burn someone. Refer to Amazon S3's byzantine configuration model if you need additional evidence of this.
I wonder how we get out of the morass of supply chain attacks, realistically.
https://github.com/crev-dev https://bootstrappable.org/ https://lwn.net/Articles/983340/
But when you're the unlucky one and need to search for a fix, checking hardware/distro/date details in whatever forums or posts you can find, that's when you notice that the problems don't actually ever stop... it just hasn't happened to you lately.
If he has one of those crappy computers it could be, but when I read about it happening it was entirely due to users MANUALLY deleting the UEFI files; it did not happen through upgrading.
So, the story seems still wrong to me.
Will be a good complement to GitHub's Immutable Actions when they arrive.
Surely it's simple: use a base OS container, install packages, run a makefile.
For deployment, how can you use pre-made deployment scripts? Either your environment is a bespoke VPS/on-prem setup, in which case you write your deployment scripts anyway, or you use k8s and have no deployment scripts. Where is this strange middleground where you can re-use random third party bits?
For example the last place I worked had a mono repo that contained ~80 micro services spread across three separate languages. It also contained ~200 shared libraries used by different subsets of the services. Running the entire unit-test suite took about 1.5 hours. Running the integration tests for everything took about 8 hours and the burn-in behavioral QA tests took 3-4 days. Waiting for the entire test suite to run for every PR is untenable so you start adding complexity to trim down what gets run only to what is relevant to the changes.
A PR would run the unit tests only for the services that had changes included in it. Library changes would also trigger the unit tests in any of the services that depended on them. Some sets of unit tests still required services, some didn't. We used an in-house action that mapped the files changed to relevant sets of tests to run.
When we updated a software dependency, we had a separate in-house action that would locate all the services using that dependency, attempt to set them to the same value, and run the subsequent tests.
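For the simpler cases of that file-to-test mapping, GitHub's built-in path filters get you part of the way without a custom action. This is not their in-house tool, just the general idea; the service and library paths are made up:

on:
  pull_request:
    paths:
      - 'services/billing/**'
      - 'libs/payments/**'   # a shared-library change also triggers the billing tests

jobs:
  billing-unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@<pinned-sha>
      - run: make -C services/billing test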
Dependency caching is a big one, and frankly GitHub's built-in caching is so incredibly buggy and inconsistent it can't be relied on... So third party there. It keeps going on:
- Associating bug reports to recent changes
- Ensuring PRs and issues meet your compliance obligations around change management
- Ensuring changes touching specific lines of code have specific reviewers (CODEOWNERS is not always sufficiently granular)
- Running vulnerability scans
- Running a suite of different static and lint checkers
- Building, tagging, and uploading container artifacts for testing and review
- Building and publishing documentation and initial set of release notes for editing and review
- Notifying Slack when new releases are available
- Validating certain kinds of changes are backported to supported versions
Special branches might trigger additional processes like running a set of upgrade and regression tests from previously deployed versions (especially if you're supporting long-term support releases).
That was a bit off the top of my head. Splitting that out of a mono-repo doesn't simplify the problem, unfortunately; it just moves it.
But if you think about it, the entire design is flawed. There should be a `gh lock` command you can run to lock your actions to the checksum of the action(s) you're importing, have it apply transitively, and verify those checksums every time your workflow pulls in remote dependencies.
That's how every modern package manager works - because the alternative is gaping security holes.
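To be clear, nothing like this exists today; a purely hypothetical lockfile for the idea might look something like this (format invented for illustration; the release-action pin is the one from the example above, the rest are placeholders):

# .github/actions.lock - hypothetical; no such file or `gh lock` command exists
ncipollo/release-action:
  version: v1.16.0
  sha: 440c8c1cb0ed28b9f43e4d1d670870f059653174
  transitive:
    - some-org/sub-action@<sha>   # sub-actions pinned and verified at run time too
actions/checkout:
  version: v4
  sha: <sha>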
I also found this open source tool for sandboxing to be useful: https://github.com/bullfrogsec/bullfrog
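If it helps anyone, usage looks roughly like the snippet below; I'm going from memory of the README, so treat the input names as approximate and check the project docs:

- uses: bullfrogsec/bullfrog@<pinned-sha>
  with:
    egress-policy: block      # or 'audit' to only log unexpected egress
    allowed-domains: |
      api.github.com
      pypi.org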
I believe a sufficiently sophisticated attacker could unwind the netfilter and DNS change, but in my experience every action that you're taking during a blind attack is one more opportunity for things to go off the rails. The increased instructions (especially ones referencing netfilter and DNS changes) also could make it harder to smuggle in via an obfuscated code change (assuming that's the attack vector)
That's a lot of words to say that this approach could be better than nothing, but you'll want to weigh its gains against the hassle of having to keep its allowlist rules up to date in your supply chain landscape.
I don't think any egress filtering could properly block everything, given that actions need to interact with GitHub APIs to function, so it would always be possible to exfiltrate data to any private repo hosted on GitHub. While some solutions can access outbound HTTP request payloads before they get encrypted using eBPF, in order to detect egress to untrusted GitHub orgs/repos, this isn't a silver bullet either, because it relies on targeting the specific encryption binaries used by the software/OS. A sophisticated attack could always use a separate obscure or custom encryption binary to evade detection by eBPF-based tools.
So like you say, it's better than nothing, but it's not perfect and there are definitely developer experience tradeoffs in using it.
PS: I'm no eBPF expert, so I'd be happy if someone can prove me wrong on my theory :)
Works great with commit hooks :P
Also working on a feature to recursively scan remote dependencies for lack of pins, although that doesn’t allow for fixing, only detection.
Very much alpha, but it works.
Disclaimer: no conflict of interest, just a happy user.
This combined with people having no clue how to write bash well/safely is a major source of security issues in these things.
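At minimum, being explicit about the shell and never interpolating untrusted input directly into the script body removes the most common foot-guns. A minimal sketch, passing the untrusted value through env rather than inline ${{ }} expansion:

- name: Safer shell step
  shell: bash                 # explicitly requesting bash runs the step with -e and -o pipefail
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}   # untrusted input goes through env, not into the script text
  run: |
    set -u
    echo "Title: ${PR_TITLE}"   # quoted expansion; nothing is injected into the script body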
Maybe I'm overly pedantic, but this whole section seems to miss the absolute most obvious way to de-risk using 3rd-party Actions: reviewing the code itself. It talks about using popularity, number of contributors and a bunch of other things for "assessing the risk", but it never actually mentions reviewing the action/code itself.
I see this all the time around 3rd party library usage, people pulling in random libraries without even skimming the source code. Is this really that common? I understand for a whole framework you don't have time to review the entire project, but for these small-time GitHub Actions that handle releases, testing and such? Absolute no-brainer to sit down and review it all before you depend on it, rather than looking at the number of stars or other vanity-metrics.
> However, only hash pinning ensures the same code runs every time. It is important to consider transitive risk: even if you hash pin an Action, if it relies on another Action with weaker pinning, you're still exposed.
You need to recursively fork and modify every version of the GHA and do that to its sub-actions.
You'd need something like a lockfile mechanism to prevent this.
At Namespace (namespace.so), we also take things one step further: GitHub jobs run under a cgroup with a subset of privileges by default.
Running a job with full capabilities requires an explicit opt-in: you need to enable "privileged" mode.
Building a secure system requires many layers of protection, and we believe that the runtime should provide more of these layers out of the box (while managing the impact to the user experience).
(Disclaimer: I'm a founder at Namespace)
Self-hosted runners feel more secure at first since they execute jobs directly on machines you manage. But they introduce new attack surfaces, and managing them securely and reliably is hard.
At Ubicloud, we built managed GitHub Actions runners with security as the top priority. We provision clean, ephemeral VMs for each job, and they're fully isolated using Linux KVM. All communication and disks are encrypted.
They’re fully compatible with default GitHub runners and require just a one-line change to adopt. Bonus: they’re 10× more cost-effective.
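If I remember the docs correctly, the one-line change is just the runs-on label, along the lines of the snippet below (exact label names may differ, check Ubicloud's documentation):

jobs:
  build:
    runs-on: ubicloud-standard-2   # instead of ubuntu-latest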