A big part of this for us is transparency. That’s why every image ships with VEX statements, extensive attestations, and all the metadata you need to actually understand what you’re running. We want this to be a trustworthy foundation, not just a thinner base image.
We’re also extending this philosophy beyond base images into other content like MCP servers and related components, because the more of the stack that is verifiable and hardened by default, the better it is for the ecosystem.
A few people in the thread asked how this is sustainable. The short answer is that we do offer an enterprise tier for companies that need things like contractual continuous patching SLAs, regulated-industry variants (FIPS, etc.), and secure customizations with full provenance and attestations. Those things carry very real ongoing costs, so keeping them in Enterprise allows us to make the entire hardened catalog free for the community.
Glad to see the conversation happening here. We hope this helps teams ship software with a stronger security posture and a bit more confidence.
Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).
In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs, which would basically remove this barrier.
For example, a common CVE that I encountered was a glibc High CVE. We could pretty convincingly show that our app did not use this library in a way that made it vulnerable, but it didn't matter. A high CVE is a full stop for most security teams. Migrated to a Wolfi image and the scanner reported 0. Cool.
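Roughly what that looked like in practice (image names and the scanner are illustrative, not necessarily what we used):

    # scan the old glibc-based build - flags the High CVE, full stop for security teams
    trivy image myapp:debian-base
    # swap the Dockerfile FROM line to a Wolfi/Chainguard base, e.g. cgr.dev/chainguard/python, and rebuild
    docker build -t myapp:wolfi-base .
    # scan again - reports 0
    trivy image myapp:wolfi-base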
But with other orgs like Minimus (from the founders of Twistlock) coming into this, it looks like it's about to get crowded.
There is even a govt project called Iron Bank that offers something like this to the DoD.
Net positive for the ecosystem but I don't know if there is enough meat on the bone to support this many vendors.
Paying for something “secure” comes with the benefit of risk mitigation - we paid X to give us a secure version of Y, hence it's not our fault “bad thing” happened.
The question I'd be interested in is, outside of markets where there's a lot of compliance requirements, how much demand is there for this as a paid service...
People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage for Docker's offering. If it's free there is less friction to trying it out compared to a commercial offering.
With Bitnami discontinuing their offering, we recently switched to other providers. For some software we are using a Helm chart, and this new offering provides some Helm charts, but for some software only the image. I would be interested to give this a try, but e.g. the python image only offers various '(dev)' images while the guide mentions the non-dev images. So this requires some planning.
EDIT: Digging deeper, I notice it requires a PAT, and a PAT is bound to a personal account. I guess you need the enterprise offering for organisation support. I am not going to waste my time contacting them about an enterprise offer for a small start-up. What is the use case for CVE-hardened images that you cannot properly use in CI/CD, only on your dev machine? Are there companies that need to follow compliance rules or need this security guarantee but don't have CI/CD in place?
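For reference, the PAT flow in CI looks roughly like this (values are placeholders); the catch is that the token belongs to an individual's account rather than the organisation:

    # personal access token injected as a CI secret
    echo "$DOCKER_PAT" | docker login -u "$DOCKER_USER" --password-stdin
    docker pull <hardened-image-reference>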
The enterprise hardened images license seems to be a different offering for offline mirroring or more strict compliance…
The main reason for CVE hardened images is that it's hard to trust individuals to do it right at scale, even with CI/CD. You have to wire together your own scan & update process. In practice teams will pin versions, delay fixes, turn off scanning, etc. This is easy mode.
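The DIY wiring usually ends up being something like this (Trivy and the image name are just examples):

    # rebuild on a schedule so base layers pick up upstream fixes
    docker build --pull -t registry.example.com/myapp:latest .
    # fail the pipeline on High/Critical findings
    trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:latest
    docker push registry.example.com/myapp:latest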
Offering image hardening to custom images looks like a reasonable way for Docker to have a source of sustained income. Regulated industries like banks, insurers, or governmental agencies are likely interested.
Bait and switch once the adoption happens has become way too common in the industry.
It's what the people who created OG Docker are building now
> Is Docker sunsetting the Free Team plan?
> No. Docker communicated its intent to sunset the Docker Free Team plan on March 14, 2023, but this decision was reversed on March 24, 2023.
Not a problem for casual users but even a small team like mine, a dozen people with around a dozen public images, can hit the pull limit deploying a dozen landscapes a day. We just cache all the public images ourselves and avoid it.
https://www.docker.com/blog/revisiting-docker-hub-policies-p...
There's an excellent reason: They're login gated, which is at best unnecessary friction. Took me straight from "oh, let me try it" to "nope, not gonna bother".
https://www.docker.com/blog/security-that-moves-fast-dockers...
Note: I work at Docker
This would be like expecting AWS to protect your EC2 instance from a postinstall script
There's a "Make a request" button, but it links to this 404-ing GitHub URL: https://github.com/docker-hardened-images/discussion/issues
oh well. hope it's good stuff otherwise.
But, we pay for support already.
Nice from docker!
Chainguard still has better CVE response time and can better guarantee you zero active exploits found by your prod scanners.
(No affiliation with either, but we use chainguard at work, and used to use bitnami too before I ripped it all out)
I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).
We do publish an advisory feed in a few formats that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information, so we couldn't include it there.
The basic flow was: scanner finds a CVE and alerts, we issue a statement showing when and where we fixed it, and the scanner understands that and doesn't show it in versions after that.
So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way though, and some just rely on our feed and don't do their own homework, so it's hit or miss.
We do have another feed now that uses the newer OSV format; in that feed we have all the info around when we detect it, when we patch it, etc.
All this info is publicly available and shown in our console; you can see many of the advisories here: https://github.com/wolfi-dev/advisories
You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.
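If you want to poke at the raw data, something like this works (the grep pattern is just an example CVE-ID prefix):

    git clone --depth=1 https://github.com/wolfi-dev/advisories
    # each advisory file carries timestamped events (detected, fixed, ...) per CVE
    grep -rl "CVE-2024-" advisories | head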
Do with that knowledge what you may.