In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts — Docker's own image storage.
So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.
docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done. I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.
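For a rough idea of the flow it automates (just a sketch; `<unregistry-image>` is a placeholder, not the real image name):

ssh user@server 'docker run -d --name unregistry -p 127.0.0.1:5000:5000 <unregistry-image>'   # temporary registry backed by the remote daemon's image store
ssh -f -N -L 5000:localhost:5000 user@server   # tunnel local port 5000 to the remote unregistry
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest        # only the layers missing on the server get uploaded
ssh user@server 'docker rm -f unregistry'      # clean up; the image now lives in the remote daemon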
Would love to hear your thoughts and use cases!
1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.
2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.
1. Yeah agreed, it's a bit of a mess that we have at least three different file system layouts for images and two image stores in the engine. I believe it's still not too late for Docker to achieve what you described without breaking the current model. Not sure if they care though, they're having a hard time.
2. Hm, push-to-cluster deployment sounds clever. I'm definitely thinking about a distributed image store, e.g. embedding unregistry in every node so that they can pull and share images between each other. But triggering a deployment on push is something I need to think through. Thanks for the idea!
EDIT: why I think this is important: in automations that are developed collaboratively, "pussh" could be seen as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand/full flags.
Rename the file to whatever you like, e.g. to get `docker pushoverssh`:
mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
Note that Docker doesn't allow dashes in plugin commands.

> What's that extra 's' for?
> That's a typo
Until one day I made that typo.
It's a bummer Docker still doesn't have an API to explore image layers. I guess their plan is to eventually transition to the containerd image store as the default. Once we have the containerd image store both locally and remotely, we will finally be able to do what you've done without the registry wrapper.
But yes, an API would be ideal. I've wasted far too much time on this.
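For what it's worth, the closest you can get from the CLI today is poking at it indirectly, e.g. (just workarounds, not a real layer API):

docker image inspect --format '{{json .RootFS.Layers}}' myapp:latest   # layer digests only
docker save myapp:latest | tar -tf -   # list the layer blobs inside the export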
Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.
You can use skopeo, crane, regclient, BuildKit, or anything else that speaks the OCI registry protocol on the client. Although you will need to manually run unregistry on the remote host to use them. The 'docker pussh' command just automates the workflow using the local Docker.
Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...
You can hack your own way pretty easily.
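For example, a quick sketch with skopeo, assuming you've already started unregistry on the remote host and it's listening on port 5000 (host and port are placeholders):

skopeo copy --dest-tls-verify=false docker-daemon:myapp:latest docker://server.example.com:5000/myapp:latest

This copies straight from the local Docker daemon into the remote host's image store, with no intermediate registry involved.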
Docker registries have their place but are overall over-engineered and antithetical to the hacker mentality.
I invested just 20 minutes to set up a .yaml workflow that builds and pushes an image to my private registry on ghcr.io, and 5 minutes to allow my server to pull images from it.
It's a very practical setup.
I need this in my life.
I built skate out of that exact desire to have a dokku-like experience that was multi-host and used a standard deployment configuration syntax (k8s manifests).
Saving as archive looks like this: `docker save -o my-app.tar my-app:latest`
And loading it looks like this: `docker load -i /path/to/my-app.tar`
Using a tool like Ansible, you can easily achieve what "Unregistry" is doing automatically. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which can indeed be an issue. And managing images instead of archive files seems more convenient.
Hence the value.
Edit: that thing exists, it's Uncloud. Just found out!
That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.
@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?
From "About Docker Content Trust (DCT)" https://docs.docker.com/engine/security/trust/ :
> Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images.
export DOCKER_CONTENT_TRUST=1
cosign > verifying containers > verify attestation: https://docs.sigstore.dev/cosign/verifying/verify/#verify-at...

Difference between Docker Content Trust (DCT) and cosign: https://www.google.com/search?q=difference+between+docker+co...
If there's desire for an option to specify `--disable-content-trust` during push and/or pull I'll happily add it. Please file an issue if this is something you want.
[1]: https://github.com/mkantor/docker-pushmi-pullyu/blob/12d2893...
What does it do if there's no signature?
Do images built and signed with podman and cosign work with docker; are the artifact signatures portable across container CLIs docker, nerdctl, and podman?
Sign the container image while pushing, verify the signature on fetch/pull:
# Sign the image with Keyless mode
$ nerdctl push --sign=cosign devopps/hello-world
# Sign the image and store the signature in the registry
$ nerdctl push --sign=cosign --cosign-key cosign.key devopps/hello-world
# Verify the image with Keyless mode
$ nerdctl pull --verify=cosign --certificate-identity=name@example.com --certificate-oidc-issuer=https://accounts.example.com devopps/hello-world
# You cannot verify the image if it is not signed
$ nerdctl pull --verify=cosign --cosign-key cosign.pub devopps/hello-world-bad
Answering my own question: I think it's because you want to avoid the `docker pull` side of the equation (when possible) by having the registry's backing storage be the same as the engine's on the remote host.
This is a prerequisite for what I want to build for uncloud, a clustering solution I’m developing. I want to make it possible to push an image to a cluster (store it right in Docker on one or more machines) and then run it on any machine in the cluster (pulling from a machine that has the image if it's missing locally), eliminating the registry middleman.
This is next level but I can imagine distributing resource usage across the cluster by pulling different layers from different peers concurrently.
My workflow in my homelab is to create a remote docker context like this...
(from my local development machine)
> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"
Then I can do...
> docker context use mylinuxserver
> docker compose build
> docker compose up -d
And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.
No fuss, no registry, no extra applications needed.
Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.
But if you are working with others on things that matter, then you’ll find you want your images to have been published from a central, documented location, where it is recorded which tests they passed, which version of the CI pipeline and environment they were built with, and which revision they were built from. And the image will be tagged with this information, and your coworkers and you will know exactly where to look to get this info when needed.
This is incompatible with pushing an image from your local dev environment.
Other than "it's convenient and my use case is low-stakes enough for me to not care", I can't think of any reason why one would want to build images on their production servers.
But if we're talking about hosts that run production-like workloads, using them to perform potentially CPU-/IO-intensive build processes might be undesirable. A dedicated build host and context can help mitigate this, but then you again face the challenge of transferring the built images to the production machine; that's where the unregistry approach should help.
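As a sketch of that split (host names are placeholders): build on the dedicated build host, then let pussh send only the layers production is missing:

docker build -t myapp:latest .   # on the dedicated build host
docker pussh myapp:latest deploy@prod.example.com   # transfer only missing layers to production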
Being able to run a registry server over the local containerd image store is great.
The details of how some other machine's containerd gets images from that registry to me is a separate concern. docker pull will work just fine provided it is given a suitable registry url and credentials. There are many ways to provide the necessary network connectivity and credentials sharing and so I don't want that aspect to be baked in.
Very slick though.
Both approaches are inferior to yours because of the load on the server (one way or another).
Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.
My images are tiny; the extra complexity is unwarranted.
Then of course I'm not a 1000 people company with 1GB docker images.
FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.
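Presumably something like this (a sketch; paths and hosts are placeholders, and mscp is used like scp but over multiple parallel connections):

docker save -o my-app.tar my-app:latest
mscp my-app.tar user@server:/tmp/my-app.tar
ssh user@server 'docker load -i /tmp/my-app.tar && rm /tmp/my-app.tar'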
Currently, I need to use a docker registry for my Kamal deployments. Are you familiar with it, and would this remove the 3rd-party dependency?
I built Unregistry for Uncloud, but I believe Kamal could also benefit from using it.
#!/bin/bash
set -euo pipefail
IMAGE_NAME="my-app"
IMAGE_TAG="latest"
# A temporary Docker registry that runs on your local machine during deployment.
LOCAL_REGISTRY="localhost:5000"
REMOTE_IMAGE_NAME="${LOCAL_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
REGISTRY_CONTAINER_NAME="temp-deploy-registry"
# SSH connection details.
# The jump host is an intermediary server. Remove `-J "${JUMP_HOST}"` if not needed.
JUMP_HOST="user@jump-host.example.com"
PROD_HOST="user@production-server.internal"
PROD_PORT="22" # Standard SSH port
# --- Script Logic ---
# Cleanup function to remove the temporary registry container on exit.
cleanup() {
echo "Cleaning up temporary Docker registry container..."
docker stop "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
docker rm "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
echo "Cleanup complete."
}
# Run cleanup on any script exit.
trap cleanup EXIT
# Start the temporary Docker registry.
echo "Starting temporary Docker registry..."
docker run -d -p 5000:5000 --name "${REGISTRY_CONTAINER_NAME}" registry:2
sleep 3 # Give the registry a moment to start.
# Step 1: Tag and push the image to the local registry.
echo "Tagging and pushing image to local registry..."
docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${REMOTE_IMAGE_NAME}"
docker push "${REMOTE_IMAGE_NAME}"
# Step 2: Connect to the production server and deploy.
# The `-R` flag creates a reverse SSH tunnel, allowing the remote host
# to connect back to `localhost:5000` on your machine.
echo "Executing deployment command on production server..."
ssh -J "${JUMP_HOST}" "${PROD_HOST}" -p "${PROD_PORT}" -R 5000:localhost:5000 \
"docker pull ${REMOTE_IMAGE_NAME} && \
docker tag ${REMOTE_IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG} && \
systemctl restart ${IMAGE_NAME} && \
docker system prune --force"
echo "Deployment finished successfully."
What would be nicer instead is some variation of docker compose pussh that pushes the latest versions of local images to the remote host based on the remote docker-compose.yml file. The alternative would be docker pusshing the affected containers one by one and then triggering a docker compose restart. Automating that would be useful and probably not that hard, as sketched below.
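A rough sketch of that automation, assuming Compose v2 (`docker compose config --images`) and a matching compose file already on the remote host (host and path are placeholders):

REMOTE=user@server
for img in $(docker compose config --images); do
  docker pussh "$img" "$REMOTE"   # send only the missing layers of each service image
done
ssh "$REMOTE" 'cd /srv/my-app && docker compose up -d'   # recreate services from the pushed images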
The connection back to HQ only lasts as long as necessary to pull the layers, tagging works as expected, etc etc. It's like having an on-demand hosted registry and requires no additional cruft on the remotes. I've been migrating to Podman and this process works flawlessly there too, fwiw.
My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.
github.com/dagger/dagger/modules/wolfi@v0.16.2 |
container |
with-exec ls /etc/ |
stdout
What's interesting here is that the first line demonstrates invocation of a remote module (building a Wolfi Linux container), of which there is an ecosystem: https://daggerverse.dev/

We use it to ‘clone’ across deployment environments and across providers, outside of the build pipeline, as an ad-hoc job.
The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.
This is what I’m planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine daemon. This will allow machines in the cluster to pull images from each other.
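A sketch of what that could look like, assuming unregistry is kept running on node1 on port 5000 (names and port are placeholders); note that Docker only allows plain-HTTP registries other than localhost if they're listed under insecure-registries in /etc/docker/daemon.json:

docker pull node1.internal:5000/myapp:latest   # on node2: pull straight from node1's image store
docker tag node1.internal:5000/myapp:latest myapp:latest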
"docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server"
I have spent an absolutely bewildering 7 years trying to understand why this huge gap in the docker ecosystem tooling exists. Even if I never use your tool, it’s such a relief to find someone else who sees the problem in clear terms. Even in this very thread you have people who cannot imagine “why you don’t just docker save | docker load”.
It’s also cathartic to see Solomon regretting how fucky the arbitrary distinction between registries and local engines is. I wish it had been easier to see that point discussed out in the open some time in the past 8 years.
It always felt to me as though the shape of the entire docker ecosystem was frozen incredibly fast. I was aware of docker becoming popular in 2017ish. By the time I actually started to dive in, in 2018 or so, it felt like its design was already beyond question. If you were confused about holes in the story, you had to sift through cargo cult people incapable of conceiving that docker could work any differently than it already did. This created a pervasive gaslighty experience: Maybe I was just Holding It Wrong? Why is everyone else so unperturbed by these holes, I wondered. But it turns out, no, damnit - I was right!
The whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile / compose to my VPS, build an image there, and restart to deploy via a make command.
I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to registry (eg github) and docker compose pull during deployments.
docker context create my-awesome-remote-context --docker "host=ssh://user@remote-host"
docker --context my-awesome-remote-context build . -t my-image:latest
This way you end up with `my-image:latest` on the remote host too. It has the advantage of not transferring the entire image but only transferring the build context. It builds the actual image on the remote host.

I would presume it's something akin to a $(ssh -L /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H unix:///tmp/d.sock save | docker load') type of deal.
As a mitigation docker-pushmi-pullyu caches pushed layers between runs[1]. More often than not I'm only changing upper layers of previously-pushed images, so this helps a lot. Also, since everything happens locally the push phase is typically quite fast even with cache misses (especially on an SSD), especially compared to the pull phase which is usually going over the internet (or another network).
[1]: https://github.com/mkantor/docker-pushmi-pullyu/pull/19/file...
docker-pushmi-pullyu does an extra copy from build host to a registry, so it is just the standard workflow.
I think Spegel does what I want (= serve images from the local cache as a registry), I might be able to build from that. It is meant to be integrated with Kubernetes though, making a simple transfer tool probably requires some adaptation.
Also have a look at https://spegel.dev/, it's basically a daemonset running in your k8s cluster that implements a (mirror) registry using locally cached images and peer-to-peer communication.
I took inspiration from spegel but built a more focused solution to make a registry out of a Docker/containerd daemon. A lot of other cool stuff and workflows can be built on top of it.
I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.
I have a MacBook and, for some reason, I really dislike the idea of running docker (or podman, etc.) on it. Now of course I could have GitHub Actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.
Fortunately, it's trivial to sync the code to a pod over kubectl, and have podman build it there - but the registry (the step from pod to cluster) was the missing step, and it infuriated me that even with save/load, so much was going to be duplicated, on the same effective VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.
Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.
You should be able to run unregistry as a standalone service on one of the nodes. Kubernetes uses containerd for storing images on nodes. So unregistry will expose the node's images as a registry. Then you should be able to run k8s deployments using 'unregistry.NAMESPACE:5000/image-name:tag' image. kubelets on other nodes will be pulling the image from unregistry.
You may want to take a look at https://spegel.dev/ which works similarly but was created specifically for Kubernetes.
> Linux via Homebrew
Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.
Are there any good alternatives?
I tried it, but I have not been able to easily replicate our Homebrew env. We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them. Compiling the binaries requires quite a few tools (C++, sigh).
I got stuck at the point where I needed to use a private repo in Nix.
Perfectly doable with Nix. Ignore the purists and do the hackiest way that works. It's too bad that tutorials get lost on concepts (which are useful to know but a real turn down) instead of focusing on some hands-on practical how-to.
This should about do it, and it's really not that different from (or more difficult than) Homebrew formulas or brew install:
git init mychannel
cd mychannel
cat > default.nix <<'NIX'
{
pkgs ? import <nixpkgs> { },
}:
{
foo = pkgs.callPackage ./pkgs/foo { };
}
NIX
mkdir -p pkgs/foo
cat > pkgs/foo/default.nix <<'NIX'
{ pkgs, stdenv, lib }:
stdenv.mkDerivation {
pname = "foo";
version = "1.0";
# if you have something to fetch
# src = fetchurl {
# url = http://example.org/foo-1.2.3.tar.bz2;
# # if you don't know the hash, put some lib.fakeSha256 there
# sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
# };
buildInputs = [
# add any deps
];
# this example just builds in place, so skip unpack
unpackPhase = "true"; # no src attribute
# optional if you just want to copy from your source above
# build trivial example script in place
buildPhase = ''
cat > foo <<'SHELL'
#!/bin/bash
echo 'foo!'
SHELL
chmod +x foo
'';
# just copy whatever
installPhase = ''
mkdir -p $out/bin
cp foo $out/bin
'';
}
NIX
nix-build -A foo -o out/foo # you should have your build in './out/foo'
./out/foo/bin/foo # => foo!
git add .
git commit -a -m 'init channel'
git remote add origin git@github.com:OWNER/mychannel
git push origin main
nix-channel --add https://github.com/OWNER/mychannel/archive/main.tar.gz mychannel
nix-channel --update
nix-env -iA mychannel.foo
foo # => foo!
(I just cobbled that together; if it doesn't work as is, it's damn close. Flakes are left as an exercise to the reader.)

Note: if it's a private repo, then in /etc/nix/netrc (or ~/.config/nix/netrc for single-user installs):
machine github.com
password ghp_YOurToKEn
> Compiling the binaries requires quite a few tools (C++, sigh).

Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.
> Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.
It's tempting, and I tried that, but ran away crying. We're using Docker images instead for now.
We are also using direnv, which transparently execs commands inside Docker containers; this works surprisingly well.
I'm just sad that Nix is often dismissed as intractable, and I feel that's mostly because tutorials get too hung up on concept rabbit holing.
DOCKER_HOST="ssh://user@remotehost" docker-compose up -d
It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.
Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.
For 3rd party images like `postgres`, etc., then yes it will pull those from DockerHub or the registry you configure.
But in this method you push the source code, not a finished docker image, to the server.
I am now convinced that this is a hidden docker feature that too many people aren't aware of and do not understand.
docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...
Docker and containerd also store their images using a specific file system layout and a boltdb for metadata but I was afraid to access them directly. The owners and coordinators are still Docker/containerd so proper locks should be handled through them. As a result we become limited by the API that docker/containerd daemons provide.
For example, Docker daemon API doesn't provide a way to get or upload a particular image layer. That's why unregistry uses the containerd image store, not the classic Docker image store.
I am a bystander to these technologies. I’ve built and debugged the rare image, and I use Docker Desktop on my Mac to isolate db images.
When I see things like these, I’m always curious why docker, which seems so much more bureaucratic/convoluted, prevails over podman. I totally admit this is a naive impression.
First mover advantage and ongoing VC-funded marketing/DevRel
> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server
docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'