Show HN: Run LLMs in Docker for any language without prebuilding containers
20 points
4 days ago
| 4 comments
| github.com
I've been looking for a way to run LLMs safely without needing to approve every command. There are plenty of projects out there that run the agent in Docker, but their containers don't always include the dependencies I need.

Then it struck me. I already define project dependencies with mise. What if we could build a container on the fly for any project by reading the mise config?
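
For illustration only, here's a rough sketch of that idea in TypeScript (the tool instruments Node.js, per a comment below): read the project's mise config and emit a Dockerfile that installs mise and the declared tools inside the container. The file names, base image, and install paths are assumptions for the sake of the example, not the actual agent-en-place implementation.

    // Hypothetical sketch: turn a project's mise config into a Dockerfile.
    // Assumes the config lives in mise.toml or .mise.toml at the project root.
    import { existsSync, writeFileSync } from "node:fs";

    function generateDockerfile(baseImage = "debian:bookworm-slim"): string {
      const config = ["mise.toml", ".mise.toml"].find((f) => existsSync(f));
      if (!config) throw new Error("no mise config found in this project");
      return [
        `FROM ${baseImage}`,
        "RUN apt-get update && apt-get install -y curl git ca-certificates",
        // Install mise itself, then let it install whatever the project declares.
        "RUN curl -fsSL https://mise.run | sh",
        'ENV PATH="/root/.local/bin:/root/.local/share/mise/shims:$PATH"',
        "WORKDIR /workspace",
        `COPY ${config} ./`,
        "RUN mise trust && mise install",
      ].join("\n");
    }

    writeFileSync("Dockerfile.agent", generateDockerfile());
    console.log("wrote Dockerfile.agent; build with: docker build -f Dockerfile.agent .");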

I've been using agent-en-place for a couple of weeks now, and it's working great! I'd love to hear what y'all think.

killingtime74
2 hours ago
[-]
This weekend I got Gemini to do the same for me with scripts, but in Firecracker VMs rather than Docker.
reply
verdverm
9 hours ago
[-]
These are a dime a dozen now, with many different takes; I have my own too.
reply
mistrial9
8 hours ago
[-]
> Intercepts all commands via Node.js instrumentation
reply
KolmogorovComp
9 hours ago
[-]
Thanks for sharing, works well
reply
verdverm
9 hours ago
[-]
I've been working on something similar, but geared towards integration with VS Code. Builds on CUE + Dagger via https://github.com/hofstadter-io/hof/tree/_next/examples/env

This lets you run commands in a containerized environment not only at the current state of the project, but also at any point in its history.

> .go-version

I would not call this idiomatic Go; you can get the version my project requires from my go.mod file. IMO, a single file with the inputs your tool needs would be preferable to a bunch of single-line files, but ideally it could infer that on its own by looking at the language files that already exist.
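
Purely as an illustration of that kind of inference (not hof's or agent-en-place's actual code), a tool could pull the toolchain version straight from go.mod rather than requiring a separate .go-version file; a minimal TypeScript sketch:

    // Hypothetical sketch: infer the Go version from an existing go.mod
    // instead of a one-line .go-version file.
    import { existsSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    function inferGoVersion(dir = "."): string | undefined {
      const modPath = join(dir, "go.mod");
      if (!existsSync(modPath)) return undefined;
      // go.mod declares the toolchain on its own line, e.g. "go 1.22.3".
      const match = readFileSync(modPath, "utf8").match(/^go\s+(\S+)\s*$/m);
      return match?.[1];
    }

    console.log(inferGoVersion() ?? "no go.mod found");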

reply