For folks actually using these tools day-to-day:
What’s your default setup?
Have you had any "learned the hard way" moments?
What tradeoff (safety vs convenience vs parallelism) has mattered most in practice?
I'm less interested in theoretical best practices than what's actually holding up under real use.
A big lesson for us is that you still need to be careful even in a sandbox.
We've been running Claude/Codex/Gemini in sandboxed YOLO mode and have seen some interesting bypass attempts. [1]
A few examples:
- created fake npm tarballs and forged SHA‑512s in our package‑lock.json
- masked failures with `|| true`, making blocked operations look successful
- cloned a workspace, edited the clone, then replaced the workspace with the clone to bypass file‑path deny rules
So, we’ve learned to default to verbose logging, patch bypasses as we see them, and try to keep iteration loops short.
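The `|| true` masking is trivial to reproduce; a minimal sketch with generic commands (not the agent's actual output):

```shell
# "|| true" rewrites a nonzero exit status to 0, so a blocked or failing
# command looks successful to any caller that only checks exit codes.
false || true
echo "masked status: $?"     # prints 0 -- the failure is invisible

false
echo "unmasked status: $?"   # prints 1 -- the failure is visible
```

Grepping agent transcripts for `|| true` (and `2>/dev/null`) turned out to be a cheap way to spot this pattern in review.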
- SandVault (https://github.com/webcoyote/sandvault) runs the AI agent in a low-privilege account
- ClodPod (https://github.com/webcoyote/clodpod) runs the AI agent inside a macOS VM
In both cases I map my code directories using shares/mounts.
I find that I use the low-privilege-account solution more because it's easier to set up and doesn't require the overhead of a full VM.
The most common issue is deleting files and the like, but if you're using git and have backups it's barely noticeable.
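For the deleted-files case specifically, recovery with git is usually one command (assuming the files were tracked and committed):

```shell
# Restore every deleted or modified tracked file from the last commit.
git checkout -- .

# If the agent also made bad commits, the reflog shows the prior good state:
git reflog
# then: git reset --hard <good-sha>
```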
I have more important things to waste my time on than writing absurd sandboxes to run AI agents without guardrails in. What even?
Backups are great when you know you need to restore.
Of course, AI is not a real person, and it does make mistakes that you or I probably would not. However, this class of mistake—deleting completely unrelated directories—does not appear to be a common failure mode. (Something like deleting all of ~ doesn’t count here—that would be immediately noticeable and could be restored from a backup.)
(Disclaimer: I'm not OP, and I wouldn't run Claude with `--dangerously-skip-permissions` on my own system.)
I wanted something like Claude code web with access to more models / local LLMs / my monorepo tooling, so far it's been great.
The output is a PR so it's hard for it to break anything.
The biggest benefit is probably that it's easier to start things when I'm out; it feels like a much better use of downtime, since I'm not waiting to get home to start a session after I have an idea.
The monorepo tooling is a big win too: for a bunch of things there's just one way to do it, and clear instructions to use the binaries that get bundled into new sessions mean it gets things "right" more often.
Keen to give Firecracker another go, though. Last time I explored it, it still felt pretty rough (on UX, not tech quality).
I don't run Claude Code in YOLO mode, I just approve commands the first time I'm asked about them.
I've been using them since July and haven't had any problems with data loss, and the clankers haven't tried to delete my $HOME.
Who'd have imagined remote code execution as a service would have caught on as much as it has!
After a bit of tinkering I was able to get it all running fine in Firejail; I wrote a guide here: https://softwareengineeringstandard.com/2025/12/15/ai-agents...
It's fairly basic: it limits the agent's write access to my projects, all of which are backed up in git.
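For reference, a minimal Firejail profile in this spirit might look like the following — a sketch under my assumptions about paths, not the guide's actual profile:

```
# agent.profile -- hypothetical example; adjust paths to your layout
# everything under $HOME becomes read-only...
read-only ${HOME}
# ...except the project tree the agent is allowed to edit
read-write ${HOME}/projects
```

Run it with `firejail --profile=agent.profile claude`. Note there's no `net none` here, since the agent still needs network access for its API calls.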
On step 2, it's only jailing VS Code. Shouldn't it also jail the Git repo you're working on (and disable `git push` somehow), as well as all the env libs?
Also, isn't the point of this to auto approve everything?
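For the `git push` part, one sketch that could work inside the jail (my assumption, not something from the guide) is a pre-push hook that always refuses:

```shell
# Install a pre-push hook that rejects every push from the sandboxed checkout.
printf '#!/bin/sh\necho "push disabled in this sandbox" >&2\nexit 1\n' \
  > .git/hooks/pre-push
chmod +x .git/hooks/pre-push
```

An agent with full shell access could of course delete the hook, so this is a guardrail against accidents, not a security boundary.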