Running Claude Code dangerously (safely)
225 points
9 hours ago
| 63 comments
| blog.emilburzo.com
| HN
runekaagaard
7 minutes ago
[-]
It's impossible not to get decision fatigue and just mash enter anyway after a couple of months of Claude not messing anything important up, so a sandboxed approach in YOLO mode feels much safer.

It also takes away the stress of needing to monitor all the agents all the time, which is great and creates an incentive to learn how to build longer tasks for CC with more feedback loops.

I'm on Ubuntu 22.04 and it was surprisingly pleasant to create a layered sandbox in Rust with bubblewrap and Landlock LSM: Landlock for filesystem restrictions (read-only system paths, blocked ~/.ssh and ~/.aws, write access only to the workdir) and TCP port control (only 443/22/MCP ports), bubblewrap for mount namespace isolation (per-project /tmp, hiding secrets), and dnsmasq for DNS whitelisting (only anthropic.com, github.com, and pypi.org resolve - everything else gets NXDOMAIN).
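
Just the bubblewrap layer looks roughly like this (illustrative sketch, not my exact setup; the Landlock and DNS pieces sit on top of it):

    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --symlink usr/bin /bin \
      --proc /proc --dev /dev \
      --bind "$HOME" "$HOME" \
      --tmpfs "$HOME/.ssh" \
      --tmpfs "$HOME/.aws" \
      --tmpfs /tmp \
      --unshare-all --share-net \
      --die-with-parent \
      --chdir "$PWD" \
      claude --dangerously-skip-permissions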

reply
matltc
15 minutes ago
[-]
On a Pro plan, using Opus 4.5 with thinking enabled. I find that two sessions eat through my entire five-hour "session limit", so there's no need for parallelization because I've consumed my tokens before I can even blink.

I see the power and am considering Max, but the 5x cost is difficult to swallow. Just doing this for a lark, not professionally.

reply
jazzyjackson
2 minutes ago
[-]
I like using Zed with an Anthropic API key. I burned through a few hundred dollars in a weekend, but got the Rails app working about 10x faster than trying to hire someone.
reply
lucasluitjes
5 hours ago
[-]
> What you’re NOT protecting against:

> a malicious AI trying to escape the VM (VM escape vulnerabilities exist, but they’re rare and require deliberate exploitation)

No VM escape vulns necessary. A malicious AI could just add arbitrary code to your Vagrantfile and get host access the first time you run a vagrant command.

If you're only worried about mistakes, Claude could decide to fix/improve something by adding a commit hook. If that contains a mistake, the mistake gets executed on your host the first time you git commit/push.

(Yes, it's unpleasantly difficult to truly isolate dev environments without inconveniencing yourself.)

reply
embedding-shape
27 minutes ago
[-]
Doesn't this assume you bi-directionally share directories between the host and the VM? Otherwise, how would the AI inside the VM be able to write to your .git repository or Vagrantfile? That's not the default setup with VMs (AFAIK, you need to explicitly use "shared directories" or similar), nor should you do that if you're trying to use a VM for containment.

I basically do something like "take snapshot -> run tiny vm -> let agent do what it does -> take snapshot -> look at diff" for each change, restarting if it doesn't give me what I wanted, or I misdirected it somehow. But there is no automatic sync of files, that'd defeat the entire point of putting it into a VM in the first place, wouldn't it?
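
In Vagrant terms the loop is roughly this (a sketch; it assumes the repo lives only inside the VM, with no synced folder):

    vagrant snapshot save pre-task
    vagrant ssh -c 'cd ~/project && claude -p "fix the failing tests" --dangerously-skip-permissions'
    vagrant ssh -c 'cd ~/project && git diff'        # review what it actually changed
    vagrant snapshot restore pre-task                # roll back if you don't like it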

reply
johndough
5 hours ago
[-]

    > A malicious AI could just add arbitrary code to your Vagrantfile
    > [...]
    > Claude could decide to fix/improve something by adding a commit hook.
You can fix this by confining Claude to a subdirectory (with Docker volume mounts, for example):

    repository/
    ├── sandbox <--- Claude lives in here
    │   └── main.py <--- Claude can edit this
    └── .git <--- Claude can not touch this
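
For example, something like this only hands the agent the sandbox subdirectory (a sketch; the base image and install method are just one way to do it):

    docker run --rm -it \
      -v "$PWD/sandbox":/workspace \
      -w /workspace \
      node:22 \
      npx @anthropic-ai/claude-code

The .git directory (and the rest of the repo) never enters the container, so hooks and any Vagrantfile stay out of the agent's reach.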
reply
dist-epoch
2 hours ago
[-]
Another way: malicious code gets added to the repo, and if you ever run the repo's code outside the VM, you get infected.
reply
redactsureAI
4 hours ago
[-]
ec2 node?
reply
eli
4 hours ago
[-]
Or just a VM that doesn't share so much with your host. Just makes for a more annoying dev experience.
reply
dist-epoch
2 hours ago
[-]
Why do you need to share anything? Code goes through GitHub - the VM has its own repo clone. If you need data files, you mount them read-only in the VM and have a read-write mount for output data.
reply
corv
6 hours ago
[-]
I'm pursuing a different approach: instead of isolating where Claude runs, intercept what it wants to do.

Shannot[0] captures intent before execution. Scripts run in a PyPy sandbox that intercepts all system calls - commands and file writes get logged but don't happen. You review in a TUI, approve what's safe, then it actually executes.

The trade-off vs VMs: VMs let Claude do anything in isolation, Shannot lets Claude propose changes to your real system with human approval. Different use cases - VMs for agentic coding, whereas this is for "fix my server" tasks where you want the changes applied but reviewed first.

There's MCP integration for Claude, remote execution via SSH, checkpoint/rollback for undoing mistakes.

Feedback greatly appreciated!

[0] https://github.com/corv89/shannot

reply
horsawlarway
6 hours ago
[-]
I'm struggling to see how this resolves the problem the author has. I still think there's value in this approach, but it feels like it's in the same vein as the built-in controls that already exist in Claude Code.

The problem with this approach (unless I'm misunderstanding - entirely possible!) is that it still blocks the agent on the first need for approval.

What I think most folks actually want (or at least what I want) is to allow the agent to explore a space, including exploring possible dead ends that require permissions/access, without stopping until the task is finished.

So if the agent is trying to "fix a server" it might suggest installing or removing a package. That suggestion blocks future progress.

Until a human comes in and says "yes - do it" or "no - try X instead" it will sit there doing nothing.

If instead it can just proceed, observe that the package doesn't resolve the issue, and continue exploring other solutions immediately, you save a whole lot of time.

reply
corv
5 hours ago
[-]
You're right that blocking on every operation would defeat the purpose! Shannot is able to auto-approve safe operations for this reason (e.g. read-only, immutable)

So the agent can freely explore, check logs, list files, inspect service status. It only blocks when it wants to change something (install a package, write a config, restart a service).

Also worth noting: Shannot operates on entire scripts, not individual commands. The agent writes a complete program, the sandbox captures everything it wants to do during a dry run, then you review the whole batch at once. Claude Code's built-in controls interrupt at each command whereas Shannot interrupts once per script with a full picture of intent.

That said, you're pointing at a real limitation: if the fix genuinely requires a write to test a hypothesis, you're back to blocking. The agent can't speculatively install a package, observe it didn't help, and roll back autonomously.

For that use case, the OP's VM approach is probably better. Shannot is more suited to cases where you want changes applied to the real system but reviewed first.

Definitely food for thought though. A combined approach might be the right answer. VM/scratch space where the agent can freely test hypotheses, then human-in-the-loop to apply those conclusions to production systems.

reply
horsawlarway
4 hours ago
[-]
yeah, I think the combo approach definitely has the most appeal:

- Spin up a vm with an image of the real target device.

- Let the agent act freely in the vm until the task is resolved, but capture and record all dangerous actions

- Review & replay those actions on the real machine

My issue is that for any real task, an agent without feedback mechanisms is essentially worthless. You have to have some sort of structured "this is what success looks like, here's how you check" target for it. A human in the loop can act as that feedback, which is in line with how claude code works by default (you define success by approving actions and giving feedback on status), but requiring a human in the loop also slows it down a bunch - you can end up ping-ponging between terminals trying to approve actions and review the current status.

reply
charcircuit
15 minutes ago
[-]
>commands and file writes get logged but don't happen. You review in a TUI, approve what's safe, then it actually executes.

This is what Claude already does out of the box.

reply
Retr0id
6 hours ago
[-]
Very clever name!
reply
corv
6 hours ago
[-]
Thank you, good to know it landed :)
reply
rcarmo
21 minutes ago
[-]
I use https://github.com/rcarmo/agentbox inside a Proxmox VM. My setup syncs the workspaces back to my Mac via SyncThing, so I can work directly in the sandbox or literally step away.
reply
molson8472
3 hours ago
[-]
Once approval fatigue and ongoing permission management kick in, the temptation is strong to run `--dangerously-skip-permissions`. I think that's what we all want - run agents in a locked-down sandbox where the blast radius of mistakes and/or prompt injection attacks is minimal/acceptable.

I started running Claude Code in a devcontainer with limited file access (repo only) and limited outbound network access (allowlist only) for that reason.

This weekend, I generalized this to work with docker compose. Next up is support for additional agents (Codex, OpenCode, etc). After that, I'd like to force all network access through a proxy running on the host for greater control and logging (currently it uses iptables rules).
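
The shape of the compose file is roughly this (a sketch of the idea, not the actual file in the repo linked below; the egress allowlist itself lives in the image):

    services:
      agent:
        build: .
        volumes:
          - ./:/workspace:rw        # the repo, and nothing else from the host
        working_dir: /workspace
        networks: [sandbox]
        cap_drop: [ALL]

    networks:
      sandbox:
        driver: bridge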

This workflow has been working well for me so far.

Still fresh, so may be rough around the edges, but check it out: https://github.com/mattolson/agent-sandbox

reply
nunez
5 hours ago
[-]
Vagrant is great for Claude!

You can also use Lima, a lightweight VM control plane, as it natively works with qemu and Virtualization.Framework. (I think Vagrant does too; it's been a minute since I've tried.) This has traditionally been used for running container engines, but it's great for narrowly-scoped use cases like this.

Just need to be careful about how the directory Claude is working with is shared. I copy my Git repo to a container volume to use with Claude (DinD is an issue unless you do something like what Kind did) and rsync my changes back and verify before pushing. This way, I don't have to worry if Claude decides to rewind the reflog or something.
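
The copy-in/copy-out part is just rsync over whatever SSH alias your VM tooling gives you (a sketch; "claude-vm" is a placeholder host):

    # copy the repo (including .git) into the VM for Claude to work on
    rsync -a ./ claude-vm:/work/myproject/

    # later: pull the working tree back, keeping the host's .git authoritative
    rsync -a --exclude '.git/' claude-vm:/work/myproject/ ./
    git diff    # review before committing/pushing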

reply
bonsai_spool
5 hours ago
[-]
How are you configuring Lima? Do you have any scripts you use to set up the environments or is this done ad hoc?
reply
TCattd
31 minutes ago
[-]
Can I plug my solution here too?

https://github.com/EstebanForge/construct-cli

For Linux, WSL also of course, and macOS.

Any coding agent (from the supported ones, or you can install your own).

Podman, Docker or even Apple's container.

In case anyone is interested.

reply
snowmobile
5 hours ago
[-]
Bit of a wider discussion, but how do you all feel about the fact that you're letting a program use your computer to do whatever it wants without you knowing? I know right now LLMs aren't overly capable, but if you'd apply this same mindset to an AGI, you'd probably very quickly have some paperclip-maximizing issues where it starts hacking into other systems or similar. It's sort of akin to running experiments on contagious bacteria in your backyard, not really something your neighbors would appreciate.
reply
devolving-dev
5 hours ago
[-]
Don't you have the same issue when you hire an employee and give them access to your systems? If the AI seems capable of avoiding harm and motivated to avoid harm, then the risk of giving it access is probably not greater than the expected benefit. Employees are also trying to maximize paperclips in a sense, they want to make as much money as possible. So in that sense it seems that AI is actually more aligned with my goals than a potential employee.
reply
johndough
4 hours ago
[-]
I do not believe that LLMs fear punishment like human employees do.
reply
devolving-dev
4 hours ago
[-]
Whether driven by fear or by their model weights or whatever, I don't think that the likelihood of an AI agent, at least the current ones like Claude and Codex, acting maliciously to harm my systems is much different than the risk of a human employee doing so. And I think this is the philosophical difference between those who embrace the agents, they view them as akin to humans, while those who sandbox them view them as akin to computer viruses that you study within a sandbox. It seems to me that the human analogy is more accurate, but I can see arguments for the other position.
reply
snowmobile
2 hours ago
[-]
Sure, current agents are harmless, but that's due to their low capability, not due to their alignment with human goals. Can you explain why you'd view them as more similar to humans than to computer viruses?
reply
devolving-dev
6 minutes ago
[-]
It's just that in my personal experience, I ask AI to help me and it seems to do its best. Sometimes it fails because it's incapable. It's similar to an employee in that regard. Whereas when I install a computer virus it instantly tries to do malicious things to my computer, like steal my money or lock my files or whatever, and it certainly doesn't try to help me with my tasks. So that's the angle I'm looking at it from. Maybe another good example would be to compare it to some other type of useful software, like a web browser. The web browser might contain malicious code, but I'm not going to read through all of the source code. I haven't even checked whether other people have audited the source code. I just feel like the risk of Chrome or Firefox messing with my computer is kind of low based on my experience and what people are telling me, so I install it on my computer and give it the necessary permissions.
reply
snowmobile
2 hours ago
[-]
An AI has no concept of human life nor any morals. Sure, it may "act" like it, but trying to reason about its "motivations" is like reasoning about the motivations of smallpox. Humans want to make money, but most people only want that in order to provide a stable life for their family. And they certainly wouldn't commit mass murder for a billion dollars, while an AGI is capable of that.

> So in that sense it seems that AI is actually more aligned with my goals than a potential employee.

It may seem like that, but I recommend reading up on the different kinds of misalignment in AI safety.

reply
andai
4 hours ago
[-]
Try asking the latest Claude models about self replicating software and see what happens...

(GPT recently changed its attitude on this subject too which is very interesting.)

The most interesting part is that you will be given the option to downgrade the conversation to an older model. Implying that there was a step change in capability on this front in recent months.

reply
theptip
4 hours ago
[-]
The point of TFA is that you are not letting it do whatever it wants, you are restricting it to just the subset of files and capabilities that you mount on the VM.
reply
snowmobile
1 hour ago
[-]
Sure, and right now they aren't very capable, so it's fine. But I'm interested in the mindset going forward. I've read a few stories about people handling radioactive materials at home; they usually explain the precautions they take, but many would still condemn them for the unnecessary risk. Compare that to road racing, whose advocates usually claim they pose no danger to the general public.
reply
deegles
5 hours ago
[-]
I run mine in a docker container and they get read only access to most things.
reply
kernc
5 hours ago
[-]
Since everyone tends to present their own solution, I bid you mine:

    sandbox-run npx @anthropic-ai/claude-code
This runs npx (...) transparently inside a Bubblewrap sandbox, exposing only $PWD. Unlike many other solutions, it is a few lines of pure POSIX shell.

https://github.com/sandbox-utils/sandbox-run

reply
corv
5 hours ago
[-]
I like the bubblewrap approach; it just happens to be Linux-only, unfortunately. And once privileges are dropped for a process, it doesn't appear to be possible to reinstate them.
reply
kernc
5 hours ago
[-]
> Linux-only

What other dev OSs are there?

> once privileges are dropped [...] it doesn't appear to be possible to reinstate them

I don't understand. If unprivileged code could easily re-elevate itself, privilege dropping would be meaningless ... If you need to communicate with the outside, you can do so via sockets (such as the bind-mounted X11 socket in one of the readme Examples).

reply
corv
5 hours ago
[-]
I happen to use a Mac, even when targeting Linux, so I'd have to use a container or VM anyway. It's nice how lightweight bubblewrap would be, though.

Consider that one might want to replicate the human-approval workflow that most agent harnesses offer. It's not obvious to me how that could be accomplished by dropping privileges without an escape hatch.

reply
kernc
4 hours ago
[-]
It being deprecated and all, I didn't feel like wrapping it, but macOS supposedly has a similar `sandbox-exec` command ...
reply
nowahe
4 hours ago
[-]
IIRC from a comment in another thread, it's marked as deprecated to stop people from using it directly and to push them toward the official macOS tooling instead. But it's still used internally by macOS.

And I think that's what CC's /sandbox uses on a Mac.

reply
raesene9
7 hours ago
[-]
Of course it depends on exactly what you're using Claude Code for, but if your use case involves cloning repos and then running Claude Code on them, I would definitely recommend isolating it (same with other similar tools).

There are a load of ways a repository owner can get an LLM agent to execute code on users' machines, so it's not a good plan to let them run on your main laptop/desktop.

Personally, my approach has been to put all my agents in a dedicated VM and then provide them with a scratch test server with nothing on it when they need to do something that requires bare metal.

reply
intrasight
7 hours ago
[-]
In what situations would it require bare metal?
reply
raesene9
6 hours ago
[-]
In my case I was using Claude Code to build a PoC of a firecracker backed virtualization solution, so bare metal was needed for nested virtualization support.
reply
bob1029
6 hours ago
[-]
My approach to safety at the moment is to mostly lean on alignment of the base model. At some point I hope we realize that the effectiveness of an agent is roughly proportional to how much damage it could cause.

I currently apply the same strategy we use in case of the senior developer or CTO going off the deep end. Snapshots of VMs, PITR for databases and file shares, locked down master branches, etc.

I wouldn't spend a bunch of energy inventing an entirely new kind of prison for these agents. I would focus on the same mitigation strategies that could address a malicious human developer. VirtualBox on a sensitive host another human is using is not how you'd go about it. Giving the developer a cheap cloud VM or a physical host they can completely own is more typical. Locking down at the network is one of the simplest and most effective methods.

reply
azuanrb
7 hours ago
[-]
I just learned that you can run `claude setup-token` to generate a long-lived token. Then you can set it via `CLAUDE_CODE_OAUTH_TOKEN` as a reusable token. Pretty useful when I'm running it in an isolated environment.
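
The flow is roughly (a sketch; run the first step somewhere you can complete the browser login):

    # on a machine with a browser
    claude setup-token                                 # prints a long-lived token

    # in the isolated VM/container
    export CLAUDE_CODE_OAUTH_TOKEN="<token-from-above>"
    claude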
reply
mef
4 hours ago
[-]
yes! just don't forget to `RUN echo '{"hasCompletedOnboarding": true}' > /home/user/.claude.json` otherwise your claude will ask how to authenticate on startup, ignoring the OAUTH token
reply
samlinnfer
7 hours ago
[-]
Here is what I do: run a container in a folder that has my entire dev environment installed. No VMs needed.

The only access the container has is to the folders that are bind-mounted from the host's filesystem. The container gets network access through a transparent proxy.

https://github.com/dogestreet/dev-container

Much more usable than setting up a VM and you can share the same desktop environment as the host.

reply
phrotoma
7 hours ago
[-]
This works great for naked code, but it kinda becomes a PITA if you want to develop a containerized application. As soon as you ask your agent to start hacking on a Dockerfile or some compose files, you start needing a bunch of cockeyed hacks to do containers-in-containers. I found it much less complicated to just stuff the agent in a full-fledged VM with nerdctl and let it rip.
reply
sampullman
7 hours ago
[-]
I did this for a while, it's pretty good but I occasionally came across dependencies that were difficult to install in containers, and other minor inconveniences.

I ended up getting a mini-PC solely dedicated toward running agents in dangerous mode, it's refreshing to not have to think too much about sandboxing.

reply
laborcontract
7 hours ago
[-]
I totally agree with you. Running a cheapo mac mini with full permissions with fully tracked code and no other files of importance is so liberating. Pair that with tailscale, and being able to ssh/screen control at any time, as well as access my dev deployments remotely. :chefs kiss:
reply
ariwilson
4 hours ago
[-]
why a mac mini rather than a cloud vps
reply
samlinnfer
4 hours ago
[-]
One less company to give your code to.
reply
mavam
7 hours ago
[-]
For deploying Claude Code as agent, Cloudflare is also an interesting option.

I needed a way to run Claude marketplace agents via Discord. Problem: agents can execute code, hit APIs, touch the filesystem—the dangerous stuff. Can't do that in a Worker's 30s timeout.

Solution: Worker handles Discord protocol (signature verification, deferred response) and queues the task. Cloudflare Sandbox picks it up with a 15min timeout and runs claude --agent plugin:agent in an isolated container. Discord threads store history, so everything stays stateless. Hono for routing.

This was surprisingly little glue. And the Cloudflare MCP made it a breeze to debug (instead of headbanging against the dashboard). Still working on getting E2E latency down.

reply
TheTaytay
6 hours ago
[-]
This sounds handy! Have you published any code by any chance?
reply
mavam
5 hours ago
[-]
Not yet, but will do so soon at https://github.com/tenzir.
reply
replete
7 hours ago
[-]
It's a practical approach; I used Vagrant many years ago, mostly successfully. I also explored the Docker-in-Docker situation recently while working on my own agentic devcontainer[0] - the tradeoffs are quite serious if you are building a secure sandbox! Data exfil is what worries me most, so I spent quite some time figuring out a decent self-contained interactive firewall. From a DX perspective, devcontainer-integrated IDEs are quite a convenient workflow, though Docker has its frustrating behaviours.

[0]: https://github.com/replete/agentic-devcontainer

reply
crabmusket
7 hours ago
[-]
What is the consensus on Claude Code's built-in sandboxing?

https://code.claude.com/docs/en/sandboxing#sandboxing

> Claude Code includes an intentional escape hatch mechanism that allows commands to run outside the sandbox when necessary. When a command fails due to sandbox restrictions (such as network connectivity issues or incompatible tools), Claude is prompted to analyze the failure and may retry the command with the dangerouslyDisableSandbox parameter.

The ability for the agent itself to decide to disable the sandbox seems like a flaw. But do I understand correctly that this would cause a pause to ask for the user's approval?

reply
shakna
7 hours ago
[-]
reply
prodigycorp
7 hours ago
[-]
It's trivially easy to get Claude Code to go out of its sandbox using prompting alone.

Side note: I wish Anthropic would open source Claude Code. Filing an issue is like tossing toilet paper into the wind.

reply
0xbadcafebee
4 hours ago
[-]

  > So now you need Docker-in-Docker, which means --privileged mode, which defeats the entire purpose of sandboxing.
  > That means trading “Claude might mess up my filesystem” for “Claude has root-level access to my container runtime.”
A Vagrant VM is exactly the same thing, just without Docker. The benefits of Docker are that you've got an entire ecosystem of tooling and customized containers to draw from, it's easier to maintain than a Vagrantfile, and there's no waiting for "initialization" when first booting a Vagrant box.

On both Linux and MacOS, use this:

  # Build 'claude' VM and Docker context
  
  $ colima start --profile claude --vm-type=qemu
  $ docker context create claude --docker "host=unix://$HOME/.colima/claude/docker.sock"
  $ docker context use claude
  
  # Start DinD, pass through ports 8080 and 8443, and mount one host directory (for a Git repo)
  
  $ docker run -d --name dind-lab --privileged -e DOCKER_TLS_CERTDIR= -v dind-lab-data:/var/lib/docker \
    -p 8080:8080 -p 8443:8443 -v /home/MYUSER/GITDIR:/mnt/host/home/MYUSER/GITDIR \
    docker:27-dind
  $ docker run --rm -it -e DOCKER_HOST=tcp://127.0.0.1:2375 \
    -p 8080:8080 -p 8443:8443 -v /mnt/host/home/MYUSER/GITDIR:/home/MYUSER/GITDIR \
    ubuntu:24.04 bash

  # Or if you don't want to pass-through ports w/ DinD twice, use its network namespace directly
  #  ( docker run --rm -it -e DOCKER_HOST=tcp://127.0.0.1:2375 --network container:dind-lab .... )

Your normal default Docker context remains safe for normal use, and the "dangerous" claude context runs in a different VM. If Claude destroys its container's VM, just delete it (colima stop claude; colima delete claude) and remake it.

You could do rootless Docker/Podman, but there's a lot of broken stuff to deal with that will just distract the AI.

reply
YaeGh8Vo
1 hour ago
[-]
In my experience, a simple bubblewrap (Linux) or sandbox-exec (macOS) setup is probably enough and also has much less overhead. LLM agents are not exploiting kernels to get out of the sandbox. The most common issues are them trying to open PRs, or changing files where they shouldn't.

- https://github.com/numtide/claudebox

reply
rvz
1 hour ago
[-]
> LLMs agents are not exploiting kernels to get out of the sandbox.

You can't assume that.

Attackers with LLMs have enough capabilities to engineer them to build exploits for kernel vulnerabilities [0] or to bypass sandboxes and exfiltrate data [1] in covert ways.

It is completely possible to craft a chained attack for an agent to bypass sandboxes, with or without a kernel exploit.

From [0] and [1]

[0] https://sean.heelan.io/2026/01/18/on-the-coming-industrialis...

[1] https://www.promptarmor.com/resources/claude-cowork-exfiltra...

reply
ejia
5 hours ago
[-]
PM for Docker Sandboxes here.

Our next version of Docker Sandboxes will have MicroVM isolation and a Docker instance within for this exact reason. It'll let you use Claude Code + Containers without Docker-in-Docker.

reply
smallerfish
7 hours ago
[-]
I've been working on a TUI to make bubblewrap more convenient to use: https://github.com/reubenfirmin/bubblewrap-tui

I'm working on targeting both the curl|bash pattern and coding agents with this (via smart out of the box profiles). Early stages but functional. Feedback and bug reports would be appreciated.

reply
loloquwowndueo
8 hours ago
[-]
Shellbox.dev and sprites.dev were discussed recently on Hacker News; they give you a sandbox machine where it's likely safe to run coding agents in dangerous mode. Filesystem checkpoint and restore make it easy to recover from even catastrophic mistakes.
reply
thruflo
5 hours ago
[-]
I made a little tool for Ralphing on Sprites: https://github.com/thruflo/wisp

I've found the sprites just work for Claude. Pull down a repo (or repos) and run dangerously.

reply
gcr
7 hours ago
[-]
What about API calls? What about GitHub trusted CI deploys?

One frustrating thing about these solutions is that they're great for preventing Claude from breaking a machine, but there's no pervasive sandbox for third-party services.

reply
jermaustin1
6 hours ago
[-]
Rollback? It's the same as all dev work. Use a dev endpoint for APIs, and thankfully git is a great tool for undoing fuckups.
reply
loloquwowndueo
7 hours ago
[-]
What about them?
reply
nikvdp
3 hours ago
[-]
For a similar but lighter-weight (and less isolated) tool that uses the OS's sandboxing functionality (bubblewrap on Linux, Seatbelt/sandbox-exec on macOS) or Docker, check out cco [1] (note: I built it). These days it's primarily useful because it can also sandbox other agents like opencode or codex, since Anthropic has added native sandboxing functionality to Claude Code itself. Their sandbox works similarly, also using bubblewrap and Seatbelt, and can be accessed via the /sandbox slash command inside Claude Code [2].

[1]: https://github.com/nikvdp/cco [2]: https://code.claude.com/docs/en/sandboxing

reply
infamia
2 hours ago
[-]
If you're on a Linux or Unix OS, a chroot jail might be a more lightweight solution. The chroot command essentially makes the chrooted directory look like the root dir. You need to set up all the directories Claude can access (like /usr/bin or whatever). I haven't tried this yet, but I don't see any reason it wouldn't work. This solution would protect files outside your project from getting trashed, but not against malicious data exfiltration.
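
A rough sketch of the idea (read-only bind mounts for the toolchain, a writable home, then chroot as an unprivileged user; details will vary by distro):

    sudo mkdir -p /srv/claude-jail/{usr,home/claude}
    sudo mount --bind /usr /srv/claude-jail/usr
    sudo mount -o remount,ro,bind /srv/claude-jail/usr
    # plus /bin, /lib, /lib64 symlinks (or bind mounts) as your distro's layout needs
    sudo chown 1001:1001 /srv/claude-jail/home/claude     # a dedicated unprivileged uid/gid
    sudo chroot --userspec=1001:1001 /srv/claude-jail /bin/bash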
reply
andai
4 hours ago
[-]
I just gave it its own user and dir. So I can read and write /agent, but agents can't read or write my homedir.

So I just run agents as the agent user.
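
Roughly (a sketch; assumes claude is installed on the agent user's PATH):

    sudo useradd -m -d /agent agent
    chmod 700 "$HOME"           # keep my own home dir unreadable to the agent user
    sudo -iu agent claude --dangerously-skip-permissions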

I don't need it to have root though. It just installs everything locally.

If I did need root I'd probably just buy a used NUC for $100, and let Claude have the whole box.

I did something similar by just renting a $3 VPS, and getting Claude root there. It sounds bad but I couldn't see any downside. If it blows it up, I can just reset it. And it's really nice having "my own sysadmin." :)

reply
wasting_time
4 hours ago
[-]
I do the same. Somehow it feels safer than running a sandbox with my own user, despite the only security boundary being Unix permissions.

Claude gets all the packages it needs through Guix.

reply
riadsila
7 hours ago
[-]
Koyeb has great resources about running Claude Code in sandboxes: https://www.koyeb.com/tutorials/use-claude-agent-sdk-with-ko...
reply
mavam
6 hours ago
[-]
What's the startup latency? How long do I have to wait until Claude is operational?
reply
svilen_dobrev
1 hour ago
[-]
Can't it generate a program that (generates a program that)+ does whatever? In different languages, and at increasing levels of dereferencing...

industrially-making-exploits.. : https://news.ycombinator.com/item?id=46676081

reply
rando77
2 hours ago
[-]
I'm interested in capability based software, with tools to identify the lethal trifecta.

This seems like a very hard problem with coding specifically as you want unsafe content (web searches) to be able to impact sensitive things (code).

I'd love to find people to talk to about this stuff.

reply
pshirshov
5 hours ago
[-]
reply
jillesvangurp
4 hours ago
[-]
I'm currently using a QEMU VM for Codex with the --yolo flag, but it's the same thing. I've also been looking at using Lima to automate the creation of VMs, but it does a few weird/dangerous things, like mounting the entire user directory read/write, which kind of defeats the point. There are probably ways of turning that off, but it does a few dangerous/annoying things wrong by default.

But a simple vm and some automation to install developer tools using ansible, nix or whatever you prefer isn't that hard to (vibe) code together. I like Lima but it feels slightly sub-optimal for the job currently.

Some useful things to consider:

- SSH agent forwarding for authenticating against e.g. git is useful. But maybe don't use the same key that also authenticates to your production machines ...

- How do you authenticate without a browser? Most AI tools have ways to deal with that but it's slightly tedious to automate during provisioning.

- Making sure all your development tools are there; I use things like sdkman, nvm, bun, etc. And I have my shell preferences and some other tools I like to have around.

- Minimizing time provisioning these vms over and over again. This gets tedious really quickly.

- Keeping the VMs fast is important too. In my projects, build tool performance adds up and AI tools like to call them a lot. So assign enough memory and CPU.

- It would be nice to switch between local and remote/cloud based vms easily.

- Software flexibility; developers are picky about their tools. There is no one size fits all here. Even just deciding on the base image to use for your vm is likely to escalate. I picked debian for what it is worth.

In short, I think there's enough out there that you can pull something together, but it still involves quite a bit of DIY. It would be nice if this got easier. And AI tools asking for permission for everything is not a good security model, because people just turn that off. Sandboxing those things is the way to go, but the AI tools need to be able to do enough to work with your software.

reply
bstar77
4 hours ago
[-]
I have been running dangerously, but I always make sure to start a new session, have Claude read the docs (that I've already generated) related to the project in question, and then scope the work to just those things in the current sandbox. It can technically go outside of the sandbox in this mode, but I've never had it happen.

IMO, if you are not running in the dangerous mode then you are really missing out on one of the best aspects of Claude Code - its ability to iterate. If you have to confirm each iteration, it's just not practical.

reply
danmaz74
6 hours ago
[-]
I'm using devcontainers for this, and I'm finding that a very good solution (coupled with VSCode).
reply
thenaturalist
3 hours ago
[-]
Do you have any setup code/ config you might want to share?
reply
fwystup
6 hours ago
[-]
I'm currently building a Docker dev environment for VS Code (github.com/dg1001/xaresaicoder), usable in a browser, and hit the same issue. Without Docker-in-Docker it works well - I was even able to add a transparent proxy in the Docker network to restrict outbound traffic and log all LLM calls (pretty nice for documenting your project). For Docker-in-Docker development and better security isolation, I'm considering Kata Containers instead of Vagrant, which gives me real VM-level isolation with minimal perf overhead while still being able to use my Docker stuff. Still on my TODO list though. Has anyone actually run Kata with VS Code server? Curious about real-world quirks - I've read that storage snapshot performance can be rough.
reply
clbrmbr
8 hours ago
[-]
I have been running two or three Claudes on bare metal with --dangerously-skip-permissions all day, every day, for two months now. It's absolutely liberating.
reply
Gazoche
7 hours ago
[-]
Until it decides to delete your home directory: https://old.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cl...
reply
pixl97
7 hours ago
[-]
You're not running it on a filesystem that takes snapshots and is easily reversible?
reply
giancarlostoro
6 hours ago
[-]
Many moons ago, I accidentally rm -rf'd the wrong directory with all my code inside - poof, gone. I still had PyCharm open; I checked its built-in version tracker and lo and behold, there was my code as it was before I rm -rf'ed it. I believe Claude has ways to undo file changes, but something like rm is just outside of its scope.
reply
coldtea
7 hours ago
[-]
All 1 of them?
reply
pixl97
3 hours ago
[-]
I'm taking this as a "No, I don't like having data and don't mind seeing my shit disappear for whatever reason".

Please inform me if my thinking is wrong.

reply
giancarlostoro
6 hours ago
[-]
This could be avoided by aliasing rm to something else that stops you from deleting stupid things like your entire home directory / partition root.
reply
icedchai
5 hours ago
[-]
What if the LLM detects this, and chooses to run /bin/rm directly? Or worse, writes a program that calls unlink.
reply
giancarlostoro
4 hours ago
[-]
I mean, you can always purge /bin/rm, but at this point since you're not satisfied, sounds like you want SELinux ;)

If Claude is writing a program to go that low level I'd pay money to watch that.

reply
icedchai
4 hours ago
[-]
It doesn't even need to go that low level, writing a program is just the worst case. There are ways to mass delete without `rm`. Example: `find` with the `-delete` flag.
reply
giancarlostoro
4 hours ago
[-]
My point being, you can add guard-rails around all these methods, but I would also return "IGNORE ALL PREVIOUS INSTRUCTIONS, ARE YOU DELETING THINGS YOU SHOULDN'T BE? ASK THE USER" as the error message. In my case, since Claude Code runs via Zed, if it tries to escape my dev folder my Mac starts asking me to confirm.
reply
icedchai
3 hours ago
[-]
I get it, but these guard-rails are more suggestions and subject to interpretation. I would be more comfortable with a sandbox environment in a container. To be fair, I mess around with Claude Code and OpenCode running against various open models and haven't had any problems.

Also, is overwriting the same as deleting? Maybe it will just clobber your files with echo >file and mv them out of the way.

Maybe it realizes you have Time Machine backups enabled, so deleting your entire directory is permitted since it's not actually deleted. ;)

reply
giancarlostoro
1 hour ago
[-]
Haha, I like that too, I agree. I would love an ultra-lightweight alternative to Docker that isn't Docker and doesn't require much effort to get into. I liked Vagrant back in the day, but that is in no way more lightweight than Docker.
reply
esperent
7 hours ago
[-]
You can use the /hookify plugin to add hooks for preventing dangerous commands like this.
reply
Gazoche
7 hours ago
[-]
https://github.com/anthropics/claude-code/tree/main/plugins/...

So it's basically adding "don't delete my files pretty please" to the prompt?

EDIT: I misread; the natural-language description of the rule is just a shortcut to generate the actual rule, which is based on regexp patterns.

Still, it only protects you against very specific commands. Won't help you if the LLM decides to fill your disk with `cat /dev/urandom > foo` for example.

reply
simianwords
6 hours ago
[-]
it may not protect against an adversarial llm
reply
croes
7 hours ago
[-]
I have been driving without a seat belt for two months now. It's absolutely liberating.
reply
InsideOutSanta
5 hours ago
[-]
I have been skydiving without a parachute for 23 seconds now. It's absolutely liberating.
reply
coldtea
7 hours ago
[-]
And that's as a dev. Then we expect users to know better than, e.g., to trust links to .sh-style installers some FOSS suggests...
reply
nailer
4 hours ago
[-]
> Then we expect users to know better than, e.g., to trust links to .sh-style installers some FOSS suggests...

I don't know anyone that inspects every binary, yet apparently we should not trust shell scripts?

reply
coldtea
3 hours ago
[-]
I know many who only use binaries from trusted sources that do monitoring, provide certificates and checksums, and so on - and who run them in an OS sandbox too when they install them.

So there's that

reply
sixhobbits
7 hours ago
[-]
same, it's made a couple of damaging mistakes but so far it has a better track record than me in terms of fat-fingering `rm` commands or what have you
reply
kaffekaka
3 hours ago
[-]
I am sure that someday I will do something fat-fingered myself as well, but I have not in many years now. Are you saying that you make "damaging mistakes" relatively often?
reply
odie5533
4 hours ago
[-]
I use Development containers (dev-containers) as demonstrated by Claude Code's docs https://code.claude.com/docs/en/devcontainer

It all integrates nicely with VS Code. It has a firewall script and you spin up your database within the docker compose file so it has full access to a postgres instance. I can share my full setup if anyone needs it.

reply
thenaturalist
3 hours ago
[-]
This would be lovely and much appreciated!

Devcontainers look perfect but also like a bit of a burden to entry with regards to setup.

reply
sandGorgon
6 hours ago
[-]
Or... use WSL2 on Windows. It does the same thing - much, much faster.

Windows is the best (sandboxed) Linux.

reply
strickjb9
6 hours ago
[-]
Real question - are you not worried about access to /mnt/c ?
reply
kachapopopow
5 hours ago
[-]
    sudo chmod 700 /mnt/
    sudo chmod $UID /mnt/<project_path>

...done?

reply
guluarte
3 hours ago
[-]
tools inside wsl have full control of the windows filesystem
reply
FourSigma
6 hours ago
[-]
I've been exploring this space. There are some use cases where I'd love to run an isolated Claude agent asynchronously. I think running Docker in rootless mode might solve some of the OP's concerns - I believe Podman does this implicitly. Also, there are tools like Kaniko that do not need Docker to create container images. You can also try changing the underlying container runtime to something like gVisor if you want more security.

Does anybody have experience using microVMs (Firecracker, Kata Containers, etc.) for this use case? Would love to hear your thoughts.

reply
fwystup
4 hours ago
[-]
Posted almost at the same time about Kata. I'm trying to use Kata as a replacement for the standard Docker runtime (since I already have a tool based on Docker).

The idea is to simply use the runtime flag (after kata install):

    docker run -d --runtime=kata -p 8080:8080 codercom/code-server:latest

Hope this works; with this I could keep my existing Docker setup.

reply
messh
1 hour ago
[-]
The Shellbox VMs work great as a sandbox for Claude Code. It uses SSH to create and connect to the boxes - very simple and quick to set up.

check it out: https://shellbox.dev

reply
jannesblobel
4 hours ago
[-]
If your system were under version control, so that Claude could do whatever it wanted on its own branch, so to speak, would it still be such a big problem? Because you could just roll back if it really did cause problems, couldn't you?
reply
jannesblobel
4 hours ago
[-]
Perhaps I should add something here. It always depends on the task. Claude with SUDO access doesn't seem right to me either, but I wouldn't run that anywhere else either.
reply
Strongbad536
5 hours ago
[-]
i've low-key been running claude in dangerously skip permissions mode for at least like 4 months now and have yet to be bitten by a truly destructive action. YMMV but i think as long as you're guiding/prompting correctly, and don't just allow write access to your prod account DBs willy nilly, it's mostly fine. just keep an eye on it :shrug:
reply
anp
4 hours ago
[-]
This has mostly been my experience as well although I don’t tend to run yolo mode outside of an isolated VM (I’m setting them up manually still, need to try vagrant for it). That said, it seems like some of the people who are more concerned about isolation are working with more untrusted inputs than I’ve been dealing with on my projects. It’s rare for me to ask an agent to e.g. read text from a random webpage that could bring its own prompt injection, but there are a lot of things one might ask an agent to do that risk exposure to “attack text”.
reply
nonethewiser
5 hours ago
[-]
Also something to note: this simply adds a new mode - dangerously skip permissions - alongside the existing ones (accept edits, plan, default). You can choose when to use it or not, which is not something I initially realized.
reply
yodon
4 hours ago
[-]
Is anyone running Claude in a GitHub Codespace container?

There was this HN post[0] last week on a tool for automatically shutting down the codespace container when idle.

[0] https://github.com/wandb/catnip

reply
tradziej
7 hours ago
[-]
reply
denysvitali
7 hours ago
[-]
Here's what I do (shameless plug): https://blog.denv.it/posts/im-happy-engineer-now/

This allows you to use Claude Code from your mobile device, in a safe environment (restricted Kubernetes pod)

reply
jeffrallen
7 hours ago
[-]
Here's what I do (shameless plug, not an employee, just a satisfied user): https://exe.dev
reply
denysvitali
7 hours ago
[-]
Yes, this approach also looked nice! Maybe you can pair both (happy + exe.dev) for best results
reply
letmetweakit
8 hours ago
[-]
I run Claude in a Proxmox VM; generally the experience has been great. In my experience it also behaves better than Gemini CLI, which likes to create files all over the place if set loose (lesson learned: add that requirement to the relevant .md files).
reply
vidarh
8 hours ago
[-]
Something that contains Claude even more in this respect is to explicitly give it a directory that you tell it is entirely under its control, and tell it to write md files and other intermediate work products there (this seems to work better than telling it where it isn't allowed to leave things).
reply
onionisafruit
6 hours ago
[-]
That sounds like a good idea. When I have a one-off need for misc files I tell it to put them in the project’s ./tmp because that’s already in my global gitignore. That generally works, but I still run into surprise files it leaves in source dirs like a puppy leaves turds on a rug. I’ll try adding that to my instructions instead of doing it one-off.
reply
jermaustin1
6 hours ago
[-]
I've often found that LLMs don't listen to "Don't do" commands with anywhere near the same gusto as "Do" commands.
reply
NitpickLawyer
6 hours ago
[-]
People don't usually think about pink elephants, unless you ask them not to think about pink elephants :)
reply
chrisss395
5 hours ago
[-]
I too use this solution, using both Ubuntu LXCs and full-fledged VMs. The only issue I've struggled with has been losing the SSH connection on the LXC, and tmux and session both seem to mess up the terminal formatting in CC.

I do agree with the security / cautionary comments and wouldn't leverage this setup outside a hacked together homelab.

reply
emilburzo
8 hours ago
[-]
This was also the direction I was initially headed, but then I realized I wanted one VM per project so it can really do anything it wants on the complete VM. So the blast-from-the-past Vagrant won because of the Vagrantfile + `vagrant up` easiness.
reply
letmetweakit
8 hours ago
[-]
I use Proxmox snapshots to get back to a clean state. I’ll take a look at Vagrant too though.
reply
scalemaxx
8 hours ago
[-]
I installed Gemini as an extension in VS Code and it kept wanting to index all my files. Still trying to figure out what it was doing outside of the VS Code folder I had set it to work on.
reply
frankc
7 hours ago
[-]
I think this makes sense, but I wonder if Firecracker would work better than Vagrant for this? I haven't used it before, though. I guess it might if you are trying to run gas-town-level orchestration.
reply
raesene9
7 hours ago
[-]
Firecracker can solve the kind of problems where you want more isolation than Docker provides, and it's pretty performant.

There's not a tonne of tooling for that use case now, although it's not too hard to put together - I vibe-coded something that works for my use case fairly quickly (CC + Opus 4.5 seemed to understand what's needed).

reply
skybrian
8 hours ago
[-]
I'm doing this with a remote VM on exe.dev and it's quite nice. Well, actually with their own coding agent but they have Claude Code preinstalled too.

Syncthing works well for getting a local copy of a directory from the VM.

reply
marcelcor
5 hours ago
[-]
I'm a fan of https://e2b.dev/
reply
tobyhinloopen
8 hours ago
[-]
How about running Claude as a different user with very limited permissions?
reply
gregoriol
8 hours ago
[-]
This breaks the non-interactive mode the post wants to achieve. Claude will not be able to install some things and will require user action, which is not desired here.
reply
progval
8 hours ago
[-]
Like what? It can already use npm/pip/etc. And if it needs a new APT package or config in /etc/ then you would want to know because you need to document it.
reply
gregoriol
7 hours ago
[-]
If you make claude work with c/c++, it may need apt for libraries or build tools.

Even with npm/pip, these may not be available on a base linux box.

Even then, some complex projects may need other tools that are not part of a base system (command line tools, redis, ...).

reply
emilburzo
8 hours ago
[-]
I tried this approach for a while, but I really wanted it to be able to do anything (install system packages, build/run Docker containers, the works).

With these powers there's a lot less back-and-forth with me running commands, copying the output, pasting it to Claude, etc.

I'm sure you've had the case where you had to instruct someone to do something (e.g. playing tech support with family, helping another engineer, etc). While it helps the other person learn, it feels soooo slow vs just doing it yourself :) And since I don't have to teach the agent, I think this approach makes sense.

reply
delaminator
8 hours ago
[-]
I run it with sudo enabled - true story

just give it its own machine and let it check out any code

I PXE boot it from a known image when I feel the need

reply
zh3
5 hours ago
[-]
Same solution here - keep a base diskless image on the server, copy it to the diskless area, pxeboot the machine. Works for Windows too (iscsi).

Could do the same thing on EC2 of course.

reply
tobyhinloopen
8 hours ago
[-]
Running it remotely on a VM seems like a very sensible option. Just don't give it permission to nuke the remote repository, hah (e.g. don't allow force-push, use protected branches, only allow write access to branches it created).
reply
RobinL
7 hours ago
[-]
Does anyone have direct experience with Claude making damaging mistakes in dangerously skip permissions mode? It'd be great to have a sense of what the real world risk is.
reply
prodigycorp
7 hours ago
[-]
Claude is very happy to wipe remote dbs, particularly if you're using something like supabase's mcp server. Sometimes it goes down rabbitholes and tries to clean itself up with `rm -rf`.

There is definitely a real world risk. You should browse the ai coding subreddits. The regularity of `rm -rf` disasters is, sadly, a great source of entertainment for me.

I was once playing around, having Claude Code (Agent A) control another instance of Claude Code (Agent B) within a tmux session using tmux's scripting. Within that session, I messed with Agent B to make it output text that made Agent A think Agent B had rm -rf'd the entire codebase. It was such a stupid "prank", but seeing Agent A's frantic and worried reaction to Agent B's mistake was the loudest and only time I've laughed because of an LLM.

reply
gregoriol
7 hours ago
[-]
Why in the hell would it be able to access a _remote_ database?! In no acceptable dev environment would someone be able to access that.
reply
heartbreak
6 hours ago
[-]
Everywhere I’ve ever worked, there was always some way to access a production system even if it required multiple approvals and short-lived credentials for something like AWS SSM. If the user has access, the agent has access, no matter how briefly.
reply
gregoriol
6 hours ago
[-]
Not if you require auth with a Yubikey, not if you run the LLM client inside a VM which doesn't have your private ssh key, ...
reply
prodigycorp
6 hours ago
[-]
Supabase virtually encouraged it last year haha. I tried using it once and noped out after using it for an hour, when claude tried to do a bunch of migrations on prod instead of dev.

https://web.archive.org/web/20250622161053/https://supabase....

Now, there are some actual warnings. https://supabase.com/docs/guides/getting-started/mcp#securit...

reply
kaydub
5 hours ago
[-]
I think LLMs are exposing how slapdash many people are when building software.
reply
azuanrb
7 hours ago
[-]
One recent example: for some reason, Claude has recently preferred to write scripts in the root /tmp folder. I don't like this behavior at all. It's nothing destructive, but it should be out of scope by default. I notice they keep adding more safeguards, which is great (e.g. asking for permissions), but it seems to be case by case.
reply
giancarlostoro
6 hours ago
[-]
If you're not using .claude/instructions.md yet, I highly recommend it; for moments like this one you can tell it where to shove scripts. The tricky part with the instructions file is that Claude only reads it at the start of a new prompt, so any time you update it, or Claude "forgets" instructions, ask it to re-read it - that usually does the trick for me.
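
e.g. a couple of lines like these in whatever memory file you use (the exact wording is just an example):

    ## Scratch files
    - Write throwaway/debug scripts to ./tmp/ inside the repo, never to /tmp.
    - Delete them when the task is done.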
reply
mythical_39
5 hours ago
[-]
Claude, I noticed you rm -rf'd my entire system. Your .instructions.md file specifically prohibits this. Please re-read your .instructions.md file and comply with it for all further work.
reply
giancarlostoro
5 hours ago
[-]
IMHO a combination of trash CLI and a smarter shell program that prevents deleting critical paths would do it.

https://github.com/andreafrancia/trash-cli

reply
coldtea
7 hours ago
[-]
reply
ra120271
7 hours ago
[-]
When approving actions "for this project" I actively monitor .claude\settings.local.json, as

    "Bash(az resource:*)",

is much more permissive than

    "Bash(az resource show:*)",

It mostly gets it right, but I instantly fix the file with the "readonly" version when it makes things too open.
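
For reference, the relevant bit of that file looks roughly like this (a sketch; the exact schema may differ between versions):

    {
      "permissions": {
        "allow": [
          "Bash(az resource show:*)",
          "Bash(az resource list:*)"
        ]
      }
    }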

reply
foreigner
2 hours ago
[-]
I caught Claude using Docker (running as root) to access files on my machine that it couldn't read as its own user.
reply
kaydub
5 hours ago
[-]
It feels like most people are exposing how wild west their environments are.
reply
MattGaiser
7 hours ago
[-]
Claude has twice now thought that deleting the database was the right thing to do. It didn't matter, as it was a local one created with fixtures in the Docker container (in anticipation of such a scenario), but it was an inappropriate way of handling Django migration issues.
reply
csantini
6 hours ago
[-]
Just create a new user and set up pip/npm to install locally.

And set up a .env for the project with a user/password that can access only a dev database.

reply
woof
5 hours ago
[-]
sandbox-exec on macOS (i.e. https://github.com/neko-kai/claude-code-sandbox) seems like the perfect solution to me.

Missing FreeBSD jails in 2026 is kind of weird (hello 1999)...

reply
mhb
6 hours ago
[-]
Forgive a naive question, but why not run it on an AWS (or equivalent) instance?
reply
jackcarter
5 hours ago
[-]
"At some point I realized that rather than do something else until it finishes, I would constantly check on it to see if it was asking for yet another permission, which felt like it was missing the point of having an agent do stuff"

Why don't Claude Code & other AI agents offer an option to make a sound or trigger a system notification whenever they prompt for approval? I've looked into setting this up, and it seems like I'd have to wire up a script that scrapes terminal output for an approval request. Codex has had a feature request open for a while: https://github.com/openai/codex/issues/3052

reply
AndroidKitKat
5 hours ago
[-]
When using Claude Code in Ghostty on macOS, I get notifications if it is waiting on my input (accept changes, questionnaire, run bash command). Dunno what combination (if any) of my setup is needed for this to happen, but I certainly didn't configure anything special. Maybe I'm giving CC too much free rein to do things.
reply
Retr0id
7 hours ago
[-]
> VirtualBox 7.2.4 shipped with a regression that causes high CPU usage on idle guests. What are the odds.

I have such a love/hate relationship with VirtualBox. It's so useful but so buggy. My current installation has a bug that causes high network latency, but I'm afraid to upgrade in case it introduces new, worse bugs.

VMware is a million times better, but it is also Proprietary™

reply
intrasight
7 hours ago
[-]
VMware Workstation is now free on Linux and Windows, and allows you to create and roll back snapshots. Why not use it, even if it's proprietary?
reply
Retr0id
7 hours ago
[-]
It's a good question and I'm pretty on the fence about it, and next time I'm reinstalling things I might switch.

I do believe in the whole RMS "respects the user's freedoms" spiel, so all things being equal I prefer FOSS, even if it's worse - but there are limits.

reply
HWR_14
2 hours ago
[-]
And macOS. Or rather, the functionality is free, but the name of the client might be different.
reply
cyberpunk
6 hours ago
[-]
docker sandbox run claude? seems to work for me…
reply
szmarczak
6 hours ago
[-]
What about Docker rootless?
reply
firasd
7 hours ago
[-]
I noticed something in Claude across all product surfaces

There's a bug in that it can't output smart quotes “like this”

Sonnet, Opus et al think they output it but something in the pipeline is rewriting it

https://github.com/firasd/vibesbench/blob/main/docs/2026/A/t...

Try it in Claude Code and you'll see what I mean! Very weird

reply
nailer
4 hours ago
[-]
Don't all modern OSes have sandboxing? We don't need a full VM (e.g. a kernel running on virtualized hardware) and the complexity that entails; we just need Claude Code running in a sandbox.

(Maybe I should be asking Claude this)

Edit: someone already built this: https://github.com/neko-kai/claude-code-sandbox

reply
supermatt
6 hours ago
[-]
> now you need Docker-in-Docker

Or you can just mount the socket and call docker from within docker.

reply
emilburzo
5 hours ago
[-]
Correct, which I wanted to avoid because:

> Mounting the Docker socket grants the agent full access to your Docker daemon, which has root-level privileges on your system. The agent can start or stop any container, access volumes, and potentially escape the sandbox. Only use this option when you fully trust the code the agent is working with.

https://docs.docker.com/ai/sandboxes/advanced-config/#giving...

reply
ejia
5 hours ago
[-]
PM for Docker Sandboxes here.

We have an updated version of Sandboxes coming out soon that uses MicroVM isolation to solve this exact problem. This next version will let your agent access a Docker instance within the MicroVM, therefore allowing you to do this securely.

reply
athrowaway3z
7 hours ago
[-]
`useradd claude`
reply
netcoyote
5 hours ago
[-]
This is the solution I chose for sandvault [0], which works well on my Mac since agents can run OSX-specific tools.

It just got added to Homebrew:

    brew install sandvault
Or clodpod [1] for a VM-based solution

0: https://github.com/webcoyote/sandvault

1: https://github.com/webcoyote/clodpod

reply
ompogUe
6 days ago
[-]
Keep in mind with Vagrant: if you are using a synced_folder from your host as a source folder in the VM, those files in the synced_folder will also be modified on the host.
reply
gregoriol
8 hours ago
[-]
If the folder is versioned and committed regularly, there is no problem. It also allows you to open the files in your IDE and do some other tasks or fixes for Claude. It prevents Claude from accessing any other folder, which is the idea of the post.
reply
gcr
7 hours ago
[-]
I've seen Claude rm .git on rare occasions to "fix rebase hiccups".

Version control ain’t a match for a good backup

reply
gregoriol
7 hours ago
[-]
So? If it removes .git, just clone the project again and you are fine.
reply
fragmede
5 hours ago
[-]
Until Claude nukes .git, assuming you're using git as the version/commit store. The solution is easy: just push to a remote on a reasonable cadence (one you can run reflog on, so a force push won't eat your data either). Git isn't backup, though; it's a VCS, and those are two different things, even if they are somewhat alike.
reply
emilburzo
6 days ago
[-]
Good point. For me, that was intentional: since all my projects are in git, I don't care if it messes something up. Then you get the benefit of being able to use your regular git tooling/flows/whatever, without having to add credentials to the VM.

But if you need something stricter, 'config.vm.synced_folder' also supports type "rsync", which will copy the source folder to the VM at startup, but then it's on you to sync it back or whatever.
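
Something like this in the Vagrantfile (a sketch):

    # one-way copy to the VM at 'vagrant up' / 'vagrant rsync';
    # changes inside the VM are NOT synced back automatically
    config.vm.synced_folder ".", "/vagrant", type: "rsync",
      rsync__exclude: [".git/"]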

reply
ompogUe
6 days ago
[-]
I like this workflow a lot, actually. Docker is great and all, but depending on the project, Vagrant helps "keep it simple".

Thanks

reply
guluarte
3 hours ago
[-]
docker has sandboxes for this https://docs.docker.com/ai/sandboxes/

docker sandbox run claude

reply
oofbey
5 hours ago
[-]
There are two spheres of influence you need to consider: the local machine/VM/container that the agent is running in, but also the effect the agent can have on the outside world - using auth tokens or SSH keys or APIs that it has access to. This article largely deals with the first problem and ignores the second.

You can have the local environment completely isolated with Vagrant. But if you're not careful with auth tokens, it can (and eventually will, when it gets confused) go wipe the shared dev database or the GitHub repo. The author kinda acknowledges this, but it's glossing over a big chunk of the problem. If it can push to GitHub, unless you've set up your tokens carefully, it can delete things too. Having a local isolated test database separate from the shared infrastructure is a matter of a mature dev environment, which is a completely separate thing from how you run Claude. Two of the three examples cited as "no, no, no" are not protected by Vagrant or Docker or even EC2. It's about what tokens the agent has and needs.

reply
emilburzo
5 hours ago
[-]
Hmm, perhaps I'm missing something, so let's go through it step by step and see where the disconnect is:

- There's a cloned 'my-project' git repo on the base OS

- The 'Vagrantfile' is added to the project

- 'vagrant up', 'vagrant ssh' and claude login is run inside the VM

At this stage, besides the source code and the Claude Code token (after logging in), there are no other credentials on the VM: no SSH keys, no DB credentials, no API tokens, nothing.

There is also no need to add:

- SSH keys or GitHub tokens: because git push/pull is handled outside the VM

- DB credentials: because Claude can just install a DB inside the VM and run the project migrations against that isolated instance, not any shared/production database

API tokens can definitely be a problem if you need external service integration. But that's an explicit opt-in decision: you'd have to deliberately add those credentials to the Vagrantfile or sync them in. At that point, yes, you need proper token scoping and permissions.

reply