Today, I asked it to run gt restack (Graphite CLI) and resolve conflicts. The agent resolved the submodule conflict correctly, but then proceeded to run git push --force-with-lease --no-verify without asking for permission - directly violating my rules.
The agent's defense was reasonable ("force push is expected after a rebase"), but that's exactly why I want to be asked first. The whole point of the rule is to maintain human oversight on destructive operations.
I'm curious:
- Has anyone else experienced AI agents ignoring explicit safety rules?
- How are you handling guardrails for potentially destructive operations?
- Is there a more reliable way to enforce these boundaries?
The irony is that the agent acknowledged the rule violation in its apology, which means it "knew" the rule existed but chose to proceed anyway. This feels like a trust issue that could have much worse consequences in other scenarios.
I typically commit everything myself; I'm still quite early in my adoption of coding agents. One of my first experiences with OpenCode (which made me stop using it instantly) was when it tried to commit and force-push a change after I simply asked it to look into a potential bug.
Claude Code seems to have better safeguards against this. However, I wonder why we don't generally run these things inside Docker containers with only the current directory volume-mounted, to prevent spurious filesystem modifications.
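Something like this would already go a long way (a minimal sketch; the image and mount layout are illustrative assumptions, not tied to any particular agent):

    # Throwaway container that can only see the current project directory.
    # --network none also blocks any accidental push; drop it if the agent
    # genuinely needs network access.
    docker run --rm -it \
      --network none \
      -v "$PWD":/workspace \
      -w /workspace \
      node:22-slim \
      bash

One caveat: .git lives inside the mounted directory, so destructive local operations (history rewrites, branch deletes) are still possible; the container mainly protects the rest of your filesystem and, with --network none, anything remote.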
I'm entirely with you that we need better ways to filter which commands these things are allowed to run. Relying on a CLAUDE.md entry or a "do not do this under any circumstances" instruction in the prompt is a futile undertaking.
No, the AI never "knew" anything! :)
With Claude Code, tools matching Bash("git *") always ask for permission unless you've explicitly allowed them.
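You can also make that deterministic in .claude/settings.json rather than hoping the model honors the prompt. A sketch based on Claude Code's documented permission-rule syntax (the specific patterns here are illustrative; verify them against the current docs before relying on them):

    {
      "permissions": {
        "allow": ["Bash(git status:*)", "Bash(git diff:*)"],
        "ask": ["Bash(git push:*)"],
        "deny": ["Bash(git push --force:*)"]
      }
    }

Unlike a CLAUDE.md rule, these are enforced by the harness itself, so the agent can't "choose" to proceed anyway.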
Figure out the Cursor equivalent of that.