Ask HN: Are you worried, and care, about AI stealing your code/secrets?
2 points
4 hours ago
| 2 comments
| HN
Recently, I started to use AI coding agents. They are really great, and I feel like this is the best $100 month I spend for my career.

And yet, I understand that I don’t fully know how they work and what they do behind the scenes. I know the general gist of how an agent works, but I don’t really know if they don’t cat .env behind the scenes, or whether someone on the other side of the planet gets pieces of my code in their AI response.

This is the reason I use AI mainly at $JOB, but not on my personal project (in addition to keeping my skills sharp, and the fun factor). Do you ever think about this? Do you care?

viraptor
4 hours ago
[-]
You need to run them sandboxed in some way. Docker is one kind of solution, selinux / apparmor / sandbox-exec is another. Basically, create an environment where .env is not accessible in any way and you don't have to worry about it anymore.

I don't care about it reading the code itself. 90% of my usage is on opensource projects anyway. The other - if I can generate something, then there's no barrier to someone else doing the same - I'm just making applications that do expected things, not doing some groundbreaking research.

reply
fnoef
4 hours ago
[-]
It’s not only about the .env, but also intellectual property, algorithms, even product ideas.

Moreover, let’s say you run a dev server with watch mode, and ask claude to implement a feature. Claude can generate a code that reads your .env (from within the server) and send to some third party url. The watch mode would catch it and reload the server and will run the code. By the time you catch it, it’s too late. I know it’s far fetched, and maybe the paranoia is coming from my lack of understanding these tools well, but in the end they are probabilistic token generators, that were trained on all code in open existence, including malware.

reply
viraptor
4 hours ago
[-]
> Claude can generate a code that reads your .env (from within the server) and send to some third party url.

Again - sandboxes. If you either block or filter the outbound traffic, it can't send anything. Neither can the scripts LLMs create.

reply
coolcat258
4 hours ago
[-]
tbh im sure they do.
reply