If you feed the most critical parts of your project to an AI, wouldn't that create a security risk? The model would gain an in-depth understanding of your project's core architecture, and if your code ends up in training data, couldn't other users of the same AI extract those details and use them to probe your defenses?
Furthermore, couldn't other users then copy your code without any attribution, effectively treating it as if it were open-source software?
So far I’ve decided to trust Anthropic and OpenAI with my code, but not DeepSeek, for instance.
Yeah, we're not doing that.
Also moved our private git repos and CI to self-managed hosting.
God how I wish this were true