The real risk with LLM-generated code is that it looks plausible but hasn't gone through the same level of scrutiny. It passes a quick review because it reads well, but edge cases get missed.
I think the better question is: what verification pipeline do you put around any contribution, regardless of origin? If the answer is "the same review process we've always had" then the problem isn't AI, it's whether that process is rigorous enough for the stakes involved.
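To make that concrete, here is a minimal sketch of what an origin-agnostic gate could look like: every contribution, human- or LLM-authored, runs through the same battery of automated checks before a human ever reviews it. The specific commands (`ruff`, `pytest`, the coverage threshold) are hypothetical placeholders, not a prescription:

```python
import subprocess

# Hypothetical check commands; substitute your project's actual tooling.
CHECKS = [
    ["ruff", "check", "."],                       # static lint
    ["pytest", "--maxfail=1"],                    # unit tests
    ["pytest", "--cov", "--cov-fail-under=80"],   # coverage gate
]

def run_checks(checks):
    """Run each check command; return the names of any that failed."""
    failed = []
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failed.append(cmd[0])
    return failed

if __name__ == "__main__":
    failures = run_checks(CHECKS)
    if failures:
        raise SystemExit(f"Blocked: failed checks: {', '.join(failures)}")
```

The point is not the specific tools but that the gate does not care who (or what) wrote the diff; rigor lives in the pipeline, not in trust of the author.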
Precisely! Because the code is made to look believable, the risk of accepting it without understanding its full implications is very high.
> If the answer is "the same review process we've always had" then the problem isn't AI, it's whether that process is rigorous enough for the stakes involved.
True, but there is also a reputational component to how changes are reviewed, whether we like it or not. The longer a contributor's tenure and the deeper their understanding of the changed code, the lower the chance of a careless pull request slipping through.