Some argue that using LLMs to write code is just a new high-level language we are dealing in as engineers. However, this creates a disconnect come code-review time, because the reviewed code is only an artifact of the process that created it. If we are now expressing ourselves in natural language (prompting, planning, writing as the new "programming language"), but only putting the generated artifact (the actual code) up for review, how do we review it completely?
What feels like a missing piece these days is the context around how the change was produced: the plans, the prompting, how an engineer arrived at this specific code change. Did they one-shot it? Did they spend hours prompting and iterating? Something in between?
The summary in the PR often says what the change is, but it doesn't contain the full dialog or how we arrived at this specific change (tradeoffs, alternatives, etc.).
How do you review PRs in your organization given this? Any rules/automation/etc. you institute?
In traditional workflows, a lot of the reasoning is visible through commit history, comments, or intermediate refactors. With LLMs, the reasoning step can be hidden because the model collapses that exploration into a single output.
What we've started doing internally is asking for two artifacts instead of just the code:
1. the prompt or task description that produced the code
2. the generated code itself
Reviewing both together gives you much better context about the intent, constraints, and tradeoffs that led to the implementation.
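If you want automation rather than convention, one option is a lightweight CI gate on the PR description. This is only a hypothetical sketch: it assumes your pipeline exposes the PR body in a PR_BODY environment variable and that you standardize on the two section headings shown; adjust both to your setup.

```python
# Hypothetical CI gate: fail the build if the PR description is missing either
# of the two required artifacts. Assumes the pipeline exports the PR body as
# the PR_BODY environment variable; adapt to however your CI provides it.
import os
import sys

REQUIRED_SECTIONS = (
    "## Prompt / task description",  # artifact 1: what produced the code
    "## Generated code notes",       # artifact 2: notes on the code itself
)

def missing_sections(body: str) -> list[str]:
    lowered = body.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

def main() -> int:
    body = os.environ.get("PR_BODY", "")
    missing = missing_sections(body)
    if missing:
        print("PR description is missing required sections:")
        for section in missing:
            print(f"  - {section}")
        return 1
    print("PR description contains both required artifacts.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```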
Other than that, I review the known “this works when testing but breaks in production” areas: concurrency code, scalability issues, using the correct data-load patterns, database indexes, etc.
I also validate non-functional requirements around security, logging, costs, etc.
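As a concrete (and hypothetical) example of the data-load issues I look for: an N+1 query pattern reads fine and passes tests with a handful of rows, but issues one query per row in production. The sketch below uses an in-memory SQLite database just to keep it self-contained.

```python
# Hypothetical sketch of an N+1 query versus a single joined query.
# With two users the difference is invisible; with millions it is not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

def totals_n_plus_one(conn):
    # One query for the users, then one more query per user: N+1 round trips.
    totals = {}
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        totals[name] = total
    return totals

def totals_single_query(conn):
    # One round trip: let the database do the join and the aggregation.
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.name
    """)
    return dict(rows)

assert totals_n_plus_one(conn) == totals_single_query(conn)
```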
This is the same thing I did when I worked with other team leads as the “architect” who was mostly concerned with: would it fall over in production, would it cause security issues, would it cause compliance issues, and are they following the standards we agreed on?
On the other hand, I haven’t done any serious web development since 2002. Now I vibe code internal web admin apps for customers, using AWS Cognito for authentication. I don’t look at a line of code for it. I verify that it works, and I verify the UX (not the UI - it’s ugly AF).
The chance of any human ever looking at the “AI first” code I wrote is slim. By AI first I mean detailed markdown files, with references to other markdown files, that start with the contract and requirement-gathering transcripts and are modular by design.
If I were working at a startup or on a personal project, I wouldn't read the code; instead I'd build a tighter verification loop to ensure the code functions as expected. That's much harder to do in an existing system that was built pre-AI.
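For what it's worth, "verification loop" here mostly means black-box tests against the behavior you asked for, not reading the implementation. A minimal sketch with pytest, using a made-up apply_discount function as a stand-in for whatever the LLM generated:

```python
# Minimal sketch of a behavioral verification loop (hypothetical example).
# The generated code is treated as a black box: we assert on its contract,
# not on how it is written. Run with: pytest this_file.py
import pytest

# Stand-in for LLM-generated code; in practice you would import it instead.
def apply_discount(total_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

@pytest.mark.parametrize(
    "total, percent, expected",
    [(10_000, 0, 10_000), (10_000, 10, 9_000), (9_999, 50, 5_000), (0, 100, 0)],
)
def test_discount_matches_examples(total, percent, expected):
    assert apply_discount(total, percent) == expected

def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(10_000, 101)

def test_discount_is_never_negative():
    for percent in range(0, 101):
        assert apply_discount(12_345, percent) >= 0
```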
That said, one thing review can't fully cover is runtime behavior under real traffic. Not saying that's a review problem – it's just a separate layer that still needs attention after the merge.
In my opinion, you have to review it the way you always review code. Does the code do what it's supposed to do? Does it do it in a clean, modular way? Does it have a lot of boilerplate that should be reduced via helper functions?
It doesn't matter how it was produced. A code review is supposed to be: "Here's this feature {description}" and then you look at the code and see if it does the thing and does it well.
Even without LLMs, there was a thought process that led the engineer to a specific outcome for the code: maybe some conversations with other team members, discussions about trade-offs, alternatives, etc. All of this existed before.
Was all of that included in the PR in the past? If so, the engineer had to add it then, so they should still do so now. If not, why do you suddenly need it because an AI was involved?
I don't see why things would fundamentally change.
What does help is requiring a short design note in the PR explaining the intent, constraints, and alternatives considered. That gives the context reviewers actually need without turning the review into reading a chat transcript.