Ask HN: Are diffs still useful for AI-assisted code changes?
5 points
by nuky
11 hours ago
| 4 comments
I’m wondering whether traditional diffs are becoming less suitable for AI-assisted development.

Lately I’ve been feeling frustrated during reviews when an AI generates a large number of changes. Even if the diff is "small", it can be very hard to understand what actually changed in behavior or structure.

I started experimenting with a different approach: comparing two snapshots of the code (baseline and current) instead of raw line diffs. Each snapshot captures a rough API shape and a behavior signal derived from the AST. The goal isn’t deep semantic analysis, but something fast that can signal whether anything meaningful actually changed.

It’s intentionally shallow and non-judgmental — just signals, not verdicts.
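The snapshot idea above can be sketched quickly for Python sources using the standard-library `ast` module. This is a hypothetical illustration, not the OP's actual tool: `api_shape` is an invented helper that captures only top-level function/class names and parameter lists, so a pure internal reshaping leaves it unchanged.

```python
import ast


def api_shape(source: str) -> set[str]:
    """Rough 'API shape' signal: top-level function signatures and class names."""
    tree = ast.parse(source)
    shape = set()
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Only names and positional parameters; bodies are ignored on purpose.
            args = ", ".join(a.arg for a in node.args.args)
            shape.add(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            shape.add(f"class {node.name}")
    return shape


baseline = "def add(a, b):\n    return a + b\n"
current = "def add(a, b):\n    result = a + b\n    return result\n"

# The line diff is non-empty, but the API-shape signal says nothing changed.
print(api_shape(baseline) == api_shape(current))  # prints: True
```

A real version would need more signals (decorators, return annotations, some behavior fingerprint), but even this shallow check answers the "is this mostly reshaping?" question for a large class of mechanical edits.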

At the same time, I see more and more LLM-based tools helping with PR reviews. Probabilistic changes reviewed by probabilistic tools feel a bit dangerous to me.

Curious how others here think about this:

– Do diffs still work well for AI-generated changes?
– How do you review large AI-assisted refactors today?

ccoreilly
8 hours ago
There are many approaches being discussed, and which one fits will depend on the size of the task. You could just review a plan and assume the output is correct, but you need at least behavioural tests to confirm that what was built fulfils the requirements. You can split the plan further and further until the changes are small enough to be reviewable. Where I don’t see the benefit is in asking an agent to generate tests, as it tends to produce many useless unit tests that make reviewing more cumbersome. Writing the tests yourself (or defining them and letting an agent write the code), and not letting implementation agents change the tests, is also something worth trying.

The truth is we’re all still experimenting and shovels of all sizes and forms are being built.

nuky
7 hours ago
That matches my experience too - tests and plans are still the backbone.

What I keep running into is the step before reading tests or code: when a change is large or mechanical, I’m mostly trying to answer "did behavior or API actually change, or is this mostly reshaping?" so I know how deep to go.

Agree we’re all still experimenting here.

DiabloD3
8 hours ago
You know there are other kinds of diffs, right?

It’s common to swap git’s diff for something like difftastic, so formatting slop doesn’t trigger false diff lines.
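For reference, difftastic’s documented git integration looks roughly like this, assuming the `difft` binary is installed and on your PATH:

```shell
# One-off: run a single diff through difftastic
GIT_EXTERNAL_DIFF=difft git diff

# Persistent: make difftastic the default external diff tool
git config --global diff.external difft
```

Because difftastic compares syntax trees rather than lines, pure reformatting (indentation, line wrapping) largely disappears from the output.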

You're probably better off, FWIW, just avoiding LLMs. LLMs cannot produce working code, and they're the wrong tool for this. They're just predicting tokens around other tokens, they do not ascribe meaning to them, just statistical likelihood.

LLM weights themselves would be far more useful if we used them to indicate the statistical likelihood (i.e., perplexity) of the code that has been written; strange-looking code is likely to be buggy, but nobody has written this tool yet.

nuky
7 hours ago
It was precisely because things were going too far that I thought the consequences of the active adoption of LLM tools could be made visible. I’m not saying LLMs are completely bad; after all, not all tools, even non-LLM ones, are 100% deterministic. At the same time, reckless and uncontrolled use of LLMs is increasingly gaining ground not only in coding but even in code analysis/review.
nuky
7 hours ago
Yeah, difftastic and similar tools really help a lot with formatting noise.

My question is slightly orthogonal though: even with a cleaner diff, I still find it hard to quickly tell whether public API or behavior changed, or whether logic just moved around.

Not really about LLMs as reviewers — more about whether there are useful deterministic signals above line-level diff.

nuky
11 hours ago
Just to clarify - this isn’t about replacing diffs or selling a tool.

I ran into this problem while reviewing AI-gen refactors and started thinking about whether we’re still reviewing the right things. Mostly curious how others approach this.

uhfraid
8 hours ago
> How do you review large AI-assisted refactors today?

just like any other patch, by reading it

nuky
7 hours ago
fair - that’s what I do as well