Code review can be better
387 points
2 days ago
| 41 comments
| tigerbeetle.com
| HN
3036e4
2 days ago
[-]
What bothered me for a long time with code reviews is that almost everything useful they catch (i.e. not nit-picking about subjective minor things that don't really matter) comes much too late in the process. Not rarely, the only useful outcome of a review (if any) is that everything has to be redone from scratch with a completely new design, or that the change is abandoned because it turns out it should never have been done at all.

It always seems as if the code review is the only time when all stakeholders really get involved and start thinking about a change. There may be some discussion earlier on in a Jira ticket or a meeting, and with some luck someone even wrote a design spec, but there will still often be someone from a different team or a distant part of the organization who only hears about the change when they see the code review. This includes me. I often only notice that some other team implemented something stupid because I suddenly get a notification that someone posted a code review for some part of the code that I watch for changes.

Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you? I don't know. It doesn't seem to ever happen anyway. At university in the 1990s, in a course about development processes, there weren't only code reviews but also design reviews, which isn't something I've ever encountered in the wild (in any formal sense). But I don't know if even a design review process would be able to catch all the things you would want to catch BEFORE starting to implement something.

reply
epolanski
2 days ago
[-]
> and that isn't something I ever encountered in the wild (in any formal sense)

Because in the software engineering world there is very little engineering involved.

That being said, I also think that the industry is unwilling to accept the slowness of a proper engineering process, for various reasons, including the non-criticality of most software and the possibility of amending bugs and errors on the fly.

Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't; you either nailed the manufacturing plant or there's little room for fixing; the plane's engine either works or it doesn't.

Different stakes and patching opportunities lead to different practices.

reply
j2kun
2 days ago
[-]
reply
epolanski
1 day ago
[-]
He's spot on.
reply
pjmlp
2 days ago
[-]
There is plenty of it on large-scale enterprise projects, but then all that stuff is looked down on by "real developers".

Also, in many countries, for one to call themselves a Software Engineer, they actually have to hold a proper degree from a certified university or professional college, validated by the country's engineering order.

Because naturally a 5-year (or 3, depending on the country) degree in Software Engineering is the same as a six-week bootcamp.

reply
integralid
2 days ago
[-]
I never finished my degree, but I believe I'm a very good developer (my employers agree). In my day, most good programmers were self-taught.

I don't mind (hypothetically) not being allowed to call myself an "engineer", but I do mind the false dichotomy of "5-year course" vs "six-week bootcamp". In the IT world it's entirely possible to learn everything yourself, and to learn it better than a one-size-fits-all course ever could.

reply
hxtk
1 day ago
[-]
CS is sort of unique in that regard. I value my university degree, but when I think about the classes that helped me the most, only one of them was a degree requirement. Not because the degree was useless, but because the information was accessible enough that I already knew most of the required content for an undergraduate degree when I got there.

I took lots of electives outside my major, and I know that I could have easily loved chemistry, mathematics, mechanical engineering, electrical engineering, or any number of fields. But when you're 12 years old with a free period in the school computer lab, you can't download a chemistry set or an oscilloscope or parts for building your next design iteration. You can download a C compiler and a PDF of K&R's "The C Programming Language," though.

CS just had a huge head-start in capturing my interest compared to every other subject because the barrier to entry is so low.

reply
bornfreddy
1 day ago
[-]
> In the IT world it's entirely possible to learn everything yourself, and to learn it better than a one-size-fits-all course ever could.

Strong disagree. However, this is closer to truth:

In the IT world, if you have learned everything yourself, it's entirely possible to think you have learned it better than a one-size-fits-all course ever could.

There is lots of theoretical knowledge that comes with a degree that, while mostly useless in day-to-day work, is priceless in those rare moments when it comes in handy. A self-taught developer won't even know they are missing this knowledge. An example is knowing how compilers and parsers work (which is surprisingly useful); without that theoretical background, one might attempt to parse HTML with regex and expect correct results.
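The regex-on-HTML pitfall has a concrete theoretical cause: regular expressions cannot track arbitrary nesting depth, while a real parser can. A minimal illustration (stdlib only, example markup made up for the demo):

```python
import re
from html.parser import HTMLParser

# Nested <div> elements: a regex cannot match them correctly, because
# regular languages cannot count arbitrary nesting depth.
html = "<div>outer <div>inner</div> tail</div>"

# A naive non-greedy regex stops at the FIRST closing tag...
naive = re.search(r"<div>(.*?)</div>", html).group(1)
print(naive)  # "outer <div>inner" -- truncated, the nesting is lost

# ...while a real parser tracks depth as it walks the tag structure.
class DepthCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.depth += 1
            self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        if tag == "div":
            self.depth -= 1

p = DepthCounter()
p.feed(html)
print(p.max_depth)  # 2 -- the parser correctly sees both nesting levels
```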

Not all degrees are created equal, of course. But those X years do give you an edge over self-taught developers. You still need to work on other skills too, naturally.

reply
epolanski
2 days ago
[-]
In Italy a degree isn't enough, you need to take an exam and be certified.
reply
pjmlp
2 days ago
[-]
Like in Portugal, and in many countries, yes; usually having "Software Engineer" on legally binding contracts implies having taken the final examination.

However, already by having been through the degree there is a whole set of skills that one would not have gotten otherwise.

Assuming that they actually did it the right way, and did not just get through it with minimal effort.

reply
epolanski
1 day ago
[-]
> However, already by having been through the degree there is a whole set of skills that one would not have gotten otherwise.

Of course you would/could.

1) A degree doesn't imply you've built any specific skills or retained any information, just that you passed a set of exams. I've met a huge number of people from important universities who clearly studied just to pass exams with good grades, but were absolutely crap problem solvers and even worse coders.

2) Plenty of brilliant engineers did not graduate, from Leonardo da Vinci to, just to stay in software, John Carmack, Zuckerberg, Paul Allen, Romero, Wozniak (technically he did, 12 years after founding Apple), Karp, and many others.

What I'm trying to say: engineering skills are acquired by sheer will, by studying and solving problems. And in 2025 you can follow pretty much any lecture from the top-rated courses right from your computer. A person doing so with interest will leapfrog anybody sitting there and going through the exams just because they have to.

reply
pjmlp
1 day ago
[-]
Except most people that call themselves Engineers without a degree aren't Leonardo Da Vinci, or any of the others listed by you.

Those were exceptional individuals who made an impact on mankind, regardless of which kind.

There are schools for exceptionally gifted kids with advanced cognitive skills for a reason.

Also, there is a huge difference between being immersed in an engineering degree for 3 to 5 years, almost every single day, with compulsory assignments (depending on the country), and occasionally watching a couple of videos or reading one or two books.

All this without getting into the soft and ethical skills that engineering degrees also require.

reply
ozim
2 days ago
[-]
It still is engineering; you only mistake what the design phase is.

Writing code is the design phase.

You don't need a separate design phase to be doing design.

Will drop link to relevant video later.

reply
ChrisMarshallNY
2 days ago
[-]
I see there has been a “spirited discussion” on this. We can get fairly emotionally invested into our approaches.

In my experience (and I have quite a bit of it, in some fairly significant contexts), "It Depends" is really where it's at. I've learned to take a "heuristic" approach to software development.

I think of what I do as “engineering,” but not because of particular practices or educational credentials. Rather, it has to do with the Discipline and Structure of my approach, and a laser focus on the end result.

I have learned that things don’t have to be “set in stone,” but can be flexed and reshaped, to fit a particular context and development goal, and that goals can shift, as the project progresses.

When I have worked in large, multidisciplinary teams (like supporting hardware platforms), the project often looked a lot more "waterfall" than when I have worked in very small teams (or alone) on pure software products. I've also seen small projects killed by overstructure, and large projects killed by too much flexibility. I've learned to be very skeptical of "hard and fast" rules that are applied everywhere.

Nowadays, I tend to work alone, or on small teams, achieving modest goals. My work is very flexible, and I often start coding early, with an extremely vague upfront design. Having something on the breadboard can make all the difference.

I’ve learned that everything that I write down, “ossifies” the process (which isn’t always a bad thing), so I avoid writing stuff down, if possible. It still needs to be tracked, though, so the structure of my code becomes the record.

Communication overhead is a big deal. Everything I have to tell someone else, or that they need to tell me, adds rigidity and overhead. In many cases, it can’t be avoided, but we can figure out ways to reduce the burden of this crucial component.

It’s complicated, but then, if it were easy, everyone would be doing it.

reply
marcyb5st
2 days ago
[-]
Googler, but opinions are my own.

I disagree. The design phase of a substantial change should be done beforehand, with the help of a design doc. That forces you to put in writing (and in a way that is understandable by others) what you are envisioning. This exercise is really helpful in forcing you to think about alternatives, pitfalls, pros & cons, ... . This way, once stakeholders (your TL, other team members) agree, the reviews related to that change become only code-related (style, use this standard library function that does it, ...), but the core idea is there.

reply
mytailorisrich
2 days ago
[-]
This should only be the first phase of the design; it should be high level and not a commitment. Then you quickly move on to iterate on it by writing working code, which is also part of the design.

Having an initial design approved and set in stone, followed by a purely implementation phase, is very waterfall and very rarely works well. Even just "pitfalls and pros & cons" are hard to get right, because what you thought was needed or would be a problem may well turn out differently once you get hands-on and have actual data in the form of working code.

reply
ozim
2 days ago
[-]
This is the talk I mentioned in my original comment:

https://www.youtube.com/watch?v=RhdlBHHimeM

reply
jmholla
2 days ago
[-]
I also read this series of blog posts recently, where the author, Hillel Wayne, talked to several "traditional" engineers who had made the switch to software. He came to a similar conclusion, and while I was previously on the fence about how much of what software developers do could be considered engineering, it convinced me that software engineer is a valid title and that what we do is engineering. First post here: https://www.hillelwayne.com/post/are-we-really-engineers/
reply
wolvesechoes
2 days ago
[-]
Personally I don't need to talk with "traditional" engineers to have an opinion there, as I am mechanical engineer that currently deals mostly with software, but still in the context of "traditional" engineering (models and simulation, controls design).

Making software can definitely be engineering; most of the time it is not, not because of the nature of software, but because of the characteristics of the industry and the culture that surrounds it. And the argument in this article is not convincing (15 not-very-random engineers is not much to support an argument from "family resemblance").

reply
hgomersall
2 days ago
[-]
Engineering is just about wielding tools to solve problems. You don't need to use formal methods to do engineering in general. Sometimes they're useful; sometimes they're required; often they just get in the way.

In the context of software vs other sub-disciplines, the big difference is in the cost of iterating and validating. A bridge has very high iteration cost (generally, it must be right first time) and validation is proven over decades. Software has very low iteration cost, so it makes much more sense to do that over lots of upfront design. Validation of software can also generally be implemented through software tools, since it's comparatively easy to simulate the running environment of the software.

Other disciplines like electronics live a little closer to a bridge, but it's still relatively cheap to iterate, so you tend to plan interim design iterations to prove out various aspects.

reply
wolvesechoes
2 days ago
[-]
"In the context of software vs other sub-disciplines, the big difference is in the cost of iterating and validating."

People forget that software is used in those other disciplines. CFD, FEA, model-based design, etc. help to verify ideas and designs without building any physical prototype and burning money in a real lab.

You can do some strain and stress analysis on a virtual bridge to get a high degree of confidence that the real bridge will perform fine. Of course, then you need to validate it at all stages of development, and at the end perform final validation under weight.

The thing is that people building engines, cars, planes, sensors, PCBs, and bridges actually do so largely because they are required to. If you give them the freedom not to, many of them will spare themselves the effort. And they understand the principles of the things they are working on. No one requires any of that from someone who glued together a few NPM packages with a huge JS front-end framework; such a person may not even know how HTTP works, how the browser handles the DOM, etc. It's like having a mechanical engineer who doesn't understand the basic principles of dynamics.

There are industries dealing with software (e.g. controls design) that have a much higher degree of quality assurance and more validation tools, including meaningful quantitative criteria, so it clearly is not a matter of software vs hardware.

reply
tsimionescu
2 days ago
[-]
> Engineering is just about wielding tools to solve problems.

By that standard, doctors and hair stylists are also engineers, as are some chimps and magpies. I don't think it's a useful definition, it's far too broad.

reply
closewith
2 days ago
[-]
> In the context of software vs other sub-disciplines, the big difference is in the cost of iterating and validating.

No, the big difference is that in the Engineering disciplines, engineers are responsible end-to-end for the consequences of their work. Incompetent or unethical engineers can and regularly do lose their ability to continue engineering.

It's very rare that software developers have any of the rigour or responsibilities of engineers, and it shows in the willingness of developers to write and deploy software which has real-world costs. If developers really were engineers, they would be responsible for those downstream costs.

reply
hgomersall
2 days ago
[-]
There is plenty of engineering of physical things where nobody has or takes responsibility. Equally, there are plenty of examples of software where careful processes are in place to enforce exactly the responsibilities you discuss.
reply
closewith
2 days ago
[-]
> There are plenty of engineering of physical things where nobody has or takes responsibility.

That is by definition not engineering.

> Equally, there's plenty of examples of software where careful processes are in place to demonstrate exactly the responsibilities you discuss.

Software engineering of course exists, but 99%+ of software is not engineered.

reply
hgomersall
2 days ago
[-]
Ah, you've defined yourself to be right. Nice move.

I'm not sure the generally accepted definition of engineering makes any reference to taking responsibility: https://dictionary.cambridge.org/dictionary/english/engineer...

reply
closewith
2 days ago
[-]
In fact, every definition of engineering used by professionals includes explicitly taking responsibility, including in law in many countries.
reply
ozim
1 day ago
[-]
I will add this link once more.

This is the talk on real software engineering: https://www.youtube.com/watch?v=RhdlBHHimeM

reply
closewith
1 day ago
[-]
I think maybe you should watch this video, as it speaks only about software development and not engineering in any meaningful aspect.
reply
stonemetal12
2 days ago
[-]
>Engineering is just about wielding tools to solve problems. You don't need to use formal methods to do engineering in general.

Way too general to be useful. By that definition the store clerk is an engineer (tool: cash register; problem solved: my lack of gummy bears), as are janitors swinging mops or automotive techs changing oil.

Engineering is applied science.

reply
hgomersall
2 days ago
[-]
It wasn't a definition. Everything an engineer does is wield tools to solve problems. That doesn't mean wielding tools to solve problems automatically makes you an engineer. That said, I'm much less restrictive in those I would class as "doing engineering" than many.
reply
gabrielpoca118
2 days ago
[-]
But what about other engineering fields? From what I understand, if you compare it to chemical engineering, you have many more similarities, because you’re doing Hypothesis -> Experiment -> Analyze -> Refine -> Repeat, which seems very similar to what we do in software
reply
wolvesechoes
2 days ago
[-]
Mechanical engineering also uses prototypes, iteration, lab testing etc. Building architects build multiple models before the first shovel is put into the ground.

Software is clearly different than "hardware", but it doesn't mean that other industries do not use experiment and iteration.

reply
gabrielpoca118
2 days ago
[-]
ok you lost me then xD I was trying to understand what you meant by it not being engineering most of the time.
reply
ozim
1 day ago
[-]
I will add this link once more.

This is the talk on real software engineering: https://www.youtube.com/watch?v=RhdlBHHimeM

reply
jmholla
2 days ago
[-]
I think it'd still be worth your time to check out the articles, as they talk about other fields of engineering and how they compare to both software and mechanical.
reply
3036e4
2 days ago
[-]
I was an undergraduate (computer) engineering student, but like many of my friends at that time (the dot-com boom) I did not graduate, since it was too tempting to get a job and be well paid instead.

However, many of those I work with, probably half, and most I have worked with overall for the last 25+ years (since I dropped out), have an engineering degree. Especially the younger ones: this century there has been more focus on getting a degree, and fewer seem to drop out early to get a job like many of us did in my day.

So when American employers insist on giving me titles like "software engineer" I cringe. It's embarrassing really, since I am surrounded by so many that have a real engineering degree, and I don't. It's like if I dropped out of medical school and then people started calling me "doctor" even if I wasn't one, legally. It would be amazing if we could find a better word so that non-engineers like me are not confused with the legally real engineers.

reply
AndrewDavis
2 days ago
[-]
I've decided that titles are mostly meaningless in software. What title X means in one org means something else in a different one, with near-zero overlap, and another title might have considerable overlap with a differently named one but be viewed lowly, borderline pejoratively, at another org. E.g. system admin vs devops vs SRE. In one org, sysadmins are deploying desktop machines with no expectation they can cut code; in my old role as one, I was working with Linux systems, building glue and orchestration, and, when things went wrong, debugging backend code written by a development team. Something far closer to the work of "devops" or "SRE".

As an aside, I find your example of "doctor" amusing, because the term is overloaded: many consider it a synonym of physician, and that can cause confusion with other types of doctors.

reply
lukan
2 days ago
[-]
If you are doing the work of an engineer and you do it right, I believe you are an engineer, whether you graduated, or not.

And proper software development definitely has engineering parts. Otherwise, titles are just labels.

reply
ozim
2 days ago
[-]
This is the talk on real software engineering:

https://www.youtube.com/watch?v=RhdlBHHimeM

reply
erichocean
2 days ago
[-]
> Writing code is the design phase.

Rich Hickey agrees it's a part of it, yes. https://www.youtube.com/watch?v=c5QF2HjHLSE

reply
motorest
2 days ago
[-]
> Writing code is the design phase.

No, it really isn't. I don't know which amateur operation you've been involved with, but that is really not how things work in the real world.

In companies that are not entirely dysfunctional, each significant change to the system involves a design phase, which often includes reviews from stakeholders and involved parties, such as security reviews and data protection reviews. These tend to happen before any code is written. This doesn't rule out spikes, but their role is to verify and validate requirements and approaches, and to allow new requirements to emerge and feed back into the actual design process.

Cowboy coding only has a place in small refactorings, features, and code fixes.

reply
mytailorisrich
2 days ago
[-]
It is, as often, a trade-off.

You need a high level design up-front but it should not be set in stone. Writing code and iterating is how you learn and get to a good, working design.

Heavy design specs up-front are a waste of time. Hence the agile manifesto's "Working software over comprehensive documentation"; unfortunately, the key qualifier "comprehensive" is often lost along the way...

On the whole I agree that writing code is the design phase. Software dev. is design and test.

reply
motorest
2 days ago
[-]
> You need a high level design up-front but it should not be set in stone.

Yes, you need a design that precedes code.

> Writing code and iterating is how you learn and get to a good, working design.

You are confusing waterfall-y "big design upfront" with having a design. They are not the same thing.

This isn't even the case in hard engineering fields such as aerospace where prototypes are used to iterate over design.

In software engineering fields you start with a design and you implement it. As software is soft, you do not need to pay the cost of a big design upfront.

reply
mytailorisrich
2 days ago
[-]
> You are confusing waterfall-y "big design upfront" with having a design.

I do not and I have explained it.

> In software engineering fields you start with a design and you implement it

And part of my previous comment is that this "waterfall-y" approach in which you design first and implement second does not work and has never worked.

> you do not need to pay the cost of a big design upfront

Exactly, and not only that, but usually requirements will also change along the way. The design can and will change as you hit reality and learn while writing actual, working code. So keep your design as a high-level initial architecture, then quickly iterate by writing code to flesh out the design.

Software is often opposed to "traditional engineering", but it is actually the same. How many experiments, prototypes, and iterations go into building a car or a rocket? Many. Engineers do not come up with the final design up front. The difference is that this is expensive, while in software we can iterate much more, much quicker, and essentially for free, to get to the final product.

reply
stonemetal12
2 days ago
[-]
>How many experiments, prototypes, iterations go into building a car or a rocket? Many. Engineers do not come up with the final design up front.

Nowhere did anyone claim you need the full final design up front. For cars/rockets, how many of those experiments, prototypes, and iterations had designs? All of them. You never see a mechanical engineer walk out to the shop and just start hammering on a pile of slop until it sort of looks like a car.

>The difference it is that this is expensive while in software we can iterate much more, much quicker, and for free to get to the final product.

If you have no design to meet how do you judge the output of an iteration or know you have arrived at the final product?

reply
mytailorisrich
1 day ago
[-]
> If you have no design to meet how do you judge the output of an iteration or know you have arrived at the final product?

I think you mean "requirements" here instead of "design".

reply
motorest
1 day ago
[-]
> I think you mean "requirements" here instead of "design".

No. This is exactly what you are getting wrong. Requirements are constraints that guide the design. The design then is used to organize, structure, and allocate work, and determine what code needs to be written.

reply
motorest
2 days ago
[-]
> I do not and I have explained it.

You should review the sources of your confusion and personal misconceptions, as you deny design and then proceed to admit there is design.

> And part of my previous comment is that this "waterfall-y" approach in which you design first and implement second does not work and has never worked.

Nonsense. "Big design upfront" works, but is suboptimal in software development. That's why it's not used.

"Big design upfront" approaches are costly, as they require know-how and expertise to pull off, which most teams lack, and they assume requirements don't change, which is never the case.

Once you acknowledge that requirements will change and new requirements will emerge, you start to think of strategies to accommodate them. In software development, unlike in any hard engineering field, the primary resource consumed is man-hours. This means that, unlike in hard engineering fields, a software project can go through total rebuilds without jeopardizing its success. Therefore in software development there is less pressure to get every detail right at the start, and thus designs can be reviewed and implementations redone with minimal impact.

> Exactly, and not only that but usually requirements will also change along the way. The design can change and will change as you hit reality and learn while writing actual, working code.

Yes.

But you do need a design upfront, before code is written. Design means "knowing what you need to do". You need to have that in place to create tickets and allocate effort. It makes no sense to claim that writing code is the design stage. Only in amateur pet projects is that the case.

reply
ndriscoll
2 days ago
[-]
The difference I see is that in other fields, part of your design process is thinking through the logical details of the thing; essentially, doing some math. In software, the logical details are the finished product. The math is what you're trying to make. If you've actually thought through all of the details, you have written the software (if only in your head). If you haven't thought through all of the details and only figured out a high-level design, you've still written some software (essentially, stubbing out some functionality, or leaving it as a dependency to be provided, however you want to think of it). So naturally, one way to think through things is to write software.
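A minimal sketch of that idea (all names hypothetical, not from the discussion): a high-level design expressed directly in code, with the undecided parts left as stubs to be filled in later.

```python
# A high-level design written as code: the overall flow is decided,
# while the unresolved parts of the design are explicit stubs.

def fetch_orders(customer_id):
    """Load the customer's orders from storage (storage choice TBD)."""
    raise NotImplementedError

def apply_discounts(orders):
    """Pricing rules are still an open design question."""
    raise NotImplementedError

def build_invoice(customer_id):
    # The structure of this function IS the design; the stubs mark
    # exactly which decisions remain to be made.
    orders = fetch_orders(customer_id)
    discounted = apply_discounts(orders)
    return {"customer": customer_id, "lines": discounted}
```

Running `build_invoice` at this stage raises `NotImplementedError`, which is the point: the design compiles and its shape can be reviewed before any detail is committed to.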
reply
motorest
1 day ago
[-]
> The difference I see is that in other fields, part of your design process is thinking through the logical details of the thing. Essentially, doing some math.

The "some math" is used in engineering fields for things like preliminary design, sizing, verification & validation, etc. To a lesser degree, "some math" can be used in the design stages of software development projects. For example, estimating the impact of the microservices tax on total response times, to verify whether synchronous calls can work vs polling/messaging. Another example is estimating max throughput per service based on what data features in a response and how the infrastructure is scaled. These are the kinds of things you do way before touching code, to determine the expected impact of going with one architecture versus another that mitigates the issues.
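The back-of-envelope estimate described above can literally be a few lines of arithmetic. All numbers below are hypothetical, chosen only to show the shape of the calculation:

```python
# Estimating the "microservices tax" on response time (hypothetical numbers).
per_hop_overhead_ms = 5   # serialization + network + deserialization per call
chain_length = 6          # services called for one user request
service_work_ms = 20      # assumed useful work done by each service

# If the calls are synchronous and sequential, overheads add up along the chain:
sequential_total = chain_length * (service_work_ms + per_hop_overhead_ms)
print(sequential_total)   # 150 ms

# If the same calls can fan out in parallel, the critical path shrinks
# to the slowest single call:
parallel_total = service_work_ms + per_hop_overhead_ms
print(parallel_total)     # 25 ms
```

Even this crude model is enough to decide, before writing any code, whether a synchronous call chain fits a latency budget or whether the design needs fan-out, caching, or messaging.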

> In software, the logical details are the finished product. The math is what you're trying to make.

You're confused. The design stage precedes writing any code, let alone the finished product. Any remotely complex work, especially if it involves architecture changes, is preceded by a design stage where alternatives are weighed and validated, and tradeoffs are evaluated.

To further drive the point home, in professional settings you also have design reviews for things like security and data protection. Some companies even establish guidelines such as data classification processes and comparative design to facilitate these reviews.

> If you've actually thought through all of the details, you have written the software (if only in your head). If you haven't thought through all of the details and only figured out a high level design, you've still written some software (essentially, stubbing out some functionality, or leaving it as a dependency to be provided. However you want to think of it).

You're confusing having a design stage with having a big design upfront. This is wrong.

The purpose of the design stage is to get the necessary and sufficient aspects right from the start, before resources are invested (and possibly wasted) in producing something that fails to meet requirements. No one cares what classes or indentation style you use to implement something. The ultimate goal is to ensure the thing is possible to deliver, to establish what it actually does and how it does it, and to check that it is safe enough to use. You then start writing code to fill in the details.

reply
ndriscoll
1 day ago
[-]
Design reviews don't mean you don't write code first. We have a change that we're about to put through security review right now, to special-case TLS handling when talking to some older clients, but before we put a proposal forward, I validated that the core idea would actually work, and that we could heavily restrict the scope of what the change applies to, by writing the code and tests to do it. Meetings, and asking multiple people to read through and provide feedback on documents, very quickly rack up hours. Best to spend an hour or two first validating that what you're proposing isn't nonsense. This lets us be very specific about what the problem is, what the proposed solution is, and what the risks are.

With data classification, you're going to need to think through what data you are using and what you want to do with it. i.e. write a program.

I didn't claim class structure or indentation matters. I'm saying that assuming you are discussing some sort of algorithm or functionality, a formal language is a perfectly fine thing to use for thinking about the problem and writing down your ideas. Writing "what it actually does and how it does it" is just programming. If you write your ideas in a language like Scala, they can easily be more concise (so easier to review) than they would be in English, and you get a compiler helping you think through things.

reply
ozim
2 days ago
[-]
An operation that uses software developers not as code monkeys but as actual business problem solvers who also have business knowledge.

An operation that delivers features instead of burning budget on discussions.

An operation that uses test/acceptance environments where you deploy and validate the design, so people actually see the outcome.

Obviously you have to write down the requirements, but writing down requirements is not a design phase.

Design starts with an idea and is written down in a couple of sentences or paragraphs, then turned into code; while it is still in test/acceptance, it is still the design phase. Once the feature goes to production in a release, the "design phase" is done; implementation and changes are part of designing and of finding out issues and limitations.

reply
ownagefool
2 days ago
[-]
This response is rude / insulting and doesn't actually add much because you've just asserted a bunch of fallacious opinions without any meat.

My opinion is reality is more nuanced. Both "the code is self documenting" and "the code is the design" are reasonable takes within reasonable situations.

I'll give an example.

I work in a bureaucratic organization where there's a requirement to share data and a design doc that goes through a series of not-really-technical approvals. The entire point of the process is to be consumable to people who don't really know what an API is. It's an entirely reasonable point of view that we should just create the swagger doc and publish that for approval.

I worked in another organization where everything was an RFC. You make a proposal, all the tech leads don't really understand the problem space, and you have no experience doing the thing, so you get the nod to go ahead. You now have a standard that struggles against reality, and is difficult to change because it has broad acceptance.

I'm not saying we should live in a world with zero non-code artifacts, but as someone who hops from org to org, most of the artifacts aren't useful. A CI/CD pipeline that builds, tests, and deploys, plus looking at the output and the code, gives me way more insight than most non-code processes.

reply
albrewer
2 days ago
[-]
> Because in the software engineering world there is very little engineering involved.

I can count on one hand the number of times I've been given the time to do a planning period for something less than a "major" feature in the past few years. Oddly, the only time I was able to push good QA, testing, and development practices was at an engineering firm.

reply
ghosty141
2 days ago
[-]
I work in a small team where we are essentially 4-6 core developers. When I develop a feature I usually talk about it with my coworkers once I've made a rough draft in my head of how I'd approach it. They do the same, so our code reviews mostly catch only minor code smells etc., but we usually decide on the architecture together (usually 2-3 people).

I find this to be one of the most important things in our team. Once people don't agree on code it all kinda goes downhill with nobody wanting to interact with code they didn't write for various reasons.

In bigger orgs I believe it's still doable this way as long as responsibilities are shared properly and it's not just 4 guys who know it all and 40 others depend on them.

reply
sigmaml
2 days ago
[-]
I strongly second this. In my own experience of about 30 years, I have seen this method work almost every time.
reply
motorest
2 days ago
[-]
> It always seems as if the code review is the only time when all stakeholders really gets involved and starts thinking about a change.

That is a problem with your organization, not with Git or any version control system. PRs are orthogonal to it.

If you drop by a PR without being aware of the ticket that made the PR happen and the whole discussion and decision process that led to the creation of said tickets, you are out of the loop.

Your complaint is like a book publisher complaining that the printing process is flawed because seeing the printed book coming out of the production line is the only time when all stakeholders really get involved. Only if you work in a dysfunctional company.

reply
AdrianB1
2 days ago
[-]
I saw this in many places, so I read that original statement like a complaint about a widespread problem, not an exception in one company.

Sometimes it's not even about a PR, it's about an entire project. I always do reviews (design and code, in separate stages) for projects, and the code is often almost complete when people come for design reviews; by the time we get to code reviews it is usually too late to fix problems other than showstoppers. I have worked in small companies and huge companies (over 100k employees); some are better, most are bad, in my experience. YMMV, of course.

reply
Ozzie_osman
2 days ago
[-]
> Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you?

You don't need to. I've seen this generally work with some mix of the following:

1. Try to decouple systems so that it's less likely for someone in a part of the org to make changes that negatively impact someone in a more distant part of the org.

2. Some design review process: can be formal "you will write a design doc, it will be reviewed and formally approved in a design committee" if you care more about integrity and less about speed, or can be "write a quick RFC document and share it to the relevant team(s)".

3. Some group of people that have broad context on the system/code-base (usually more senior or tenured engineers). Again, can be formal: "here is the design review committee" or less formal: "run it by this set of folks who know their stuff". If done well, I'd say you can get pretty broad coverage from a group like this. Definitely not "everyone in the entire company". That group can also redirect or pull others in.

4. Accept that the process will be a bit lossy. Not just because you may miss a reviewer, but also, because sometimes once you start implementing the reality of implementation is different than what people expect. You can design the process for this by encouraging POC or draft implementations or spikes, and set expectations that not all code is expected to make it into production (any creative process includes drafts, rewrites, etc that may not be part of the final form, but help explore the best final form).

I've basically seen this work pretty well at company sizes from 5 engineers all the way up to thousands.

reply
DixieDev
2 days ago
[-]
Where I work we tend to write RFCs for fundamental design decisions. Deciding what counts as a "fundamental design decision" is sometimes self-moderated in the moment but we also account for it when making long term plans. For example when initially creating epics in Jira we might find it hard to flesh out as we don't really know how we're going to approach it, so we just start it off with a task to write an RFC.

These can be written either for just our team or for the eyes of all other software teams. In the latter case we put these forward as RFCs for discussion in a fortnightly meeting, which is announced well in advance so people can read them, leave comments beforehand, and only need to attend the meeting if there's an RFC of interest to them up for discussion.

This has gone pretty well for us! It can feel like a pain to write some of these, and at times I think we overuse them somewhat, but I much prefer our approach to any other place I've worked where we didn't have any sort of collaborative design process in place at all.

reply
unkulunkulu
2 days ago
[-]
I view this process like this: code review is a communication tool: you can discuss concrete decisions vs hand waving and explaining in the conceptual space, which of course has its place, but is limited.

But writing the whole working code just to discuss some APIs is too much and will require extra work to change if problems are surfaced on review.

So a design document is something in the middle: it should draw a line where the picture of the planned change is as clear as possible and can be communicated to stakeholders.

Other possible middle grounds include PRs that don’t pass all tests or that don’t even build at all. You just have to choose the most appropriate sequence of communication tools to come to agreements in the team and come to a point where the team is on the same page on all the decisions and how the final picture looks.

reply
sdeframond
2 days ago
[-]
I share your feelings.

Regarding design reviews, we used to have them at my current job. However we stopped doing both formal design documents and design reviews in favor of prototyping and iterative design.

The issue with the design phase is that we often failed to account for some important details. We spent considerable time discussing things and then, when implementing, realized that we had omitted some important detail or insight. But since we had already invested so much time in the design phase, it was tempting to take shortcuts.

What's more, design reviews were not conducted by the whole team, since it would be counter-productive to have 10 or more people in the same room. So we'd still discover things during code reviews.

And then not everyone is good at, or motivated to, produce good design documents.

In the end, I believe that any development team above 5 people is bound to encounter these kinds of inefficiencies. The ideal setup is to put 5 people in the same room with the PO and close to a few key users.

reply
benji-york
2 days ago
[-]
> The ideal setup is to put 5 people in the same room with the PO and close to a few key users.

(I suspect you are aware, but just in case this is new to you.) This is essentially the core of Extreme Programming.

reply
sdeframond
2 days ago
[-]
What team size does XP recommend, if any?

It seems like the standard around me is between 8 and 12 people. This is too many in my opinion.

I believe this is because management is unknowingly aiming for the biggest team that does not completely halt, instead of seeking the team that delivers the most bang for the buck.

reply
benji-york
2 days ago
[-]
I don't know of a canonical XP team size, but "small" is the normal recommendation. 8-12 would seem larger than small to me.

Personally, 8 is the largest I would have a team. At that point you should consider breaking it into two teams of 4 (even if those teams periodically recombine from the original set of 8 people).

reply
sdeframond
2 days ago
[-]
I find it non-trivial to split a team into two independent teams.

If the two teams have to coordinate a lot and work on the same code base, are they really two teams?

To be independent, they would need to work on functionally different parts of the project. Not all projects have several independent parts, feature-wise.

What am I missing?

reply
skybrian
2 days ago
[-]
One way to fix it: pair programming. You're getting feedback in real time as you write the code.

Unfortunately, the conditions where it works well can be difficult to set up. You need people who are into it and have similar schedules. And you don't want two people waiting for tests to run.

reply
guytv
2 days ago
[-]
Anyone actually done long-term pair programming and lived to tell the tale? Is it real, or just a utopian fantasy?
reply
jacques_chester
2 days ago
[-]
It's not for everyone. Some people have excellent reasons why it isn't workable for them. Others have had terrible experiences. It takes a great deal of practice to be a good pair and, if you don't start by working with an experienced pair, your memories of pairing are unlikely to be fond.

However.

I paired full-time, all day, at Pivotal, for 5 years. It was incredible. Truly amazing. The only time in my career when I really thrived. I miss it badly.

reply
skybrian
2 days ago
[-]
I did it at a startup for a few months. The startup failed, but I think it was more of a business failure.

Pivotal Labs was a contracting firm that did it for years. They aren’t around anymore, but they had a good run:

https://en.m.wikipedia.org/wiki/Pivotal_Labs

reply
grilledcheez
2 days ago
[-]
It's crazy that we go out of our way to design processes (code review, design review) to avoid actually just... working together? And then we design organizations that optimize for those processes instead of optimizing for collaboration.
reply
guytv
2 days ago
[-]
With AI based coding (no, i won't use "Vibe coding", thank you) this workflow improves a lot. Instead of jumping straight into code, I have my engineers write a Notion doc that describes what needs to be built. Think of it like an LLD, but really it’s a prompt for Claude-code. This forces them to think through the problem at a low level, and they share the doc with me before sending it to Claude — so I get to review early in the process. Once we finalize this "LLD" or "low-level-prompt", they hand it to Claude. The next time I see the work is in a GitHub PR. At that point, we rarely have to throw everything away and start from scratch.
reply
teiferer
2 days ago
[-]
Seems like your organization is lacking structure and communication.

Where I work, the structure is such that most parts of the codebase have a team that is responsible for it and does the vast majority of changes there. If any "outsider" plans a change, they come talk to the team and coordinate.

And we also have strong intra-team communication. It's clear who is working on what and we have design reviews to agree on the "how" within the team.

It's rare that what you describe happens. 95% of the code reviews I do are without comments or only with minor suggestions for improvement. Mainly because we have developed a culture of talking to each other about major things beforehand and writing the code is really just the last step in the process. We also have developed a somewhat consistent style within the teams. Not necessarily across the teams, but that's ok.

TL;DR: It's certainly possible to do things better than what you are experiencing. It's a matter of structure, communication and culture.

reply
ruslan_sure
2 days ago
[-]
This can be used in any process where the result is only judged at the end.

The solution here may be to add a midterm check. I think this is what you mean by a "design review."

In my experience, there are some rules that need to be followed for it to work.

- Keep the number of stakeholders involved in all decisions, including PR, as small as possible.

- Everyone involved should take part in this check. That way, no one will be surprised by the results.

- This check should be documented, e.g. in the ticket.

When and how to do this check and how to handle disagreements depend on the task, culture, and personalities.

reply
ruslan_sure
2 days ago
[-]
We should do something similar with AI-coding.

If you don't have a documented mid-term check, a vibe-coded PR might not be what you expected.

reply
flir
2 days ago
[-]
You can get the same thing if you do user interface testing after you've built the thing. A design system can help there - at the very least, the change can feed back into the next revision of the playbook.

Even if you can't fix it this time, hopefully you've taught someone a better pattern. The direction of travel should still be positive.

On personal projects I've used architectural decision records, but I've never tried them with a team.

reply
tomasreimers
2 days ago
[-]
Just taking a step back, it is SO COOL to me to be reading about stacked pull requests on HN.

When we started graphite.dev years ago that was a workflow most developers had never heard of unless they had previously been at FB / Google.

Fun to see how fast code review can change over 3-4yrs :)

reply
benreesman
2 days ago
[-]
I'm a pre-mercurial arcanist refugee who tends to promote Graphite in teams that are still struggling with mega-PRs and merge commits and other own goal GitHub-isms. Big fan in general even with the somewhat rocky scaling road we've been on :)

And I very much appreciate both the ambition and results that come from making it interop with PRs. It's a nightmare problem and it's pretty damned amazing it works at all, let alone most of the time.

I would strongly lobby for a prescriptive mode where Graphite initializes a repository with hardcore settings that would allow it to make more assumptions about the underlying repo (merge commits, you know the list better than I do).

I think that's what could let it be bulletproof.

reply
tomasreimers
2 days ago
[-]
We've talked about a "safe mode" where we initialize it similar to JJ - such that you can no longer directly run git commands without funneling them thru graphite, but which would make it bulletproof. Would that be interesting?
reply
benreesman
2 days ago
[-]
I think Jujutsu is interesting in its own right!

It seems non-obvious that you would have to prohibit git commands in general, they're already "buyer beware" with the current tool (and arcanist for that matter). Certainly a "strict mode" where only well-behaved trees could interact with the tool creates scope for all kinds of performance and robustness optimizations (and with reflog bisecting it could even tell you where you went off script).

I was more referring to how the compromises gt has to make to cope with arbitrary GitHub PRs seem a lot more fiddly than directly invoking git, but that's your area of expertise and my anecdote!

Broad strokes I'm excited for the inevitable decoupling of gt from GitHub per se, it was clearly existential for zero to one, but you folks are a first order surface in 2025.

Keep it up!

reply
m11a
2 days ago
[-]
Graphite seems cool, it’s just unfortunately quite expensive and sometimes hard to convince procurement/etc to invest in when it has a noticeable cost involved.

So I’m really hoping something like Graphite becomes open-source, or integrated into GitHub.

reply
akshayshah
2 days ago
[-]
git-spice does everything I liked from Graphite, but it’s fully open source and easy to adopt piecemeal.

https://abhinav.github.io/git-spice/

reply
sywhang
2 days ago
[-]
git-spice is completely open source and free: https://abhinav.github.io/git-spice/
reply
mytailorisrich
2 days ago
[-]
Stacked pull requests seem to add a layer of complexity to solve a problem that should and can be avoided in the first place.

Frequent, small changes are really a good practice.

Then we have things like trunk-based development and continuous integration.

reply
vlovich123
2 days ago
[-]
I’m confused. How do you do frequent small changes and avoid stacked PRs. Do you just do a small commit, wait for a review, merge, do another small commit? Or do you make a bunch of small commits locally and only put up the next one for review when the previous one is reviewed and merged?

Those are the only models I can think of, and it's weird to advocate for a variable-time asynchronous process in the middle of your code or review loops. It seems like you're just handicapping your velocity for no reason.

Stacked PRs are precisely about factoring out small changes into individually reviewable commits that can be reviewed and landed independently, decoupling reviewer and developer while retaining good properties like small commits that the reviewer is going to do a better job on, larger single purpose commits that the reviewer knows to spend more time on without getting overwhelmed dealing with unrelated noise, and the ability to see relationships between smaller commits and the bigger picture. Meanwhile the developer gets to land unobtrusive cleanups that serve a broader goal faster to avoid merge conflicts while getting feedback quicker on work while working towards a larger goal.

The only time stacked commits aren't as useful is for junior devs who can't organize themselves well enough to understand how to do this well (it's an art you have to intentionally practice) and don't generally have a good handle on the broader scope of what they're working towards.

reply
jacques_chester
2 days ago
[-]
Trunk-based development, by itself, is a fool's errand.

But combine it with TDD & pairing and it becomes a license to deliver robust features at warp speed.

reply
vlovich123
1 day ago
[-]
I don’t follow. Regardless of where you merge, are you not developing features on a shared branch with others? Or do you just have a single long development branch and then merge once “you’re done” and hope that there are no merge conflicts? Either way, I’m missing how reviews are being done.
reply
johnmaguire
2 days ago
[-]
Stacked PRs allow me to post frequent, small changes without waiting for a review between each one.
reply
mytailorisrich
2 days ago
[-]
Well, you don't need stacked PRs for that...

I think stacked PRs are a symptom of the issues the underlying workflow (feature branches with blocking reviews) has.

reply
ncruces
2 days ago
[-]
“What you'll do next and in what way” is often an important tool to put the small changes into context.

Stacked pull requests can be an important tool to enable “frequent, small changes” IMO.

Sure, I can use a single pull request and a branch on top of that, but then it's harder for others to leave notes on the future, WIP steps.

A common situation is that during code review I create a few alternative WIP changes to communicate to a reviewer how I might resolve a comment; they can do the same, and share it with me. Discussion can fork to those change sets.

Gerrit is much closer to my desired workflow than GitHub PRs.

reply
mytailorisrich
2 days ago
[-]
From a continuous integration perspective, my understanding is that stacked pull requests do not make changes more frequent if we define a "change" as a commit on the master branch. They only split the feature branch into smaller chunks. On the other hand, I do take your point about context over a number of consecutive changes.

But, to me, "creating a few alternative WIP changes to communicate to a reviewer" indicates an issue with code reviews. I don't think code reviews are the time to propose alternative implementations, even if you have a "better" idea, unless the code under review is broken.

reply
gen220
2 days ago
[-]
FWIW, stacking is "just a tool" that you can use to make whatever sort of workflow you want. I agree creating alt PRs isn't a high-value usage of the tools.

The //actually better// workflows stacking enables are the same sort of workflows that `git add -p`, `git commit --fixup` and `git rebase` enable, just at a higher level of abstraction (PRs vs commits).

You can "merge as a stack" as you imply, but you can also merge in sub-chunks, or make a base of 2-3 PRs in a stack that 4 other stacks build on top of. It allows you to confidently author the N+1th piece of work that you'd normally "defer" doing until after everything up to N has been reviewed.

An example: I add a feature flag, implement a divergent behavior behind a feature flag gate, delete the feature flag and remove the old behavior. I can do this in one "stack", in which I deploy the first two today and the last one next week.

I don't have to "come back" to this part of the codebase a week from now to implement removing the flag, I can just merge the last PR that I wrote while I had full context on this corner.

In theory you can do all of this stuff with vanilla git and GitHub. In non-stacking orgs, I'd regularly be the only person doing this, because I was the only one comfortable enough with git (and stacking) for it to not be toooo big a burden to my workflow. Graphite (and other stacking tools) make this workflow more accessible and intuitive to people, which is a big net win for reviewers imo.

reply
vlovich123
2 days ago
[-]
> From a continuous integration perspective my understanding is that stacked pulled requests do not make change more frequent if we define a "change" as being committed on the master branch

Empirically this is not true if you also control for review quality. If your code review is a rubber stamp then sure mega PRs win because you put up a PR and then merge. But why review then?

However, code review quality goes up when you break things down into smaller commits because the code reviewer can sanity check a refactor without going over each line (pattern matching) while spending more time on other PRs that do other things.

And if you are breaking things down, then stacked PRs are definitely faster in changes merged to master per unit of time. I introduced Graphite to my team, and whereas before we struggled to land a broken-down PR of ~5 commits in one week, we’d regularly land ~10+ commit stacks every few days: most of the changes of a larger body of work got approved and merged (since oftentimes the commit order isn’t even important, you can reorder the small commits), conditional approvals (i.e. cleanups needed) didn’t require further follow-ups from the reviewer, and longer-discussion PRs could stay open without blocking progress, so both developer and reviewer could focus their attention there.

Additionally, graphite is good about automatically merging a group of approved small individual commits from a larger set of changes automatically without you babysitting which is infinitely easier than managing this in GitHub and merging 1 commit, rebasing other PRs after a merge etc.

reply
vlovich123
2 days ago
[-]
What’s your counter proposal on the workflow? No code reviews?
reply
foota
2 days ago
[-]
I miss the fig workflow :-(
reply
kyrra
2 days ago
[-]
Try `jj`, as others have mentioned. It's being built by the team that built/maintains fig, and they are porting all their learnings into it.
reply
ndr
2 days ago
[-]
jj is cool solo, but it doesn't seem of much help for keeping a stack of PRs neatly updated on GitHub
reply
abound
2 days ago
[-]
It requires a bit of scripting between the `gh` CLI and `jj`, but it's totally doable to maintain even complex stacks of PRs on GitHub with jj.

One thing I've found at $DAYJOB is that I have to set the PR's "base" branch to "main" before I push updated commits (and then switch it back to the parent after), otherwise CI thinks my PR contains everything on main and goes nuts emailing half the company to come review it.

reply
ndr
2 days ago
[-]
Is there something that does this?

I've played with git town which is great for what it is.

But at $DAYJOB we are now all on graphite and that stacking is super neat. The web part is frustratingly slow, but they got stacking working really well.

reply
jacobegold
2 days ago
[-]
we have a big effort in the works to improve web perf! where specifically are you seeing slowness in the app — what flows, what pages, etc?
reply
ndr
3 hours ago
[-]
Super glad to hear!

The worst offender is a slack notification[0] deep link into a PR I need to review.

It loads in stages, and the time from click to first diff is often so frustratingly long that I end up copying the PR ID and going to GitHub instead.

Sometimes I give up while Graphite is still loading and use the shortcut C-G to go to GitHub.

The second issue might be the landing page. I love what it shows compared to GitHub, but it's often slow to display loading blocks for things that haven’t even changed. Reloads are usually fast after that — until sometime later, maybe a day, when it slows down again.

I don't know why it feels worse than Linear, even though there are clearly many similarities in how it's supposed to load.

The guest instance isn’t so much about loading speed, but usage speed.

When I submit a stack of PRs, I get a nice carousel to fill in PR titles/descriptions and choose where to publish each PR. What’s missing for me there is access to files and diffs, so I can re-review before publishing. I often end up closing it and going back to the PR list instead.

[0] Thank God for those! You've made them much more useful than GitHub's. Also, the landing page is far more helpful in terms of what’s displayed.

reply
thinkindie
2 days ago
[-]
Given the security incident that happened to CodeRabbit, I’m a bit less enthusiastic about trying out new tools that put LLMs and my codebase under the same roof.

What can be a very nice experiment to try something new can easily become a security headache to deal with.

reply
vlovich123
2 days ago
[-]
I don’t understand. By LLMs, are you referring to the optional LLM review Graphite offers as an additional purchase? I’m not sure I understand the risk concern.
reply
globular-toast
2 days ago
[-]
As someone who already breaks tasks into atomic (or near atomic) pieces and always has done, is this just submitting a PR for each commit as you go? How does it work for breaking changes? Requires use of feature flags?
reply
gen220
2 days ago
[-]
Sort of, yeah! It lends itself well to a 1 PR = 1 commit philosophy. Every PR has to pass CI to be mergeable. If you want to make a CI-breaking change, putting that behind a feature flag is one valid strategy.

I'd recommend giving it a try to see what it's like. The `gt`/onboarding tour is pretty edifying and brief.

It's likely that you'll find that `gt` is "enabling" workflows that you've already found efficient solutions for, because it's essentially an opinionated and productive subset of git+github. But it comes with some guardrails and bells and whistles that makes it both (1) easier for devs who are new to trunk-based dev to grok and (2) easier for seasoned devs to do essentially the same work they were already doing with fewer clicks and less `git`-fu.

reply
vinnymac
2 days ago
[-]
Just signed up, looks like everything I ever wanted
reply
paulddraper
2 days ago
[-]
Dude, I love Graphite.

Best AI code review, hands down. (And I’ve tried a few.)

reply
jacobegold
2 days ago
[-]
hell yeah
reply
Areibman
2 days ago
[-]
The biggest grip I have with Github is the app is painfully slow. And by slow, I mean browser tab might freeze level slow.

Shockingly, the best code review tool I've ever used was Azure DevOps.

reply
wenc
2 days ago
[-]
When I worked at a Microsoft shop, I used Azure DevOps. To be honest, it's actually not bad for .NET stuff. It fits the .NET development life cycle like Visual Studio fits C#.
reply
echelon
2 days ago
[-]
> The biggest grip I have with Github is the app is painfully slow. And by slow, I mean browser tab might freeze level slow.

Javascript at scale combined with teams that have to move fast and ship features is a recipe for this.

At least it's not Atlassian.

reply
lmm
2 days ago
[-]
Stash (now BitBucket Server) had the best code review going, head and shoulders above GitHub to the point I thought GitHub would obviously adopt their approach. But I imagine Atlassian has now made it slow and useless like they do with all their products and acquisitions.
reply
caspar
1 day ago
[-]
Stash was not an acquisition. Stash was built from the ground up inside Atlassian during its golden age, by a bunch of engineers who really cared about performance. Though it helped that they didn't have Jira's 'problem' of having 8 figures of revenue hanging off a terrible database schema designed a decade ago.

You might be thinking of Fisheye/Crucible, which were acquisitions, and suffered the traditional fate of being sidelined.

(You are 100% correct that Stash/Bitbucket Server has also been sidelined, but that has everything to do with their cloud SaaS model generating more revenue than selling self-hosted licenses. The last time I used it circa 2024, it was still way faster than Bitbucket Cloud though.)

Source: worked at Atlassian for a long time but left a few years ago.

reply
dotancohen
2 days ago
[-]
Bit Bucket had a git-related tool called Stash? I love Bit Bucket, but I'm glad I did not know about that.
reply
lmm
2 days ago
[-]
There was a locally-hosted Git server platform called Stash. Atlassian bought it, rebranded it as "BitBucket Server" (positioned similarly to GitHub Enterprise or self-hosted GitLab) and gradually made it look and feel like BitBucket (the cloud product), even though they're actually completely separate codebases (or at least used to be).
reply
dotancohen
2 days ago
[-]
But why was it ever called Stash? The word stash has particular meaning in git, and it's been that way since a very early git version.
reply
phito
2 days ago
[-]
That explains why it doesn't suck nearly as much as Jira and Confluence...
reply
agos
2 days ago
[-]
which is ironic because historically the slowness of GitHub's UI was due to them not using much JS and requiring round trips for stuff like flagging a checkbox.
reply
benrutter
2 days ago
[-]
What did you like so much about DevOps?

I use it every day and don't have any issues with the review system, but to me it's very similar to github. If anything, I miss being able to suggest changes and have people click a button to integrate them as commits.

reply
neerajsi
2 days ago
[-]
I've used this 'suggestion' workflow in Azure DevOps. https://devblogs.microsoft.com/devops/introducing-the-new-pu...
reply
benrutter
2 days ago
[-]
I didn't know about it at all! That looks like exactly what I wanted.

So I'm back to liking dev-ops and github code reviews identically!

reply
Areibman
2 days ago
[-]
Commenting feels so much better. You can comment on entire files, and you can leave review comments that actually "block" (rather than just get appended to the conversation)
reply
opium_tea
2 days ago
[-]
That suggestion feature actually exists on ADO. It was introduced in the last year or so.
reply
000ooo000
2 days ago
[-]
Curious if you've used GitLab in anger. It is my pick of the big 4.
reply
awesome_dude
2 days ago
[-]
nit: gripe, not grip :-P
reply
noname120
2 days ago
[-]
Worst part is it was alright when it was mostly static pages. With the gradual SPA rewrite it has become absolute garbage for basically no benefits.
reply
ivanjermakov
2 days ago
[-]
I find the idea of using git for code reviews directly quite compelling. Working with the change locally, as if you were the one who made it, is very convenient, considering the clunky read-only web UI.

I didn't get why they stick with the requirement that a review is a single commit. To keep the git-review implementation simple?

I wonder if approach where every reviewer commits their comments/fixes to the PR branch directly would work as well as I think it would. One might not even need any additional tools to make it convenient to work with. This idea seems like a hybrid of traditional github flow and a way Linux development is organized via mailing lists and patches.

reply
spike021
2 days ago
[-]
is github's PR considered read-only?

i've had team members edit a correction as a "suggestion" comment and i can approve it to be added as a commit on my branch.

reply
ivanjermakov
2 days ago
[-]
By read-only I meant that you can't fully interact with the code: run/debug it, use intellisense, etc.
reply
spike021
2 days ago
[-]
Can't you just check out the branch of the repo locally into your IDE? I'm still confused about what limitation you are talking about.
reply
ivanjermakov
2 days ago
[-]
Described well in the post. This way you have to switch between the IDE and the web diff viewer, which is redundant and not convenient.
reply
teiferer
2 days ago
[-]
> I didn't get why stick with the requirement that review is a single commit

Yeah that is pretty weird. If 5 people review my code, do they all mangle the same review commit? We don't do that with code either, feels like it's defeating the point.

Review would need to be commits on top of the reviewed commit. If there are 5 reviews of the same commit, then they all branch out from that commit. And to address them, there is another commit which also lives beside them. Each change becomes a branch of stacked commits, with branches chained on top of one another. Each of the commits in those chained branches then has comment commits attached. Those comment commits could even form chains if a discussion is happening. Then when everybody is happy, each branch gets squashed into a single commit and those then get rebased onto the main branch.

You likely want to make new commits for that though to preserve the discussions for a while. And that's the crux: That data lives outside the main branch, but needs to live somewhere.

reply
koolba
2 days ago
[-]
> When I review code, I like to pull the source branch locally. Then I soft-reset the code to the merge base, so that the code looks as if it was written by me.

This is eerily similar to how I review large changes that do not have a clear set of commits. The real problem is working with people who don’t realize that if you don’t break work down into small self-contained units, everybody else is going to have to do it individually. Nobody can honestly say they can review tons of diffs to a ton of files and truly understand what they’ve reviewed.

The whole is more than just the sum of the parts.

reply
stitched2gethr
2 days ago
[-]
For those that want an easy button. Here ya go.

```
review () {
    if [[ -n $(git status -s) ]]; then
        echo 'must start with clean tree!'
        return 1
    fi

    git checkout pristine # a branch that I never commit to
    git rebase origin/master

    branch="$1"
    git branch -D "$branch"
    git checkout "$branch"

    git rebase origin/master
    git reset --soft origin/master
    git reset

    nvim -c ':G' # opens neovim with the fugitive plugin - replace with your favorite editor

    git reset --hard
    git status -s | awk '{ print $2 }' | xargs rm
    git checkout pristine
    git branch -D "$branch"
}
```
reply
000ooo000
2 days ago
[-]
Having a PR worktree is good with this kind of workflow.
reply
cedws
2 days ago
[-]
Crafting good commits, and good PRs out of those commits is a skill just like how writing good code is. Unfortunately, too many people suck at the former.
reply
Maxion
2 days ago
[-]
This also ties in directly with tickets and the overall workflow the team has. I find this to have a huge effect on how manageable PRs are. I feel the majority of devs are quite oblivious to the code they produce; they simply keep coding until they fulfill the acceptance criteria, no matter if the result is 200 lines in 1 file or 1,000 lines in 30 files.
reply
porridgeraisin
2 days ago
[-]
I use this: https://github.com/sindrets/diffview.nvim

as a PR review tool in neovim. It's basically vscode's diff tool UI-wise but integrates with vim's inbuilt diff mode.

Also, `git log -p --function-context` is very useful for less involved reviews.

reply
aitchnyu
2 days ago
[-]
We were a team with highly parallelizable data science tasks; we checked out 3 copies of the repo: one for our branch, two for branches where we were the reviewer.
reply
jFriedensreich
2 days ago
[-]
It's pretty clear to a growing number of devs what a review tool should look like. It is more a matter of what needs to happen for this to become a usable and sustainable reality, and what shape of organisation/players can make this happen in the right way.

- git itself won't go much further than the change-id, which is already a huge win (thanks to jj, git butler, gerrit and other teams)

- graphite and github clearly showed they are not interested in solving this for anyone but their userslaves and have obviously opposing incentives.

- there are dozens of semi abandoned cli tools trying this without any traction, a cli can be a part of a solution but is just a small part

What we need:

- usable fully local

- core team support for vscode not just a broken afterthought by someone from the broader community

- web UI for usecases where vscode does not fit (possibly via vscode web or other ways to reuse as much of the interface work that went into the vscode integration)

- the core needs to be usable from a cli or library with clear boundaries so other editor teams can build as great integrations as the reference but fitting their native ui concepts

- it needs to work for commits, branches, stacked commits, and any snapshot an agent creates, as well as for reviewing a dev's own work before pushing

- it needs to incorporate CI/CD signals natively; Meta did great UI work on this, and it's crucial not to ignore all that progress but to build on top of it

- it needs to be as fine grained as the situation requires, with editability at every step. Why can I accept just one line in Cursor, but there is nothing like that when reviewing a human's code? Why can I fix a typo without any effort when reviewing in Cursor, but have to go through at least 5 clicks to do the same when fixing a human's typo?

- It needs to be fully incremental: when a PR is fixed there needs to be a simple way to review just the fix and not re-review the whole PR or the full file

reply
liampulles
2 days ago
[-]
Here's an alternative I've wondered about: Instead of one person writing code, and another reviewing it - instead you have one person write the first pass and then have another person adjust it and merge it in. And vice-versa; the roles rotate.

Anyone tried something like this? How did it go?

reply
whyever
2 days ago
[-]
I know some people who do trunk-based development with pair programming: You write the code together, and once you are satisfied, you merge it to the main branch, from where it is deployed to production if the tests pass. It works well for them.
reply
mattikl
2 days ago
[-]
I've noticed for a long time that if I have participated in writing the code under review, I'm able to provide much more insight. I think what you're suggesting starts from thinking the code as "our code" instead of my code vs. your code, which so easily happens with pull requests. And learning to work iteratively instead of trying to be too perfect from the start, which goes well with methodologies like TDD.
reply
actinium226
2 days ago
[-]
I would want the first person to write 90+% of the code, and really more like ~98% of it, because at some point you need to just do your job. But I like the idea of having the reviewer make the relevant changes themselves and merge it in. That's more or less what we did at the first place I worked, and the expectation was that both of you were responsible for the code. If it was more than minor changes the second person could send you notes for you to implement, but they were always the person to merge. I prefer it to the alternative of chasing someone down so that they hit "approve" so that you can go back to your desk and hit "merge."
reply
thinkindie
2 days ago
[-]
That’s basically an async pair programming session, isn’t it?
reply
liampulles
2 days ago
[-]
Yes I think so. Have you tried it?
reply
thinkindie
2 days ago
[-]
Just cases where a PR was submitted close to someone's holidays and I assigned it to someone else on the team to bring it over the line. But otherwise I have only worked with sync pair programming.
reply
000ooo000
2 days ago
[-]
There's something to be said for reviewing code you've never seen before. When you're familiar with it, you lose the ability to be surprised, and sometimes "WTF?" is useful for a reviewer to think.
reply
romanovcode
2 days ago
[-]
Great idea, if you're fine with development taking twice as long.
reply
caspar
1 day ago
[-]
If you are used to PRs taking days (or over a week!) to merge, then I've found it's way faster to get someone to sit down and walk them through the code - which is not miles different from what's being proposed here.
reply
ramon156
2 days ago
[-]
The alternative is a merge that potentially has more bugs. Trunk based is definitely not twice as long, rather 1.5x on average
reply
fogzen
2 days ago
[-]
It might be worth it if it reduces bugs. If you’re running a payments company maybe working software is more important than saving a buck and shipping a day earlier.

Braintree was a pair programming company for example.

reply
hydroxideOH-
2 days ago
[-]
I use the GitHub Pull Request extension in VSCode to do the same thing (reviewing code locally in my editor). It works pretty well, and you can add/review comments directly in the editor.
reply
ivanjermakov
2 days ago
[-]
It's better, but still quite deep vendor lock-in (in both GitHub and VSCode).
reply
hydroxideOH-
2 days ago
[-]
Well my employer chooses to use GitHub so I don’t have a choice there. And it’s vendor lock-in VSCode but that’s already my primary editor so it means there’s no need to learn another tool just for code review.
reply
cyberax
2 days ago
[-]
JetBrains IDEs can do the same.
reply
plonq
2 days ago
[-]
Unfortunately it’s not feature complete - you can’t paste images in review comments, for example. Still very useful for large PRs though.
reply
NortySpock
2 days ago
[-]
GitHub may be dominant, but it's not like it doesn't have competitors nipping at its heels (GitLab, BitBucket come to mind).

VSCode is open source, and there are plenty of IDEs...

I guess I'm just focused on different lock-in concerns than you are.

reply
pm90
2 days ago
[-]
Same! It's much nicer now, especially since GitHub seems to be pretty arbitrary/rigid about when it hides files that have "too many changes". It's so much nicer to see and navigate around such changes quickly in VSCode than trying to do the same in the web interface.

I suspect that since this is possible with VSCode/GitHub, it's probably extensible to other providers and editors.

reply
cebert
2 days ago
[-]
I use this a lot too. Also, if you open a PR on the GitHub website and press the “.” key, it opens the review in VSCode, which I consider a much better web experience.
reply
reilly3000
2 days ago
[-]
TIL thanks.
reply
jacobegold
2 days ago
[-]
It's so cool that Git is considering first class change IDs!! That's huge! This sounds similar to what we had at Facebook to track revisions in Phabricator diffs. Curious if anyone knows the best place to read about this?
reply
3036e4
2 days ago
[-]
The fundamental problem is that git doesn't track branches in any sane way. Maybe it would be better to fix that? Fossil remembers what branch a commit was committed on, so the task branch itself is a change ID. That might be tricky to solve while also allowing git commands to mess with history of course. Fossil doesn't have that problem.
reply
jbmsf
2 days ago
[-]
Recently, I've been wondering about the point of code review as a whole.

When I started my career, no one did code review. I'm old.

At some point, my first company grew; we hired new people and started to offshore. Suddenly, you couldn't rely on developers having good judgement... or at least being responsible for fixing their own mess.

Code review was a tool I discovered and made mandatory.

A few years later, everyone converged on GitHub, PRs, and code review. What we were already doing now became the default.

Many, many years later, I work with a 100% remote team that is mostly experienced, and 75% or more of our work is writing code that looks like code we've already written. Most code review is low value. Yes, we do catch issues in review, especially with newer hires, but it's not obviously worth the delay of a review cycle.

Our current policy is to trust the author to opt-in for review. So far, this approach works, but I doubt it will scale.

My point? We have a lot of posts about code review and related tools and not enough about whether to review and how to make reviews useful.

reply
Feeble
2 days ago
[-]
I am very much in the same position right now. My dev team has introduced mandatory code reviews for every change and I can see their output plummeting. It also seems that most code reviews are mostly about syntax and code format; no one actually seems to run the code or check whether the logic makes sense.

I think it's easy to add processes under the good intention of "making the code more robust and clean", but I have never heard anyone discuss what this process costs the team in efficiency.

reply
alkonaut
2 days ago
[-]
The value of having more people actually see the code will be there even if it’s just an unnecessary syntax nitpick.
reply
mgaunard
2 days ago
[-]
You need to validate things like syntax upfront so that such things don't make it to review to begin with.

I'm not a fan of automatic syntax formatting but you can have some degree of pre-commit checks.

reply
benrutter
2 days ago
[-]
Interesting take! Personally I'd never throw out code review, for a couple reasons.

1. It's easy to optimise for talented, motivated people in your team. You obviously want this, and it should be the standard, but you also want it to be the case that somebody who doesn't care about their work can't trash the codebase.

2. I find even people just leaving 'lgtm' style reviews for simple things, does a lot to make sure folks keep up with changes. Even if there's nothing caught, you still want to make sure there aren't changes that only one person knows about. That's how you wind up with stuff like, the same utility functions written 10 times.

reply
pm90
2 days ago
[-]
My rule of thumb is that if you have an OnCall rotation for a codebase, you should require reviews. Besides all the benefits you've mentioned, its important to spread know-how of the code so that people on the rotation don't need to be pulled in e.g. over the weekends/on vacation because they're the only ones familiar with the code.

(There should be breakglass mechanisms to bypass code reviews, sure. Just the default should always be to require reviews)

reply
dsego
2 days ago
[-]
You can also automate it now with AI code review tools. You get most of the benefits, it will catch most obvious mistakes and suggest code changes.
reply
mgaunard
2 days ago
[-]
The way I like to do it, is that some projects may have an owner (can only be a single person).

The owner is allowed to make changes without review.

reply
MutedEstate45
2 days ago
[-]
Agree with your pain points. One thing I'd add is that GitHub makes you reapprove every PR after each push. As an OSS contributor it's exhausting to chase re-approvals for minor tweaks.
reply
irjustin
2 days ago
[-]
mmmm this is up to each repo/maintainer's settings.

To be fair you don't know if one line change is going to absolutely compromise a flow. OSS needs to maintain a level of disconnect to be safe vs fast.

reply
MutedEstate45
2 days ago
[-]
Good to know! Never been a maintainer before so I thought that was required.
reply
o11c
2 days ago
[-]
Adding fixup commits (specifying the specific commit they will be squashed into), to be squashed by the bot before merge, handles that.
reply
pie_flavor
2 days ago
[-]
This is a security setting that the author has chosen to enable.
reply
Ar-Curunir
2 days ago
[-]
Hm that’s not the case for my repositories? Maybe you have a setting enabled for that?
reply
maunke
2 days ago
[-]
Very nice to read.

Sourcehut is missing in the list; it’s built on the classical concept of sending patches / issues / bugs / discussion threads via email and it integrates this concept into its mailing lists and ci solution that also sends back the ci status / log via email.

Drew Devault published helpful resources for sending and reviewing patches via email on git-send-email.io and git-am.io

reply
_kidlike
2 days ago
[-]
My team and I have been doing code reviews purely within IntelliJ for something like 6 years. We started doing it "by hand", by checking out the branch and comparing with master, then using GitHub for comments.

Now there's official support and tooling for reviews (at least in IDEA, but probably in the others too), where you also get in-line highlighting of changed lines, comments, status checks, etc...

I feel sorry for anyone still using GitHub itself (or GitLab or whatever). It's horrible for anything more than a few lines of changes here and there.

reply
bkolobara
2 days ago
[-]
I have been working on the PR implementation for lubeno[1] and have been thinking a lot about the code review process.

A big issue is that every team has a slightly different workflow, with different rules and requirements. The way GitHub is structured is a result of how the GitHub team works. They built the best tool for themselves with their "just keep appending commits to a PR" workflow.

Either you need to have enough flexibility so that the tool can be adapted to everyone's existing workflow. Or you need to be opinionated about your workflow (GitHub) and force everyone to match it in some way. And in most cases this works very well, because people just want you to tell them the best way of doing things and not spend time figuring out what the best workflow would look like.

[1]: https://lubeno.dev

reply
brabel
2 days ago
[-]
Author has not tried IDE plugin for GitHub / Bitbucket reviews!? We review code directly in IntelliJ with full support for commenting, navigation, even merging without leaving the IDE. Solves all problems the author has.
reply
sgt
2 days ago
[-]
Articles from Tigerbeetle I tend to just upvote and then click on the link. I just know it's going to be quality stuff.
reply
liampulles
2 days ago
[-]
This is how I learned to do code review when I was a new junior dev. I would write my review comments on another junior's code, and then our team lead would go write their comments that we missed, and then both of us juniors would read and see what we missed. It was a good way to learn about coding and reviewing I think.
reply
codeman001
2 days ago
[-]
I use CodeRabbit that helps, but it does not fix the two root issues. I run their free VS code plugin to review local commits first, which catches nits, generates summaries, and keeps me in my editor. The PR bot then adds structure so humans focus on design and invariants. Review state still lives in the forge, not in Git, and interdiffs still depend on history. If Git gets a stable Change-Id, storing review metadata in Git becomes realistic. Until then this is a pragmatic upgrade that reduces friction without changing the fundamental.

https://www.coderabbit.ai/ide

reply
faangguyindia
2 days ago
[-]
Essentially, you are turning fork/branch-induced changes into a "precommit"-review-like workflow, which is great.

I was on the lookout for the best "precommit" review tool and zeroed in on Magit, gitui, and Sublime Merge.

I am not an Emacs user, so I'll have to learn this.

reply
xeonmc
2 days ago
[-]
In theory this functionality would be best suited as a git subcommand.

I suggest `git-precom` for conciseness.

reply
faangguyindia
2 days ago
[-]
Git already has `git add -p` but demands a lot from user.
reply
Pxtl
2 days ago
[-]
Git demands a lot from user in general.
reply
z3t4
2 days ago
[-]
Leave the comments in the commit messages and make many small commits! That way they don't change the actual source and they're specific for that version of the code.
reply
pjmlp
2 days ago
[-]
I never did proper code review, other than when being lucky that we got a team of top devs in specific projects.

More often than not, it either doesn't exist, or turns into a kind of architecture fetishism that the lead devs/architects bring back from conferences, or spaceship enterprise architecture.

Even without this garbage it feels so much better than arguing about SOLID, clean code, hexagonal architecture, member functions being prefixed with an underscore, explicit types or not, ...

reply
shayief
2 days ago
[-]
Gitpatch attempts to solve this. Supports versioned patches and patch stacks (aka stacked PRs). Also handles force-pushes in stacks correctly even without Change-IDs using heuristics based on title, author date etc. It should also be unusually fast. Disclosure: I'm the author.

I'm not convinced that review comments as commits make things easier, but I think storing them in git in some way is a good idea (i.e. git annotations, or in commit messages after merge, etc.)

reply
6LLvveMx2koXfwn
2 days ago
[-]
> But modifying code under review turned out to be tricky.

GitLab enables this - make the suggestion in-line which the original dev can either accept or decline.

reply
globular-toast
2 days ago
[-]
Kind of. Don't you have to type the change into the browser? Which means your change might not even be syntactically correct. It would be far better if you could make the change locally and then somehow send that straight to GitLab. Also, how does it work with multiple commits? Which commit does it amend?
reply
eafkuor
2 days ago
[-]
> Alas, when I want to actually leave feedback on the PR, I have to open the browser, navigate to the relevant line in the diff, and (after waiting for several HTTP round-trips) type my suggestion into a text area

This doesn't seem like much of a problem, does it? It's a matter of alt-tab and a click or two.

Also, what is the point of having reviews in the git history?

reply
shmerl
2 days ago
[-]
I was recently looking for something that at least presents a nice diff that resembles code review one in neovim.

This is a pretty cool tool for it: https://github.com/sindrets/diffview.nvim

On the branch that you are reviewing, you can do something like this:

:DiffviewOpen origin/HEAD...HEAD

reply
icy
2 days ago
[-]
Worth mentioning that Tangled has support for stacked pull requests and a unique round-based PR flow with interdiffing: https://blog.tangled.sh/stacking
reply
sc68cal
2 days ago
[-]
This brings back memories of https://opendev.org/ttygroup/gertty when I was contributing to OpenStack
reply
germandiago
2 days ago
[-]
I am happy with Gerrit but I am sure I do not know even how to use 20% of its capacity.

The patchsets get stacked up and you know where you left off if there are different changes and that is very cool.

reply
aitchnyu
2 days ago
[-]
Tangential, long ago I wanted to use a repo-backed (IIRC tied to Mercurial) backend for issues. They were also flat files. We put too many plugins into Redmine and it died frequently.
reply
actinium226
2 days ago
[-]
I like the idea of max 500 lines for custom solutions to problems that have "existing" solutions, might have to steal that.
reply
gatane
2 days ago
[-]
>remote-first web-interface

https://youtu.be/Qscq3l0g0B8

reply
loeg
2 days ago
[-]
I've used Reviewboard and Phabricator and both seem "fine" to me. Superior to Github (at the time, anyway).
reply
kungfufrog
2 days ago
[-]
Anyone know what editor the author is using in the first screenshot showing two panels side by side?
reply
wry_discontent
2 days ago
[-]
They mention magit, which makes me think Emacs, but it looks like there's lot of custom UI stuff.
reply
000ooo000
2 days ago
[-]
Looks like VSCode in macOS, maybe with some custom CSS.
reply
kjgkjhfkjf
2 days ago
[-]
If you want to remain relevant in the AI-enabled software engineering future, you MUST get very good at reviewing code that you did not write.

AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.

reply
ZYbCRq22HbJ2y7
2 days ago
[-]
> AI can already write very good code

Debatable. With the same experience, it depends on the language, existing patterns, code base, base prompts, and the complexity of the task.

reply
netghost
2 days ago
[-]
How about AI can write large amounts of code that might look good out of context.
reply
ZYbCRq22HbJ2y7
2 days ago
[-]
Yeah, LLMs can do that very well, IMO. As an experienced reviewer, the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code. In my experience, LLMs tend to conflate shape and correctness.
reply
dragonwriter
2 days ago
[-]
> As an experienced reviewer, the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code.

For human written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.

LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different than humans and not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with the LLMs, if it is present at all.

This is also much the same effect that makes LLMs convincing purveyors of BS in natural language, but magnified for code because people are more used to people bluffing with shape using natural language, but churning out high-volume, well-shaped, crappy substance code is not a particularly useful skill for humans to develop, and so not a frequently encountered skill. And so, prior to AI code, reviewers weren't faced with it a lot.

reply
h4ny
2 days ago
[-]
> you MUST get very good at reviewing code that you did not write.

I find that interesting. That has always been the case at most places my friends and I have worked at that have proper software engineering practices, companies both very large and very small.

> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

I echo @ZYbCRq22HbJ2y7's opinion. For well defined refactoring and expanding on existing code in limited scope they do well, but I have not seen that for any substantial features especially full-stack ones, which is what most senior engineers I know are finding.

If you are really seeing that then I would either worry about the quality of those senior+ software engineers or the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't have to even show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.

> Educational establishments MUST prioritize teaching code review skills

Perhaps more is needed but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.

> and other high-level leadership skills.

Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.

reply
nop_slide
2 days ago
[-]
I’m considered one of the stronger code reviewers on the team, and what grinds my gears is seeing large, obviously AI-heavy PRs and finding a ton of dumb things wrong with them: things like totally different patterns and even bugs. I’ve lost trust that the person putting up the PR has even self-reviewed their own code and verified it does what they intend.

If you’re going to use AI you have to be even more diligent and self-review your code; otherwise you’re being a shitty teammate.

reply
kubectl_h
2 days ago
[-]
Same. I work at a place that has gone pretty hard into AI coding, including onboarding managers into using it to get them into the dev lifecycle, and it definitely puts an inordinate amount of pressure on senior engineers to scrutinize PRs much more closely. This includes much more thorough reviews of tests as well since AI writes both the implementation and tests.

It's also caused an uptick in inbound to dev tooling and CI teams since AI can break things in strange ways since it lacks common sense.

reply
faangguyindia
2 days ago
[-]
if you are seeing that it just means they are not using the tool properly or using the wrong tool.

AI assisted commits on my team are "precise".

reply
abenga
2 days ago
[-]
No True AI …
reply
jonahx
2 days ago
[-]
There is no reason to think that code review will magically be spared by the AI onslaught while code writing falls, especially as devs themselves lean more on the AI and have less and less experience coding every day.

There just hasn't been as many resources yet poured into improving AI code reviews as there has for writing code.

And in the end the whole paradigm itself may change.

reply
im_lince
2 days ago
[-]
Totally agree with this. Code review is quickly becoming the most important skill for engineers in the AI era. Tools can generate solid code, but judgment, context, and maintainability come from humans. That’s exactly why we built LiveReview(https://hexmos.com/livereview/) — to help teams get better at reviewing and learning from code they didn’t write.
reply
gf000
2 days ago
[-]
> AI can write better code than most of them can at this point

So where is your 3 startups?

reply
wfhrto
2 days ago
[-]
AI can review code. No need for human involvement.
reply
gf000
2 days ago
[-]
For styling and trivial issues, sure. And if it's free, do make use of it.

But it is just as unable to reason properly about anything slightly more complex as it is when writing code.

reply
ramon156
2 days ago
[-]
I'm of the hot opinion that a reviewer shouldn't be running code. The one making the code is responsible for the fix, code reviews are just about maintainability.

If your PR did not fix the issue or implement the feature, that's on you, not the reviewer.

reply
OskarS
2 days ago
[-]
I don't know about "shouldn't", I think it's fine if they do. But I basically agree, at some fundamental level, you have to have some trust in your coworkers. If someone says "This fixes X", and they haven't even tried running it or testing it, they shouldn't be your coworker. The purpose of code reviews shouldn't be "is this person honest?" or "is this person totally incompetent?". If they're not, it's a much bigger issue, one that shouldn't be dealt with through code reviews.

Very different situation if it's open source or an external contribution, of course.

reply
allknowingfrog
2 days ago
[-]
The author mentioned that he doesn't want to make suggestions that don't actually work. That seems like a pretty valid reason to run the code.
reply
godelski
2 days ago
[-]
While I like the post and agree with everything the author talked about, I find that this is not my problem, despite having a similar workflow (classic vim user). The problem I have, and I think a lot of others have too, is that review often just doesn't actually exist. LGTMs are not reviews, yet they are so common.

I'm not sure there's even a tech solution to this class of problems; it comes down to culture. LGTMs exist because they satisfy the "letter of the law" but not the spirit. It's a classic bureaucracy problem combined with classic engineer problems. It feels like there should be simple solutions, but requiring reviews just invites the LGTM hack, because fundamentally you can't measure the quality of a review[0].

Techie types and bureaucrats share a failure mode: we like measurements. But a measurement of any kind is meaningless without context. Part of the problem is that businesses treat reviewing as a second-class citizen. It's not "actual work", so it shouldn't be given preference, which excuses the LGTM-style reviews. We engineers are used to looking at metrics without context, get lulled into a false sense of security, or convince ourselves that we can find a tech solution to this stuff.

I'm sure someone's going to propose an LLM reviewer, and hey, it might help, but it won't address the root problem. The only way to get good code reviews is for them to be done by someone capable of writing the code in the first place. Until LLMs can do all the coding they won't make this problem go away, even if they can improve on the LGTM bar. But that's barely a bar; it's sitting on the floor.

The problem is cultural. Code reviews are just as essential to the process as writing the code itself, and you'll notice that companies that do good code review already treat them that way. Then it's about making them easier to do! Reducing friction is something we should work on, but you could make it all trivial and it still wouldn't make code reviews better if they aren't treated as first-class citizens.

So while I like the post and think the tech here is cool, you can't engineer your way out of a social problem. I'm not saying "don't solve engineering problems that exist in the same space" but I'm making the comment because I think it is easy to ignore the social problem by focusing on the engineering problem(s). I mean the engineering problems are magnitudes easier lol. But let's be real, avoiding addressing this, and similar, problems only adds debt. I don't know what the solution is[1], but I think we need to talk about it.

[0] Then there's the dual of LGTM! Code reviews that exist and are detailed, but petty and overly nitpicky. This is also hacky, but in a very different way: it is a misunderstanding of what review (or quality control) is. There's always room for criticism, since nothing you do, ever, will be perfect. But finding problems is the easy part. The hard part is figuring out which problems are important and how to properly triage them. It doesn't take a genius to complain, but it does take an expert to critique. That's why the dual can be even more harmful: it slows progress needlessly and encourages the classic nerdy petty bickering over inconsequential nuances or over unknowns (as opposed to important nuances and known unknowns). If QC sees its job as finding problems, and/or bosses measure QC's performance by how many problems it finds, you get a degenerate steady state: devs intentionally leave easy errors for QC to pick up, so QC fulfills its metric of finding issues and the errors are also easily fixed. This too matches the letter but not the spirit. It's also why AI won't be able to step in without having the capacity to write the code in the first place, which solves the entire problem by making it go away (even if agents are doing this process).

[1] Nothing said here actually presents a solution. Yes, I say "treat them as first-class citizens", but that's not a solution, and anyone claiming that this, or anything like it, is a solution is refusing to look at all the complexity involved. It's as obtuse as saying "creating a search engine is easy: all you need to do is index all (or most) of the sites across the web." There's so much more to the problem. It's easy to oversimplify these types of issues, which is a big part of why they still exist.

reply
jrowen
2 days ago
[-]
> Part of the problem is that businesses treat reviewing as a second class citizen. It's not "actual work" so shouldn't be given preference, which excuses the LGTM style reviews.

I've been out of the industry for a while but I felt this way years ago. As long as everybody on the team has coding tasks, their review tasks will be deprioritized. I think the solution is to make Code Reviewer a job and hire and pay for it, and if it's that valuable the industry will catch on.

I would guess that testing/QA followed a similar trajectory where it had to be explicitly invested in and made into a job to compete for or it wouldn't happen.

reply
godelski
2 days ago
[-]
I could be totally wrong, but I feel like that's a thing that sounds better on paper. I'm sure there are ways to do it correctly, but every instance I've seen has created division and paid testers/QC less, and the lower pay is a strong signal of the role being considered second class. Has anyone seen this work successfully?

I also think there's benefits to review being done by devs. They're already deep in the code and review does have a side benefit of broadening that scope. Helping people know what others are doing. Can even help serve as a way to learn and improve your development. I guess the question is how valuable these things are?

reply
sfink
2 days ago
[-]
I don't see a lot of value in generic code reviewers. I want the reviewers to be actively engaged in writing somewhat related code themselves, otherwise the value of their opinions will decline over time.

As for prioritization... isn't it enough knowing that other people are blocked on your review? That's what incentivizes me to get to the reviews quickly.

I guess it's always going to depend a lot on your coworkers and your organization. If the culture is more about closing tickets than achieving some shared goal, I don't know what you could do to make things work.

reply
keniu
2 days ago
[-]
we need code review, do not let AI control us.
reply
mike_hearn
2 days ago
[-]
I've used a more hard-core version of this in my own company and have been meaning to write a blog post about it for years. For now an HN comment will suffice. Here's my version, the rationale and the findings.

WORKFLOW

Every repository is personal and the reviewer merges, kernel style. Merging is taking ownership: the reviewer merges into their own tree when they are happy and not before. By implication there is always one primary code reviewer; there is never a situation where someone chooses three reviewers and they all wait for someone else to do the work. The primary reviewer is on the hook for the deliverable as much as the reviewee is.

There is no web based review tool. Git is managed by a server configured with Gitolite. Everyone gets their own git repository under their own name, into which they clone the product repository. Everyone can push into everyone else's repos, but only to branches matching /rr/{username}/something and this is how you open a pull request. Hydraulic is an IntelliJ shop and the JetBrains git UI is really good, so it's easy to browse open RRs (review requests) and check them out locally.
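A minimal Gitolite sketch of that layout might look something like this (the group name, repo pattern, and usernames are hypothetical; CREATOR and USER are Gitolite's own keywords for the repo creator and the pushing user):

```
# gitolite.conf sketch: each dev creates a personal clone of the
# product repo; everyone can push to it, but only under rr/<user>/
@devs = alice bob carol

repo [a-z]+/product
    C                        =  @devs     # devs create their own clones
    RW+                      =  CREATOR   # full control over your own tree
    RW refs/heads/rr/USER/   =  @devs     # anyone can open review requests
    R                        =  @devs     # everyone can browse open RRs
```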

Reviewing means pushing changes onto the rr branch. Either the reviewer makes the change directly (much faster than nitpicky comment roundtrips), or they add a //FIXME comment that IntelliJ is configured to render in lurid yellow and purple for visibility. It's up to the reviewee to clear all the FIXMEs before a change will be merged. Because IntelliJ is very good at refactoring, what you find is that reviewers are willing to make much bigger improvements to a change than you'd normally get via web based review discussions. All the benefits the article discusses are there except 100x because IntelliJ is so good at static analysis. A lot of bugs that sneak past regular code review are caught this way because reviewers can see live static analysis results.

Sometimes during a review you want to ask questions. 90% of the time, this is because the code isn't well documented enough and the solution is to put the question in a //FIXME that's cleared by adding more comments. Sometimes that would be inappropriate because the conversation would have no value to others, and it can be resolved via chat.

Both reviewee and reviewer are expected to properly squash and rebase things. It's usually easier to let commits pile up during the review so both sides have state on the changes, and the reviewer then squashes code review commits into the work before merging. To keep this easy most review requests should turn into one or two commits at most. There should not be cases where people are submitting an RR with 25 "WIP" commits that are all tangled up. So it does require discipline, but this isn't much different to normal development.
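As a rough sketch of those mechanics, simulated in a throwaway local repo (the branch name rr/alice/fix-parser and the file are made up; `git init -b` assumes git >= 2.28):

```shell
# Sketch of the review-request flow: reviewee works on an rr/ branch,
# reviewer pushes fixes onto it, then squashes and merges into their tree.
set -e
cd "$(mktemp -d)" && git init -q -b main
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial commit"

# Reviewee: do the work on an rr/ branch (normally pushed to the
# reviewer's repo).
git checkout -q -b rr/alice/fix-parser
echo "parse fix" > parser.txt
git add parser.txt
git commit -q -m "fix parser edge case"

# Reviewer: push changes directly onto the rr branch instead of
# round-tripping comments.
echo "doc comment" >> parser.txt
git commit -q -am "review: add doc comment"

# Reviewer: squash the review commits into the work and merge it
# into their own tree.
git checkout -q main
git merge -q --squash rr/alice/fix-parser
git commit -q -m "fix parser edge case (reviewed)"
git log --oneline
```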

RATIONALE

1. Conventional code review can be an exhausting experience, especially for junior developers who make more mistakes. Every piece of work comes back with dozens of nitpicky comments that don't seem important and which is a lot of drudge work to apply. It leads to frustration, burnout and interpersonal conflicts. Reviewees may not understand what is being asked of them, resulting in wasted time. So, latency is often much lower if the reviewer just makes the changes directly in their IDE and pushes. People can then study the commits and learn from them.

2. Conventional projects can struggle to scale up because the codebase becomes a commons. Like in a communist state things degrade and litter piles up, because nobody is fully responsible. Junior developers or devs under time pressure quickly work out who will give them the easiest code review experience and send all the reviews to them. CODEOWNERS are the next step, but it's rare that the structure of your source tree matches the hierarchy of technical management in your organization so this can be a bad fit. Instead of improving widely shared code people end up copy/pasting it to avoid bringing in more mandatory reviewers. It's also easy for important but rarely changed directories to be left out, resulting in changes to core code slowing down because it'd require the founder of the company to approve a trivial refactoring PR.

FINDINGS

Well, it worked well for me at small scale (decent sized codebase but a small team). I never scaled it up to a big team although it was inspired by problems seen managing a big team.

Because most questions are answered by improving code comments rather than replying in a web UI the answers can help LLMs. LLMs work really well in my codebase and I think it's partly due to the plentiful documentation.

Sometimes the lack of a web UI for browsing code was an issue. I experimented with using IntelliJ link format, but of course not everyone wants to use IntelliJ. I could have set up a web UI over git just for source browsing, without the full GitHub experience, but in the end never bothered.

Gitolite is a very UNIXy set of Perl scripts. You need a gray beard to use it well. I thought about SaaSifying this workflow but it never seemed worth it.

reply
im_lince
2 days ago
[-]
This hits home. I’ve run into the same pain with conventional web-based review tools: slow, nitpicky, and nobody really “owns” the merge. Your kernel-style approach makes a ton of sense — putting the reviewer on the hook changes the dynamic completely. And pushing FIXMEs straight into the branch instead of playing comment-ping-pong? That’s a huge quality-of-life win.

We’ve gone a slightly different route at my team. Instead of reinventing the workflow around Gitolite/IntelliJ, we layered in LiveReview(https://hexmos.com/livereview/). It’s not as hardcore, but it gives us a similar payoff: reviewers spend less time on drudge work because LiveReview auto-catches a ton of the small stuff (we’re seeing ~40% fewer prod bugs). That leaves humans free to focus on the bigger design and ownership questions — the stuff machines can’t solve.

Different tools, same philosophy: make review faster, saner, and more about code quality than bureaucracy.

reply
cbryant91
2 days ago
[-]
good fine
reply
moonlion_eth
2 days ago
[-]
ersc.io
reply
citizenpaul
2 days ago
[-]
I got into using Jujutsu this year. I'm liking it so far. Is there a beta access in the works?
reply
loeg
2 days ago
[-]
Say more.
reply
toastal
2 days ago
[-]
Shame it’s Jujutsu & not something based on actual Patch Theory (patches are commutative). I think Patch Theory is one of the ways out of merge conflict hell.
reply
kissgyorgy
2 days ago
[-]
Putting the review into git notes might have worked better. It's not attached to the lines directly, but to the commit, and it can stay as part of the repo.
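For the curious, a sketch of what that could look like with stock git (the notes ref name and message are made up; demoed in a throwaway repo):

```shell
# Attach review feedback to a commit as a git note under a dedicated
# notes ref, so it stays part of the repository rather than a web UI.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial"

git notes --ref=review add -m "LGTM after renaming the helper" HEAD
git notes --ref=review show HEAD    # prints the note

# Notes are plain refs, so they can be shared like branches:
#   git push origin refs/notes/review
```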
reply