AI Lazyslop and Personal Responsibility
42 points | 1 hour ago | 18 comments | danielsada.tech | HN
solomonb
1 hour ago
[-]
> After I “Requested changes” he’d get frustrated that I’d do that, and put all his changes in an already approved PR and sneak merge it in another PR.

This is outrageous regardless of AI. Clearly there are process and technical barriers that failed in order to even make this possible. How does one commit a huge chunk of new code to an approved PR and not trigger a re-review?

But more importantly, in what world does a human think it is okay to be sneaky like this? Being able to communicate and trust one another is essential to the large scale collaboration we participate in as professional engineers. Violating that trust erodes all ability to maintain an effective team.

reply
dshacker
55 minutes ago
[-]
It really demotivated me when this happened. I kept seeing the PR open, but then I saw the changes applied before the PR was merged, which confused me. I then had an alert placed on every update Mike made, to make sure he didn't do this again. People were against "reset reviewers on commit" for the sake of "agility".
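(For reference: on GitHub this is branch protection's "dismiss stale pull request approvals" option. A sketch of the REST payload for `PUT /repos/{owner}/{repo}/branches/{branch}/protection`; the field names follow the GitHub API, the surrounding values are only illustrative:)

```json
{
  "required_status_checks": null,
  "enforce_admins": true,
  "restrictions": null,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "required_approving_review_count": 1
  }
}
```

With `dismiss_stale_reviews` enabled, any new commit invalidates existing approvals, which is exactly the re-review trigger that was argued away for "agility".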
reply
liuliu
56 minutes ago
[-]
Collaborative software development is a high-trust activity. It simply doesn't work in a low-trust environment. This is not an issue with code review; it is an issue with maintaining an environment of trust for collaboration.
reply
ljm
53 minutes ago
[-]
TFA wouldn’t blame ‘Mike’ but I definitely would. And ‘Mike’s Boss.

That’s not just a process error. At some point you just have to feed back to the right person that someone isn’t up to the task.

reply
wasmainiac
58 minutes ago
[-]
Yeah, this never happened. This just sounds like an "and everyone clapped" moment, made up for a blog post. Most people on this planet are reasonable if not pushed.
reply
badsectoracula
37 minutes ago
[-]
I actually had someone like "Mike" in my most recent job (though he wasn't uncooperative, he just didn't seem to care about writing proper code). He made some tool using AI and I took it over to clean it up and improve it, but he still worked on it too. He got occasionally annoyed when I suggested changes, and sometimes I felt I was talking to ChatGPT (or whatever AI he used, I don't know) via a middleman. He never submitted a 1600-line PR (that I remember, anyway), but he did add extra stuff to his PRs that was often related to other tasks, and the code submitted was often much larger in "volume" than needed.

As an example, there was a case where some buttons needed a special highlight based on some flag, something that could be done in 4-5 lines of code or so (this was in Unreal Engine, where the UI is drawn each frame). Instead, the PR added a value, set when the button was created, indicating whether the button would need to be highlighted, and this value was passed around all over the place, up to the point where the data used to create the UI with the buttons lived. And since the UI could change that flag, the code also recreated the UI whenever the flag changed. And because the UI had state such as scrolling, selected items, etc., whenever the UI was recreated, it saved the current state and restored it afterwards (together with adding storage for that state). Ultimately it worked, but it was far more code than it needed to be.

The kicker was that the modifications to pass around the flag's value weren't even necessary (even ignoring the fact that the flag could have been checked directly during drawing), because a struct with configuration settings was already passed through the same paths and the value could have been added to that struct. Not that it would have saved the need to save/restore the UI state, though.

reply
dshacker
53 minutes ago
[-]
I'm not sure how to prove otherwise, but this actually happened to me. I don't understand these kinds of comments crying "FAKE", as if this were for views or blog-posting. This is something that happened to me, and I can say for sure that people in this situation really were pushed to ship faster every time.
reply
doesnt_know
38 minutes ago
[-]
I envy the type of career you’ve had if you find this sort of behaviour unbelievable.
reply
dkarl
1 hour ago
[-]
I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.

> I don’t blame Mike, I blame the system that forced him to do this.

Bending over backwards not to be the meanie is pointless. You're trying to stop him because the system doesn't really reward this kind of behavior, and you'll do Mike a favor if you help him understand that.

reply
BugsJustFindMe
1 hour ago
[-]
> I have no idea what AI changes

> Mike comes up with 1600 lines of code in a day instead of in a sprint

It seems like you do have an idea of at least one thing that AI changes.

reply
dkarl
23 minutes ago
[-]
The more often it happens, the more practice you get at delivering the bad news, and the quicker Mike learns to live up to the team's technical standards?
reply
zahlman
1 hour ago
[-]
> it just happens more often

Which is extremely relevant, as it dramatically increases the probability that other people will have to care about it.

reply
Aurornis
52 minutes ago
[-]
> Bending over backwards not to be the meanie is pointless.

This thinking that we must avoid blaming individuals for their own actions and instead divert all blame to an abstract system is getting a little out of control. The blameless post-mortem culture was a welcome change from toxic companies who were scapegoating hapless engineers for every little event, but it's starting to seem like the pendulum has swung too far the other way. Now I keep running into situations where one person's personal, intentional choices are clearly at the root of a situation but everyone is doing logical backflips to try to blame a "system" instead of acknowledging the obvious.

This can get really toxic when teams start doing the whole blameless dance during every conversation, but managers are silently moving to PIP or lay off the person who everyone knows is to blame for repeated problems. In my opinion, it's better to come out and be honest about what's happening than to do the blameless performance in public for feel-good points.

reply
dshacker
1 hour ago
[-]
I think people can be in hard conditions: needing a job, under pressure, burnt out, feeling like this is their only way to keep that job. At least that's how it felt with Mike.

In the end, I spent a lot of time sitting down with Mike to explain these kinds of things, but I wasn't effective.

Also, LLMs now empower Mike to make a 1600-line PR daily, leaving me to distinguish between "lazyslopped" PRs and actual PRs.

reply
miltonlost
1 hour ago
[-]
> I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.

So now, instead of reviewing 1600 lines of bad code every 2 weeks, you must review 1600 lines of bad code every day (while being told this is an improvement because of just how much more bad code he's "efficiently" producing!). Scale and volume are the change.

reply
xyzsparetimexyz
1 hour ago
[-]
If you get a 1600 line PR you just close it and ask them to break it up into reviewable chunks. If your workplace has an issue with that, quit. This was true before AI and will be true after AI.
reply
emeraldd
1 hour ago
[-]
There are a number of cases where this is not really possible. For some classes of updates, the structure of the underlying application and the type of update being made require an "all or nothing" update in order to get a buildable result. I've run into this a lot with large Java applications where we had to jump several Spring versions just due to the scope of what was being updated. More incremental updates weren't an option for a number of time/architectural reasons, and refactoring the application structure (which really wouldn't have helped much either) would have been time- and cost-prohibitive... Really annoying, but sometimes you just don't have another option to actually accomplish your goals.
reply
dog4hire
44 minutes ago
[-]
Some people can write 1-3k lines of good code (incl. tests) in a day when everything is just right. We used to be called 10xers lol. A 1600-LOC PR is legit if trust is there, it's really a single unit of change, and it's not just being thrown over a wall (it should have a great PR description and a clear, concise commit history).

I automatically block PRs with LLM-generated summaries, commit messages, documentation, etc.

reply
dshacker
1 hour ago
[-]
I mean, there are some exceptions where 1600-line PRs are acceptable (refactorings, etc.), but otherwise I agree.

What really bugs me is that today, it is easier than ever to do this (even the LLM can do this!) and people still don't do it.

reply
hxugufjfjf
1 hour ago
[-]
Or just have AI do it for you /s
reply
tjr
1 hour ago
[-]
If you can have AI review the PR, does this still matter?
reply
babblingfish
1 hour ago
[-]
This is consistent with my own observations of LLM-generated code increasing the burden on reviewers. Either you review the code carefully, putting more effort into it than the original author did, or you approve it without careful review. I feel like the latter is becoming more common. This is basically creating tech debt that will only be realized later by future maintainers.
reply
bunderbunder
39 minutes ago
[-]
It’s a prisoner’s dilemma, too. The person who commits to giving code review its due diligence is going to end up spending an inordinate amount of time reviewing others’ changes, leaving less time to complete their own assignments. And they’re likely to request a lot of changes, too. That’s socially untenable for most people, especially ones who clearly aren’t completing as many story points as their teammates. Next thing you know, your manager is giving you less-than-stellar performance reviews, and the AI slopcoders on your team are getting the promotions and being put in position to influence how team norms and culture evolve over time.

The worst part is, this isn’t me speculatively catastrophizing. I’m just observing how my own organization’s culture has changed over the past couple of years.

It’s hitting the less senior team members hardest, too. They are generally less skilled at reading code and therefore less able to keep up with the rapid growth in code volume. They are also more likely to get assigned the (ever growing volume of) defect tickets so the more senior members can keep on vibecoding their way to glory.

reply
mghackerlady
56 minutes ago
[-]
or, if you know it was written by an LLM, reject it
reply
krzysz00
1 hour ago
[-]
This does seem to align decently well with, for example, the policy the LLVM project recently adopted (https://llvm.org/docs/AIToolPolicy.html), which allows AI but requires a human in the loop who understands the code, and allows fast closure of "extractive" PRs, i.e. ones that are mainly a timesink for reviewers because the author doesn't seem quite sure what's going on.
reply
yesitcan
1 hour ago
[-]
> why do I need tests? It works already

> I don't blame Mike

You should blame Mike.

reply
colinmilhaupt
1 hour ago
[-]
Love to see the responsible use disclosure. I did the same several months back. https://colinmilhaupt.com/posts/responsible-llm-use/

Also love the points during review! Transparency is key to understanding critical thinking when integrating LLM-assisted coding tools.

reply
mrkeen
52 minutes ago
[-]
> Mike sent me a 1600 line pull-request with no tests, entirely written by AI, and expected me to approve it immediately as to not to block him on his deployment schedule.

Both Mike and the manager are cargo-culting the PR process too. Code review is what you do when you believe it's worth losing velocity in order for code to pass through the bottleneck of two human brains instead of one.

LLMs are about gaining velocity by passing less code through human brains.

reply
dmmartins
1 hour ago
[-]
> What was your thought process using AI?

> Share your prompts! Share your process! It helps me understand your rationale.

Why? Does it matter? Do you ask the same questions of people who don't use AI? I don't like using AI for code, because I don't like the code it generates and having to go over it again and again until I like it, but I don't care how other people write code. I review the code that's in the PR, and if there's something I don't understand or agree with, I comment on the PR.

Other than the 1600-line PR that's hard to review, it feels like the author just wants to be in the way and control everything other people are doing.

reply
hombre_fatal
1 hour ago
[-]
The prompt is the ground truth that reveals the assumptions and understandings of the person who generated the code.

It makes a lot more sense to review and workshop that into a better prompt than to refactor the derived code when there are foundational problems with the prompt.

Also, we do do this for human-generated code. It's just a far more tedious process of detective work since you often have to go the opposite direction and derive someone's understanding from the code. Especially for low effort PRs.

Ideally every PR would come with an intro that sells the PR and explains the high level approach. That way you can review the code with someone's objectives in mind, and you know when deviations from the objective are incidental bugs rather than misunderstandings.

reply
OptionOfT
1 hour ago
[-]
Because when your code is handwritten, it's supposed to be a translation of you parsing business requirements to code.

Using AI adds a non-deterministic layer in between, and a lot of code ends up there that you probably didn't need.

The prompt is helpful to figure out what is needed and what isn't.

reply
lo_zamoyski
1 hour ago
[-]
The correct thing to do is to annotate the code and the PR with comments. You shouldn't be submitting code you don't understand in the first place. These comments will contain the reasoning in the prompts. Giving me a list of prompts would just be annoying and messy, not informative.

Also, we should not be submitting huge PRs in general. It is difficult to be thorough in such cases. Changes will be less well understood and more bugs will sneak their way into the code base.

reply
OptionOfT
20 minutes ago
[-]
The AI velocity comes from large PRs that no-one reviews.
reply
madeofpalk
1 hour ago
[-]
> why? does it matter? do you ask the same questions for people that don't use AI?

…yes? If someone dumps a PR on me without any rationale I definitely want to understand their thought process about how they landed on this solution!

reply
roxolotl
1 hour ago
[-]
Yes of course you should ask the same thing of other non AI PRs. Figuring out the why and the thought process behind behavior is one of the most important parts of communication especially when you don’t know people as well
reply
ghm2199
1 hour ago
[-]
If you work at a company where some kind of testing is optional for getting your PR merged, run in the opposite direction. Testing shows that your engineer _thought_ things through. It communicates the intended use, and, when well written, it is often as clarifying as documentation. I'd even be willing to accept integration/manual tests when writing unit tests isn't possible.
reply
serial_dev
59 minutes ago
[-]
> put all his changes in an already approved PR and sneak merge it in another PR. I don’t blame Mike, I blame the system that forced him to do this.

Oh, you should definitely blame Mike for this. It’s like blaming the system when someone in the kitchen spits in a customer’s food. Working with people like this is horrible, because you know they don’t mind lying, cheating, and deceiving.

reply
fnoef
1 hour ago
[-]
While I agree with the sentiment of the post, I’ve also come to the conclusion that it’s not worth fighting the system. If you can’t quit your job, then just do what everyone else is doing: use AI to write and review code, and make sure everyone is happy (especially management).
reply
Ozzie_osman
57 minutes ago
[-]
I call it L-ai-ziness and I try to reduce it on my team.

If it has your name on it, you're accountable for it to your peers. If it has our name on it as a company, we're accountable for it to our users. AI doesn't change that.

reply
dog4hire
51 minutes ago
[-]
hiring?
reply
serial_dev
45 minutes ago
[-]
Lazyslop PRs offload the work to code reviewers while the PR creator keeps all the benefits.

Creating a 1600-LOC PR now takes about ten minutes; reviewing it takes at least an hour. Mike submits a bunch of PRs, and the rest of the team tries to review them to prevent the slop from causing an outage at night or blowing up the app. Mike is a hero: he really embraced AI, he leveraged it for 100x productivity.

This works for a while, until everyone realizes that Mike gets the praise while they get reprimanded for not shipping their features fast enough. After a couple of these sour experiences, the other developers follow suit and embrace the slop. Now there is nobody to stop the train wreck. The ones who really cared left; the ones who cared at least a little gave up and churn out slop.

reply
firasd
56 minutes ago
[-]
Unfortunately the list of AI edits this person declares at the bottom of their post is self-refuting

If you use AI as a Heads-up Display you can't make a giant scroll of every text change you accepted.

reply
dragoman1993
1 hour ago
[-]
At the end there's a typo: "catched" should be "caught".

Otherwise, agree-ish. There should be business practices in place for responsible AI use, so coworkers don't have to suffer from bad usage.

reply
NewsaHackO
1 hour ago
[-]
He had a typo in the one section where he didn't use AI to copy-edit! But really, copyediting with LLMs is a godsend. I used to struggle with grammar to the point that I had a Grammarly subscription. Now proofreading can even be done locally.
reply
dshacker
1 hour ago
[-]
Thanks! Fixed.
reply
throwawaysleep
1 hour ago
[-]
> Then, I’d get a ping from his manager asking on why am I blocking the review.

The suffering is self-inflicted for this particular person. The organization doesn’t value code review.

reply
throwawaysleep
1 hour ago
[-]
> Then, I’d get a ping from his manager asking on why am I blocking the review.

If you are in a culture like this, you may as well just ship slop.

Management wants to break stuff, that is on them.

reply
dkarl
1 hour ago
[-]
I've received questions like this from very good, very reasonable, very technically careful managers. What happens is, Mike complains and tries to throw you under the bus, and the manager reaches out to hear your side of it. You tell them Mike is trying to ship code with a bunch of issues and no tests, and they go back to Mike and tell him that he's the problem and that he needs to meet the technical standards upheld by the rest of the team.

Just because management asks doesn't mean they're siding with Mike.

reply
sdoering
51 minutes ago
[-]
I have been on both - actually on all three - sorry, make that four - sides.

1. I tried to ship crap and complained to my manager about being blocked. I was young, dumb, in a bad place, and generally an asshole.

2. I was the manager being told that some unreasonable idiot from X had blocked my report's progress, and I was the unreasonable manager demanding my people be unblocked. I had no context, had a very bad prior relationship with the other party, and was an asshole, because no prior bad-faith acts were actually behind the block: it was shitty code.

3. I was the manager being asked to help with unblocking. I asked to understand the issue and to actually try, based on the facts, to find a way towards a solution. My report had to refactor.

4. I was the one being asked. Luckily I had prior experience, and this time I managed not to become the asshole.

I am glad I had the environments to learn.

Edit: Format.

reply
dshacker
1 hour ago
[-]
Right, I think there is always a balance between being strict on code reviews and just letting people ship stuff. I've also seen the other end of the stick, where a senior employee blocks an important PR over "spacing".
reply
AndrewDucker
15 minutes ago
[-]
Your linting should be automated, and if possible your code should be formatted automatically.

It really shouldn't be possible to have arguments in a PR over formatting.
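One common sketch of that setup, assuming the pre-commit framework and its clang-format mirror (the `rev` pin is only illustrative; swap in whatever formatter your language uses):

```yaml
# .pre-commit-config.yaml: code is formatted before a commit ever
# reaches review, so "spacing" never comes up in a PR.
repos:
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.8   # illustrative pin
    hooks:
      - id: clang-format
```

Run the same hooks in CI so nothing unformatted can merge even if a developer skips the local hook.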

reply
dgxyz
1 hour ago
[-]
I paid my mortgage off by being the insurance policy when that happens.
reply
shimman
1 hour ago
[-]
How does that work? I find it nearly impossible to be in those positions as an IC nowadays. Maybe it was easier in the 90s? I heard contracting was a much better gig back then too, until the corpos got all high and mighty about it and put an end to the practice by favoring body shops instead.
reply
kibwen
1 hour ago
[-]
> Management wants to break stuff, that is on them.

This implies that managers will do both of the following in response to the aforementioned breakage:

1. Understand that their own managerial policies are the root cause.

2. Not use you as a scapegoat.

And yet, if you had managers that were mentally and emotionally capable enough to do both of the above, you wouldn't be in this position to begin with.

reply
throwawaysleep
1 hour ago
[-]
That happens a lot, but rarely have I seen the reviewer get blamed. The guy who shipped it gets blamed.
reply
apercu
1 hour ago
[-]
I mean, you'll still get blamed even if management pushes you to work in a manner that "breaks stuff".

There is very little accountability in the upper echelons these days (if there ever was) and less each day in our current "leadership" climate.

reply
epolanski
1 hour ago
[-]
A pointless blog post about a made-up situation that never happened.

1. Companies that push for and value slop velocity do not have all these bureaucratic merge policies. They change or relax them, and a manager would just accept the PR without needing to ping the author.

2. If the author were on his high paladin horse of valuing the craft, he would not be working in such a place. Or he would be half-assing slop too, while concentrating on writing proper code for his own projects, like most of us do when we end up in BS jobs.

reply
noitpmeder
55 minutes ago
[-]
I think you're being overly dismissive of the chance that this exists in some form at nearly every mid-to-large software company.

It doesn't take a company policy for an AI-enabled engineer to start absolutely spewing slop. But it's instantly felt by whatever process exists downstream.

I think there's still a significant number of engineers who value the output of AI but at the same time put in the effort to avoid situations like the one the author is describing: reviewing code, writing/generating appropriate tests (and reviewing those too). The secret is, those are the good ones. Those are the ones you SHOULD promote, laud, and set as examples. The rest should be made examples of and held accountable.

I'd hope my usages of AI are along these lines. I'm sure I'm failing at some of the points, and I'm always trying to improve.

reply
throwawaysleep
1 hour ago
[-]
Things like SOC 2 effectively require merge control. That doesn't mean the organization really values it, but for compliance purposes the approval process needs to be there, and it is imposed from on high.
reply