▲I've been on both ends of this.
As the maintainer of ghidra-delinker-extension, whenever I get a non-trivial PR (like adding an object file format or ISA analyzer) I'm happy that it happens. It also means that I get to install a toolchain, maybe learn how to use it (MSVC...), figure out all of the nonsense and undocumented bullshit in it (COFF...), write a byte-perfect roundtrip parser/serializer plus tests inside binary-file-toolkit if necessary, prepare golden Ghidra databases, write the unit tests for them, make sure that the delinked stuff actually works when relinked, have it pass my quality standards plus the linter, and keep a clean Git history.
I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than to puppeteer someone halfway across the globe through GitHub comments into doing all of that for me.
Conversely, at work I implemented support for PKCS#7 certificate chains inside of Mbed-TLS and diligently submitted PRs upstream. They were correct, commented, documented, tested; everything was spotless, by the implicit admission of one of the developers. It's still open today (with merge conflicts, naturally) and there are like five open PRs for the exact same feature.
When I see this, I'm not going to insist, I'll move on to my next Jira task.
reply▲> push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me
While I understand the sentiment I am glad I got into open source more than fifteen years ago, because it was maintainers “puppeteering” me that taught me a lot of the different processes involved in each project that would be hard to learn by myself later.
reply▲There's a balance though. Some people want to end up with a perfect PR that gets applied, some just want the change upstream.
Most of my PRs are things where I'm not really interested in learning the ins and outs of the project I'm submitting to; I've just run into an issue that I needed fixed, so I fixed it[1], but it's better if it's fixed upstream. I'll submit my fix as a PR, and if it needs formatting/styling/whatevs, it'll probably be less hassle if upstream tweaks it to fit. I'm not looking to be a regular contributor, I just want to fix things. Personally, I don't even care about having my name on the fix.
Now, if I start pushing a lot of PRs, maybe it's worth the effort to get me onto the page stylewise. But, I don't usually end up pushing a lot of PRs on any one project.
[1] Usually, I can have my problem fixed much sooner if I fix it, than if I open a bug report and hope the maintainer will fix it.
reply▲This is the Eternal September problem in disguise. That is, the personal interactions we treasure so much in small communities simply do not scale when the communities grow. If a community (or a project) grows too large then the maintainers / moderators can no longer spend this amount of time to help a beginner get up to speed.
reply▲This seems...fine?
I know when I run into bugs in a project I depend on, I'll usually run it down and fix it myself, because I need it fixed. Writing up the bug along with the PR and sending it back to the maintainer feels like common courtesy. And if it gets merged in, I don't need to fork/apply patches when I update. Win-win, I'd say.
But if maintainers don't want to take PRs, that's cool, too. I can appreciate that it's sometimes easier to just do it yourself.
reply▲hunterpayne2 hours ago
[-] Somehow, this seems like a serious negative consequence of LLMs to me. We should consider how security patches move through the ecosystem. Changes like this are understandable but only because PRs from LLMs are so bad and prolific. When a new exploit is discovered, the number of sites that require a change goes up exponentially due to LLMs not using libraries. At the same time, the library contributors will likely not know to change their code in view of the new exploit. This doesn't seem like healing, more like being dissolved and atomized to the point of uselessness.
reply▲Code changes are cheaper to make now and kind of more expensive to verify.
So you can still contribute; you just don't need to provide the code, only the issue.
Which isn't as bad as it sounds. It kind of feels bad to rewrite somebody's code right away when it's theoretically correct, but opinionated codebases seem to work very well if the maintainer's opinions are sane.
reply▲hunterpayne25 minutes ago
[-] And if the maintainer doesn't understand something about how the exploit works? Also, code changes aren't cheaper; it's just that you can watch YouTube instead of putting in effort now. But time still passes, and that costs the same. Reviewing the code is far more expensive now, though, since the LLM won't use libraries.
PS The economics of software haven't really changed; it's just that people (executives) wish they had changed. They misunderstood the economics of software before LLMs and they misunderstand them now.
PPS The only people that LLMs benefit are the segment of devs who are lazy.
reply▲I think every maintainer should be able to say how they want or don't want others to contribute.
But I feel like it was always true that patches from the internet at large were more trouble than they were worth most of the time. The reason people accept them is not for the sake of the patch itself but because that is how you get new contributors who eventually become useful.
reply▲> But i feel like it was always true that patches from the internet at large were largely more trouble then they were worth most of the time.
Oh god, I needed to add a feature to an open source project (kind of a freemium project) about fifteen years ago. I had no experience with professional software development nor did I have any understanding of pull requests. I sent one over after explaining what I was trying to do and that I thought it would be a good feature for the project.
Now they probably shouldn’t have just blindly merged it, but they did, and it really made a mess.
Learned a valuable lesson that day haha.
reply▲curious, what happened, and what did you learn?
if they merge something blindly, then it's really on them if it makes a mess.
reply▲I think the author of the article is missing this point.
When you actually work alongside people and everyone builds a similar mental model of the codebase then communication between humans is far more effective than LLMs.
For random contributions then this doesn’t apply
reply▲Given that submitters are just using LLMs to produce the PR anyway, it makes sense that the author can just run that prompt himself. Just share the 'prompt' (whether or not it is actually formatted as a prompt for an LLM), which is not too different than a feature request by any other name.
reply▲dpc_0123419 minutes ago
[-] Yes. At this point a prompt that produces the desired result is more useful than the resulting code in a PR. Effectively, the code starts to take on the properties of a compiled binary.
reply▲Agree with this. Maybe we should start making PRs with the proposed spec and then the maintainer can get their agent to implement it.
This is similar to what we’ve started to do at work. The first stage of reviewing a PR is getting agreement on the spec. Writing and reviewing the code is almost the trivial part.
reply▲If your use of LLMs is limited to one shot prompts you must be new.
reply▲Forking and coming back is what I like to do. At this very moment I've got a forked project that I'm actively using and making tiny changes to as things come up in my workflow. In another week or two, when I'm certain that everything is working great and exactly how I want it, I'll file an issue and ask if they are interested in taking in my changes; mostly as a favor to me so I don't have to maintain a fork forever.
reply▲I have come to a similar realization recently - its what I call "Take it home OSS" - i.e. fork freely, modify it to your liking using AI coding agents, and stop waiting for upstream permissions. We seem to be gravitating towards a future where there is not much need to submit PRs or issues, except for critical bugs or security fixes. It's as if OSS is raw material, and your fork is your product.
reply▲Recently I've been air-dropped into such a legacy project at work in order to save a cybersecurity-focused release date. Millions of lines of open-source code checked in a decade ago prior to the Subversion-to-Git migration, then patched everywhere to the point where diffs for the CVEs don't apply and we're not even sure what upstream versions best describe the forks.
By the end, the project manager begged me to turn off my flamethrower, as I was ripping it all out for a clean west manifest to tagged versions and stacks of patches. "Take it home OSS" is like take-out food: if you don't do your chores and leave it out for months or years on the kitchen counter, the next person to enter the apartment is going to puke.
reply▲> west manifest
Zephyr-based project?
reply▲No, it predates it by a couple of decades.
But our modern embedded firmware projects all use Zephyr and west, so I just created a west manifest, stole parts of the scripts/ folder from the Zephyr repository to have a working "west patch" command and went to town. If I had more time to work on it, I'd have gotten "west build", "west flash" and "west debug" working too (probably with bespoke implementations) and removed the cargo cult shell scripts.
You can use west without Zephyr, it's just that by itself it only provides "west init" and "west update".
reply▲tonyarkles8 seconds ago
[-] Awesome! Yeah, I haven't used west-sans-Zephyr yet but interestingly enough NXP decided to use it for their MCUXpresso framework:
https://github.com/nxp-mcuxpresso/mcuxsdk-manifests/blob/mai...
I have mixed feelings about west in general, but I like it enough that I'd probably look at doing something like that in the future too, for harmony's sake with our existing Zephyr projects.
reply▲This is very shortsighted and it’s like polishing gun to shoot your foot with it.
If it’s "take it home OSS" and "there is not much need to submit PRs or issues" then why would anybody submit PRs and issues for "for critical bugs or security fixes"? If they have fix and it works for them, they’re fine, afterall.
And while we’re at it, why would anybody share anything? It’s just too much hassle. People will either complain or don’t bother at all.
I think that after few years, when LLM coding would be an old boring thing everybody’s used to and people will learn few hard lessons because of not sharing, we’ll come to some new form of software collaboration because it’s more effective than thinking me and LLM are better than me and LLM and thousands or millions people and LLMs.
reply▲> then why would anybody submit PRs and issues for "for critical bugs or security fixes"?
Why do they do that at present? There are plenty of cases where it's a hassle but people still do it, presumably out of a sense of common decency.
reply▲> common decency
Another self-serving reason is so that you can upgrade in the future without having to worry about continually pulling your own private patch set forward.
reply▲Won't be much "raw material" left before long, if everyone takes that view.
reply▲Sure there will, as long as people continue to publish their work including the various forks. The community dynamic will change as will the workflows surrounding dependencies but the core principle will remain.
Vulnerability detection might prove to be an issue, though. If we suddenly have a proliferation of large quantities of duplicate code in disparate projects, detection and coordinated disclosure become much more difficult.
reply▲This has always been the case, and is really the main practical advantage of open source. Contributing code back is an act of community service which people do not always have time for. The main issue is that over time, other people will contribute back their own acts of community service, which may be bug fixes or features that you want to take advantage of. As upstream and your own fork diverge, it will take more and more work to update your patches. So if you intend to follow upstream, it benefits you to send your patches back.
reply▲I agree with this.
Over the past month or so I implemented a project from scratch that would've taken me many months without an LLM.
I iterated at my own pace, I know how things are built, it's a foundation I can build on.
I've had a lot more trouble reviewing similarly sized PRs (some implementing the same feature) on other projects I maintain. I made a huge effort to properly review and accept a smaller one because the contributor went the extra mile, and made every possible effort to make things easier on us. I rejected outright - and noisily - all the low effort PRs. I voted to accept one that I couldn't in good faith say I thoroughly reviewed, because it's from another maintainer that I trust will be around to pick up the pieces if anything breaks.
So, yeah. If I don't know and trust you already, please don't send me your LLM generated PR. I'd much rather start with a spec, a bug, a failing test that we agree should fail, and (if needed) generate the code myself.
reply▲> there's this common back-and-forth round-trip between the contributor and maintainer, which is just delaying things.
Delaying what?
reply▲Merging code
reply▲There would be no merge if there isn't a PR in the first place.
reply▲This isn’t about the PR. This is about the back-and-forth.
If the maintainer authors every PR they don’t have to waste time talking with other people about their PR.
reply▲clutter555612 hours ago
[-] If they are willing to feed a bug report to their LLM, then perhaps they can also feed a bug report + PR to their LLM and not make a big fuss out of it.
Also, at the point they actively don’t want collaboration, why do open source at all?
Strange times, these.
reply▲dpc_0123412 minutes ago
[-] If you read the post till the end, I am open to lots of forms of collaboration. It's just that sending chunks of code diffs around is becoming increasingly like sending diffs of the resulting binaries. Just inefficient.
reply▲singpolyma32 hours ago
[-] Open source is about sharing and forking, not collaboration.
Collaboration is a common pattern in larger projects but is uncommon in general
reply▲This is only going to get worse with LLMs. Now people can "contribute" garbage code at 10x the speed. We're entering the era of the "read only" maintainer focused on self-defense.
reply▲I’ve seen it already where someone has set some fully automated agent at the GitHub issues and machine gunned PRs every minute for every reported issue. Likely never looked at or tested by the submitter.
Even if they worked, it would be easier for the maintainer to just do that themselves rather than review and communicate with someone else to resolve issues.
reply▲...that assumes LLMs will contribute garbage code in the first place. Will they, though?
reply▲The problem isn't that it can't write good code. It's that the guy prompting it often doesn't know enough to tell the difference. Way too many vibe coders these days who can generate a PR in 5 seconds, but can’t explain a single line of it.
reply▲That’s 100% the trick to it all. I don’t always write code using LLMs. Sometimes I do. The thing that LLMs have unlocked for me is the motivation to put together really solid design documentation for features before implementing them. I’ve been doing this long enough that I’ve usually got a pretty good idea of how I want it to work and where the gotchas are, and pre-LLMs would “vibe code” in the sense that I would write code based on my own gut feeling of how it should be structured and work. Sometimes with some sketches on paper first.
Now… especially for critical functionality/shared plumbing, I’m going to be writing a Markdown spec for it and I’m going to be getting Claude or Codex to do review it with me before I pass it around to the team. I’m going to miss details that the LLM is going to catch. The LLM is going to miss details that I’m going to catch. Together, after a few iterations, we end up with a rock solid plan, complete with incremental implementation phases that either I or an LLM can execute on in bite-sized chunks and review.
reply▲The LLM isn’t contributing garbage, the user is by (likely) not testing/verifying it meets all requirements. I haven’t yet used an LLM which didn’t require some handholding to get to a good code contribution on projects with any complexity.
reply▲> Users telling me what works well and what could be improved can be very helpful.That's a unicorn.
If I'm lucky, I get a "It doesn't work." After several back-and-forths, I might get "It isn't displaying the image."
I am still in the middle of one of these, right now. Since the user is in Australia, and we're using email, it is a slow process. There's something weird with his phone. That doesn't mean that I can't/won't fix it (I want to, even if it isn't my fault. I can usually do a workaround). It's just really, really difficult to get that kind of information from a nontechnical user, who is a bit "scattered," anyway.
reply▲Realistically, there are a much larger set of things that I don't mind forking these days. It is quite a bit of effort to get to a set of things that mostly don't have bugs, but in the past I might fork a few things[0] but these days, I vendor everything, and some things I just look at the feature list and have Claude rebuild it for me. So I totally understand why actual project maintainers and authors wouldn't want input. I, as a user, am not super eager to buy in to the actual maintainers' future work. It would be super surprising that they want to buy into strangers' work.
0: I liked BurntSushi's Rust projects since they are super easy to edit because they're well architected and fast by default. Easy to write in.
reply▲dpc_012345 minutes ago
[-] I'm currently running on a fork of Helix text editor, which I heavily gutted to replace the block cursor with a beam-style (like one in insert mode, but just all the time). Since the maintainers are drowning in PRs (472 open ATM), I understandably don't expect them to have time for my weird ideas. Then I pile on top whatever PRs I want that I find useful out of these 472, and with a little bit of LLM help I have a very different text editor than the upstream.
reply▲samuelknight2 hours ago
[-] > On top of that, there are a lot of personal and subjective aspects to code. You might have certain preferences about formatting, style, structure, dependencies, and approach, and I have mine.
Code formatting is easy to solve. You write linting tests, and if they fail the PR is rejected. Code structure is a bit trickier. You can enforce things like cyclomatic complexity, but module layout is harder.
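As a sketch of that kind of gate (assuming a JavaScript project using ESLint; the rule names are real ESLint core rules, but the thresholds are made-up examples):

```javascript
// eslint.config.js — hypothetical example: encode the maintainer's
// preferences as hard limits, so CI can reject non-conforming PRs.
export default [
  {
    rules: {
      complexity: ["error", 10],               // cap cyclomatic complexity per function
      "max-depth": ["error", 3],               // limit nested blocks
      "max-lines-per-function": ["error", 60], // keep functions reviewable
      semi: ["error", "never"],                // pure style preference, enforced mechanically
    },
  },
];
```

Once a CI job runs the linter on every PR, the formatting and complexity arguments disappear from review; module layout, as noted, still needs human judgment.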
reply▲Maybe instead of submitting PRs, people should submit "prompt diffs" so that the maintainer can paste the prompt into their preferred coding agent, which is no doubt aware of their preferred styles and skills, and generate the desired commit themselves.
reply▲"Prompt diff" is just wish. Why not call it what it is? It’s perfectly fine to submit wishes, feature requests, RFCs or - if you want - "prompt diffs".
But there’s no need for LLM to implement it, human can do it. Or not, LLM can. That’s not the point who implements it, it’s a wish.
reply▲Why would anyone bother doing this, prompts are not code, they are not shareable artifacts that give the same results.
reply▲travisjungroth2 hours ago
[-] Neither are bug reports or feature requests.
reply▲bug reports should be reproducable. They may even be statistically reproduceable.
A bug report that cannot be reproduced is worthless.
reply▲It is not worthless; that means you need to work on making it easier to detect and report bugs.
reply▲Do you accept bug reports that just say "it doesn't work" or do you require reproducibility?
reply▲> Why would anyone bother doing this
For the same reason a PR can be useful even if it turns out to be imperfect. Because it reduces the workload for the maintainer to implement a given feature.
Obviously that means that if it looks likely to be a net negative the maintainer isn't going to want it.
reply▲Luckily we have had the perfect paradigm for this kind of mindset for decades: proprietary software. The spirit of open source is already essentially dead due to it being co-opted by companies and individuals working only for their own gain, and for it to rise again we probably need a total reset.
reply▲Couldn't you also just have an LLM review the PR and quickly fix any issues? Or even have it convert the PR into a list of specs, and then reimplement from there as you see fit?
I guess my point being that it's become pretty easy to convert back and forth between code and specs these days, so it's all kind of the same to me. The PR at least has the benefit of offering one possible concrete implementation that can be evaluated for pros and cons and may also expose unforeseen gotchas.
Of course it is the maintainer's right to decide how they want to receive and respond to community feedback, though.
reply▲warmwaffles2 hours ago
[-] > Couldn't you also just have an LLM review the PR and quickly fix any issues? Or even have it convert the PR into a list of specs, and then reimplement from there as you see fit?
Sometimes I'm not a fan of the change in its entirety and want to do something different but along the same lines. It would be faster for me to point the agent at the PR and tell it "Implement these changes but with these alterations..." and iterate with it myself. I find the back and forth in pull requests to be overly tiresome.
reply▲idiotsecant2 hours ago
[-] Why not just have the LLM write the PR in the first place? Because LLMs are imperfect tools. At least for the foreseeable future the human in the loop is still important
reply▲I am assuming that, for the vast majority of code changes moving forward, the PR will be written by an LLM in the first place.
reply▲cadamsdotcom1 hour ago
[-] Yes, reviewing might take an hour, but taking the PR and using it to guide an implementation also takes an hour.
Thank your contributor; then, use the PR - and the time you’d have spent reviewing it- to guide a reimplementation.
reply▲My coworkers just let Claude review the PR now instead of reading the code. It seems the entire contract is broken now.
Submitters use LLMs to generate the code and reviewers use LLMs to review it.
reply▲c0wb0yc0d3r32 minutes ago
[-] > Submitters use LLMs to generate the code and reviewers use LLMs to review it.
This just like my favorite, “We can use LLMs to write the code and write the tests.”
reply▲Thats fine, the cost for me to re-implement your code is nearly zero now, I don’t have to cajole you into fixing problems anymore.
reply▲OkayPhysicist2 hours ago
This is obviously true in an open source environment. You never needed to cajole them into fixing problems; you could just fix it yourself. That was always an option. That's literally the entire point of open source.
reply▲charcircuit2 hours ago
People doing work that you can take for free to make money off of is another big point of open source you can't ignore.
reply▲It seems like quite a tower of babel just waiting to happen.. All those libraries that once had thought go into tangled consequences of supporting new similar features and once had ways to identity for their security updates needed will all just be defective clones with 5%-95% compatibility for security exploits and support for integrations that are mostly right but a little hallucinated?
reply▲I think it's more likely that libraries will give way to specified interfaces. Good libraries that provide clean interfaces with a small surface area will be much less affected by thos compared to frameworks that like to be a part of everything you do.
The JavaScript ecosystem is a good demonstration of a platform that is encumbered with layers that can only ever perform the abilities provivded by the underlying platform while adding additional interfaces that, while easier for some to use, frequently provide a lot of functionality a program might not need.
Adding features as a superset of a specification allows compatibility between users of a base specification, failure to interoperate would require violating the base spec, and then they are just making a different thing.
Bugs are still bugs, whether a human or AI made them, or fixed them. Let's just address those as we find them.
reply▲The cost of forking open source code was always effectively zero.
reply▲It's not really, because you now have the cost of maintaining that fork, even if it's just for yourself.
reply▲Which is still true in our brave new llm world.
reply▲That may be part of the issue. Perhaps LLMs are just causing people to reveal how much they consider a maintainer as providing a service for them. Maintainers don't work for you, they let you benefit from the service they perform.
That workload of maintaining a fork doesn't come from nowhere; it's just workload that someone else would have had to do before the fork occurred.
reply▲I'm talking about the literal process of forking an open source project. You're just making a copy of a set of files.
reply▲Given the supposed quality of top flight models there ought to be a lot more people forking open source projects, implementing missing features and releasing "xyz software that can do a and b".
Somehow it's not really happening.
reply▲jaggederest2 hours ago
[-] I've actually been doing this for my own purposes - an adhoc buggy half-implemented low latency version of Project Wyoming from home assistant.
Repo, for those interested: https://github.com/jaggederest/pronghorn/
I find that the core issues really revolve around the audience - getting it good enough that I can use it for my own purposes, where I know the bugs and issues and understand how to use it, on the specific hardware, is fabulous. Getting it from there to "anyone with relatively low technical knowledge beyond the ability to set up home assistant", and "compatible with all the various RPi/smallboard computers" is a pretty enormous amount of work. So I suspect we'll see a lot of "homemade" software that is definitely not salable, but is definitely valuable and useful for the individual.
I hope, over the long to medium term, that these sorts of things will converge in an "rising tide lifts all boats" way so that the ecosystem is healthier and more vibrant, but I worry that what we may see is a resurgence of shovelware.
reply▲philipkglass2 hours ago
[-] I have already forked open source software to fix issues or enhance it via coding agents. I put it on github publicly, so other people can use it if they see it, but I don't announce it anywhere. I don't want to deal with user complaints any more than the current maintainers do. (I'm also not going to post my github profile here since it has my legal name and is trivially linked to my home address.)
reply▲_verandaguy2 hours ago
[-] This is an unethical take, and long-term and at scale, an unsustainable/impractical one. This kind of mindset results in tool fragmentation, erosion of trust, and ultimately worse quality in software.
reply▲So you're saying people forking open source software is "unethical"? What is open source then? Just a polite offer that it is rude to accept?
As a sidenote: what's with the usage of "take" to designate an opinion instead of the word "opinion" or "view"?
reply▲Mathnerd3142 hours ago
[-] The author sounds like he actually responds to feature requests, though. Typical behavior I'm seeing is that the maintainer just never checks the issue tracker, or has it disabled, but is more likely to read PR's.
reply▲As it should be. Issues are a dime a dozen, sometime coherent, rarely relevant. There is no barrier to submitting a garbage issue.
PRs, at the very least, provide a quality gate. You have to a) express the idea in runnable code and b) pass the tests.
reply▲mactavish882 hours ago
[-] Great example of how to set boundaries. The open source community is slowly healing.
reply▲I don't think this is an example of setting boundries. Usually a boundry would be stopping people from making you do work you don't want to do.
This is just a change in position of what work is useful for others to do.
reply▲Bugs aside, code generated by an LLM is NOT more trustworthy than a drive-by PR, you should review them just as closely. The slop machine doesn't care, it will repeat whatever pattern it found on the Internet no matter who originally wrote it and with what intent. There have been attacks poisoning LLMs with malicious snippets and there will be many more.
reply▲I agree with this mindset. Instead of submitting code diffs, we should be submitting issues or even better tests that prove that bugs exist or define how the functionality should work.
reply▲had the same realization last year after getting a few obviously AI-generated PRs. reviewing them took longer than just writing it myself. maybe the right unit of contribution is going back to being the detailed bug report / spec, not the patch
reply▲It's good that he is upfront about it, but this surely shouldn't be taken as a general advice, since everybody has his own preferences. So this really shouldn't be a blogpost, but rather a "Contributing Guidelines" section in whatever projects he maintains.
reply▲I firmly believe the author's stance should be the default policy of just about every open source project. I don't even write my own code anymore, I sure as hell don't want to deal with your code.
Give me ideas. Report bugs. Request features. I never wanted your code in the first place.
reply▲You believe. So it applies to projects you maintain. It doesn't mean it applies to project I maintain, or anybody else maintains. So this shouldn't be any more default than any other mode. And probably less default, since people generally developed other conceptions about "defaults" of etiquette in open source projects over the last 15 years.
reply▲caymanjim26 minutes ago
[-] I'm not exactly disagreeing with you. I appreciate the author's stance, and I appreciate the blog post making it to HN. I think the author's stance should be widely adopted. If they had simply stuck their blog post into CONTRIBUTING.md in their repo, no one would have seen it. Now it's being more widely disseminated, and in a longer form with good reasoning attached.
reply▲If i do the work for a feature im usually already using it via fork, i offer the patch back out of courtesy. Up to you if you want it I'm already using it.
reply▲cmrdporcupine1 hour ago
[-] Yes I feel very much like what I really want from people is
very detailed bug reports or
well thought through feature requests and even
well specified test scenarios [not in code, but in English or even something more specified]
I know my code base(s) well. I also have agentic tools, and so do you. While people using their tokens is maybe nice from a $$ POV, it's really not necessary. Because I'll just have to review the whole thing (myself as well as by agent).
Weird world we live in now.
reply▲I am curious to watch this unfold. How long until a clever supply chain attack affects this? What will the response be? Will be interesting to see it.
reply▲> On top of that, there are a lot of personal and subjective aspects to code. You might have certain preferences about formatting, style, structure, dependencies, and approach, and I have mine.
95% of this is covered by a warning that says "I won't merge any PR that a) does not pass linting (configured to my liking) and b) introduces extra deps"
> With LLMs, it's easier for me to get my own LLM to make the change and then review it myself.
So this person is passing on free labour and instead prefers a BDFL schema, possibly supported by a code assistant they likely have to pay for. All for a supposed risk of malice?
I don't know. I never worked on a large (and/or widely adopted) open-source codebase. But I am afraid we would've never had Linux under this mindset.
reply▲I'm with the author here; I don't really feel like dealing with people's PRs on my personal projects. The fact that GitHub only implemented a feature to disable PRs in February is absolutely baffling to me, but I'm glad it's there. Just because a project's source code is made available to the public under a permissive license does not mean the maintainer is under any obligation to merge other people's changes.
It feels like a lot of people assume a sense of entitlement because one platform vendor settled on a specific usage pattern early on.
reply▲OkayPhysicist2 hours ago
[-] > 95% of this is covered by a warning that says "I won't merge any PR that a) does not pass linting (configured to my liking) and b) introduces extra deps"
Maybe I'm not up to date with the bleeding edge of linters, but I've never seen one that adequately flags
    let out = []
    for (let x of arr) {
      if (x > 3) {
        out.push(x + 5)
      }
    }
Into
    let out = arr
      .filter(x => x > 3)
      .map(x => x + 3)
There's all sorts of architectural decisions at even higher levels than that.
reply▲Indeed, yours has both more allocations and a bug (+3 instead of +5)
reply▲More allocations is a good point but you're being pedantic about the bug... how do you know the +5 isn't the bug? :P
reply▲No. When code is cheaper to generate than to review, it's cheaper to take a (well-written) bug report and generate the code yourself than to try to figure out exactly what the PR does and whether it has any subtle logical or architectural errors.
I find myself doing the same, nowadays I want bug reports and feature requests, not PRs. If the feature fits in with my product vision, I implement and release it quickly. The code itself has little value, in this case.
reply▲I definitely trust my local LLM where I know the prompt that was used. Even if the code generated ends up being near-identical, it'll be way faster to review a PR from someone or something I trust than from some rando on the Internet
reply▲PRs come from your most engaged community members. By banning PRs you won’t get more contributions, you will discourage your (current and future) most active members.
reply▲I like how fast this is changing
The fact-of-life journaling about the flood of code, the observation that he can just re-prompt his own LLM to implement the same feature or optimization
all of this would have just been controversial pontificating 3 months ago, let alone in the last year or even two years. But all of a sudden enough people are using agentic coding tools - many having skipped the autocomplete AI coders of yesteryear entirely - that we can have this conversation
reply▲Why bother having a public repository?
reply