AI has suddenly become more useful to open-source developers
46 points
3 hours ago
| 9 comments
| zdnet.com
| HN
beastman82
1 hour ago
[-]
gotta love a site that hijacks your back button and makes you hit it 3 times.
reply
rtkwe
1 hour ago
[-]
Doesn't for me until I scroll past the end of the article to read the next one. To get 3 you'd have to scroll through multiple articles.
reply
kjkjadksj
1 hour ago
[-]
On mobile you get a little 1.5” strip to read. Rest of the screen is autoplaying ads. No, I didn’t suffer through that to read the article.
reply
klibertp
31 minutes ago
[-]
I learned (from the second paragraph) that 7 out of 12 is "vast majority". I'm a bit reluctant to read on after that...
reply
s_ting765
29 minutes ago
[-]
Coding agents are like asking a genie for code. They will give you the code you ask for alright but you never know what kind of curse has been crontabbed for you.
reply
MithrilTuxedo
2 hours ago
[-]
I'm thinking of Debian and how much effort it takes to maintain stability and security over time.

I can't imagine we'll really be able to trust AI without its use in open-source software, where we can see how reliable it is.

reply
ghywertelling
37 minutes ago
[-]
If AI works, imagine 10 years of security updates and possibly 10 years of full OS upgrades for Android phones.
reply
soperj
36 minutes ago
[-]
why would they do that when they want to sell you a new phone?
reply
nout
33 minutes ago
[-]
With AI you can do that, or smaller companies can do that. It levels the playing field.
reply
ozlikethewizard
2 hours ago
[-]
How many year-ends have to pass?
reply
ChrisArchitect
1 hour ago
[-]
reply
Hamuko
1 hour ago
[-]
When people suggest to use AI for open-source projects, what exactly are they advocating for given that the median open-source project budget is pretty much $0/month? Maybe $1/month if the maintainer likes to have a website for the project.
reply
hparadiz
27 minutes ago
[-]
The $20/month subscription is less than I already pay for electricity, and local models are also capable enough. A lot of the top open-source projects already have paid devs working on them.
reply
ivaivanova
31 minutes ago
[-]
Ha, I just shipped an open source library using the Anthropic API. The way I solved the cost concern was by having users bring their own API key. Zero infra cost on my end and they pay for what they use.
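The bring-your-own-key pattern described above can be sketched roughly like this (a hypothetical helper, not the commenter's actual library; `get_client` and `MissingAPIKeyError` are illustrative names):

```python
import os

class MissingAPIKeyError(RuntimeError):
    """Raised when no user-supplied Anthropic key is available."""

def get_client(api_key=None):
    # BYOK: the library ships no key of its own. The caller either
    # passes one explicitly or sets ANTHROPIC_API_KEY, so all API
    # usage is billed to the user, not the maintainer.
    key = api_key or os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise MissingAPIKeyError(
            "Pass api_key= or set ANTHROPIC_API_KEY; this library "
            "never makes API calls on its own credentials."
        )
    # Imported lazily so the library can still be installed and
    # imported even when the anthropic SDK is absent.
    import anthropic
    return anthropic.Anthropic(api_key=key)
```

With this shape the maintainer carries zero inference cost: the library refuses to do anything until the user supplies credentials, and every request goes out under the user's own billing.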
reply
fsflover
2 hours ago
[-]
See also: https://news.ycombinator.com/item?id=47547849

AI bug reports went from junk to legit overnight, says Linux kernel czar (theregister.com)

58 points by amarant 4 days ago

reply
supernes
2 hours ago
[-]
TLDR: Greg Kroah-Hartman says that last month something magical happened and AI output is no longer "slop".
reply
abletonlive
16 minutes ago
[-]
For months, people on Hacker News and Reddit with a bad model of reality and a lack of observation and understanding have been telling everyone that LLMs are useless, that they're just toys, that they can't program, etc. I have a whole graveyard of replies in my comment history from HN users saying exactly this.

Nothing has changed this month; it's been good for a while, and a small minority could already see that it was paradigm-shifting.

This is a notice to all of you who are just now changing your mind and crossing over: your cognition of reality is flawed, and you are not as good as you think you are at observing technological progress. The only thing that has changed is that you just now noticed how capable LLMs are. Many of us were telling you before 2026 that it was here, and you all tried to paint us as charlatans.

Reddit is still very far behind. Browsing /r/programming and other software dev related subs like /r/cscareerquestions is like being at a dinosaur museum.

reply
throwaway2027
1 hour ago
[-]
I noticed it last December.
reply
no_shadowban_6
47 minutes ago
[-]
paradigm shift, bro.
reply
dragonelite
27 minutes ago
[-]
Let me guess the big tech cheque finally cleared?
reply
georgemcbay
2 hours ago
[-]
IMO around December of last year LLM output (for coding at least, not for everything) went from "almost 100% certainly slop" to "probably not slop, if you asked for the right thing while being aware of context limitations".

A lot of people seem stuck with their older (correct at the time) views of them still always producing slop.

FWIW I am more of an AI doomer (in the sense that I think the economic results from them will be disastrous for knowledge workers given our political realities) than booster, but in terms of utility to get work done they did pass a clear inflection point quite recently.

reply
bluefirebrand
2 hours ago
[-]
> if you asked for the right thing while being aware of context limitations

So, still pretty likely to produce slop in a large majority of cases

If the most useful place for them is where you've already specced things out to that degree of precision then they aren't that useful?

Speccing things to that precision is the time consuming and difficult work anyways, after all.

reply
georgemcbay
1 hour ago
[-]
I think LLMs currently need to be used by someone who knows what they are doing to produce value, but the jump they made from being endless slop machines to useful tools in the right hands is enough for me to assume it is only a matter of time until they will be useful tools in the hands of even the untrained masses.

I wish this wasn't true because I think it will economically upend the industry in which I have a career, but sadly the universe doesn't care what I wish.

reply
mjr00
1 hour ago
[-]
> assume it is only a matter of time until they will be useful tools in the hands of even the untrained masses.

IMO this vastly overestimates how good the "untrained masses" are at thinking in a logical, mathematical way. Apparently something as basic as Calculus II has a fail rate of ~50% in most universities.

reply
chromacity
14 minutes ago
[-]
How does this follow?

There's nothing "basic" about Calculus II. Calculus is uniquely cursed in mathematical education because everything that comes before it is more or less rooted in intuition about the real world, while calculus is built on axioms that are far more abstract and not substantiated well (not until later in your mathematical education). I expect many intelligent, resourceful people to fail it and I think it says more about the abstractions we're teaching than anything else.

But also, prompting LLMs to give good results is nowhere near as complex as calculus.

reply
isueej
58 minutes ago
[-]
That’s why you can’t generalise opinions on here.

Most people on here don’t belong to that group of people. So ofc they can find a way to create value out of a thing that requires some tinkering and playing with.

The question is whether the techniques can evolve into technologies that produce stuff with minimal effort, whilst knowing only the bare minimum. I'm not convinced personally; it's a pipe dream that overlooks the innate skill necessary to produce stuff.

reply
xyzelement
1 hour ago
[-]
Who cares? People know what they want and need and AI is increasingly able to take it from there.
reply
embedding-shape
1 hour ago
[-]
> People know what they want and need

If they truly did, there wouldn't be a huge amount of humans whose role is basically "Take what users/executives say they want, and figure out what they REALLY want, then write that down for others".

Maybe I've worked for too many startups, and only consulted for larger companies, but everywhere in businesses I see so many problems that are basically "Others misunderstood what that person meant" and/or "Someone thought they wanted X, they actually wanted Y".

reply
mjr00
56 minutes ago
[-]
> People know what they want and need

The multi-decade existence of roles like "business analysts" and "product owners" (and sometimes "customer success") is pretty strong evidence that this is not the case.

reply
PhilipRoman
59 minutes ago
[-]
What they want? Sometimes. What they need? Almost never.
reply
isueej
54 minutes ago
[-]
Right… people knew they wanted an iPhone before it was conceived, right? Lmao

The arrogance of people like you is astonishing.

reply
bluefirebrand
54 minutes ago
[-]
> I wish this wasn't true because I think it will economically upend the industry in which I have a career, but sadly the universe doesn't care what I wish.

I mean, yes. I'm worried about my career too, but for different reasons. I don't think these things are actually good enough to replace me, but I do think it doesn't matter to the people signing the cheques.

I don't believe LLMs are producing anything better than slop. I think people's standards have been sinking for a long time and will continue to sink until they reach the level LLMs produce.

The problem isn't just that LLMs produce slop; it's that people are overall pretty fine with slop.

I'm not, though, so there's no place for me in most software businesses anymore.

reply
isueej
32 minutes ago
[-]
I’m not a SWE.

But I look at software from the perspective of it being an object.

Since it's intangible, people can't see within it. So something can look pretty even if, underlying it all, it's slop.

However, there is an implicit trade-off: mounting slop makes you more vulnerable to security issues, bugs, etc., which can destroy trust and the experience of using the software. This can essentially put the life of a business at risk.

People aren't thinking much about that risk, because it hasn't substantially happened to anyone large yet. What I wonder is: will slop just continue to mount unchecked? Or are people expecting improvements that let them go back and clean up the slop with more powerful tooling?

If the latter does not come about, I think we will see more firms come under stress.

Overall though, I think too much focus is on the acceleration of output. I never think that's the most important thing; it's secondary to having a crystal-clear vision. The problem is that having a clear vision requires doing a lot of grunt work, which trains and conditions your mind to think in a particular kind of way.

It will be interesting to see how this all plays out.

reply
ramesh31
2 hours ago
[-]
>TLDR: Greg Kroah-Hartman says that last month something magical happened and AI output is no longer "slop".

Opus 4.6 has been a step change. It's simply never wrong anymore. You may need to continue giving it further clarification as to what you want, but it never makes mistakes with what it intends to do now.

reply
Balinares
1 hour ago
[-]
Yo, just because you can't tell when Claude is wrong, doesn't mean it's right.

I do agree that the Q1 2026 models in general have passed a threshold, but goodness almighty Opus 4.6 still screws up a lot.

reply
no_shadowban_6
46 minutes ago
[-]
> Yo, just because you can't tell when Claude is wrong, doesn't mean it's right.

Just because you can't tell when Claude is right, doesn't mean that you are.

This shit is AGI, with decades + billions of dollars of research and development behind it.

So don't get all high and mighty now, acting like you know better than Claude.

reply
binarymax
1 hour ago
[-]
It’s wrong. It made large mistakes on my code literally yesterday.
reply
brcmthrowaway
1 hour ago
[-]
Wrong context
reply
banannaise
14 minutes ago
[-]
Ah, the eternal handwave for anything the AI doesn't do well - it must be user error.
reply
binarymax
1 hour ago
[-]
No. Aside from producing an algorithm that didn't even run, it refused to use an MCP server that it had registered in the same context session.
reply
taormina
35 minutes ago
[-]
Can I get your magical version of Opus that works? 4.6 has been a side grade at best, and worse than prior models most days.
reply
pbiggar
1 hour ago
[-]
> What happened? Kroah-Hartman shrugged: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going ..."

Odd sentiment. It's pretty clear the tools crossed a threshold last year (in April, as I recall) where they became good enough to actually write entire applications, and they've just accelerated from there. Today they're amazing, and no one I know is writing artisanal code anymore (at least, not at work).

reply