Stop Slopware
100 points
6 hours ago
| 22 comments
| stopslopware.net
https://ficd.sh/blog/your-project-sucks/
wibbily
4 hours ago
[-]
It bothers me that so many programmers I know, here and in real life, seem to never actually have cared about the craft of software development? Just about solving problems.

I like problem solving too. But I also like theory and craft, and in my naïveté I assumed most of us were like me. LLMs divorced craft-programming from tool-programming and now it seems like there were never any craft-programmers at all.

It feels like the group I was part of was just a mirage, a historical accident. Maybe craft-painters felt the same way about the camera.

reply
estimator7292
2 hours ago
[-]
The craft is alive and well in embedded spaces.

When I was growing up as a programmer I observed JavaScript and swore I would never, ever touch webdev. As I got older that list kept growing to include mobile, then Apple, then desktop in general.

I like my little world of C where everything must be crafted with care or it just plain doesn't work.

And it's not that there isn't any embedded slopware, but the constraints are so much tighter that you can't really get away with the same level of bad code on 1KB ram and 4KB flash that you can on a 32 core desktop CPU with practically infinite resources.

reply
viraptor
1 hour ago
[-]
> world of C where everything must be crafted with care or it just plain doesn't work.

I hope you track the progress so you're not surprised one day. The research side is way past embedded C, and even VHDL was of interest a year ago: https://dl.acm.org/doi/10.1145/3670474.3685966 In embedded code, recent LLMs can do just fine with popular architectures. Whether the embedded C works or not comes down to the spec and the test harness. The moat is not that big.

reply
viraptor
2 hours ago
[-]
It's a false dichotomy. You can do both, depending on the project - or even in different areas of the same software. There are many things that just need writing, and there's really no fancy craft in transcribing that new business rule, regardless of how much you care - the difference is just who pushes the buttons for the same result.

This reminds me of the Google/Oracle Java case where one of the example "copied code" fragments was some trivial code with null guards. Anyone could write this and end up with the same code. Human/LLM/whatever doesn't matter. That fragment just needed to exist.
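The oft-cited literally-copied fragment in that case was the nine-line rangeCheck bounds-check helper. A rough Python transliteration (a sketch, not the actual Java) shows how little room for individual authorship such guard code leaves:

```python
# Validate that [from_index, to_index) is a sane sub-range of a
# sequence of the given length. Given this spec, nearly any author
# - human or LLM - converges on essentially the same lines.
def range_check(length, from_index, to_index):
    if from_index > to_index:
        raise ValueError(f"fromIndex({from_index}) > toIndex({to_index})")
    if from_index < 0:
        raise IndexError(from_index)
    if to_index > length:
        raise IndexError(to_index)
```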

reply
embedding-shape
1 hour ago
[-]
Hear hear! I don't play video games for the same reasons every time, nor listen to music for the same reasons every time, and the same goes for creating software.

Most of the time, I write software not to write software, but to solve a problem somewhere, often one not related to software itself. Other times I feel like the UX of some dev tool is bad, and if I just quickly fix that, I can solve my problem faster - so down the rabbit hole we go, which is a different type of experience.

Other times I'm focused on figuring out an elegant design/architecture for something that isn't problem solving but more of a "neat piece of software" - either a library or something else that needs an interface, be it a library API or an actual UI. Then I'll go into "craftsmanship" mode, and most of the work actually happens away from the computer, mostly with pen and paper or whiteboards.

I still think the latter is needed to improve the former, and high-quality, easy-to-maintain code is more important than ever. If you only do the former, you'll get stuck at a ceiling; if you only do the latter, you'll also get stuck if there isn't an actual need (at some level, "fun" can be a need) for it.

reply
square_usual
2 hours ago
[-]
Every now and then someone posts the "X programmers (implied good) hate AI, Y programmers (implied bad) love AI!" take, and every time people come out of the woodwork to point out that, no, X programmers can also use AI to hand off the things they can delegate and focus on the stuff that's cool. Case in point: Steve Klabnik, a programmer who nobody can say doesn't care about the craft, is working on a new language primarily with AI: https://rue-lang.dev/
reply
wibbily
1 hour ago
[-]
It can be, yeah. I use it sometimes to shim libraries or write one-off scripts. But it's made me disgustingly introspective. Why do I do anything? Where do I draw my lines?

An example: I've been writing a Lisp, and I'm using GNU Readline for text input. Later I found out that Readline can't be built for WebAssembly, so I decided to have Claude write a podunk replacement for it. I now have a bit of code in my Git history, attached to my name, that I didn't write.

What did I lose by doing that? My goal wasn't "to write a Readline", that's why I was using it in the first place. But my goal also wasn't "to have a working Lisp interpreter" or even like "to know how a Lisp interpreter works". It was a desire to Know More. Surely I'd have learned something useful (in some form) by doing all the minutiae myself. Or would I have learned more by doing none of it and printing out the SBCL source to read over coffee?

Sorry, I ended up rambling. I don't have any answers. I think I'm just butthurt by the "X, Y" sort of comments you mentioned and the solution is (as always) to touch grass

reply
viraptor
1 hour ago
[-]
In practice we've been setting the threshold of what we care about for a long time already - with firmware, operating systems, libraries, etc. You can always go one level deeper if you want more. Did you want to know how readline works? What about terminal control characters? What about ptys? What about the input? The keyboard interface? HID? Device drivers? USB? Packet transmission? Serial interfaces? Electrical connections? Signal integrity?

LLM code provides just another type of abstraction at which we can stop learning, not really something entirely new.

reply
xeromal
3 hours ago
[-]
The one engineer in my life who cared about the craft of software development also refused to have a color profile pic and also did his zooms in black and white and thought Ruby was the only good programming language.
reply
MisterTea
3 hours ago
[-]
That's like one person with terrible taste.
reply
xeromal
2 hours ago
[-]
My point lol.

I think most devs, especially ones that call it a "craft", take themselves too seriously. We're glorified construction workers that get paid a lot.

reply
wibbily
2 hours ago
[-]
> We're glorified construction workers that get paid a lot

That attitude is my point. I'm a developer by trade; I have a different set of feelings and concerns about "the industry" and how new tooling will affect it. (I even use it sometimes at work.) But I'm also a computer scientist and I thought more of you all were too.

To beat my original analogy to death: I thought this was a painting forum, but it's more of a "making pictures" forum, and now that it's easier to make a picture, no one cares about paintbrushes.

reply
xeromal
1 hour ago
[-]
Yeah, I went to school for computer science, but I am in no way a scientist. I'm not doing research, and I'm not innovating anything that hasn't already been built. I'm just building it for whoever needs it.

The space for innovation in computer science is pretty limited when it comes to how we build things, considering how fully featured libraries and frameworks are. Literally everything I've built in my career has been built better by someone else. If I am a scientist, I'm the person in the lab making 10,000 flu shots a day.

I'll just say that my career in the industry as a programmer has made me very good at working on my car, and I think the two are more intertwined and connected than anything theoretical.

reply
ineedasername
14 minutes ago
[-]
> I thought this was a painting forum, but it's more of a "making pictures" forum

It’s more like the discussions space to talk about things related to the painters of hotel art.

reply
pegasus
2 hours ago
[-]
The brush strokes are part of the painting (they give it texture and structure, for example), so a painter who cares about the end product would care about them. But a painter who instead deeply cares about details of the brush incidental to the task of creating paintings has, by definition, gotten lost in the woods - or at least stopped being a painter for those moments. It makes sense to care about, say, the feel and balance of a brush, because that has a direct impact on the artwork; but collecting embellished brushes would be him wearing not a painter's hat (beret?) but a collector's.

My point is that the end product matters most, and getting wrapped up in any other part of the process for its own sake is a failing, or at best a distraction - in both cases.

reply
wkjagt
40 minutes ago
[-]
I quit my big-tech job 4 years ago and it feels like that was just in time. I don't think I would enjoy the current way. I still program a lot, but as a hobby, and still thoroughly enjoy the craft. I sometimes ask chatgpt for a function name, but mostly stay away from ai. I do realize that I'm less and less employable as a software engineer though.
reply
wiether
1 hour ago
[-]
I used to be intrigued by _software craftsmanship_, but I mainly saw people using it to practice heavy gatekeeping - on the job, in code reviews, in job interviews...

Furthermore, given how they behave in a cult-like way, it feels like they are straight-up delusional.

People working as consultants for big retail chains, talking all day long about "the craft". Nobody cares. They sell trash. They don't put marble in their stores. They don't want fancy software. And if, by trying to force "the craft" on their peers, all they do is make other people's lives miserable... Just stop. Please.

Now my approach is as follows:

if stakeholders are only interested in two things (how much it costs, when it will be ready), which is 99% of cases at $JOB, then make something that does the job and that won't make you hate yourself if you have to maintain it in a year

if I'm the stakeholder - say, creating internal tooling that nobody asked for but that will solve real issues - then yes, I make things as good as I want them to be

same goes for working on FOSS in my personal time

reply
fnoef
1 hour ago
[-]
The craft has, unfortunately, been killed off by a bunch of tech companies who promised you the craft but in reality were all about making money ASAP, disregarding the craft.

Nowadays, the craft can be practiced at your home, by yourself.

reply
ivanjermakov
1 hour ago
[-]
> Just about solving problems

Or just about making money :(

reply
tonymet
3 hours ago
[-]
There’s room for both types of people in any trade. Some photographers obsess over the equipment, some only care about the photos. Carpenters with tools. Musicians with instruments & gear. Every craft has people who care about the how and those who focus on the product.

I’ve always enjoyed the craft of software engineering, though even I admit the culture around it can be a bit overly contemplative.

Nevertheless, there is room for both personalities. Just hang out with likeminded people and ignore the rest.

reply
tikhonj
3 hours ago
[-]
Caring about craft in programming is more like a photographer caring about light and composition and creativity and taste than a photographer caring about equipment.
reply
bitbuilder
2 hours ago
[-]
I'm not sure that's a valid analogy. Light, composition and creativity are all experienced directly by the viewer, and essentially describe what it is that we notice and appreciate in photography (even if subconsciously). The closest analogy in programming is the UX/UI of the application. Given equally competent developers, nobody is going to notice or care whether your application was written in Rust or ColdFusion.

But the original analogy is flawed too. I wouldn't consider caring about the craft of programming to be similar to obsessing over your photography equipment. GAS is about consumerism and playing with gadgets, at the end of the day.

Caring about the craft of programming is more about being an artist who takes pride in crafting something beautiful, even if they're the only one experiencing it. I am most definitely not one of those programmers, but I have always had nothing but immense respect for those who are.

reply
tonymet
2 hours ago
[-]
In some ways yes. Many “engineers” obsess over “idioms” and other trends to the detriment of performance, correctness and usability. So this analogy is a bit too charitable.
reply
braebo
3 hours ago
[-]
> Just hang out with likeminded people and ignore the rest.

Or find ways to integrate with the rest, challenging one another to facilitate growth.

reply
tonymet
2 hours ago
[-]
While I appreciate your optimism, the cost of conversion is 1000x the cost of reaching & identifying the right people.
reply
codyklimdev
2 hours ago
[-]
I like programming as a craft, but the kinds of coding I do for fun and the kinds of coding I do for profit look very, very different. So, for my work programming I use AI a lot more than I do in my own time.

After all, my work (for the moment) is just about pushing features to keep the PMs happy ¯\_(ツ)_/¯

reply
bigyabai
4 hours ago
[-]
After a certain point, I think "craft" becomes a meaningless and self-flattering scapegoat, similar to how people use "taste" or "courage" as an excuse to make boneheaded decisions. Most software customers aren't buying software because it's tasteful or impressively crafted, but because it fills a gap in their workflow. People who obsess over polish often end up missing the forest for the trees.

Plus, code-based meritocracy flat-out doesn't exist outside the FOSS circle. Many of the people you know are clocking-in at a job using a tech stack from 2004, they aren't paid to recognize good craftsmanship. They show up, close some tickets, play Xbox during on-call and collect their paycheck on Friday.

The people who care might be self-selecting for their own failure. It's hard to make money in tech if your passion for craft is your strongest attribute.

reply
simonsarris
5 hours ago
[-]
> if you’re doing this for your own learning: you will learn better without AI.

This is not the distinction I would want to tell newcomers. AI is extremely good for finding out what the most common practices are for all kinds of situations. That's a powerful learning tool. Besides, learning how to use a tool well (one that we can expect professionals to use) is part of learning.

Now most common practices and best practices are two different things. They are often the same, but not always. That's the major caveat for many fields, but if you can keep it in mind, you're going to do OK.

reply
nartho
4 hours ago
[-]
It is a great learning tool for people who are willing to learn and put in the time and effort: ask good questions, double-check everything, read the documentation, and make sure you understand everything before moving on. It's a tremendous tool if used correctly. People who just hit tab or paste everything Claude generates will get worse. The benefit of "the old way" was that even people who didn't want to put in the effort were making some improvement, if only through friction and time spent.
reply
jayd16
5 hours ago
[-]
You can go to a restaurant to see new dishes (which isn't nothing) but it wont exactly teach you how to cook.
reply
pegasus
2 hours ago
[-]
Unless you happen to meet an unendingly patient and helpful cook who is willing to explain the recipe in any depth one desires.
reply
user432678
13 minutes ago
[-]
You still need to do the cooking yourself to master it. If that cook just gives you a readymade dish, can you say you can cook? Although yes, that's the goal for many…
reply
avian
2 hours ago
[-]
You mean the cook who will in the same unendingly patient and helpful manner sometimes confidently suggest putting glue into your dishes and serving your guests rocks for lunch?
reply
pegasus
1 hour ago
[-]
That's part of the cook being helpful. It's how they check that you're not asleep at the wheel and your critical thinking is awake and engaged ;)
reply
viraptor
1 hour ago
[-]
There's a difference between recent frontier coding LLMs and Google doing quick-and-cheap RAG on web results. It's good to understand it before posting cheap shots like this.
reply
square_usual
2 hours ago
[-]
I'm sure these year-old examples of errors from a bad product are still valid!
reply
afro88
3 hours ago
[-]
It's also extremely good at describing what code is doing, explaining architecture, extrapolating why something is done a certain way, etc. Invaluable to me for learning how unfamiliar code works in unfamiliar languages.
reply
BizarroLand
5 hours ago
[-]
This is like telling people to learn how to draw by only looking at the masters' paintings in person instead of tracing and imitating from possibly stolen but otherwise cheap books at home.

I would say to at least just read what the AI does and ask it questions if you don't understand what it did. You can interactively learn software development from AI in a way that you cannot from a human simply because it won't run out of patience even if it will lie to you.

The results depend mostly on how you use it.

reply
tekkk
26 minutes ago
[-]
The blog post is actually much more rational than some of the comments here. There's a fine balance between what I call fetishization of tools and just knowing your craft well. Sometimes we want to use an abstraction even though a simpler approach may be better, because we're in a hurry, want to learn new things, or just don't particularly care.

Who's to judge, if it works and ships on time? Well, probably the fool later down the road who has to maintain it. But I've never believed in gatekeeping or preaching without pragmatism - I'd rather put my energy into teaching what little I can and hope that the joy of seeing things change for the better will motivate them toward learning. If not, well, it's a waste of time either way.

reply
voxleone
1 hour ago
[-]
If I had to give advice to young programmers, it would be this: get comfortable with set theory and its notation. It's interesting that once you collect use cases, sketch the objects, and express everything set-theoretically, the programs almost write themselves. You can even feed the set-theoretic notation directly to an LLM and get code in return. It's a perfectly honest way to incorporate LLMs into your workflow.
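A minimal sketch of that mapping (hypothetical examples; set-builder notation carries over into comprehensions almost verbatim):

```python
# Set-builder notation translates nearly one-to-one into code.

# Spec: E = { n ∈ S | n mod 2 = 0 }  (the even members of S)
def evens(s: set[int]) -> set[int]:
    return {n for n in s if n % 2 == 0}

# Spec: A × B = { (x, y) | x ∈ A, y ∈ B }  (Cartesian product)
def product(a: set, b: set) -> set:
    return {(x, y) for x in a for y in b}
```

Either the notation or the comprehension can serve as the unambiguous spec handed to an LLM.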
reply
armchairhacker
1 hour ago
[-]
I don't think we should shame beginners for using AI. Instead, we should encourage them to try working manually and/or learn fundamentals.

Also, public communities were being overrun with low-quality beginner posts long before AI. Moreover, you can't convince all beginners to stop using AI, because some are malicious and/or socially inept. More than shame or encouragement, we need a filter for low-quality projects (perhaps a combination of AI flagging and manual review). That would benefit both the beginners and the people who are bothered enough to shame them.

reply
Larrikin
5 hours ago
[-]
Why would any actual software engineer be against slopware?

When it inevitably all comes crashing down because there was no actual software architecture or understanding of the code, someone will have to come in to make the actual product.

Hopefully by then we will have realistic expectations for the LLM, have skilled up, and we as a community treat them as just another feature in the IDE.

reply
jayd16
5 hours ago
[-]
Personally, I'd rather make something good instead of cleaning up a mess.

But beyond that, I'm really not looking forward to trying to discover new good libraries, tools, and such in 5 years time. The signal to noise is surely dropping.

reply
throwaway613745
3 hours ago
[-]
New job title: “vibe coding cleanup specialist”
reply
quectophoton
3 hours ago
[-]
Abbreviated as "VC Cleanup Specialist".

With the ambiguity in the meaning of "VC" being intentional.

reply
da_grift_shift
3 hours ago
[-]
Launch HN: Vibely - VC Cleaners For Your VC Slop (YC S25)
reply
koolba
2 hours ago
[-]
I can see this being a HomeJoy style situation (coincidentally actually backed by YC…), where they claim to clean up all your sloppy code for $40, burn through some more VC (extra funny as it’d be spending one VC’s money to try to clean up another VC’s mistakes), give up on AI and evolve into the usual outsourced body shop, and finally fold when everybody involved realizes the business model is not solvent.
reply
gs17
5 hours ago
[-]
> someone will have to come in to make the actual product.

My experience has been more that they expect you to fix the broken mess, not rebuild it properly.

reply
cesarb
1 hour ago
[-]
> Why would any actual software engineer be against slopware? When it inevitably all comes crashing down [...] someone will have to come in to make the actual product.

Why would a window maker be against breaking windows?

reply
zem
50 minutes ago
[-]
because a lot of us care about the software running out there being good quality, given how much of the world depends on it. i would very much rather not see it all come crashing down.
reply
Spivak
12 minutes ago
[-]
I think the problem is that, as a group, people who care about software quality/craft don't actually produce higher quality software. You'll get good quality and garbage software out of the craftsman and pragmatist groups at about equal rates. And folks in the craftsman group tend to have more and stronger opinions which isn't a good or bad thing except that having too many of them on a team can lead to conflict.
reply
danielbln
2 hours ago
[-]
This just reads as copium to me. "Those hacks vomiting out slop.. pah, when they call me, the artisan, in to clean up their mess in a way God intended, then they'll see!"

More realistic: AI-assisted tooling will continue to improve as it has, the average code quality will rise as conventions and workflows improve, and those who wait to be called in to clean up slop will wait forever, pushed to the wayside by those who can deliver great quality with the help of these new tools.

reply
cesarb
2 hours ago
[-]
> will wait forever, pushed by the wayside by those who can deliver great quality with the help of these new tools.

That sounded a lot like the "have fun staying poor" argument from the peak cryptocurrency days.

reply
CamperBob2
2 minutes ago
[-]
No, not really.
reply
danielbln
1 hour ago
[-]
Did it? Cryptocurrency enabled gambling and illicit purchases; that's it. In all other ways it was/is a solution in search of a problem.

Current gen AI has a ton of issues, but it nevertheless enables vast amounts of use cases today, right now.

And hoping that slop that is created today will provide work for the artisanal craftsman in the future is wishful thinking at best.

reply
pbalau
3 hours ago
[-]
> keep the commons clean [from the second link]

A glance at r/Python will show that almost every week there is a new PyPI package generated by AI, with dubious utility.

I did some quick research using bigquery-public-data.pypi.distribution_metadata: out of 844719 packages, 126527 have only one release - almost 15%.

While it's not unfathomable that a chunk of those really only needed one release and/or were manually written, the number is too high. And PyPI is struggling for resources.
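For anyone sanity-checking the figure, the share works out as stated (counts taken from the query results above):

```python
# Reproduce the "almost 15%" share of single-release packages
# from the counts quoted above.
total_packages = 844_719   # packages in pypi distribution_metadata
single_release = 126_527   # packages with exactly one release

share = single_release / total_packages
print(f"{share:.1%} of packages have only one release")  # 15.0%
```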

I wonder how much crap there is on GitHub, and I think this is an even larger issue, with newer versions of LLMs being trained on crap generated by older versions.

reply
viraptor
1 hour ago
[-]
Storage is relatively cheap. Packages with only one release and little usage in the wild will be a rounding error in cost. A few years ago, PyPI was consuming the equivalent of over a million dollars in CDN traffic per month. Storing a million small dead packages is not worth the concern.
reply
pbalau
29 minutes ago
[-]
While my research was very shallow, the issue is with the practice. And I didn't look at how large those packages are.

It might not be a storage problem right now, but the practice of publishing crap is dangerous, because it can easily be abused. I think it would be very easy to publish a lot of very heavy packages via PyPI.

reply
stevesearer
2 hours ago
[-]
The tools I vibe-code with AI fill the same niche as when I'd use spreadsheets for everything rather than real software.

I'm not against taking the time to read the docs, learn to craft code, and ship beautiful projects, but I could have done that before and didn't then either.

The difference is that now I have a hundred small, internal tools that save my team time and energy.

reply
srpinto
1 hour ago
[-]
I've been using Gemini / Antigravity to make a virtual pet for my kids using Love2D / Lua. I've never coded in my life and have no intention of learning (but I've been learning a lot about game systems and logic, which has been fun). The game is coming along very well and it looks very pretty (I'm a professional illustrator), and if I decide to publish it, no one will care that the code was made with AI. It's very high effort, to be honest. You'd have to look at the code to know AI made it.

I now get why so many people are making AI art. I see their "work" as an illustrator and it is absolute slop, but I can see now how it might be fun and even liberating for people who don't make a living from it. So I now think twice before calling AI art "slop". Sure, it may be slop, but it's making a lot of people happy and probably opening up new career paths for them.

And yes, I've been affected financially because of this... but I get it.

reply
petercooper
1 hour ago
[-]
Your point also touches on the idea that new things are being created that might well never have been. Like your virtual pet. You might have been commissioned to illustrate such a thing, but most likely not, and it wouldn't have been "yours." It reminds me of when desktop publishing, MIDI sequencers, or PowerPoint took off and people produced all sorts of things that were largely not of high artisanal quality, but it was new stuff, people got personal value from it (it was harder to spread stuff around pre-Internet), and the tools eventually matured into what all the pros now use anyway.

That said, I concede the critics have many valid points and concerns and it’s going to be interesting to see how we navigate this flood of “stuff” at a scale never seen before. (I suspect it’ll end up like YouTube and video. Ultra long tail. Most stuff never seeing more than a few eyeballs and a smaller group getting the lion’s share of attention, as with most things. Did YouTube change TV and video production more generally? Yes! But it also didn’t destroy it..)

reply
simonw
5 hours ago
[-]
> if you’re doing this for your own learning: you will learn better without AI.

I'm certain that's not true. AI is the single biggest gift we could possibly give to people who are learning to program - it's shaved the learning curve down to the point where you don't need to carve out six months of your life just to be able to build something small and useful that works.

AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.

100% rejecting AI as a learner programmer may feel like the right thing to do, but at this point it's similar to saying "I'm going to learn to program without ever Googling for anything at all".

(I do not yet know how to teach people to learn effectively with AI though. I think that's a very important missing piece of this whole puzzle.)

I'm a BIG fan of these three points though:

  rewrite the parts you understand
  learn the parts you don’t
  make it so you can reason about every detail
If you are learning to program you should have a very low tolerance for pieces that you don't understand, especially since we now have a free 24/7 weird robot TA that we can ask questions of.
reply
happytoexplain
5 hours ago
[-]
I think it's a pretty small generosity to implicitly extend what the author is saying to "you will learn better without generating your code". I don't know if that's what they meant, but AI is certainly a good tool for learning how things work and seeing examples (if you don't blindly trust everything it says and use other sources too).
reply
simonw
5 hours ago
[-]
That's fair. I also just noticed that the sentence before the bit I quoted is important:

> AI overuse hurts you:

> - if you’re doing this for your own learning: you will learn better without AI.

So they're calling out "AI overuse", and I agree with that - that's where the skill comes in of deciding how to use AI to help your learning in a way that doesn't damage that learning process.

reply
raincole
5 hours ago
[-]
I think the parallel is photobashing. I've seen art teachers debating how early a student should start photobashing. Everyone knows it's a widely adopted technique in the industry, but some consider it harmful for beginners.

Needless to say, there is no consensus. I personally err on the side of photobashing.

reply
marcosdumay
1 hour ago
[-]
Rejecting AI as a learning tool looks to me like a very good policy in an academic setting.

And the completely wrong decision in a hobby setting.

reply
TheAceOfHearts
4 hours ago
[-]
It cannot be understated how much of a boon AI-assisted programming has been for getting stuff up and running. Once you get past the initial hurdle of setting up an environment along with any boilerplate, you can actually start running code and iterating in order to figure out how something works.

Cognitive bandwidth is limited, and if you need to fully understand and get through 10 different errors before anything works, that's a massive barrier to entry. If you're going to be using those tools professionally then eventually you'll want to learn more about how they work, but frontloading a bunch of adjacent tooling knowledge is the quickest way to kill someone's interest.

The standard choice isn't usually between a high-quality project and slopware, it's between slopware or nothing at all.

reply
raincole
4 hours ago
[-]
> It cannot be understated

You mean it cannot be overstated?

reply
indigodaddy
4 hours ago
[-]
You got ‘em!!
reply
tjr
5 hours ago
[-]
AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.

I think that's very important.

Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.

I have found AI to be a great tool for learning, but I see it -- me, personally -- as a very slippery slope into not learning at all. It is so easy, so trivial, to produce a (seemingly accurate) answer to just about any question whatsoever, no matter how mundane or obscure, that I can really barely engage my own thinking at all.

On one hand, with the goal of obtaining an answer to a question quickly, it's awesome.

On the other hand, I feel like I have learned almost nothing at all. I got precisely, pinpointed down, the exact answer to the question I asked. Going through more traditional means of learning -- looking things up in books, searching web sites, reading tutorials, etc. -- I end up with my answer, but I also end up with more context, and a deeper+broader understanding of the overall problem space.

Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.

I feel like that is both good and bad. I don't want to be too dismissive of the good, but I also feel like it would be unwise to ignore the bad.

Whoa hey though, isn't this just exactly like books? Didn't, like, Plato and all them Greek cats centuries ago say that writing things down would ruin our brains, and what I'm claiming here is 100% the same thing? I don't think so. I see it as a matter of scale. It's a similar effect -- you probably do lose something (whether if it's valuable or not is debatable) when you choose to rely on written words rather than memorize. But it's tiny. With our modern AI tools, there is potential to lose out on much more. You can -- you don't have to, but you can -- do way more coasting, mentally. You can pretty much coast nonstop now.

reply
simonw
5 hours ago
[-]
> Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.

I think you learned something critically important: that the thing you wanted to build is feasible to build.

A lot of ideas people have are not possible to build. You can't prove a negative but you CAN prove a positive: seeing a version of the thing you want to exist running in front of you is a big leap forward from pondering if it could be built.

That's a useful thing to learn.

The other day, at brunch, I had Claude Code on my phone add webcam support (with pinch-to-zoom) to my is-it-a-bird CLIP-in-your-browser app: https://tools.simonwillison.net/is-it-a-bird. I didn't even have to look at the code it wrote to learn that it's possible for Mobile Safari to render the webcam input in a box on the page (not full screen) and to have a rough pinch-to-zoom mechanism work. It's pixelated, not actual-camera-zoom, but for a CLIP app that's fine because the zoom is really just to try and exclude things from the image that aren't a potential bird.

(The prompts I used for this are quoted in the PR description: https://github.com/simonw/tools/pull/175)
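
The pinch-to-zoom described here is a display-level zoom rather than real camera zoom, so its core is just the ratio of the distance between the two touch points now versus when the gesture started. A minimal sketch of that calculation (the function and names are illustrative, not taken from the actual app):

```python
import math

def pinch_scale(start_a, start_b, now_a, now_b):
    """Scale factor for a two-finger pinch gesture.

    Each argument is an (x, y) touch point; the zoom factor is the
    ratio of the current finger distance to the starting distance.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    return dist(now_a, now_b) / dist(start_a, start_b)

# Fingers that start 100px apart and spread to 200px apart -> 2x zoom.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```

Applying that factor as a CSS transform scales the already-captured pixels, which is why the result is pixelated rather than optical zoom.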

> Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.

100% agree with that. You need a lot of self-discipline to learn effectively with AI. I'd argue you need self-discipline to learn via other means as well though.

reply
codr7
5 hours ago
[-]
A robot TA that gives the wrong answer 50% of the time isn't very helpful.
reply
simonw
4 hours ago
[-]
Right, but this is a TA that gives a wrong answer more like 10% of the time (or less).

I think it's possible that for learning a 90% accuracy rate is MORE helpful than 100%. If it gets things wrong 1/10th of the time it means you have to think critically about everything it tells you. That's a much better way to approach any source of information than blindly trusting it.

The key to learning is building your own robust mental model, from multiple sources of information. Treat the LLM as one of those sources, not the exclusive source, and you should be fine.

reply
Neywiny
5 hours ago
[-]
You need to choose another field if it takes you 6 months to hello world
reply
happytoexplain
5 hours ago
[-]
Don't speak like this to people. Also, don't put words in people's mouths (they didn't say "hello world").
reply
simonw
5 hours ago
[-]
I deliberately didn't say "hello world", I said "build something small that works" - I'm editing my post now to add the words "and useful".
reply
haolez
5 hours ago
[-]
The author probably meant "coding without AI", not "learning without AI".
reply
i_love_retros
1 hour ago
[-]
I'm not sure if I am just losing my mind at this point, but all this slop everywhere is starting to be funny.

I honest to god am in Teams chats at work with high-up-the-food-chain architects and leaders (and plain old devs), and people are pasting ChatGPT responses either as evidence backing up their claims of how something should be done, or as an actual response to another person, as if they typed it themselves.

I have people sending me documents they "put together" that are clearly ChatGPT-generated, tables and emojis included.

Is this progress?

reply
miningape
1 hour ago
[-]
Do we work at the same company?
reply
noman-land
5 hours ago
[-]
As a practitioner I inherently believe in well-written software, but as a lifelong learner I know that things change and evolve. There is absolutely no reason why software today has to be written like software of yesterday.

There is no need to be so prescriptive about how software is made. In the end the best will win on the merits. The bad software will die under its own weight with no think pieces necessary.

On the other hand, code might be becoming more like clay than like LEGO bricks. The sculptor is not minding each granule.

We don't know yet if there's long-term merit in this new way of crafting software, and telling people not to try it both won't work and, honestly, looks like old people yelling at clouds.

reply
smolder
5 hours ago
[-]
> There is absolutely no reason why software today has to be written like software of yesterday.

I get what you're saying, but the irony is that AI tools have sort of frozen the state of the art of software development in time. There is now less incentive to innovate on language design, code style, patterns, etc., when it goes outside the range of what an LLM has been trained on and will produce.

reply
williamcotton
20 minutes ago
[-]
I have been writing my own DSL (and LSP) for making web apps and LLMs can do a pretty decent job of writing this new language.

https://github.com/williamcotton/webpipe/tree/webpipe-2.0

https://github.com/williamcotton/webpipe-lsp/tree/webpipe-2....

reply
ctoth
3 hours ago
[-]
> frozen the state of the art

Personally I am experimenting with a lot more data-driven, declarative, correct-by-construction work by default now.

AI handles the polyglot grunt work, which frees you to experiment above the language layer.

I have a dimensional analysis typing metacompiler that enforces physical unit coherence (length + time = compile error) across 25 languages. 23,000 lines of declarative test specs compile down to language-specific validation suites. The LLM shits out templates; it never touches the architecture.
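
The metacompiler described above reportedly enforces this at compile time across many target languages; the essence of dimensional-unit checking can still be sketched in a few lines. This runtime version is purely an illustration, and every name in it is hypothetical:

```python
class Quantity:
    """A value tagged with physical dimensions, e.g. {"m": 1} for length
    or {"s": 1} for time. Addition demands identical dimensions, mirroring
    the "length + time = error" rule (checked here at runtime rather than
    compile time)."""

    def __init__(self, value, dims):
        self.value = value
        # Drop zero exponents so e.g. m/s * s canonicalizes to plain m.
        self.dims = {k: v for k, v in dims.items() if v != 0}

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} + {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication adds exponents: m * s^-1 -> m/s.
        dims = dict(self.dims)
        for k, v in other.dims.items():
            dims[k] = dims.get(k, 0) + v
        return Quantity(self.value * other.value, dims)

length = Quantity(3.0, {"m": 1})
duration = Quantity(2.0, {"s": 1})
speed = length * Quantity(0.5, {"s": -1})  # fine: dims become {"m": 1, "s": -1}
# length + duration                        # raises TypeError: dimension mismatch
```

A compile-time version moves the same bookkeeping into the type system, so incoherent unit arithmetic never reaches a running binary.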

We are still at very very early days.

Specs for my hobby physical types metacompiler tests:

https://gist.github.com/ctoth/c082981b2766e40ad7c8ad68261957...

reply
noman-land
5 hours ago
[-]
Citation needed. I see no reason at all why that's true any more than the screwdriver freezing the state of home design in time.
reply
smolder
4 hours ago
[-]
LLMs aren't like a screwdriver at all; the analogy doesn't work. I think I was clear: LLMs aren't useful outside the domain of what they were trained on. They are copycats. To really innovate on software design means going outside what has been done before, which an LLM won't help you do.
reply
mccoyb
2 hours ago
[-]
No, you weren't clear, nor are you correct: you shared FUD about something it seems you have not tried, because testing your claims with a recent agentic system would dispel them.

I've had great success teaching Claude Code to use DSLs I've created in my research. Trivially, it has never seen exactly these DSLs before -- yet it has correctly created complex programs using those DSLs, and indeed -- they work!

Have you had frontier agents work on programs in "esoteric" (unpopular) languages (pick: Zig, Haskell, Lisp, Elixir, etc)?

I don't see clarity, and I'm not sure if you've tried any of your claims for real.

reply
tikhonj
2 hours ago
[-]
> In the end the best will win on the merits.

The last six decades of commercial programming don't exactly bear this out...

The real lesson is that writing software is such a useful, high-leverage activity that even absolutely awful software can be immensely valuable. But that doesn't tell us that better software is useless, it just tells us it is not absolutely necessary.

reply
stuffn
5 hours ago
[-]
Software engineers are desperate to have their work be like machining aircraft parts.

It’s a tool. No one cares about code quality because the person using your code isn’t affected by it. There are better and worse tools. No one cares whether a car is made with SnapOn tools or milled on HAAS machines. Only that it functions.

We know there is no long-term merit to this idea just by looking back at the last 40 years of coding.

reply
idiotsecant
2 hours ago
[-]
I suspect the effect of AI will be a sort of eternal September, but for software. It's a phase change in how a lot of software will be written and used. There are a lot of applications right now for software where crappy software that mostly gets the job done in a specific way is Good Enough. There's going to be a lot of software written by LLMs that maybe barely compiles and doesn't handle edge cases and has weird behavior but gets a small, specific job done. This is a good thing, since maybe the previous alternative was doing something manually, or not at all.

There will still be major, fundamental, foundational software work for serious engineers to do, but we have to admit that most software needed in the world is not that.

reply
GaryBluto
1 hour ago
[-]
"Stop making software that is buried under heaps of noise," says the 1-page HTML document that somehow has 5 build dependencies listed in its Git repository. I don't think I've ever seen anything so tone deaf in my life.

https://codeberg.org/ficd/stopslopware.net

reply
wkjagt
23 minutes ago
[-]
Two are optional, and the other 3 are just for a small script to convert the one markdown file to HTML and push it live (of which one (awk) you likely already have on your system). It's actually pretty minimalist. Most tone deaf thing you've seen in your life seems a bit over the top.
reply
GaryBluto
20 minutes ago
[-]
> Two are optional, and the other 3 are just for a small script to convert the one markdown file to HTML and push it live (of which one (awk) you likely already have on your system).

None of which are needed. Why is an intermediate step needed?

> Most tone deaf thing you've seen in your life seems a bit over the top.

To complain about overcomplication via an overcomplicated project is a tad rich.

reply
magnitudes
5 hours ago
[-]
I find these not only condescending but also wide of the mark; ironically, they come off as uneducated, and that is the most infuriating type of condescension. What you call slopware today is becoming less and less sloppy every six months as new coding models drop. In 2 years the “unmaintainable mess” is going to be far better and far more maintainable than anything the engineers behind these snide websites will make. Do folks realize you can also use the same coding models to ask questions and reason about the “slop” they are writing -- which somehow lets you do things you could never have done before? I don’t really care if it’s 100% accurate; hit it with a hammer until everything makes sense. Yell at Claude and learn how to wrangle it to get what you want. That skill is an investment that will pay you back far more than following the advice of these folks. That’s my opinion.

Like “you will learn better without AI” is just a bad, short-sighted opinion dressed up in condescension to appear wise and authoritative.

Learn your tools, learn the limitations, understand where this is going, do the things you want to do, and then realize: “hey, my opinions don’t have to be condescendingly preached to other people as though they are facts.”

reply
simonw
5 hours ago
[-]
The author wrote an essay to accompany this site here: https://ficd.sh/blog/your-project-sucks/
reply
yuriks
4 hours ago
[-]
This post mirrors my sentiment, and the reasons I dislike these sorts of "projects", much more closely than the main site does; in retrospect, it deserved to be the main submission.
reply
dang
5 hours ago
[-]
We'll put that link in the top text, thanks.
reply
charlesabarnes
5 hours ago
[-]
My reflex is to call the website useless because the problem isn't usually software produced by individuals. My problem is the buggy messes that trillion dollar corporations produce.
reply
Jach
2 hours ago
[-]
Indeed, Adobe continues shipping their slop and, by all accounts of those who keep using it, continues to make it worse; this was all in motion well before modern AI tools. If anything, I bet AI tools are more likely to make things better.

I read an anti-AI opinion recently where the author sensed that a small open source project was using AI (and was right; it was easy to see by noticing Claude in the commit history), but what tipped their sense was that there were too many "professionalisms" in the code base. Oh no, more code that has documentation and tests and benchmarks, the horror!

I also just can't take these "it's stinking up the commons" posts all that seriously -- like, where have you been the last couple decades? The baseline seems as likely to improve as to degrade more than it already has. Even the spam of useless pull requests isn't new; the peak of that was probably at peak MOOC popularity some years ago, when classes would require submitting pull requests to some open source project as an assignment.
reply
BizarroLand
5 hours ago
[-]
I mean, look at Microsoft. They're a tiny little 5 trillion dollar company and their own cloud storage software can't reliably extract zip files compressed with their own compression software on their own flagship operating system.

How dare some nobody in a third world country use AI resources to accelerate the development of some process that fixes an issue for them and occasionally ask you to buy them a coffee when a poor sad pathetic evil worthless hateful disgusting miserable useless 5 trillion dollar company that actively hates you does the same thing with worse results that makes your life more miserable while lining their pockets with every penny in the entire world?!

reply
tonymet
5 hours ago
[-]
If man-made software were high quality, this problem would resolve itself, because “slopware” would be easily distinguishable.

The best way to resolve this is to write man-made software that’s good quality.

It’s just a tool.

reply
sergiotapia
2 hours ago
[-]
If you are an experienced software developer, AI lets you fly and build things much faster.

If you are a new software developer, I don't see how you develop taste and experience when everything is an <ENTER> away.

I think we are the last generation of engineers who give a fuck, tbh.

reply
colesantiago
5 hours ago
[-]
My “slopware” brings in $200K a month.

As long as it works and people’s problems are solved, I don’t see any issue with it?

reply
abound
5 hours ago
[-]
This seems directed at people sharing low-effort AI-generated open source projects.
reply
legedemon
5 hours ago
[-]
If possible, could you please explain what your slopware does?
reply
happytoexplain
5 hours ago
[-]
You don't seem to be responding to anything the site says.
reply
indigodaddy
5 hours ago
[-]
What's your software?
reply
singularity2001
5 hours ago
[-]
Written by AI ;)

At least it feels a bit like it

reply
mpalmer
5 hours ago
[-]
You can't be serious.

> When you publish something under the banner of open–source, you implicitly enter a stewardship role. You’re not just shipping files, you’re making a contribution to a shared commons. That carries certain responsibilities: clarity about purpose, honesty about limitations, and a basic alignment with the community’s collaborative ethos.

(from the second link)

You're not just writing angry screeds, you are producing slop prose and asking us to spend our time reading it.

How is this not an implicit repudiation of your entire argument? Are you not hurting yourself by avoiding learning how to write better?

reply
adrianmalacoda
5 hours ago
[-]
That assertion is also highly debatable. To me no part of releasing software under a free license implies a "stewardship role."
reply
simonw
5 hours ago
[-]
I'm going to give the author the benefit of the doubt here. Not all "you're not X, you're Y" was written by an LLM.
reply
happytoexplain
4 hours ago
[-]
I can't tell what you're referring to. The quote reads well.
reply
zahlman
1 hour ago
[-]
It invokes a couple of classic "LLM writing red flag" tropes. But they're ones that are reasonably appropriate in context.

I have caught myself on occasion rewriting to avoid looking too much like an LLM. But I've also introduced em-dashes to my writing — here's a gratuitous example just for fun — simply because the LLM slop writing discourse prompted me to research the X11 input system.

reply