I like problem solving too. But I also like theory and craft, and in my naïveté I assumed most of us were like me. LLMs divorced craft-programming from tool-programming and now it seems like there were never any craft-programmers at all.
It feels like the group I was part of was just a mirage, a historical accident. Maybe craft-painters felt the same way about the camera.
When I was growing up as a programmer I observed JavaScript and swore I would never, ever touch webdev. As I got older that list kept growing to include mobile, then Apple, then desktop in general.
I like my little world of C where everything must be crafted with care or it just plain doesn't work.
And it's not that there isn't any embedded slopware, but the constraints are so much tighter that you can't really get away with the same level of bad code on 1 KB of RAM and 4 KB of flash that you can on a 32-core desktop CPU with practically infinite resources.
I hope you track the progress so you're not surprised one day. The research side is way past embedded C; VHDL was already of interest a year ago (https://dl.acm.org/doi/10.1145/3670474.3685966). In embedded code, recent LLMs can do just fine with popular architectures. Whether embedded C works or not comes down to the spec and the test harness. The moat is not that big.
This reminds me of the Google/Oracle Java case where one of the example "copied code" fragments was some trivial code with null guards. Anyone could write this and end up with the same code. Human/LLM/whatever doesn't matter. That fragment just needed to exist.
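(To illustrate: a made-up C stand-in for that flavor of fragment, not the actual Java code from the case. There is essentially one natural way to write it, whoever or whatever writes it.)

```c
/* Hypothetical stand-in for the kind of trivial guard code at issue:
   not the fragment from the case, just the same flavor. */
#include <stddef.h>

int get_item(const int *items, size_t len, size_t index, int *out) {
    if (items == NULL || out == NULL)
        return -1;            /* null guard */
    if (index >= len)
        return -1;            /* bounds guard */
    *out = items[index];
    return 0;
}
```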
Most of the time, I write software not to write software, but to solve a problem somewhere, often one not related to software itself. Other times I feel like the UX of some dev tool is bad, and if I just quickly fix that, I can solve my problem faster, so down the rabbit hole we go, which is a different type of experience.
Other times I'm focused on figuring out an elegant design/architecture for something that isn't problem solving, but more of a "neat piece of software", either a library or some other kind of component that needs an interface, be it a library API or an actual UI. Then I'll go into "craftsmanship" mode, and most of the work actually happens away from the computer, mostly with pen and paper or whiteboards.
I still think the latter is needed to improve at the former, and high-quality, easy-to-maintain code is more important than ever. If you only do the former, you'll get stuck at a ceiling; if you only do the latter, you'll also get stuck if there isn't an actual need for it (at some level, "fun" can be a need).
An example. I've been writing a Lisp, and I'm using GNU Readline for text input. Later I found out that Readline can't be built for WebAssembly, and I decided to have Claude write a podunk replacement for it. I now have a bit of code in my Git history, attached to my name, that I didn't write.
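(For a sense of scale, the kind of thing I mean is roughly the sketch below; it's an illustration of the idea, not the code Claude actually produced.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for readline(): print a prompt, read a line with
   fgets, grow the buffer if needed, strip the newline. No history, no
   editing, just enough for a REPL. Name and details are made up. */
char *podunk_readline(const char *prompt) {
    if (prompt) {
        fputs(prompt, stdout);
        fflush(stdout);
    }
    size_t cap = 128, len = 0;
    char *buf = malloc(cap);
    if (!buf)
        return NULL;
    while (fgets(buf + len, (int)(cap - len), stdin)) {
        len += strlen(buf + len);
        if (len > 0 && buf[len - 1] == '\n') {
            buf[len - 1] = '\0';   /* strip the trailing newline */
            return buf;
        }
        cap *= 2;                  /* line longer than the buffer: grow it */
        char *tmp = realloc(buf, cap);
        if (!tmp) {
            free(buf);
            return NULL;
        }
        buf = tmp;
    }
    if (len > 0)
        return buf;                /* EOF after a partial line */
    free(buf);
    return NULL;                   /* EOF with nothing read */
}
```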
What did I lose by doing that? My goal wasn't "to write a Readline", that's why I was using it in the first place. But my goal also wasn't "to have a working Lisp interpreter" or even like "to know how a Lisp interpreter works". It was a desire to Know More. Surely I'd have learned something useful (in some form) by doing all the minutiae myself. Or would I have learned more by doing none of it and printing out the SBCL source to read over coffee?
Sorry, I ended up rambling. I don't have any answers. I think I'm just butthurt by the "X, Y" sort of comments you mentioned, and the solution is (as always) to touch grass.
LLM code provides just another available level of abstraction at which we can stop learning, not really something entirely new.
I think most devs, especially ones that call it a "craft", take themselves too seriously. We're glorified construction workers who get paid a lot.
That attitude is my point. I'm a developer by trade; I have a different set of feelings and concerns about "the industry" and how new tooling will affect it. (I even use it sometimes at work.) But I'm also a computer scientist and I thought more of you all were too.
To beat my original analogy to death: I thought this was a painting forum, but it's more of a "making pictures" forum, and now that it's easier to make a picture, no one cares about paintbrushes.
The space for innovation in computer science is pretty limited when it comes to how we build things, considering how fully featured libraries and frameworks already are. Literally everything I've built in my career has been built better by someone else. If I am a scientist, it's the person in the lab making 10,000 flu shots a day.
I'll just say that my career in the industry as a programmer has made me very good at working on my car, and I think the two are more intertwined and connected than anything theoretical.
It’s more like the discussion space to talk about things related to the painters of hotel art.
My point is that the end product matters most, and getting wrapped up in any other part of the process for its own sake is a failing, or at best a distraction - in both cases.
Furthermore, given how they behave in a cult-like way, it feels like they are straight-up delusional.
People working as consultants for big retail chains, talking all day long about "the craft". Nobody cares. They sell trash. They don't put marble in their stores. They don't want fancy software. Furthermore, if, by trying to force "the craft" on their peers, all they do is make the lives of others miserable... Just stop. Please.
Now my approach is as follows:
if stakeholders are only interested in two things (how much it costs, when it will be ready), which is 99% of cases at $JOB, then make something that does the job and that won't make you hate yourself if you have to maintain it in a year
if I'm the stakeholder, like when creating internal tooling that nobody asked for but that will solve issues, then yes, I make things as good as I want them to be
same for working on FOSS on my personal time
Nowadays, the craft can be practiced at your home, by yourself.
Or just about making money :(
I’ve always enjoyed the craft of software engineering, though even I admit the culture around it can be a bit overly contemplative.
Nevertheless, there is room for both personalities. Just hang out with like-minded people and ignore the rest.
But the original analogy is flawed too. I wouldn't consider caring about the craft of programming to be similar to obsessing over your photography equipment. GAS is about consumerism and playing with gadgets, at the end of the day.
Caring about the craft of programming is more about being an artist who takes pride in crafting something beautiful, even if they're the only one experiencing it. I am most definitely not one of those programmers, but I have always had nothing but immense respect for those who are.
Or find ways to integrate with the rest, challenging one another to facilitate growth.
After all, my work (for the moment) is just about pushing features to keep the PMs happy ¯\_(ツ)_/¯
Plus, code-based meritocracy flat-out doesn't exist outside the FOSS circle. Many of the people you know are clocking in at a job using a tech stack from 2004; they aren't paid to recognize good craftsmanship. They show up, close some tickets, play Xbox during on-call and collect their paycheck on Friday.
The people who care might be self-selecting for their own failure. It's hard to make money in tech if your passion for craft is your strongest attribute.
This is not the distinction I would want to tell newcomers. AI is extremely good for finding out what the most common practices are for all kinds of situations. That's a powerful learning tool. Besides, learning how to use a tool well (one that we can expect professionals to use) is part of learning.
Now, the most common practices and the best practices are two different things. They often coincide, but not always. That's the major caveat for many fields, but if you can keep it in mind, you're going to do OK.
I would say to at least just read what the AI does and ask it questions if you don't understand what it did. You can interactively learn software development from AI in a way that you cannot from a human simply because it won't run out of patience even if it will lie to you.
The results depend mostly on how you use it.
Who's to judge if it works and ships on time? Well, the fool later down the road who has to maintain it, probably. But I've never believed in gate-keeping or preaching without pragmatism - I'd rather put my energy into teaching what little I can and hope that the joy of seeing things improve for the better will motivate them towards learning. If not, well, it's a waste of time either way.
Also, public communities have been overrun with low-quality beginner posts for longer than AI has existed. Moreover, you can't convince all beginners to stop using AI, because some are malicious and/or socially inept. More than shame or encouragement, we need a filter for low-quality projects (perhaps a combination of AI flagging and manual review). That would benefit both the beginners and the people who shame them because they're bothered.
When it inevitably all comes crashing down because there was no actual software architecture or understanding of the code, someone will have to come in to make the actual product.
Hopefully by then we will have realistic expectations for LLMs, will have skilled up, and as a community will treat them as just another feature in the IDE.
But beyond that, I'm really not looking forward to trying to discover new good libraries, tools, and such in 5 years' time. The signal-to-noise ratio is surely dropping.
With the ambiguity in the meaning of "VC" being intentional.
My experience has been more that they expect you to fix the broken mess, not rebuild it properly.
Why would a window maker be against breaking windows?
More realistic: AI-assisted tooling will continue to improve as it has, average code quality will rise as conventions and workflows improve, and those who wait to be called in to clean up slop or whatever will wait forever, pushed to the wayside by those who can deliver great quality with the help of these new tools.
That sounded a lot like the "have fun staying poor" argument from the peak cryptocurrency days.
Current gen AI has a ton of issues, but it nevertheless enables vast amounts of use cases today, right now.
And hoping that slop that is created today will provide work for the artisanal craftsman in the future is wishful thinking at best.
A glance at r/Python will show that almost every week there is a new PyPI package generated by AI, of dubious utility.
I did some quick research using bigquery-public-data.pypi.distribution_metadata: out of 844,719 packages, 126,527 have only one release, almost 15%.
While it is not unfathomable that a chunk of those really only needed one release and/or were written manually, the number is too high. And PyPI is struggling for resources.
I wonder how much crap there is on GitHub, and I think this is an even larger issue, with new versions of LLMs being trained on crap generated by older versions.
It might not be a storage problem right now, but the practice of publishing crap is dangerous, because it can be easily abused. I think it is very easy to publish a lot of very heavy packages via PyPI.
I'm not against taking the time to read the docs, learn to craft code, and ship beautiful projects, but I could have done that before and didn't then either.
The difference is that now I have a hundred small, internal tools that save my team time and energy.
I now get why so many people are making AI art. As an illustrator, I see their "work" and it is absolute slop, but I can now see how it might be fun and even liberating for people who don't make a living with it. So I now think twice before calling AI art "slop". Sure, it may be slop, but it's making a lot of people happy and probably opening up new career paths for people.
And yes, I've been affected financially because of this... but I get it.
That said, I concede the critics have many valid points and concerns and it’s going to be interesting to see how we navigate this flood of “stuff” at a scale never seen before. (I suspect it’ll end up like YouTube and video. Ultra long tail. Most stuff never seeing more than a few eyeballs and a smaller group getting the lion’s share of attention, as with most things. Did YouTube change TV and video production more generally? Yes! But it also didn’t destroy it.)
I'm certain that's not true. AI is the single biggest gift we could possibly give to people who are learning to program - it's shaved that learning curve down to a point where you don't need to carve out six months of your life just to get to a point where you can build something small and useful that works.
AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.
100% rejecting AI as a learner programmer may feel like the right thing to do, but at this point it's similar to saying "I'm going to learn to program without ever Googling for anything at all".
(I do not yet know how to teach people to learn effectively with AI though. I think that's a very important missing piece of this whole puzzle.)
I'm a BIG fan of these three points though:
- rewrite the parts you understand
- learn the parts you don’t
- make it so you can reason about every detail
If you are learning to program you should have a very low tolerance for pieces that you don't understand, especially since we now have a free 24/7 weird robot TA that we can ask questions of.
> AI overuse hurts you:
> - if you’re doing this for your own learning: you will learn better without AI.
So they're calling out "AI overuse", and I agree with that - that's where the skill comes in of deciding how to use AI to help your learning in a way that doesn't damage that learning process.
Needless to say, there is no consensus. I err on the side of photobashing personally.
And the completely wrong decision in a hobby setting.
Cognitive bandwidth is limited, and if you need to fully understand and get through 10 different errors before anything works, that's a massive barrier to entry. If you're going to be using those tools professionally then eventually you'll want to learn more about how they work, but frontloading a bunch of adjacent tooling knowledge is the quickest way to kill someone's interest.
The standard choice isn't usually between a high-quality project and slopware, it's between slopware and nothing at all.
You mean it cannot be overstated?
I think that's very important.
Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.
I have found AI to be a great tool for learning, but I see it -- me, personally -- as a very slippery slope into not learning at all. It is so easy, so trivial, to produce a (seemingly accurate) answer to just about any question whatsoever, no matter how mundane or obscure, that I can really barely engage my own thinking at all.
On one hand, with the goal of obtaining an answer to a question quickly, it's awesome.
On the other hand, I feel like I have learned almost nothing at all. I got precisely, pinpointed down, the exact answer to the question I asked. Going through more traditional means of learning -- looking things up in books, searching web sites, reading tutorials, etc. -- I end up with my answer, but I also end up with more context, and a deeper+broader understanding of the overall problem space.
Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
I feel like that is both good and bad. I don't want to be too dismissive of the good, but I also feel like it would be unwise to ignore the bad.
Whoa hey though, isn't this just exactly like books? Didn't, like, Plato and all them Greek cats centuries ago say that writing things down would ruin our brains, and what I'm claiming here is 100% the same thing? I don't think so. I see it as a matter of scale. It's a similar effect -- you probably do lose something (whether it's valuable or not is debatable) when you choose to rely on written words rather than memorize. But it's tiny. With our modern AI tools, there is potential to lose out on much more. You can -- you don't have to, but you can -- do way more coasting, mentally. You can pretty much coast nonstop now.
I think you learned something critically important: that the thing you wanted to build is feasible to build.
A lot of ideas people have are not possible to build. You can't prove a negative but you CAN prove a positive: seeing a version of the thing you want to exist running in front of you is a big leap forward from pondering if it could be built.
That's a useful thing to learn.
The other day, at brunch, I had Claude Code on my phone add webcam support (with pinch-to-zoom) to my is-it-a-bird CLIP-in-your-browser app (https://tools.simonwillison.net/is-it-a-bird). I didn't even have to look at the code it wrote to learn that it's possible for Mobile Safari to render the webcam input in a box on the page (not full screen) and to have a rough pinch-to-zoom mechanism work - it's pixelated, not actual-camera-zoom, but for a CLIP app that's fine because the zoom is really just to try and exclude things from the image that aren't a potential bird.
(The prompts I used for this are quoted in the PR description: https://github.com/simonw/tools/pull/175)
> Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
100% agree with that. You need a lot of self-discipline to learn effectively with AI. I'd argue you need self-discipline to learn via other means as well though.
I think it's possible that for learning a 90% accuracy rate is MORE helpful than 100%. If it gets things wrong 1/10th of the time it means you have to think critically about everything it tells you. That's a much better way to approach any source of information than blindly trusting it.
The key to learning is building your own robust mental model, from multiple sources of information. Treat the LLM as one of those sources, not the exclusive source, and you should be fine.
I honest to god am in Teams chats at work with architects and leaders high up the food chain (and plain old devs), and people are pasting ChatGPT responses either as evidence backing up their claims of how something should be done, or as an actual response to another person, as if they typed it themselves.
I have people sending me documents they "put together" that are clearly ChatGPT-generated, tables and emojis included.
Is this progress?
There is no need to be so prescriptive about how software is made. In the end the best will win on the merits. The bad software will die under its own weight with no think pieces necessary.
On the other hand, code might be becoming more like clay than like LEGO bricks. The sculptor is not minding each granule.
We don't know yet if there's long-term merit in this new way of crafting software, and telling people not to try it both won't work and, honestly, looks like old people yelling at clouds.
I get what you're saying, but the irony is that AI tools have sort of frozen the state of the art of software development in time. There is now less incentive to innovate on language design, code style, patterns, etc., when it goes outside the range of what an LLM has been trained on and will produce.
https://github.com/williamcotton/webpipe/tree/webpipe-2.0
https://github.com/williamcotton/webpipe-lsp/tree/webpipe-2....
Personally I am experimenting with a lot more data-driven, declarative, correct-by-construction work by default now.
AI handles the polyglot grunt work, which frees you to experiment above the language layer.
I have a dimensional analysis typing metacompiler that enforces physical unit coherence (length + time = compile error) across 25 languages. 23,000 lines of declarative test specs compile down to language-specific validation suites. The LLM shits out templates; it never touches the architecture.
We are still in the very, very early days.
Specs for my hobby physical types metacompiler tests:
https://gist.github.com/ctoth/c082981b2766e40ad7c8ad68261957...
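(For anyone wondering what "length + time = compile error" looks like at the smallest possible scale, here is a hand-rolled C sketch of the principle. This toy is not the metacompiler, just the bare idea it scales up: give each dimension its own type so mixed-unit arithmetic fails to compile.)

```c
/* Toy version of dimensional typing: wrap each physical dimension in its
   own struct type so the compiler rejects mixed-unit arithmetic. */
#include <stdio.h>

typedef struct { double v; } meters;
typedef struct { double v; } seconds;

static meters  add_meters(meters a, meters b)    { return (meters){ a.v + b.v }; }
static seconds add_seconds(seconds a, seconds b) { return (seconds){ a.v + b.v }; }

int main(void) {
    meters  d1 = { 3.0 }, d2 = { 4.0 };
    seconds t1 = { 5.0 }, t2 = { 2.0 };

    meters  dist = add_meters(d1, d2);       /* fine: same dimension */
    seconds dur  = add_seconds(t1, t2);      /* fine: same dimension */
    /* meters bad = add_meters(d1, t1); */   /* compile error: incompatible type */

    printf("%.1f m over %.1f s\n", dist.v, dur.v);
    return 0;
}
```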
I've had great success teaching Claude Code to use DSLs I've created in my research. Trivially, it has never seen exactly these DSLs before -- yet it has correctly created complex programs using those DSLs, and indeed -- they work!
Have you had frontier agents work on programs in "esoteric" (unpopular) languages (pick: Zig, Haskell, Lisp, Elixir, etc)?
I don't see clarity, and I'm not sure if you've tried any of your claims for real.
The last six decades of commercial programming don't exactly bear this out...
The real lesson is that writing software is such a useful, high-leverage activity that even absolutely awful software can be immensely valuable. But that doesn't tell us that better software is useless, it just tells us it is not absolutely necessary.
It’s a tool. No one cares about code quality because the person using your code isn’t affected by it. There are better and worse tools. No one cares whether a car is made with SnapOn tools or milled on HAAS machines. Only that it functions.
We know there is no long term merit to this idea just looking back at the last 40 years of coding.
There will still be major, fundamental, foundational software work for serious engineers to do, but we have to admit that most software needed in the world is not that.
None of which are needed. Why is an intermediate step needed?
> Most tone deaf thing you've seen in your life
Seems a bit over the top.
To complain about overcomplication via an overcomplicated project is a tad rich.
Like “you will learn better without AI” is just a bad, short-sighted opinion dressed up in condescension to appear wise and authoritative.
Learn your tools, learn the limitations, understand where this is going, do the things you want to do and then realize “hey my opinions don’t have to be condescendingly preached to other people as though they are facts”
How dare some nobody in a third world country use AI resources to accelerate the development of some process that fixes an issue for them and occasionally ask you to buy them a coffee when a poor sad pathetic evil worthless hateful disgusting miserable useless 5 trillion dollar company that actively hates you does the same thing with worse results that makes your life more miserable while lining their pockets with every penny in the entire world?!
The best way to resolve this is to write man-made software that’s good quality.
It’s just a tool
If you are a new software developer, I don't see how you grow to develop taste and experience when everything is an <ENTER> away.
I think we are the last generation of engineers who give a fuck tbh.
As long as it works and people’s problems are solved, I don’t see any issue with it?
At least it feels a bit like it
> When you publish something under the banner of open–source, you implicitly enter a stewardship role. You’re not just shipping files, you’re making a contribution to a shared commons. That carries certain responsibilities: clarity about purpose, honesty about limitations, and a basic alignment with the community’s collaborative ethos.
(from the second link)
You're not just writing angry screeds, you are producing slop prose and asking us to spend our time reading it.
How is this not an implicit repudiation of your entire argument? Are you not hurting yourself by avoiding learning how to write better?
I have caught myself on occasion rewriting to avoid looking too much like an LLM. But I've also introduced em-dashes to my writing — here's a gratuitous example just for fun — simply because the LLM slop writing discourse prompted me to research the X11 input system.