One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away low-level languages. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones. But there is a big difference with AI. Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.
I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI. For example, I genuinely want to invest time in learning Rust, but at the same time, I am terrified that all the effort and time I spend learning it will become obsolete in the future. And the reason it might become obsolete may not be because the models are perfect and always produce high-quality code; it might simply be because, as an industry, we will accept “good enough” and stop pushing for high quality. As of now, models can already generate code with good-enough quality.
Is it only me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently, I saw Claude Code using 10 GiB of RAM. It is simply a TUI app.
Don’t get me wrong, I also use AI a lot. I like that we can try out different things so easily.
As a developer, I am confused and overwhelmed, and I want to hear what other developers think.
I don't know where you are in your career; me, I am on the backend. But the whole time I was working, the constant churn of new tools, languages, frameworks, and so on, the race to keep up with the vendors, just wore me out. And despite all that, building software honestly never changed much.
I have been working with both Codex and Claude, and you are right, you can't trust them. The best practice I have found is to constantly play one off against the other. Doing that, I seem to get decent, albeit often frustrating, results.
Yes, the actual building of the code is either over or soon will be. That is the part I always considered the "art." I often found code to be beautiful, and I enjoyed reading and writing elegant code the whole time I was working with it.
But the point of code is to produce a result. And it's the result that people pay for. As you noted with the evolution of development in your original post, the process and tools may have changed, but the craftsmanship of the people using them did not.
You make a fair point that this abstraction is different — prior layers were engineered and traceable, and an LLM output isn't. But I'd argue that makes the human in the loop more important, not less. When the abstraction was deterministic, you could eventually lean on it fully. When it isn't, you can never fully step away. That actually protects the craft.
Until AI becomes a "first mover" god forbid, where there is no human in the chain from inception to product, there will always be a person like you who knows where the traps are, knows what to look out for, and knows how to break a problem down to figure out how to solve it. After all, as I have always said, that is all programming really is, the rest is just syntax.
If you instead "promote" yourself to architect or lead dev and steer the AI as if it were a team of junior devs you must manage, you can learn a lot. You will have deep architecture discussions with the AI where, together, you explore various approaches and ways to do things.
And if you do spec-driven AI development where you write the specs, you will end up with an app that resembles the way you prefer apps to be written.
Just because AI can cook something up in no time doesn't mean you can't be involved.
For my work I use it extensively. I use Cursor like a senior engineer who breaks down problems, and I only write the parts that interest me. I trust AI with the other parts and do a review afterwards. AI prompting is a real skill. I don't like it, but I don't like my work either.
man. this actually seems so profound to me. I feel the same way overall as far as personal vs. work projects and AI use, but this wording hits it on the nose.
Of course, the further you get into a job, the less flexible the vacations become. I probably don't want to be a staff engineer or a lead for that reason. Being a senior engineer is the best thing in the engineering world if you don't like the job.
This is the argument for actually learning, so you don't ship half-baked code, because the AI isn't good enough. The people telling you it is good enough likely have a financial interest in pushing the narrative that AI will do it all.
> LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.
This is another problem. A lot of code is written so that exactly the same thing happens every time. If AI goes in and changes the logic in subtle ways that aren't noticed right away when updates are made, because no one understands the code anymore, that's a problem. In hobby or toy apps, no one cares, but in production code and critical systems, it matters.
That said, in response to this:
> I am terrified that all the effort and time I spend learning it will become obsolete in the future
I am of the belief that there's no way that effort and time that goes into learning something like Rust will be wasted. You'll learn stuff and gather a sense for things that might not concretely be applied, but will be helpful when it comes to reasoning about whatever comes next language wise. Learning often is about the journey and not just the destination. I think it'd still be worth it.
Or at least ... that's what I tell myself now as I try to learn Zig.
> I am deeply sad that we may be losing the craftsmanship side of programming;
Absolutely no company is paying you for your "craftsmanship." You are getting paid to add business value, either by making the company more than your fully allocated cost of employment or by saving it more.
> Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.
The output of the LLM is deterministic code. No one is using an LLM in production to test whether a number is even.
You can run unit and integration tests on the resulting code just like your handcrafted bespoke code. When I delegated tasks to more junior developers, they weren’t deterministic either.
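To sketch that point: the generated code itself runs deterministically, so ordinary tests pin its behavior down just as they would for handwritten code. A minimal illustration, using the even-number example from above (the `is_even` helper here is hypothetical, standing in for whatever an LLM might produce):

```python
# Hypothetical example: suppose an LLM generated this helper.
def is_even(n: int) -> bool:
    """Return True if n is divisible by 2."""
    return n % 2 == 0

# The generated code is ordinary deterministic code, so we can
# lock in its behavior with plain unit-style assertions.
def test_is_even():
    assert is_even(0)
    assert is_even(4)
    assert not is_even(7)
    assert is_even(-2)

test_is_even()
```

The generation step is non-deterministic, but once the code exists, the usual verification toolbox applies unchanged.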
> For example, I genuinely want to invest time in learning Rust
I specialize in cloud + app dev consulting. I know CloudFormation like the back of my hand. I’ve been putting off learning Terraform and the Amazon CDK for years. Last year, I had a project that needed the CDK and then another project that required Terraform. I used ChatGPT for both and verified they created the infrastructure I wanted. Guess what I’m not going to waste time doing now? The client was happy, my company got paid, what’s the point?
> As a developer, I am confused and overwhelmed, and I want to hear what other developers think.
If you are an enterprise developer (like most developers are), your job has been turning into a commodity for a decade, because there were plenty of good-enough backend/full-stack/web/mobile developers and it's hard to stand out from the crowd. AI has just accelerated that trend.
This is in no way meant to present myself as more than an enterprise developer who happens to know how to talk to people and "add on to what Becky said" and "look at things from the 1,000-foot view."
By definition, today's AI is the worst it will ever be.
There's no "craftsmanship" there. And I don't want there to be.
Absolutely no one gives a damn about “craftsmanship”. I just got rave reviews from a customer for my completely AI generated website for their internal use. I didn’t look at a single line of code and haven’t done any serious web development since 2002 in classic ASP.
The customer is happy. My company is happy, and they continue putting money in my bank account on the 1st and 15th. I used to think like you, but I quickly grew out of it.
LLMs may have removed the critical need for a SW engineer to know details, like the syntax of Rust or the intricacies of its borrow checking semantics. But LLMs, I maintain, didn't remove the critical need for an engineer to learn _concepts_ and have a large, robust library of concepts in your head. Diverse, orthogonal concepts like data structures, security concerns, callbacks, recursion, event driven architecture, big O, cloud computing patterns, deadlocks, memory leaks, etc etc. As long as you are proficient with your concepts, you will easily catch up with the relevant details in any given situation. Once you've ever seen recursion, for example, you will find no trouble recognizing it in any language.
That's the beauty of LLMs : you don't _have_ to be good at technical details any more. But you still have to be very good with concepts, not just to be able to use LLMs properly, but also _be in control_ of their work. LLM slop is dangerous not because of incorrect details like bad syntax. It is dangerous because it misplaces concepts: it may use a list where you need a hash map and degrade performance, it may forget a security constraint and cause a data leak, or it can be specific where it needs to be general, etc. An engineer needs to know and check the concepts if they want to remain in control. (And you absolutely do want that.)
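The list-versus-hash-map point above can be made concrete with a tiny timing sketch (the numbers and names here are purely illustrative, not from any particular codebase). A syntactically correct membership test on a list is the kind of conceptually wrong choice that no compiler will flag:

```python
import time

items = list(range(100_000))
as_set = set(items)  # same data, hash-based lookup
target = 99_999      # worst case for the list: the last element

# Membership test on a list scans elements one by one: O(n).
start = time.perf_counter()
for _ in range(200):
    _ = target in items
list_time = time.perf_counter() - start

# Membership test on a set hashes the key: O(1) on average.
start = time.perf_counter()
for _ in range(200):
    _ = target in as_set
set_time = time.perf_counter() - start

print(f"list lookups: {list_time:.4f}s, set lookups: {set_time:.6f}s")
```

Both versions are "correct" in the sense of producing the same answer, which is exactly why a conceptual review, not just a syntax check, is what keeps you in control.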
But it is impossible, or very impractical, to just learn an abstract concept out of thin air. The normal way to learn a concept is to see its concrete instantiation somewhere, in all its detailed glory, and then retain its abstract version in your head.
So, the only way to stay relevant and stay in control is to have a robust concept library in your mind. And the only way to get that is to immerse yourself in many real technical situations, the details of which you must crack first, but free to forget later. That is learning, and that is still important today in the age of LLMs.
That didn’t stop me from getting a phd.
If you think it’s all there’s to programming that llm spits out, then the problem is in you somewhere, not in llms.
The idea is interesting though.