It's being mandated by almost all companies. We're forced to use it, whether it produces good results or not. I can change one line of code faster than Claude Code can, as long as I understand the code. Someday, I'll lose understanding of the code, because I didn't write it. What am I embracing?
I've been wading through vast corporate codebases I never wrote and yet had to understand for the past 20 years. This isn't any different, and AI tools help with that understanding. A lot!
The tools and techniques are out there.
What about code that other (human) engineers write? Do you not understand that code too because you didn't write it?
BTW, that guy received an Oscar for coding. Oh, how far we have fallen since those days, and how far we have yet to go...
[0]: https://www.teamten.com/lawrence/writings/coding-machines/
You helped build the company, you should own a proportional part of it.
The issue with the current system is that only the people who provide money, not the people who provide work, get to own the result. Work (and natural resources) is where value comes from. Their money came from work as well but not only their work, they were in a position of power which allowed them to get a larger cut than deserved. Ownership should, by law, be distributed according to the amount and skill level of work.
Then people wouldn't worry about losing their jobs to automation - because they'd keep receiving dividends from the value of their previous work.
If their previous work allowed the company to buy a robot to replace them, great, they now get the revenue from the robot's work while being free to pursue other things in their now free time.
If their previous work allowed an LLM to be trained or rented to replace them, great, they get the revenue from the LLM's work...
You still don't make sense, mate.
Yes, a better system would be great. Half-baked ideas only stand in its way.
Attitude as old as time itself.
(Yes, one can write C programs with undefined behavior, but C does also have many well-defined properties which allow people to reason about well-defined C programs logically reliably.)
Your analogy is bad. The programmer and the AI both produce working code. The other poster's response was correct.
Also, software was always about domain knowledge and formal reasoning. Coding is just notation. Someone may like paper and pen, and someone may like a typewriter, but ultimately it's the writing that matters. The correctness of a program does not depend on the language (and the CPU only manipulates electrical signals).
I argue against AI because most of its users don’t care about the correctness of their code. They just want to produce lots of it (the resurgence of the flawed LoC metric as a badge of honor).
This is remarkably sloppy for someone who codes. No facts, just opinion, claimed with confidence.
Strong opinions, loosely held.
What I’ve seen seems to confirm that opinion, so I’m still holding on to it.
From my perspective, knowing how it gets down to machine code makes it more useful and easier to control, but that doesn't mean I want to stop writing English now that we can.
Since the dice are loaded heavily, this happens quite often. This makes people think that the dice can program.
I think the road to this is pretty clear now. It’s all about building the harness now such that the AI can write something and get feedback from type checks, automated tests, runtime errors, logs, and other observability tools. The majority of software is fairly standardized UI forms running CRUD operations against some backend or data store and interacting with APIs.
This type of work seems to be happening as a lot of orchestrator projects that pop up here every once in a while, and I've just been waiting for one with a pipeline 'language' malleable enough to work for me that I can then make generic enough for a big class of solutions. I feel doomed to make my own, but I feel like I should do my due diligence.
Have you ever looked at Debian's package list?
Most CRUD apps are just a step above forms and reports. But there’s a lot of specific processing that requires careful design. The whole system may be tricky to get right as well. But CRUD UI coding was never an issue.
DDD and various other architecture books and talks are not about CRUD.
Isn't this just a new programming language? A higher level language which will require new experts to know how to get the best results out of it? I've seen non-technical people struggle with AI generated code because they don't understand all the little nuances that go into building a simple web app.
Any examples? I'd love to watch how that went.
I would like to know how the author concluded that this is the majority opinion.
After all, they are asking questions. And they are not dumbasses who don't learn. They are also motivated to learn to adapt to AI.
IMO the best value of humans right now is to provide skills, as fuel for the future. Once we're burned up, the new age shall come.
What is often overlooked is that we are not trying to just produce programs, we are trying to produce "systems", which means systems where computers and humans interact beneficially.
In the early days of computers, "System Analyst" was a very cool job-description. I think it will make a comeback with AI.
People do overlook that. Software is written to solve a problem. And it interacts with other software and data from the real world. Everything is in flux. Which is why you need both technical expertise (how things work) and domain knowledge (the purpose of things) to react properly to changes.
Creating such systems is costly (even in the AI age), which is why businesses delegate parts that are not their core business to someone else.
This is survivorship bias, a form of sample bias.
Confirmation bias is a form of motivated reasoning where you search for evidence that confirms your existing beliefs.
I myself have feelings like this, as a software engineer by trade.
"We will forever be useful!" as a rallying cry against radical transformation. I hope that's the case, but some of these pieces just seem like copium.
Yeah, no shade to the author, it was a well-written piece, but these arguments increasingly seem like a form of self-soothing to me. On current trajectories I don't see how AI won't swallow up the majority of the software development field in ways that completely devalue not just software engineers, but everyone up the stack in software-focused companies, eventually removing the need for most of the companies to even exist.
(Why have a software company employing someone who is an expert at gathering customer requirements to feed to the LLM when the customer can just solve their own problem with an LLM? They'll build the wrong thing the first time... and the fifth time... and maybe the tenth time... but it'll still be way faster for them to iterate their own solution than to interface with another human).
It'll take longer for senior engineers to suffer than juniors, and longer for system architects to suffer than senior engineers, and so on up the chain, but I'm just not seeing this magical demarcation line that a lot of people seem to think is going to materialize before the rising tide of AI starts to go above their own necks.
And this is likely to all happen very quickly, it was less than a year ago when LLMs were awful at producing code.
For how long? Do I get to feel smug about this for 10 days, 10 weeks, or 10 years? That radically changes the planned trajectory of my life.
" At this particular moment, human developers are especially valuable, because of the transitional period we’re living through."
You and GP are both attacking him on a strawman: it's not clear why.
We're seeing endless AI slop, enshittification, and lower uptime for services day after day.
To anyone using these tools seriously on a daily basis it's totally obvious there are, *TODAY*, shortcomings.
TFA doesn't talk about tomorrow. It talks about today.
> "But for now, real programmers will always win."
"for now ... always", not a good phrasing.