That said, I have experience. I could absolutely see myself falling into this as a junior or even mid-level dev. I'd no doubt not feel that feeling on my neck if it weren't scarred from code-review lashings early in my career by knowledgeable mentors.
I code review everything that Claude produces, and I'd estimate that 90-95% of the time my reaction is: WOW, it works, but that's too much code, dude; let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
> let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
This is why I'm unconvinced that AI code makes me faster. Sure, I could produce a million lines an hour, but are we running a sprint or a marathon? I don't know about you, but I can't sprint a marathon. I think much of the world of software has become incredibly myopic. I get it: it's a lot harder to win a war than it is to win a battle, but taking the easy way out usually just defers the costs to your future. Problem is that those costs accrue interest... Personally? I'm lazy and a cheapskate.
When did programmers stop being lazy in the good way (writing less code to maintain) and start being lazy in the bad way (taking the easy way out)? More importantly, why?
The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?
I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.
> it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem.
The question is wrong because reality isn't binary. "We've" never aimed for minimal, except maybe in the very early days or some real edge cases. If you're writing the minimal code, you're either writing something very compact/simple[0], or you're wasting too much time and not balancing things.
If you're rewriting everything then you're wasting too much time and introducing too much complexity[1].
You can't write good code by slapping together a bunch of libraries but that doesn't mean you shouldn't use libraries either.
[0] "simple" is an overloaded term. If you're upset by me saying "simple", I'm using the other definition
[1] sed -i [0] "s/simple/complex/g"
Not while context windows cause decay and larger bills.
The AI's max cognitive load C is larger than a human's, but if codebase size grows unbounded the minimum context needed for a change will eventually surpass C.
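To make that concrete (the notation here is mine, not the parent's): let C be the model's maximum usable context and m(n) the minimum context needed to make a correct change in a codebase of size n. If m(n) grows without bound, any fixed C is eventually exceeded:

```latex
% Hedged formalization of the parent's claim; m, n, C are my notation.
\[
  \lim_{n \to \infty} m(n) = \infty
  \;\Longrightarrow\;
  \exists\, n^{*} \ \text{such that}\ m(n) > C \quad \text{for all } n > n^{*}.
\]
```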
It is also a bad idea to let your codebase become only readable by a machine when we are still in the dark about the role machines and people will take in the future. What if you have to go back to manual dev in a now gargantuan codebase?
This unused code gets further modified as time goes on: new functionality is wired in, or it gets further refactored. Usually it’ll still have tests that cover it. It gives the impression of being live code, but it’s not: it’s zombified.
So you get situations where it gets wired up to something and then that something doesn’t work and you wonder why and so you start digging about and you discover it’s because it has been wired into a path that is never executed.
The fog of relatively recent changes sometimes makes it hard to figure out if the code should be unused or if someone just forgot to hook it in as part of a bigger piece of work. Then you find nobody else is really sure either.
So that extra complexity comes at a cost. It can slow you down or trip you up; catch you by surprise.
The answer to your question is really obvious. The high-effort, manually coded projects stick around, and the low-effort vibe-coded projects are forgotten quickly. In the end, LLM-driven programming is always going to bring you to a dead end. There are certain things where I can predict they're going to fail, because they involve kinds of complexity these tools can't, and will never be able to, deal with. The code gets so bad that even if an expert programmer wanted to make changes, it either wouldn't be possible or wouldn't be worth it. A lot of the time the vibecoders are so high off the low-effort sense of empowerment that they don't even realize what they made is completely broken.
Well written software has staying power because it can be understood and built upon. Understanding a problem deeply enough to devise an elegant solution even leads to new possibilities and ideas that will never be conceived with a more superficial understanding.
For example, I have a game I've been working on for a few years, and I do stuff like "implement this simple pseudo-physics system to make the bot follow the character like so... etc."
After some planning and back and forth.
It returned mostly working code, a little odd on some edge cases.
But as I've hand-coded this thing for years, I could easily look at it and laugh my ass off: it had multiple classes and around 1k lines of code, all kinds of crazy non-performant crap.
The exact thing I needed, I reprogrammed in around 5 lines of very simple code that did exactly what I wanted, with no edge-case weirdness.
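For flavor, here is a minimal sketch of the kind of ~5-line follow logic being described; this is my invented illustration (names, constants, and the lerp-plus-clamp approach are assumptions, not the commenter's actual code):

```python
import math

# Hypothetical "pseudo physics" follow: each frame, move the bot a fraction
# of the way toward the character, clamped to a max step so it never teleports.
def follow_step(bot, target, smoothing=0.15, max_step=4.0):
    dx, dy = target[0] - bot[0], target[1] - bot[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return bot                        # already on top of the character
    step = min(dist * smoothing, max_step)
    return (bot[0] + dx / dist * step, bot[1] + dy / dist * step)
```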
Now, the vibe coders actually ship that shit. I like to read vibe-coded games now and again, and there is no possible way those guys are ever shipping a real game: every single decision is verbose, with the worst performance choices repeated over and over, everywhere.
Sure it can get you some cute little toy projects, but it will absolutely fall apart if you are trying to make real games.
Don't know about saas apps or whatever. Maybe that stuff doesn't matter at all.
Abstractions are like the structural elements of a house, security is like plumbing or electrical, but individual features are like carpet and paint. When it's working on the superficial stuff, who cares what it gets wrong? Just go rip up the carpet and do it again if you have to.
I sincerely believe that extensive accidental complexity will ALSO be bad for AI agents. Their quality will diminish as their context windows get filled up with endless amounts of spaghetti and accidental complexity. I feel like we won't fully start feeling those effects for another year or so.
I think I need to work up a Claude skill named marie-kondo, so that when it breathlessly presents its triumphant solution, I can go "yes, but does it spark joy?" and have it go into an aggressive refactor loop with me.
—
But this is never the problem. Claude WILL NOT abstract and WILL NOT use your abstractions. It finds them all “ceremonial” and the idea that you could add something that might seem indirect that actually dramatically reduces the problem space is almost impossible to convey.
You can watch this in action for any API whose design you're familiar with, in a domain you understand well. If you attempt to design the same API with Claude, you will invariably get a mess of flat, insane types and no reuse. I'm talking array-of-tuples-of-maps-of-set-to-map type insanity.
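As an invented illustration (not the actual API under discussion), the difference in Python type-hint terms might look like this:

```python
from typing import TypeAlias

# The flat, anonymous blob the commenter describes (made up for illustration):
RawIndex = list[tuple[dict[frozenset[str], dict[str, set[int]]], int]]

# The same shape with the domain concepts named, shrinking the problem space:
TagSet: TypeAlias = frozenset[str]
DocIds: TypeAlias = set[int]
FieldIndex: TypeAlias = dict[str, DocIds]                # field -> doc ids
Entry: TypeAlias = tuple[dict[TagSet, FieldIndex], int]  # (index, version)
Index: TypeAlias = list[Entry]
```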
What has been helping is a mandatory pass of “Claudisms”, but even then it can only find the problem and never the solution.
It is so frustrating.
I just removed an entire graphql endpoint - 500 lines of front and back-end code. I may need to be hosed down.
A good human developer might see that the better way to address the review is to backtrack and pick a different approach. The ai agents seem more prone to getting stuck down bad branches of the decision tree.
What to do if you're just one dev in an org of 50? Who are all pushing more and more code every PR? I'm gonna have to leave aren't I :(
For me, what throws me off most of the time is the structure at the mid-level. It usually makes sense at the line-of-code level and maybe the project level, but at the file and folder level it loses track of what it already has, or of what it doesn't need to be so verbose about.
You can also tell it to periodically summarize the "lessons learned" from the recent session(s)
You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)
Restrict that data to just the best of the best, the tersest of the tersest, and we’d see better output. I don’t think people are sharing that kinda stuff (Jane Street’s gems stay locked up), and even if they did my presumption is that it’d be too narrow and demanding for general audiences.
Big hopes for the long-term future; damned to some degree of mediocrity in the near-term mass product.
I don't buy it. I think a much more likely reason it leans towards adding code is because deleting code carries inherent risk: it can break things in major ways or minor ways or very visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context. So it doesn't have to go down rabbit holes and potentially create bigger and bigger messes.
Then I only have to spend one hour handholding the clanker to get it perfect. I usually do a lot of manual refactoring as well during that time.
Look at the doc hub pattern if your {agent}.md file is getting more than ~100 lines.
Tell it "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write to a file it's understanding of the change.
I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.
Make smaller changes and check each one carefully before and after.
I end most of my pre-plan prompts with "KISS - Keep it simple" to keep it mostly under control.
I also keep each file under 1000 lines and do a full scan of code and docs for cruft every 20-30 task cycles.
Been working on the same project for six months and glad to say there is minimal bloat.
Don't vibe-code. It's a joke someone coined in the moment that the industry somehow decided shouldn't be a joke, and some people now think it's a feasible way of developing stuff. It's not.
Find a better way of working together with the agent, where what's important gets reviewed by a human and you "outsource" the rest, and you'll end up with code and a design that works the way you'd program it yourself; you just get there faster. I probably end up reviewing maybe 90% of the code the agent writes, but it's still a hell of a lot more pleasant writing/dictating a few prompts than typing tens of thousands of characters and constantly moving between files. Maybe I'm just tired of typing...
Python for example is vibe coding compared to C. You pip install some library and just use it. Wanna modify a class instance variable and not use the proper accessor function? Sure, go right ahead.
The big thing about vibe coding is, as ironic as it sounds, prompt engineering. You can have tons of slop, but if it works, it works. The key defining factor is what counts as working: namely, defining input/output contracts and automatic checks.
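A minimal sketch of what such a contract-plus-check might look like (all names and rules here are hypothetical, just to show the shape):

```python
import json
from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    price: float  # contract: always positive

def parse_quote(raw: str) -> Quote:
    """Parse provider JSON into a Quote, enforcing the I/O contract."""
    data = json.loads(raw)
    q = Quote(symbol=data["symbol"], price=float(data["price"]))
    # Automatic checks: reject output that violates the contract, regardless
    # of whether a human or an LLM wrote the code that produced it.
    assert q.symbol.isupper() and 1 <= len(q.symbol) <= 5, "bad symbol"
    assert q.price > 0, "price must be positive"
    return q

print(parse_quote('{"symbol": "ACME", "price": "12.34"}'))
```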
That's always been one of the problems, though. Writing code for class is much less stressful than writing code that other people will rely on.
Other than that, agentic coding has not really been working that well for me on our main codebase, though.
If you do this work for a wage and are nearly fully alienated from the value of your labor, I understand the distaste for applying it in any circumstance. You'll care more for your personal experience of the work: how informed you appear when reporting on it to your colleagues, how your boss/colleagues will judge you when an issue arises, how much you feel you are learning from the work, how frustrating it feels to return to items at the behest of others, etc. Vibe coding in these circumstances is unpleasant.
Personal tooling especially: since you want to be able to make small changes over long periods of time, it's important that it makes sense when you come back to it, even if you've forgotten all about it since your last change.
If you try and get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you have hundreds of dollars worth of tokens that might not be the case - but for someone who spends $10 a month, it's just not worth the headache.
Besides, for me these are hobby projects and writing code is still fun, I just make AI write the boring parts (good examples: saving and loading, parsing of data files and settings menu functionality) - but I keep it away from anything that needs a humans judgement to create.
Which way is it going to go?
i) “Seniors” also get superseded by even more capable models that can do all of the things which currently require experience.
ii) Linguistics become the new higher order abstraction (English is the new high-level programming language) _but_ there are different / orthogonal ways of approaching software development than the way we do things now — which “juniors” become more adept at more quickly.
A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?
It is the thought process that builds skills. I've seen some projects trying to be deliberate about learning from the agent as it writes to code - but I'm not sure there is a substitute for struggling and learning by doing.
I think traditional coding experience will be a lot more valuable in 5-10 years, given the apparent inverse relationship between that and LLM usage, and the number of people who seem to already be heavily reliant on LLMs today.
The next killer app on the scale of today's LLMs could be an LLM (or call it whatever) that can un-spaghettify the reams of code that are currently being generated by LLMs.
And probably the least valued it has ever been.
That's still 2-3x the velocity, but you get a better result because you went deeper on the paths-not-taken when designing.
I don't think this makes me dumb though, I've just moved up stack. Rather than caring about assembly language or source code, I'm focused on requirements, architectural decisions, engineering process, and ever more automation.
Don't have AI do anything you want to stay sharp on.
That's what drives it, and I don't really think the extrinsic things about the way you learned (while helpful) have that much bearing on it. It comes from you and you should take credit for it.
I think if you were learning today you'd probably have the same feeling and do just fine because of it.
I am trying super hard to give AI the tools to validate everything.
I finish by opening a draft PR and then I go through doing a deep review myself.
If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.
You still need people who know stuff in detail and can own the code... for now
Can relate, but the one thing I do differently is teach the AI to clean up after herself in follow-up prompts, sessions, and by refining AGENTS.md. Static code-quality analysis tools are also really good for keeping the agent on its toes.
Reviewing code is pain, reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem and I usually discover a few cans of worms that make me rethink my approach.
Then I'll usually go and implement at least one piece of that. If I get stuck, I'll ask for some help. Then, once I'm happy with it, I'll ask the AI to review what I came up with. Then typically ask it to stamp the pattern around the codebase. And often to just iterate through writing out unit tests.
So I just did this for getting dense output from interpolants for an ODE integrator that I maintain. I did the work to make Tsit5 work by hand. I asked AI to stamp out the same pattern for DP5 and BS3, because it was just gene splicing those changes into a very similar RK integrator. I can review the diffs and see that it faithfully stamped out the same pattern with two prompts and a couple of minutes.
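For readers unfamiliar with the pattern: dense output for these RK methods evaluates y(t0 + θh) ≈ y0 + h·Σ b_i(θ)·k_i, where the b_i are per-method interpolant polynomials. A rough sketch of why the stamping works, with invented coefficients (not the real Tsit5/DP5/BS3 tables):

```python
# Generic dense-output evaluation shared by RK integrators; only the stage
# count and the interpolant polynomial coefficients differ per method.
def dense_eval(y0, h, stages, interp_coeffs, theta):
    """stages: stage derivatives k_i from the accepted step;
    interp_coeffs: per-stage b_i(theta) polynomial coeffs, lowest order first;
    theta: fraction of the step in [0, 1]."""
    out = y0
    for k_i, poly in zip(stages, interp_coeffs):
        b_i = 0.0
        for c in reversed(poly):   # Horner's rule for b_i(theta)
            b_i = b_i * theta + c
        out = out + h * b_i * k_i
    return out
```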
I'm still maintaining pretty strong contact with the codebase by doing a lot of my own programming, and fighting with the design while I'm writing that first piece of it, but then I use the AI to stamp out the mindlessly repetitive stuff.
That just seemed like the obvious way to me to go about programming with AI rather than pure-vibecoding and never touching anything other than prompts.
Also, you probably run out of tokens a lot faster if you're pure-vibecoding.
Plus you should spend some time debugging your own code. Even if AI could find and fix a bug in a minute or three that would take you 20 minutes, it is generally going to be better for you to burn that 20 minutes on trying to fix it before asking for help.
Of course, unlike another poster in this comment thread, I never cheated in college and spent a lot of time on "academic" side projects that weren't part of any course I was taking.
Once the vibecoders and cheats are done spamming a billion lines of AI generated code into industry, there's probably going to be positions for people who can (with AI assistance) sort out the mess and get production stable again.
Scar tissue from production going down and staying down is probably powering those code reviews and I think will be teaching this wave of vibe projects a few hard lessons. I've had to learn a few things the hard way like this and it's as effective as it is painful.
I'm very pro ai-generated-software in the right context. I think being able to vibe out software as needed is awesome and could finally unlock the potential of our computer and data dominated world. I also think we haven't yet learned as a culture where this new thing is different from traditional software and misunderstanding that is where a lot of the pain will be felt.
But the industry is changing around you fast.
If MIT-bred devs were already building crap in FAANG before, the trend has gotten nothing short of worse across the industry.
Expectations are rising; the field is becoming a rat race of which engineer can output the most mediocre/acceptable/good-enough features in the least time possible.
Let me make this clear: you're in an increasingly rarer bubble where you have a luxury that is disappearing in this industry, plain and simple.
I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.
They are also outputting orders of magnitude more than they ever did, and none of them is getting genuinely better at the craft, but it is what it is.
There must be an epistemic problem with just how fast these SOTA models run. I don't think it's just that my local model is dumber; I think it's more that the speed of token generation trains my brain with different expectations. There's no way the local model will just generate hundreds of files by itself. When it can, via an opencode loop with thought files, letting it run for a day is the only way you get that.
On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.
My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.
AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).
AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).
Coding - imo - is VERY low on the totem pole of engineering skills.
I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.
High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.
It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...
You probably should have the same level of proof whether you wrote it yourself and "just trust yourself, bro," or whether a Chinese Room wrote it for you.
I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.
This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.
It is not hard to understand what a line of code does...
It is hard to keep up with solving the problem I'm trying to solve...
I just can't understand this as a programmer. I use AI a lot as well when I program, but I still write a lot of the code myself, simply because it is easier to write the code I want, especially if I know what I want, than to make the AI understand what kind of implementation I'm thinking of.
After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.
So this could go in a bunch of different directions from here, but to summarize the current situation:
A lot of companies are heading in this direction.
Without proper engineering, AI will easily write more code and potentially change the application unintentionally.
We will have fewer junior engineers entering the market because of fear around AI and reduced hiring.
AI usage will hit a critical point where it is making massive amounts of changes, and the people "prompting" it might start getting overwhelmed.
We will end up with more features that people have to keep in their heads. I don’t think we can trust LLMs 100%, and because of that, developers will still need to know exactly what the application does.
Eventually, there will be a lot of bugs, and developers will complain that we need additional human resources.
Hiring starts again.
I think, right now, the toughest position is for new developers, and the best position is for people already in the market.

The process often went the way you're describing. Initial rapid success as prototypes got up and running quickly with the messiest code imaginable. But over time progress slowed more and more as tech debt and poor decisions created an increasingly large drag, eventually resulting in a stalled/dead project.
Maybe this time is different, but a lot of my early work was cleaning up projects that fit this pattern. I hope the new up and coming developers will get the same opportunity.
Ditto, but as an infra guy, doubly so. People are making infrastructure decisions, writing config files, and writing scripts and hacks and everything with AI. People are vibe-networking entire ISPs. I saw one guy showing off how he had replaced his entire monitoring system with Claude. It's going to be good money for people with abstract problem-solving skills. I can't imagine all the fun new problems people are vibing up for me right now.
I just hope I can last that long.
For one thing, the efficiency gains are massive. Bigger than any other tool, for any other price. Our company's main product is a web app, and we've been working on a rewrite of it over the last few years. In one afternoon, I set up a new project with our desired stack and was able to vibe-code an MVP of our product in a matter of hours. It wasn't perfect, of course, but I prompted feature after feature in bite-size prompts, each one taking 5-10 minutes to complete. It looked pretty professional, and by any measure it was certainly "good enough." Given a little more time, I could solo-ship and maintain what has taken us a few years to build as a small dev team. Unfortunately, this is less an efficiency-improving tool and more a cheap full-team replacement.
Then there's the non-technical CEO AI hype-train. Our CEO (and the rest of our directors) have fully embraced the Claude suite of agentic tools. They're all regularly spinning up mockups, apps, and toolchains every single day. I can tell they're addicted to it, and they see the gains first-hand. In fact, while it hasn't happened yet, I wouldn't be surprised if the CEO laid off the majority of the dev team and vibe-coded the entire app himself (along with a few experienced devs). For now, they hold the view that "AI is a multiplier, not a replacer!" and in the same sentence will say "if this allows us to go the next few years without hiring again, that's a win!" I was asked point-blank why we couldn't just vibe-code our whole app. I didn't really have an answer. Yeah, there's the nice thoughts like "we wouldn't know how to maintain our app" -- but Claude would do a decent job in a single dev's hands, or "AI will potentially change the application unintentionally and introduce bugs" -- but proper observability, testing, and further prompting could fix those things in minutes to hours.
Frankly, it just doesn't make sense for companies to keep their whole dev team around anymore. No matter how many projects you launch and initiatives you tackle, the backlog will rapidly shrink, while individual dev capacity grows to exorbitant heights. Non-technical CEOs don't care about tech-debt, cognitive debt, poor software design practices, learning to code, keeping devs smart, the joy of problem solving, the art of a good algorithm or architecture; they care about shipping a product that works reasonably well, provides value, is worth paying for, and doing so for the cheapest investment possible. Unfortunately, AI fits THAT bill in nearly every single way.
I'm hoping you're right, and that the sheer volume of software being created now will increase demand. I'm worried, though, that it will never be enough to offset the massive capacity gains we get from AI.
I was thinking about that the other day when I was automating a workflow: I hooked up Jira to Claude so that bug reports would automatically get a pull request. Opus 4.7 is pretty good at it. And compared to dev costs it's still quite cheap.
It's nice to not be distracted by simple bugs, but aren't I killing my own job?
For as long as the investor money runway has asphalt left.
Sorry but this just sounds fake. Team spending a "few years" rewriting a web app, Claude doing it in a matter of hours. What the hell were you guys doing the last three years.
I recently started a new job and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much slower than my peers who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack the necessary confidence in my own work, which just feels bad.
Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.
That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology is especially addictive for a certain type of person, and traps them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow in my skills to the point that I can delegate tasks to AI that are rote and easy for me to verify their results. It feels challenging, but it's necessary.
I've been doing this, and it's a nice balance for me. Having Claude code things when you don't know how to evaluate its code seems like madness to me, but I guess I'm in the minority on this.
I strongly believe it's better to just take a couple weeks of slow time and read a good book about the technical matter you're dealing with. Having bite-sized answers isn't a good way to become proficient.
It’s one of the real use cases I’ve had other than basically vibe coding
What's actually happened is you took a break from a highly technical skill. Every person on the planet will "forget" some part of that technical skill if they don't use it in a while. But the information is not gone, it's just been de-prioritized for other more pertinent information. The information comes back once you give yourself a refresher.
Before AI, it would be months between me writing a full program in one of several languages. I would forget simple things like how to start a function definition. But I did not really forget, because after a quick glance at an existing function, I remembered all the other possible syntax in the function definition. There's no need to panic, your brain is working normally.
Are you sure people aren't?
I do write initial proof of concept crude prototypes (not commented, hardcoded variables, etc), and AI does the productionizing of them. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining patterns used in the code base or even keeping them to industry best practices.
When using AI you will no longer be writing so much in programming languages—English or whatever language you talk to the LLM will be the main language.
Granted I'm not polishing up a prototype, I'm maintaining, evolving and modernizing a non-trivial 8+ year old product.
It helped me with developing a scoring system (for example, I gather slot data for restaurants to see how scarce they are) to rank restaurants (higher score being harder to book) to determine signals of busyness. It also helped me build a backlog system to make snapshots of every restaurant's slot availability on 3 different providers (OpenTable, Tock, Resy).
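A toy sketch of what such a scarcity score could look like (weights, capacity, and shape are my invention, not the commenter's actual system):

```python
# Fewer open slots across providers => higher score => harder to book.
def scarcity_score(slots_by_provider, capacity_per_provider=20):
    """slots_by_provider: e.g. {"OpenTable": 3, "Tock": 0, "Resy": 1}"""
    total_open = sum(slots_by_provider.values())
    max_open = capacity_per_provider * len(slots_by_provider)
    openness = min(total_open / max_open, 1.0)
    return round((1.0 - openness) * 100, 1)  # 0 = wide open, 100 = fully booked

print(scarcity_score({"OpenTable": 3, "Tock": 0, "Resy": 1}))  # ~93.3
```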
Maybe I misspoke; I'm not saying the code it wrote is perfect. But the code produced by the GPT-5.5 frontier model is easily miles better than a junior developer's, and it is very competent. "Perfectly competent" was maybe hyperbole, but I stand by the point.
How much of this rote, mundane code do you honestly have in any given project?
The cool thing is that I was able to let an agent manage this while I did the human task of steering it to work. Some of this work is novel, yet much of this work includes well-understood software patterns, such as scatter-gather and so on.
I absolutely do not enjoy using JavaScript or TypeScript at all. So it was a huge relief when I was able to get AI to help with UI work on Next.js and React Native/Expo.
I've gotten it to help with a cross-cloud cross-region setup on Terraform with shared secrets on AWS and GCP with GHA workflows for sandbox, staging, and prod promotion deployments. And I have no background as a DevOps guy.
Again, it didn't propose the exact shape of my system design. But chatting with it really allowed me to form the shape. It really is a bicycle for your brain. It helped me look at the tradeoffs of different products (PubSub vs Cloud Tasks, or AWS equivalents).
And the problem is that these plans are obliterated in a few hours, and then you have to analyze and iterate on the output to weed out the idiocies.
And handling multiple agent outputs means continuous context switches. Well, good luck with that in the long term.
If you let agents run wild and build whatever, the output will almost surely be a horrible mess. End of story.
Unfortunate; those types of refactorings are my favorite, since they're tightly scoped, correctness is easy to verify, and it's like a little puzzle. Bonus points if you write your own collector or use some more obscure parts of the stdlib.
The people that don't value good design will absolutely have a lot more rope from which to hang themselves.
Do you think that in 2026 maybe rapid progress can also come from using the same primitives faster?
I'm still figuring this out but I'm certainly open to the possibility.
Same techniques for both - the right abstractions, the right app architecture choices and layering.
I find myself learning exponentially faster and more. For example, I am working with spectroscopy hardware currently (Raman, NMR), where I got Claude to write code that interfaces with the equipment at the hardware level. Instead of me going through data sheets and writing out a bunch of wrapper code, Claude did it for me.
I am able to progress much faster by using Claude to discuss various techniques, implement them, and test it out. This loop would have probably taken me 5-10x more time previously.
And I am learning so much more about these machines/techniques/data than I would have if I had to expend the mental effort to write menial code just to see a result.
I have more than a decade of experience as a developer. I am glad that we are finally moving towards a world where we can utilize code as a tool rather than constantly trying to think how to make it into a product.
Maybe that's just you. Code as a tool rather than just a product has always existed.
I love coding, it always felt like Legos for adults. Not that Legos aren't also Legos for adults.
But there's no fighting the fact that we won't be writing 99% of the code anymore so I take pleasure in crafting the specs and requirements clearly, that's where I put the effort.
And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.
I'll be fully open sourcing that soon also https://engine.build
From this, can we infer you're not exactly "all in" on coding via LLM Walter?
I, for one, am not. I've gone on the record with my thoughts in prior LLM related comment threads. They will make us dumb.
But why would anyone use AI to write documents or articles? Do you really respect your recipients so little that you can't be bothered to share your own thoughts?
I might as well get an AI to call my own mother on mother's day.
I'm not the world's best writer by any means, and rather self-critical in general, but even I don't have the compulsion to have an AI rework my thoughts for public consumption. Perhaps for a set of slides or a big README, but not for one-off ideas and communication. In fact, I quite resent when someone sends me AI-generated pseudo-thoughts that are clearly not their own and devoid of the real context.
Working in /plan mode bouncing ideas back and forth with the AI, me catching its wrong assumptions, it filling in knowledge gaps with a clear explanation when needed, is very intellectually stimulating and I think is making me a better engineer. The key for me has been to be socratic with the AI, think through everything it is proposing carefully, and don't get hypnotized by its confidence, perfectly structured arguments, etc.
I work for one of the largest companies on the planet, doing work with a budget in the millions (meaning I am a nobody, but also not "nobody"), and I don't touch AI personally. There is a group of very rich people who want to get much richer, who see humanity as the thing hindering that dream, and who keep telling us that "if we don't train their new pet how to replace us, we'll be replaced." But I'm going to go ahead and say that AI is a thing, and probably relevant, but in the way crypto is, not in the way the internet is. Does that make sense?
It's a scam for oligarchs. I'm saying AI is a scam for oligarchs and has very limited use cases and we're already starting to bump up against edges. Also a lot of people REALLY hate it, and that headwind didn't exist for things like "the internet" (you just had a large percentage of the population who initially didn't care, this is active animosity).
I'm at the other spectrum of what the author feels. I feel smarter and more capable with AI, and I'm actually surprised how helpful it is in my workflow. I still write code by hand but I know way more than I would without it.
Granted, I'm the "accidental programmer in a team that's completely non technical" and AI is simply a senior I'd never have otherwise. YMMV but I think if you use the tool as a more expressive Google search it can be a great companion.
Pure vibe coding is not far from "let's outsource everything", it's just a bit cheaper and more available.
I get the most mileage out of this as well. It's the middle ground option. Everyone's either saying AI is useless, or saying how it's so good that writing code is an obsolete task already. Personally, I'm learning a ton with AI as a research tool, and implementing my code by hand with that knowledge. It also naturally solves the "you can't review 20,000 lines of code a day to even properly understand that it's correct" problem as a side effect too. But I am still building things much faster.
I think, if you're not feeling challenged, you're probably just doing the same work but faster. You should try to tackle harder problems, too!
I've learned an insane amount in a very short period of time, and have been engaging in much more challenging problems.
Instead of "what's the right syntax for this for loop again?" I'm asking "what's the business critical module in this system and how do I structure the test suite to prove it's working to spec?"
I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.
But I think the author was commenting on how it felt like a compulsive urge stemming from fear and self-doubt.
It’s a good insight about their frame of mind, and I applaud them for resisting the urge.
You need to do things the hard way by yourself in order to learn.
Reflexively copy + pasting into chatgpt is a good way to become completely dependent on the tool and have your skills atrophy.
It reminds me of people who can hardly drive anywhere without GPS, even if they’ve been living in the same place for years.
"My works deserve to be reviewed before being posted to the public" is not a bad mentality. It's not imposter syndrome either. It's just consciousness.
This is where I'm at. I feel like I need AI to review everything.
This is revisionist nonsense.
Programmers used to be cowboys, by and large, outside a handful of critical domains. Systematic use of code review, automated tests, source control and so on are relatively new.
What was different is an entire program could fit in one person's head. The stack of abstractions wasn't nearly as deep, necessarily, since you couldn't afford the cost in memory and CPU.
That delivered a different kind of intellectual control, a kind that is exceedingly rare nowadays outside hobby projects.
I think it's vital that you keep strict control, and really try to understand what the AI is doing. And especially when you're doing something really complex, even Claude Opus can get lost or lose track of the context, and you need to be paying attention when that happens.
Also, I feel like it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, our job has evolved into knowing which decisions an AI makes are good and which are bad, whether in code or design; there is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI. In this world you will be forced to communicate with them, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.
The other is something I still want to validate: for those of us who are mediocre at coding, it might be a gift, because it frees up time, and thus mind space, to consider what we are actually good at.
There are things that I think are very cool; there are lots of projects that I've sort of wanted to do for the last decade that I have pushed off because they're reasonably high effort and I don't want them that much, so being able to have a pretend intern write it for me has been great.
On the other hand, I do think that using Claude/Codex to do all the coding at work has become a little soul sucking. Now instead of being paid to do fun software work, a lot of my work still boils down to babysitting interns.
When I do get to work on projects that are interesting, it's still fun because I can justify writing TLA+, and using that as a guiding spec for my projects. The problem is that most work really isn't that interesting; a lot of it is glorified SQL queries, or CRUD, or "put thing into Kafka in one place, and take it out in another place". Those jobs can be tedious, but they aren't interesting, and now instead of even getting that, I yell at Codex to do it and I awkwardly sit and wait.
I didn't think I'd miss writing stupid CRUD apps, but here we are.
Codex and Claude have been a bit soul sucking. I feel like I'm doing less of the planning and the like. I acknowledge that most code that makes it into production doesn't have to be amazing, but I would still take some level of pride when I would figure out an interesting optimization, even for a simple CRUD app, and now I am somewhat deprived of that kind of stuff.
I try to always be coding on complex issues myself, while offloading the boring, non-architectural, boilerplate-heavy tasks to it in the background in a git worktree.
I ask it to work in small iterations and commit every step of the way. After my coding session is done, I can go back and review its code.
I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.
I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.
I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.
I have a pet theory that perhaps the optimal way to use AI will be more like an "exoskeleton" that turns you into a super-human programmer. Something that plugs the deficiencies of the human programmer, rather than replacing you entirely.
This sounds a lot like "You can skip the fundamentals of basketball and just focus on dunking!"
Yes -- now let's talk about the correct form of fighting back.
It is not "I don't want to feel self-doubt so I will suppress that feeling."
It is, "The self-doubt is valuable -- it's pushing me to improve."
The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.
* actually writing more on my own - created a personal blog just to get myself to write more
* upleveling my thinking - think more about problems and framing
* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems
* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI
I wonder lately: doesn't all that new knowledge push out the old? As in, new things replace old things we know. I don't know of any studies on this, but do we have infinite capacity for knowledge?
What about retaining it? I catch myself asking AI about random things that pop into my head, reading the answer, maybe using that knowledge once, and later no longer remembering what it was. Maybe it sticks if you put the knowledge into practice from the get-go, but projects get so complicated that sometimes it seems like there is not enough space in my brain for the things AI is teaching me.
Another way of looking at what you said is that practicing the new knowledge takes the place of practicing the old knowledge. So it isn't the knowledge that is replaced, but the learning (imprinting).
Retaining (again just speaking for myself) requires actually using / applying the knowledge at some point within some timeframe of learning it. Otherwise yeah it fades to the point of disappearing over time.
To do the thing I hate and use an analogy: It's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot to make the furniture, in English. Yes, the people who were already good at furniture-making will have an advantage in how to direct the robot - but the salient point is that it's a recipe for misery for many people.
Can you point a LLM at a body of code, and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural language specifications to code causes the LLM blithering problem to generate too much code.
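In that spirit, a crude sketch of a machine-friendly structural summary (my illustration, not a proposal from the thread): Python's stdlib ast module can already emit a terse class/method outline that both humans and models can read.

```python
import ast
import sys

# Walk a Python source file and print a concise class/method outline,
# one fact per line -- a rough stand-in for a "UML for AIs" format.
def outline(path):
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            bases = ", ".join(ast.unparse(b) for b in node.bases)
            print(f"class {node.name}({bases})")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = ", ".join(a.arg for a in item.args.args)
                    print(f"  def {item.name}({args})")

if __name__ == "__main__":
    outline(sys.argv[1])
```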
I think they were making a joke about us getting dumber that I am confused about the premise of.
You seem to be suggesting we are going to fill spreadsheets (which Claude already does pretty well) and that spatial reasoning is an insurmountable problem instead of just something that doesn't emerge naturally from training on text/code corpora.
I don't think I think less when writing Clojure or Rust than I would writing raw assembly code, I just broaden the scope of my projects to fill up my thinking capacity.
Like GC languages help me do more productive work by hiding useless info about memory.
Assembly is a stretch (albeit a few applications still need it), but otherwise that sentiment (and the people who actually believe it) speaks a lot to me about what makes today's PCs slower, higher-latency, and less enjoyable to use than the machines of the past. Today's world sucks.
2. we will think of verification loop more - tasks will be chosen that have more ability to be easily verified
3. the concept of the difference between "generation" and "verification" will be more mainstream [1]
4. spec driven development will become more common
5. scenario testing will become mainstream
I have a few more predictions like these.
[1] I wrote a blog post on this explaining why I keep this generation vs verification difference in many parts of life https://simianwords.bearblog.dev/the-generation-vs-verificat...
> 4. spec driven development will become more common
I do believe both of these. Recently someone created an open-source RAR alternative, covering all its versions, using LLM agents, precisely because of that spec-driven and verification/easy-debug (or compile-time) aspect.
On the other hand, I was making a GUI application (a rough scratchpad app) in Odin, and there were so many bugs that I had to explain each one, and even then it was like a lottery; unpredictable would be the better word, as it would fix one thing and break another, or just not fix it at all.
At the end of the day, for GUI apps it just doesn't have any great way of testing its work. There are many GUI things where I feel LLMs are still underwhelming, especially if you wish to create a GUI in a niche language.
It can do it, but the workflow is so bad that it might just not be worth it. I do wonder if GUI development becomes the one thing AI can't do, leaving those developers' jobs safe.
I was just scrolling upwork randomly and I saw tons of flutter & wordpress jobs.
I'm still concerned enough about the specifics to worry about background refresh tokens silently failing in OAuth in a mission-critical real-time system.
I'm not coding it, but I'm still thinking it. That's the important part, ain't it? Is it dumb, or just clever delegation?
I like making things myself, I have self-navigating robotics projects I do on my own time, but I'm not gonna use an AI to do it for me, the joy I get is figuring it out myself.
I will use AI if I'm stuck on something or need a specific algo written that I've spent enough time on and couldn't figure out.
You lose some ways of thinking if AI does all the coding, unless you study that code. But that's still different from creating the code yourself.
Same with authors or songwriters: their brains are used to creating stories and songs, which is why it's easier for them to come up with ideas.
If you are just a reader or listener your brain doesn’t get wired in the same way.
AI is a lesser problem for already experienced developers, because they just lose some abilities but new developers will never get those abilities in the first place, which will limit their thinking especially for edge cases that need creativity
I do think that what people are being paid might get adjusted to whatever is happening.
First it was offshoring; now tech companies have a convenient excuse to lay people off, and they genuinely believe companies can be 5-10x smaller with AI and that 90% of code will be written by AI.
They then push it on engineers, and some adopt it, some don't. It becomes Goodhart's law: people start spending tokens just to look good and spearhead AI use because, hey, 1) corporate is recommending it, and 2) the points you talked about.
The AI bill blows up (Cloudflare spends $5 million per month, probably more, on AI bills, IIRC), and with all of this, the company fires people.
The laid-off software engineers then all try to create another AI tool (...using AI) or out-compete each other in a job market at one of its all-time lows. Combine this with the trillion-plus dollars of US stock market value attached to the AI bubble.
I do think you are paying a price in all of this. Job insecurity feels like it's at an all-time high; from what I can tell, people in this career are simply scared of losing their jobs. Some are closer to retirement than others, but that's about it.
I think nobody is that happy, to be honest. The software engineer is worried about his job, the CEO is worried about being replaced or having his product replaced by AI, the AI company is worried about how it will ever be profitable, the investors are worried they bought into a bubble, the government is worried about all these people, and distractions (think UFO files, for example) and wars are successfully diverting our attention from the real issues.
I don't know, but I think we are all paying a price, and I say this as someone (a young guy in his teens) who sometimes feels the most over-empowered by AI. I just feel like we lost something more critical along the way. We lost some sense of our humanity and peace as we embed this technology, with people thinking about it 24/7. To be honest, I sometimes feel I would have fared okay without the AI thing, and I don't care much about my personal gains; I think the world would probably have been net positive if AIs had plateaued or were never created.
This thing will explode in our faces sooner or later. Also makes me feel like an imposter rather than an engineer.
Maybe that’s actually what I have become.
You need to spend time on coding without agents and writing without AI as practice if nothing else.
You should not get complacent in offloading all detail oriented work to agents.
And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?
I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.
Today I'm forcing myself to learn SwiftUI and to type each character with my own hands. There is a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibecode full apps, but I'm not going to pretend my experience isn't playing a massive role in guiding the models.
Don't let AI take away your joy of building stuff; it's totally fine to not be "productive" and to take your time. Just force yourself to have at least 2 AI-free days every week.
I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.
Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.
Sometimes the coding tools come back with better ideas than I came up with. Sometimes my idea is much better. Most often, with medium-to-high-complexity problems, if the AI comes up with a working solution, at best it has enough problems that an attentive human reviewer would have rejected it. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools for everything.
This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.
When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?
I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.
Thanks for your comments. I didn't realise until I read this.
During the "don't make me think" era of software design, if you wanted to make software you got really good at identifying the use case and using design thinking to optimize the paths to goal. You could make a business around a very narrow set of flows. The only thinking a user had to do was pick The App for That. They never had to think about how they want to approach their task, which is a skill in itself.
AI isn't like that. There's a million ways to use it. That's a big part of what makes it cool, but it requires the user to thoughtfully approach their workflows. Not everyone is used to doing that.
We'll have AGI not because AI is getting smarter, but because we are getting dumber.
Basically, I only use it for things I wouldn't otherwise have the time to write but that aren't important for me to write myself.
I actually can't fathom using it for writing as a matter of principle. To me it's just a keyboard extension for code generation, never a replacement for the written word, which should be in my voice, a stream coming fully from my mind, something I have full editorial awareness and memory of.
Now that I think about it I’m a snob in this regard, I turn my nose up at people that use AI to write things that are purely written, in my mind using it for writing is defeating the purpose of writing!
If I'm coding a new feature, I do one step and check the code: running git diff, reading the changes, or just asking Codex to show me what changed.
If I'm writing an article, I ask for only one paragraph. I read the paragraph and, if it's OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.
If I'm doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results so I can see whether everything is going in a good direction and there are no hallucinations; I also have follow-up prompts that make the AI verify its results. If all looks good, I continue to the next step.
I don't like the situation where I ask the AI to do all the code changes, the whole article, or the whole data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: it clearly doesn't reflect your thoughts.
Maybe step-by-step is the way to use AI and not feel dumber.
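Concretely, the coding loop above amounts to something like this (a rough sketch using plain git, nothing tool-specific; the commit message is just a placeholder):

    # let the tool do one small step, then review before accepting anything
    git diff                     # read every change it made
    git add -p                   # stage only the hunks you actually approve
    git commit -m "step N: ..."  # keep each accepted step small
    git restore .                # discard whatever you rejected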
As for the topic at hand, I work with someone who, whenever you ask them a question, says "AI says..." I'm not a big fan of that.
AI would have to know I have a VPN active
However, if I were to release a solution that I 'vibe-coded' into the wild, I would feel quite a bit of shame if someone figured out that I used an LLM to write the entire thing. I know it may come off as a bit silly, but it is a feeling I cannot seem to shake, and it prevents me from wanting to adopt the technology in full force, because... well, I did not truly create the software if AI did all the work. Sure, the software might have been my idea, but that does not bring me much fulfillment.
I know programming is just a means to an end, but I feel like I have put in a lot of hard work over the past decade and a half just to barely scratch the surface of mediocrity. I was attracted to this field because I saw a sense of beauty in computer science (and programming). It felt like one of the few remaining options for a creative job that was spared the cutthroat nature of a career in the arts.
Like the Samurai class during the early industrialization of Japan, maybe it's time for me to lay down my sword too.
The work rhythm has ballooned, and as every co-worker is now pushing work (generally mediocre, but acceptable thanks to strong codebase fundamentals and them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver however much output makes stakeholders happy.
I am less and less fond of this work.
I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.
Every day, I lean a little more toward changing industries.
I love code and programming and solving product problems. But the job has changed dramatically.
If the pay-plus-comfort situation weren't that good, I would've done it already.
It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.
Based on the MIT and MSFT studies.
1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.
2. another general AI: You have to @-mention it in chat. It reacts to the message with an <eyes emoji>, but never actually posts a comment?
3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.
I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.
Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which does not let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. And literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature); no, instead I get English prose.
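For reference, the suggested-change feature being complained about here is a review comment whose body contains a fenced block tagged `suggestion`; GitHub renders it as a one-click applicable replacement for the commented line(s). The replacement line below is a made-up example:

    ```suggestion
    const timeoutMs = DEFAULT_TIMEOUT_MS;
    ```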
Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
And of course the hedonic treadmill (if that concept even holds any more, IDK) has reset the baseline so that anything less than quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like a chore compared to just cranking out features with code only an AI could love.
Entering the workforce happens at an age by which people have built (some more rudimentary than others) a level of understanding and self-control regarding delayed gratification and Type II fun.
Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?
And I'm not alone here. Like I said, I was discussing this with a bunch of friends who are also quite senior and accomplished in their engineering careers, and the sentiment was familiar for us all.
Do you really think phrasing a question like this will ever induce a productive response?
I think it's pretty normal to be able to reflect on the difference between my own life skills and those I see in others. There are things I've struggled with throughout adulthood because, through some happenstance, I was able to avoid that class of challenge as a child.
I didn't learn how to study until my 20s. I didn't have willpower over eating and exercise until my body changed around 30 and I suddenly got fat. And I've talked with friends who teased me for being less skilled at something than a teenage version of themselves.
What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?
So, for example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back to it, I was able to finish it.)
Another example, the one that's biting me the most: I wanted to create a copy.sh/v86-based thing where you can edit the .img files of distros and save them, all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.
I mean, this is just an optional project; I just thought, hey, it would be fun to edit .img files in the browser, but now it leaves me disappointed.
I think that disappointment is partly frustration at the thing not working, and partly the realization that I might be dropping the idea altogether. I must admit this is a field I have absolutely no expertise in, but it still feels disappointing, and I've kept thinking about it for some time now.
I wonder how many people feel this: when AI is unable to build their project, they get frustrated, disappointed, even a little panicked. I think it's just wrong how damn much we're relying on LLMs at this point. It feels like the whole economy is doing what I'm doing, but with billions of dollars.
Another thing I feel is that young and old people end up much the same when vibe-coding. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel like we are both forsaking junior developers and forsaking the expertise built up by senior developers as we replace them with these LLMs.
If you don't do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that's a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that's a bit less harsh: you have the LLM flip back and forth between defending and prosecuting your work.
I think this should be the default setting, but it doesn't encourage engagement; the average customer will think the LLM is a mean jerk if it starts off like that.
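For what it's worth, the debate-club setup needs nothing fancy; a prompt along these lines (the wording here is mine, purely illustrative) is enough:

    Review the attached draft in two roles. As the prosecution,
    make the strongest case that the work is flawed and should be
    rejected. As the defense, rebut the prosecution point by point.
    Alternate roles for three rounds, then list which criticisms
    survived rebuttal.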
Most people, given a nail gun, can't build a house; that's where the skill is...
I'm not someone whose validation comes from the lines of code, but from the resulting working system.