Right now I see the former as being hugely risky. Hallucinated bugs, coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level. (And astounding that CEOs haven't made that connection yet).
With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?
Additionally, I personally find my best ideas often happen when knee deep in some codebase, hitting some weird edge case that doesn't fit, that would probably never come up if I was just reviewing an already-completed PR.
I think if you orient your experimentation right you can think of some good tactics that are helpful even when you're not using AI assistance. "Making this easier for the robot" can often align with "making this easier for the humans" as well. It's a decent forcing function.
Though I agree with the sentiment. People who have been doing this for less than a year are convinced that they have some permanent lead over everyone.
I think a lot about my years of teaching myself programming. Years spent spinning my wheels. I know people who, after 3 months of a coding bootcamp, were much further along than me after like ... 6 years of struggling through material.
or, perhaps, in the same way that google-fu over time became devalued as a skill as Google became less useful for power users in order to cater to the needs of the unskilled, it will not really be a portable skill at all, because it is in the end a transitory or perhaps easily attainable skill once the technology is evenly distributed.
It's like saying if you don't learn to use a smartphone you'll be left behind. Even babies can use it now.
The AI will get better at compensating, but I think some of its weaknesses are fundamental, and are going to be showing up in some form or another for a while yet.
Ex, the AI doesn't know about what you don't tell it. There's a LOT of context we take for granted while programming (especially in a corporate environment). Recognizing what sort of context is useful to give the AI without distracting it (and under what conditions it should load/forget context), I think is going to be a very valuable skill over the next few years. That's a skill you can start building now
I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)
If I could have the best of both worlds, that would be a genuine win, and I don't think it's impossible. It won't save as much time as pure vibe coding promises to, of course.
> I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)
When I review code, I try to genuinely understand it, but it's a huge mental drain. It's just a slog, and I'm tired at the end. Very little flow state.
Writing code can get me into a flow state.
That's why I pretty much only use LLMs to vibecode one-off scripts and do code reviews (after my own manual review, to see if it can catch something I missed). Anything more would be too exhausting.
An alternative that occurred to me the other day is, could a PR be broken down into separate changes? As in, break it into a) a commit renaming a variable b) another commit making the functional change c) ...
Feel like there are PR analysis tools out there already for this :)
I can confidently say that being able to prompt and train LoRAs for Stable Diffusion makes zero difference for your ability to prompt Nano Banana.
Using nano banana does not require arcane prompt engineering.
People who have not learnt image prompt engineering probably didn't miss anything.
The irony of prompt engineering is that models are good at generating prompts.
Future tools will almost certainly simply “improve” your naive prompt before passing it to the model.
Claude already does this for code. I'd be amazed if nano banana doesn't.
People who invested in learning prompt engineering probably picked up useful skills for building ai tools but not for using next gen ai tools other people make.
It's not wasted effort; it's just increasingly irrelevant to people doing day-to-day BAU work.
If the API prevents you from passing a raw prompt to the model, prompt engineering at that level isn't just unnecessary; it's irrelevant. Your prompt will be transformed into an unknown internal prompt before hitting the model.
Nano Banana is actually a reasoning model, so yeah, it kinda does, but not in the way one might assume. If you use the API you can dump the text part, and it's usually huge (and therefore expensive, which is one drawback). It can even have an "imagery thinking" process...!
Until then, I keep up and add my voice to the growing number who oppose this clear threat on worker rights. And when the bubble pops or when work mandates it, I can catch up in a week or two easy peasy. This shit is not hard, it is literally designed to be easy. In fact, everything I learn the old way between now and then will only add to the things I can leverage when I find myself using these things in the future.
Oooh, let me dive in with an analogy:
Screwdriver.
Metal screws needed inventing first - they augment or replace dowels, nails, glue, "joints" (think tenon/dovetail etc), nuts and bolts and many more fixings. Early screws were simply slotted. PH (Phillips cross head) and PZ (Pozidriv) came rather later.
All of these require quite a lot of wrist effort. If you have ever driven a few hundred screws in a session then you know it is quite an effort.
Drill driver.
I'm not talking about one of those electric screw driver thingies but say a De W or Maq or whatever jobbies. They will have a Li-ion battery and have a chuck capable of holding something like a 10mm shank, round or hex. It'll have around 15 torque settings, two or three speed settings, drill and hammer drill settings. Usually you have two - one to drill and one to drive. I have one that will seriously wrench your wrist if you allow it to. You need to know how to use your legs or whatever to block the handle from spinning when the torque gets a bit much.
...
You can use a modern drill driver to deploy a small screw (PZ1, 2.5mm) to a PZ3 20+cm effort. It can also drill with a long auger bit or hammer drill up to around 20mm and 400mm deep. All jolly exciting.
I still use an "old school" screwdriver or twenty. There are times when you need to feel the screw (without deploying an inadvertent double entendre).
I do find the new search engines very useful. I will always put up with some mild hallucinations to avoid social.microsoft and nerd.linux.bollocks and the like.
This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.
The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.
The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.
My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry shaking phenomenon could mean death. If that FOMO is justified then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless of course they bet too hard on a fad, and the company may go down in flames or be eclipsed by competitors.
Ideally there is a healthy tension between future looking bets and on-the-ground performance of new tools, techniques, etc.
They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't and the stock price suffers because of that (merely because the "perception" that your company has fallen behind affects market value), then that is an issue. There's no true long-term planning at play; otherwise you wouldn't see such obvious copycat behavior amongst CEOs, such as pandemic overhiring.
Testing medical drugs is doing science. They test on mice because it's dangerous to test on humans, not to restrict scope to small increments. In doing science, you don't always want to be extremely cautious and incremental.
Trying to build a browser with 100 parallel agents is, in my view, doing science, more than adopting science. If they figure out that it can be done, then people will adopt it.
Trying to become a more productive engineer is adopting science, and your advice seems pretty solid here.
You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.
I notice that I get into this automatically during AI-assisted coding sessions if I don't lower my standards for the code. Eventually, I need to interact very closely with both the AI and the code, which feels similar to what you describe when coding manually.
I also notice I'm fresher because I'm not using many brain cycles to do legwork, so maybe I'm actually getting into more situations where I'm getting good ideas because I'm tackling hard problems.
So maybe the key to using AI and staying sharp is to refuse to sacrifice your good taste.
When people talk about this stuff they usually mean very different techniques. And last months way of doing it goes away in favor of a new technique.
I think the best you can do now is try lots of different new ways of working and keep an open mind.
Note, if staying on the bleeding edge is what excites you, by all means do. I'm just saying for people who don't feel that urge, there's probably no harm just waiting for stuff to standardize and slow down. Either approach is fine so long as you're pragmatic about it.
Everything slows down eventually. What makes you think this won't?
The real profits are the companies selling them chips, fiber, and power.
Another similar wager I remember is: “What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing?”
It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest to fix if the AI goes off track (it's just writing some notes in plain English) so there is a slot-machine-like intermittent reinforcement to it ("will it get everything right with one shot?") but it's quite benign by comparison with trying to audit and fix slop code.
There is zero evidence that LLMs improve software developer productivity.
Any data-driven attempts to measure this give ambivalent results at best.
Put another way, the ability to use AI became an important factor in overall software engineering ability this year, and as the year goes on, the gap between the best and worst users of AI will widen faster because the models will outpace the harnesses.
Is it, lol? Know any case where those “the best users of AI” get salary bumps or promotions? Outside of switching to the dedicated AI role that is? So far I see clowns doing triple the work for the same salary.
If you just mean, "hey you should learn to use the latest version of Claude Code", sure.
Until coding systems are truly at human-replacement level, I think I'd always prefer to hire an engineer with strong manual coding skills than one who specializes in vibe coding. It's far easier to teach AI tools to a good coder than to teach coding discipline to a vibe coder.
You think it's going to get harder to use as time goes on?
that's nowhere near guaranteed
A simple plan -> task breakdown + test plan -> execute -> review -> revise (w/optional loops) pipeline of agents will drastically cut down on the amount of manual intervention needed, but most people jump straight to the execute step, and do that step manually, task by task while babysitting their agent.
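As a rough illustration of the pipeline described above (not any particular framework's API; `run_agent` is a hypothetical stand-in for whatever model call your harness provides), the stages might be wired together like this:

```python
# Sketch of a plan -> task breakdown + test plan -> execute -> review -> revise
# agent pipeline. The agent call is stubbed out; the control flow is the point.

def run_agent(role: str, prompt: str) -> str:
    # Placeholder: in a real harness this would invoke your LLM/agent.
    return f"[{role} output for: {prompt}]"

def pipeline(feature_request: str, max_revisions: int = 3) -> str:
    plan = run_agent("planner", feature_request)
    tasks = run_agent("breakdown", plan)
    test_plan = run_agent("test-planner", tasks)
    result = run_agent("executor", f"{tasks}\n{test_plan}")
    # Optional review/revise loop: stop early if the reviewer approves.
    for _ in range(max_revisions):
        review = run_agent("reviewer", result)
        if "LGTM" in review:
            break
        result = run_agent("reviser", f"{result}\n{review}")
    return result
```

The value of structuring it this way is that the human intervenes at stage boundaries (approving the plan, spot-checking the review) rather than babysitting every task.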
Boot time.
Understandability. A Z80 processor was a lot more understandable than today's modern CPUs. That's worse.
Complexity. It's great that I can run python on a microcontroller and all, but boring old c was a lot easier to reason about.
Wtf is a typescript. CSS is the fucking worst. Native GUI libraries are so much better but we decided those aren't cool anymore.
Touchscreens. I want physical buttons that my muscle memory can take over and get ingrained in and on. Like an old stick shift car that you have mechanical empathy with. Smartphones are convenient as all hell, but I can't drive mine after a decade like you can a car you know and feel, that has physical levers and knobs and buttons.
Jabber/Pidgin/XMPP. There was a brief moment around 2010? when you didn't have to care what platform someone else was using, you could just text with them on one app. Now I've got a dozen different apps I need to use to talk to all of my friends. Beeper gets it, but they're hamstrung. This is a thing that got worse with computers!
Ever hear of Wirth's law? https://en.wikipedia.org/wiki/Wirth%27s_law
Computers are stupid fast these days! Why does it take so long to do everything on my laptop? My Mac's Spotlight index is broken, so it takes roughly 4 seconds to query the SQLite database or whatever just so I can open Preview.app. I can open a terminal and open it myself in that time!
And yes, these are personal problems, but I have these problems. How did the software get into such a state that it's possible for me to have this problem?
A godsend.
> Native GUI libraries are so much better but we decided those aren't cool anymore.
Lolno.
My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.
Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.
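To make that failure mode concrete, here's a hypothetical reconstruction of the bug class described above in Python. The tax code names and rates are invented for illustration, not taken from the commenter's codebase:

```python
# Buggy pattern: falling back to matching tax codes by rate instead of by
# explicit code. Two codes share an effective rate of 0.0, so the rate-match
# silently picks whichever one is iterated first.

TAX_CODES = {
    "EXEMPT_EXPORT": 0.0,
    "ZERO_RATED_DOMESTIC": 0.0,
    "STANDARD": 0.20,
}

def lookup_by_rate(rate: float) -> str:
    # The subtle fallback: ambiguous when rates collide.
    for code, r in TAX_CODES.items():
        if r == rate:
            return code
    raise KeyError(rate)

def lookup_by_code(code: str) -> float:
    # Explicit mapping: unambiguous by construction.
    return TAX_CODES[code]

# lookup_by_rate(0.0) always returns "EXEMPT_EXPORT", even when the
# transaction was actually ZERO_RATED_DOMESTIC. The code "looks correct"
# in review because the happy path works for non-colliding rates.
```

The fix is structural, not a tweak: never recover an identifier from a non-unique attribute.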
Have you tried explicitly asking them about the latter? If you just tell them to code, they aren't going to work on figuring out the software engineering part: it's not part of the goal that was directly reinforced by the prompt. They aren't really all that smart.
Also, it prevents repetitive strain injury. At least, it does for me.
The workflow that seems more perilous is the one where the developer fires up gas town with a vague prompt like "here's my crypto wallet please make me more money". We should be wielding these tools like high end anime mech suits. Serialized execution and human fully in the loop can be so much faster even if it consumes tokens more slowly.
I have like 15 personalized apps now, mostly chrome extensions
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development, 2nd Edition, Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
You can choose to grow in both areas.
[0]: https://kerrick.blog/articles/2025/kerricks-wager/
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
There's a good reason that most successful examples of those tools like openspec are to-do apps etc. As soon as the project grows to 'relevant' size of complexity, maintaining specs is just as hard as whatever other methodology offers. Also from my brief attempts - similar to human based coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at all the implicit things much more than humans do. So you'll see more flip-flopped things you did not specify, and if you nail everything down hard, the specs get unwieldy - large and overly detailed.
That's a rather short-sighted way of putting it. There's no way the spec is anywhere near as unwieldy as the actual code, and the more details, the better. If it gets too large, work on splitting a self-contained subset into a separate document.
I disagree - the spec is more unwieldy, simply by the fact of using ambiguous language without even the benefit of a type checker or compiler to verify that the language has no ambiguities.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while and come back after a few weeks. One notices that even though the world didn't stop spinning, you didn't get the whiplash of every rotation.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me with mental models for picking up other paradigms later, similar to how learning Java doesn't actually prevent you from, say, picking up Python or JavaScript.
Addiction occurs because as humans we bond with people but we also bond with things. It could be an activity, a subject, anything. We get addicted because we're bonded to it. Usually this happens because we're not in fertile grounds to bond with what we need to bond with (usually a good group of friends).
When I look at addicted people, a lot of them bond with people who have not-so-great values (big house, fast cars, designer clothing, etc.), adopt those values themselves, and get addicted to drugs. These drugs are usually supplied by the people they bond with. However, they bond with those people in the first place because of being aimless and receiving little guidance in their upbringing.
I'm just saying all that to make it more concrete with what I mean about "good people".
Back to LLMs. A lot of us are bonding with it, even if we still perceive it as an AI. We're bonding with it because certain emotional needs are not being fulfilled. Enter a computer that will listen endlessly to you and is intellectually smarter than most humans, albeit one that makes very very dumb mistakes at times (like ordering 1,000+ drinks when you ask for a few).
That's where we're at right now.
I've noticed I'm bonded with it.
Oh, and to those who feel this opinion is a bit strong: it is. But consider that we used to joke that "Google is your best friend" when it first came out and long thereafter. I think there's something to this take, but it's not fully in the right direction.
No, it's different from other skills in several ways.
For one, the difficulty of this skill is largely overstated. All it requires is basic natural language reading and writing, the ability to organize work and issue clear instructions, and some relatively simple technical knowledge about managing context effectively, knowing which tool to use for which task, and other minor details. This pales in comparison with the difficulty of learning a programming language and classical programming. After all, the entire point of these tools is to lower the skill required for tasks that were previously inaccessible to many people. The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I would argue that the results depend far more on the user's familiarity with the domain than their skill level. Domain experts know how to ask the right questions, provide useful guidance, and can tell when the output is of poor quality or inaccurate. No amount of technical expertise will help you make these judgments if you're not familiar with the domain to begin with, which can only lead to poor results.
> might be useful now or in the future
How will this skill be useful in the future? Isn't the goal of the companies producing these tools to make them accessible to as many people as possible? If the technology continues to improve, won't it become easier to use, and be able to produce better output with less guidance?
It's amusing to me that people think this technology is another layer of abstraction, and that they can focus on "important" things while the machine works on the tedious details. Don't you see that this is simply a transition period, and that whatever work you're doing now, could eventually be done better/faster/cheaper by the same technology? The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
I'm of the opinion that this goal is unachievable with the current tech generation, and that the bubble will burst soon unless another breakthrough is reached. In the meantime, your own skills will continue to atrophy the more you rely on this tech, instead of on your own intellect.
You’re right. I’m going back to writing assembly. These compilers have totally atrophied my ability to write machine code!
And it seemed pretty clear to me that they would have to do with the sort of evergreen software engineering and architecture concepts that you still need a human to design and think through carefully today, because LLMs don't have the judgment or the high-level view for that. Not the specific API surface area or syntax of particular frameworks, libraries, or languages, which LLMs, IDE completion, and online documentation mostly handle.
Especially since well-designed software systems, with deep and narrow module interfaces, maintainable and scalable architectures, well-chosen underlying technologies, clear data flow, and so on, are all things that can vastly increase the effectiveness of an AI coding agent, because they mean it needs less context to understand things, can reason more locally, etc.
To be clear, this is not about not understanding the paradigms, capabilities, or affordances of the tech stack you choose, either! The next books I plan to get are things like Modern Operating Systems, Data-Oriented Design, Communicating Sequential Processes, and The Go Programming Language, because low-level concepts, too, are things you can direct an LLM to optimize, if you give it the algorithm, but which they won't do themselves very well, and are generally also evergreen and not subsumed in the "platform minutiae" described above.
Likewise, stretching your brain with new paradigms (actor-oriented, Smalltalk OOP, Haskell FP, Clojure FP, Lisp, etc.) gives you new ways to conceptualize and express your algorithms and architectures, and to judge and refine the code your LLM produces. Ideas like BDD, PBT, and lightweight formal methods (like model checking) all provide direct tools for modeling your domain, specifying behavior, and testing it far better, which then allow you to use agentic coding tools with more safety and confidence (and a better feedback loop for them). At the limit, this almost creates a way to program declaratively in executable specifications, then convert those to code via LLM, and then test the latter against the former!
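As a taste of the PBT idea mentioned above, here is a minimal hand-rolled property check using only the standard library (real tools like Hypothesis add input strategies, shrinking of failing cases, and failure replay):

```python
import random

# Minimal property-based test: generate many random inputs and assert a
# property holds for all of them. The properties here are that sorting is
# idempotent and length-preserving.

def check_property(prop, trials: int = 200) -> None:
    rng = random.Random(0)  # fixed seed so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        assert prop(xs), f"property failed on {xs}"

check_property(lambda xs: sorted(sorted(xs)) == sorted(xs))
check_property(lambda xs: len(sorted(xs)) == len(xs))
```

The same harness works as an executable specification against LLM-generated code: state the property, then let the generator hunt for counterexamples.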
You'll probably be forming some counter-arguments in your head.
Skip them, throw the DDD books in the bin, and do your co-workers a favour.
But it should be a philosophy, not a directive. There are always tradeoffs to be made, and DDD may be the one to be sacrificed in order to get things done.
https://www.amazon.com/Learning-Domain-Driven-Design-Alignin...
It presents the main concepts like a good lecture and a more modern take than the blue book. Then you can read the blue book.
But DDD should be taken as a philosophy rather than a pattern. Trying to follow it religiously tends to result in good software, but it's very hard to nail the domain well. If refactoring is no longer an option, you will be stuck with a suboptimal system. It's more something you want to converge to in the long term rather than getting right early. Always start with a simpler design.
Thanks for the recommendation!
I use AI for the mundane parts and for brainstorming bugs. It is actually more consistent than me in covering corner cases, making sure guard conditions exist, etc. So I now focus more on design/architecture and what to build, and not minutiae.
- AI creating un-opinionated summaries of PRs to help me get started reviewing
- AI being an interactive tutor while I’ll still do the hard work of learning something new [1]
- AI challenging my design proposal QA style, making me defend it
- boilerplate and clear refactorings, while I’ll build the abstractions
[1] https://www.dev-log.me/jokes_on_you_ai_llms_for_learning/
The initial speed is exactly what the article describes, a Loss Disguised as a Win.
Thank you for not using an LLM.
I would have thought sanity checking the output to be the most elementary next step.
A lot of the time the issue isn't actually the code itself but larger architectural patterns. But realizing this takes a lot of mental work. Checking out and just accepting what exists, is a lot easier but misses subtleties that are important.
So vibers may be assuming the AI is as reliable, or at least can be with enough specs and attempts.
That's most code when you're still working on it, no?
> Also, multiple agents can run at once, which is a workflow for many developers. The work essentially doesn't come to a pausing point.
Yeah the agent swarm approach sounds unsurvivably stressful to me lol
People seem to think that just because it produces a bunch of code you therefore don’t need to read it or be responsible for the output. Sure you can do that, but then you are also justifying throwing away all the process and thinking that has gone into productive and safe software engineering over the last 50 years.
Have tests, do code reviews, get better at spec’ing so the agent doesn’t wing it, verify the output, actively curate your guardrails. Do this and your leverage will multiply.
I’ve also found AI-assisted stuff is remarkable for implementing algorithmically complex things.
However one thing I definitely identify with is the trouble sleeping. I am finally able to do a plethora of things I couldn’t do before due to the limits of one man typing. But I don’t build tools I don’t need, I have too little time and too many needs.
AI is really good to rubber duck through a problem.
The LLM has heard of everything… but learned nothing. It also doesn't really care about your problem.
So, you can definitely learn from it. But the moment it creates something you don't understand, you've lost control.
You had one job.
I have the exact same experience... if you don't use it, you'll lose it
But yes, I usually constrain my plans to one function, or one feature. Too much and it goes haywire.
I think a side benefit is that I think more about the problem itself, rather than the mechanisms of coding.
The failure mode isn't "AI writes bad code." It's "developer accepts bad code without reading it." Those are very different problems with very different solutions.
I've found the sweet spot is treating AI output like a pull request from a very fast but somewhat careless junior dev — you still review every line, you still understand the architecture, you still own the decisions. But the first draft appears in seconds instead of hours. The time savings compound when you know the codebase well enough to immediately spot when the AI is heading in the wrong direction.
The people getting burned are the ones skipping the review step and hoping for the best.
1. It's turning the Engineering work into the worst form of QA. It's that quote about how I want AI to do my laundry and fold my clothes so I have time to practice art. In this scenario the LLM is doing all the art and all that's left is the doing laundry and folding it. No doubt at a severely reduced salary for all involved.
2. Where exactly is the skill to know good code from bad code supposed to come from? I hear this take a lot, but I don't know any serious engineer who can honestly say they can recognize good code from bad code without spending time actually writing code. It makes the people asking for this look like that meme comic about the dog demanding you play fetch but not take the ball away. "No code! Only review!" You don't get one without the other.
Answer: Books. Two semesters of "Software Engineering" from a CS course. CS classes: Theory of Computing (work, a.k.a. O(N) notation; Turing machines; alphabets; search algorithms and when/why to use them). Data Structures (teaches you about RAM vs. disk storage). Logic, a.k.a. Discrete Math (hardware stuff = logic; also teaches you how to convert procedures into analytic solutions into numerical solutions, i.e. a single function that gives you an answer by determining the indeterminate of an inductive argument: converting a series, procedure, or recursive function into an equation that gives you an answer instead of iterating and being dumb). Networking (error-checking techniques, P2P stuff). Compilers (Dragon book). Math: Linear Algebra (rocket science), Abstract Algebra (crypto stuff, compression), Theory of Equations (functional programming), Statistics (very helpful), Geometry (proofs).
Taking all these classes makes you smart and a good programmer. "Programming" without them means you're... well. Hard to talk to.
I don't think you need to write any code to be a good programmer. IMHO.
Also, again, this logic only works on absolute greenfield projects. If you write enterprise code in large organizations, you also have to consider the established architecture and patterns of the codebase. There's no book or, usually, cohesive documentation for that. There's a reason a lot of devs aren't considered fully on-boarded until after a year.
If you leverage the LLM to write the code for you, then you never learn about your own codebase, and thus you cannot perform good code review. Which, again, is why I say reviewing code while never writing code is a paradoxical statement. You don't have the skills to do the former without doing the latter.
Even if your take is that typing code into a keyboard was never the main part of your job, then the question is: OK, what is it? And if the answer is being an architect, then I ask you: how can you know what code patterns work for this specific business need when you don't write code?
(i.e. I don't think that's your honest opinion and you're just trolling)
If you keep some for yourself, there’s a possibility that you might not churn out as much code as quickly as someone delegating all programming to AI. But maybe shipping 45,000 lines a day instead of 50,000 isn’t that bad.
The people on the start of the curve are the ones who swear against LLMs for engineering, and are the loudest in the comments.
The people on the end of the curve are the ones who spam about only vibing, not looking at code and are attempting to build this new expectation for the new interaction layer for software to be LLM exclusively. These ones are the loudest on posts/blogs.
The ones in the middle are people who accept using LLMs as a tool, and like with all tools they exercise restraint and caution. Because waiting 5 to 10 seconds each time for an LLM to change the color of your font, and getting it wrong, is slower than just changing it yourself. You might as well go in and make these tiny adjustments yourself.
It's the engineers at both ends that have made me lose my will to live.
When someone vibe-codes a project, they typically pin whatever dependency versions the LLM happened to know about during training. Six months later, those pinned versions have known CVEs, are approaching end-of-life, or have breaking changes queued up. The person who built it doesn't understand the dependency tree because they never chose those dependencies deliberately — the LLM did. Now upgrading is harder than building from scratch because nobody understands why specific libraries were chosen or what assumptions the code makes about their behavior.
This is already happening at scale. I work on tooling that tracks version health across ecosystems and the pattern is unmistakable: projects with high AI-generation signals (cookie-cutter structure, inconsistent coding style within the same file, dependencies that were trendy 6 months ago but have since been superseded) correlate strongly with stale dependency trees and unpatched vulnerabilities.
The "flow" part makes it worse — the developer feels productive because they shipped features fast. But they're building on a foundation they can't maintain, and the real cost shows up on a delay. It's technical debt with an unusually long fuse.
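As a rough sketch of what spotting that rot can look like, here is a minimal pin-finder for a Python requirements file; the helper name and the sample file contents are mine, not from any real project, and a real audit would follow this by checking each pin against a vulnerability database.

```python
import re

def find_pins(requirements_text):
    """Return (package, version) pairs for every exact '==' pin."""
    pins = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        m = re.match(r"([A-Za-z0-9_.-]+)==([\w.]+)$", line)
        if m:
            pins.append((m.group(1), m.group(2)))
    return pins

sample = """\
flask==2.0.1     # pinned at whatever the model knew at training time
requests==2.25.1
numpy>=1.21      # a range spec, not a hard pin
"""
print(find_pins(sample))  # [('flask', '2.0.1'), ('requests', '2.25.1')]
```

The hard pins are exactly the entries that silently age: nobody chose them, so nobody revisits them.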
By not going through this process, you lose intent, familiarity, and opinions.
It's the exact same as vibe-coding.
Fortunately, I've retired, so I'm going to focus on flooding the zone with my crazy ideas made manifest in books.
Back in 2020, GPT-3 could produce functional HTML from a text description; however, it's only around now that AI can one-shot functional websites. Likewise, AI can one-shot a functional demo of a SaaS product, but it is far from being able to one-shot the entire engineering effort of a company like Slack.
However, I don't see why the rate of improvement won't continue as it has. The current generation of LLMs hasn't even been trained yet on NVIDIA's latest Blackwell chips.
I do agree that vibe-coding is like gambling; however, that is beside the point, which is that AI coding models are getting smarter at a rate that is not slowing down. Many people believe they will hit a sigmoid somewhere before they reach human intelligence, but there is no reason to believe that beyond wishful thinking.
idk what y'all are doing with AI, and i dont really care. i can finally - fiiinally - stay focused on the problem im trying to solve for more than 5 minutes.
Like I don’t remember syntax or linting or typos being a problem since I was in high school doing Turbo Pascal or Visual Basic.
I've never had problems with any of those things after I learned what a code editor was.
The differences are subtle but those of us who are fully bought in (like myself) are working and thinking in a new way to develop effectively with LLMs. Is it perfect? Of course not - but is it dramatically more efficient than the previous era? 1000%. Some of the things I’ve done in the past month I really didn’t think were possible. I was skeptical but I think a new era is upon us and everyone should be hustling to adapt.
My favorite analogy at the moment is that for a while now we've been bowling and been responsible for knocking down the pins ourselves. In this new world we are no longer the bowlers; rather, we are the builders of the bumper rails that keep the new bowlers out of the gutter.
Not everyone who plays slot machines is worse off — some people hit the jackpot, and it changes their life. Also, the people who make the slot machines benefit greatly.
https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...
Of course you can choose to believe that this is a lie and that Anthropic is hyping their own models, but it's impossible to deny the enormous revenue the company is generating via products they are now building almost entirely with coding agents.
And people claiming it's a lie are in for a rough awakening. I'm sure we will see a lot of posters on HN simply being too embarrassed to ever post again when they realize how off they were.
If you had the Midas touch, would you rent it out?
https://sequoiacap.com/podcast/training-data-openai-imo/
The thing, however, is that the labs are all in competition with each other. Even if OpenAI had some special model that gave them the ability to build their own SaaS products, it is more worthwhile for them to sell access to the API and use the profit to scale, because otherwise their competitors will pocket that money and scale faster.
This holds as long as the money from API access to the models is worth more than the comparative advantage a lab retains from not sharing it. Because there are multiple competing labs, the comparative advantage is small (if OpenAI kept GPT-5.X to themselves, people would just use Claude and Anthropic would become bigger, same with Google).
This may not hold forever, however; it is just a phenomenon of labs focusing heavily on their models while making only marginal product efforts.
Of course at a certain point, you have to wonder if it would be faster to just type it than to type the prompt.
Anyways, if this were true in the sense they are trying to imply, why does Boris still have a job? If the agents are already doing 100% of the work, just have the product manager run the agents. Why are they actively hiring software developers??
It’s not. It’s either 33% slower than perceived or perception overestimates speed by 50%. I don’t know how to trust the author if stuff like this is wrong.
She's not wrong.
A good way to do this calculation is with the log-ratio, a centered measure of proportional difference. It's symmetric and widely used in economics and statistics for exactly this reason:
ln(1.2/0.81) = ln(1.2)-ln(0.81) ≈ 0.393
That's nearly 40%, as the post says.
It’s more obvious if you take more extreme numbers. Say they estimated a task would take 99% less time with AI, but it took 99% more time: the difference is not 198% but 19,900%. Suddenly you’re off by two orders of magnitude.
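That calculation can be checked in a couple of lines; the 1.2x and 0.81x multipliers are the ones discussed in this thread (felt 20% faster, measured 19% slower), and everything else is just arithmetic.

```python
import math

# Perceived vs. actual speed multipliers relative to baseline.
perceived = 1.20   # developers felt 20% faster
actual = 0.81      # they were measured 19% slower

# Symmetric log-ratio: ln(a/b) = ln(a) - ln(b), so swapping the two
# arguments only flips the sign, never the magnitude.
gap = math.log(perceived / actual)
print(round(gap, 3))  # 0.393, i.e. "nearly 40%"
```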
Still an interesting observation. It was also on brownfield open-source projects, which imo explains a bit why people building new stuff have vastly different experiences than this.
Not sure why we'd want a tool that generates so much of this for us.
Was this actually a failed prediction? An article claiming with zero proof that it failed is not good enough for me. With so many people generating 100% of their code using AI, it seems true to me.
Note: the study used sonnet-3.5 and sonnet-3.7; there weren’t any agents, deep research or similar tools available. I’d like to see this study done again with:
1. juniors and mid-level engineers
2. opus-4.6 high and codex-5.2 xhigh
3. Tasks that require upfront research
4. Tasks that require stakeholder communication, which can be facilitated by AI
I’d be thrilled if that AI could finally make one of our most annoying stakeholders test the changes they were so eager to fast track, but hey, I might be surprised.
Of course, all of that can be done by humans, too. But this discussion is about average speed of a developer, and there’s a reason many companies employ product owners for the stakeholder communication.
I wonder if there's something similar going on here.
Which frankly describes pretty much all real world commercial software projects I've been on, too.
Software engineering hasn't happened yet. Agents produce big balls of mud because we do, too.
Maybe they need to start handing out copies of The Mythical Man-Month again, because people seem to be oblivious to insights we already had a few decades ago.