That was true for almost seventy years until roughly last year.
AI is the silver bullet - my output is genuinely 10X what it was before claude code existed.
When concrete things like that start to happen, then I will start to believe in the 10x claim.
So increasing individual output by itself is not enough to affect the argument. It could be, if you also reduce the number of people needed for a project, where "people" means everyone involved in the project, not just SWEs. But there are strong forces in large orgs pulling toward larger project sizes: budgeting overhead and similar "large orgs optimize for legibility" kinds of arguments.
IMO the only way this will change is when new companies challenge the existing big players. I think AI will help achieve this (e.g. agentic e-commerce challenging the incumbents), but it will take time.
Clearly it still wasn't a silver bullet, because output as a metric is a bad one. I thought it was a metric only managers valued, but apparently Anthropic has finally convinced devs to value it too? I guess it definitely hits that dopamine receptor hard.
An extreme example, but it exemplifies the point.
Features are harder to show the limits of, but have you ever had a client or boss who didn't know what they wanted and just kept asking for stuff? 100 sequential tickets to change the contrast of some button can be closed in record time, but the final impact is still just that of the last one in the sequence.
Or have you experienced bike-shedding* from coworkers in meetings? It doesn't matter what metaphorical colour the metaphorical bike shed gets painted.
Or, as a user, have you had a mandatory update that either didn't seem to do anything at all, or worse, moved things around in the UX so you couldn't find features you actually did use? That's something I get with many apps and operating systems; I'd say macOS's UX peaked back when versions were named after cats. The non-UX stuff has gotten better since then, but the UX (even the creation of SwiftUI as an attempt to replace UIKit and AppKit) feels like it was CV-driven development, not something that benefits me as a user.
You can add a lot of features and close a lot of tickets while adding zero-to-negative business value. When code was expensive, that cost could be used directly as a reason to say "let's delay this"; now you have to explain more directly to the boss or the client why they're asking for an actively bad thing, rather than it merely being a case of swapping an expensive gamble for a cheap one. This is not something most of us are trained to do well, I think. Worse, even for those of us who are skilled at that kind of client interaction, the fact that code is suddenly cheap means many of us have mis-trained instincts about what's actually important, in exactly the way those customers and bosses should be suspicious of.
there are entire C corps of monkeys out there
Also, I know there will be a lot of boilerplate applications that just don't look good or don't seem to have been well thought out early on.
Folks will use that as a cope mechanism, but huge changes are coming.
I don't think anyone has really wrestled with the implications of that yet - we've started talking about "deskilling" and "cognitive debt", but mostly in the context of "programmers are going to forget how to structure code, how to use the syntax of their languages, etc." I'm not worried about that, as it's the same sort of thing we've seen for decades: compilers, higher-level languages, better abstractions, and so on.
The fact that LLMs are able to wrestle with essential complexity means that using them is going to push us further and further from the actual problems we're trying to solve. Right now, it's the wrestling with problems that helps us understand what those problems are. As our organizations adopt LLMs that are able to take on _those_ problems - that is, customer problems, not problems of data, scaling, and so forth - will we hit a brick wall where we lose that understanding? Where we keep shipping stuff but it gets further and further from what our customers need? How do we avoid that?
> AI is the silver bullet - my output is genuinely 10X what it was before claude code existed.
Those are not the same.
You can add 5 different features to a project and still provide less value than the 5-line diff that resolves a performance bottleneck.
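To make that concrete, here's a purely illustrative sketch (hypothetical function and field names, not from any real codebase) of the kind of tiny diff I mean: swapping a repeated list membership check for a set lookup.

    # Hypothetical sketch: before the fix, `active_user_ids` was scanned as a
    # list for every order, making the filter O(orders * users).
    # The whole "feature" of the diff is one line building a set.
    def active_user_orders(orders, active_user_ids):
        active = set(active_user_ids)  # was a plain list before the tiny diff
        return [o for o in orders if o["user_id"] in active]

    # e.g. active_user_orders([{"user_id": 1}], [1, 2, 3]) -> [{"user_id": 1}]

Nobody outside the team would call that a feature, but it can be worth more than five of them.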
I don't know if, overall, it's a 10x improvement or 6x or 14x, but it's a serious contender. Part of it is that LLMs are very uneven in their performance across domains. If all I build is simple landing pages, it might be a 100x improvement. If I work on more complex, proprietary work where there aren't great examples in the training data, then it might be a 10% improvement (it helps me write better comments or something).
The premise is that software development has been mostly "essential complexity" rather than "accidental complexity." But I think anyone who has worked as a SWE in the past decade would have found the opposite to be true.
For the human makers of things, the incompletenesses and inconsistencies of our ideas become clear only during implementation.
Conceptual integrity is the most important consideration in system design.
There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement in productivity.
---
These ideas still apply very well to modern society. But personally, I hope science advances to the point where nine women really can have a baby in parallel.
We may need that to prevent demographic collapse and keep the pension system from running out of money.
“Open the refrigerator door, HAL”
“I can’t do that right now”
>I always look to staff up a project at the beginning as much as possible, looking for doing as much in parallel up-front as we can.
Ah, maybe this is what you think he would take issue with? Fair enough. Perhaps I should have said:
>I always look to staff up as much as is economically and organizationally optimal, to exploit all genuine parallelism opportunities, being careful not to overstaff.
This is the reason why AI-assisted programming has not turned out to be the silver bullet we have been hoping for, at least not yet. Muddled prompting by humans gets you the Homer Simpson car you wished for, which will eventually collapse under its own weight.
I've been thinking a lot about Programming as Theory Building [0] as the missing piece in AI-assisted engineering. Perhaps there are approaches which naturally focus on the essence while ignoring the accidents, but I'm still looking for them. Right now the state of the art I see ignores both accident and essence alike, and degrades the ability to make progress.
Please inform me if there are any approaches you know that work! And lest this sound pessimistic, far from it. This state of affairs is actually intoxicatingly motivating. Feels like we have found silver, and just need to start learning to mould bullets.
[0] Another classic piece of required reading for the industry: Peter Naur, "Programming as Theory Building" - https://pages.cs.wisc.edu/~remzi/Naur.pdf
Vibe coded software is the Marvel green screen movie equivalent.
Fred Brooks wrote that book when they were programming IBM operating systems in assembly language.
Times have really, really changed - do not pay attention to the messages of this book except for historical fun.
That book isn't; it's built from humility and is a rare bright light in this god-forsaken field.
Martin Fowler, the author of the blog, may be a bit different than that.