The Economics of Software Teams: Why Most Engineering Orgs Are Flying Blind
70 points | 2 hours ago | 14 comments | viktorcessan.com | HN
leokennis
46 minutes ago
> The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

Then I'd wager it's the same for the courses and workshops this guy is selling... an LLM can probably give me at least 75% of the financial insights for not even 0.1% of what this "agile coach" is asking for his workshops and courses.

Maybe the "agile coach LLM" can explain to the "coding LLMs" why they're too expensive, and then the "coding LLMs" can tell the "agile coach LLM" to take the next standby shift, if he knows so much about code?

And then we actual humans can have a day off and relax at the pool.

bonesss
17 minutes ago
Ceding the premise that the AGI is gonna eat my job: my job involves reading the spec to be able to verify the code and output, so that there's a human to fire and sue. There are five layers of fluffy management and corporate BS before we get to that part, and the AGI is more competent at those fungible skills.

With the annoying process people out of the picture, even reviewing vibeslop full time sounds kinda nice… Feet up, warm coffee, just me and my agents so I can swear whenever I need to. No meetings, no problems.

pydry
28 minutes ago
Exactly. It's been a while since I've read an LLM hot take that couldn't have been written by an LLM, and this one is no exception.

There's a 99% chance that the training materials on sale are equally replaceable with a prompt.

kaon_2
10 minutes ago
True. And yet, as an organization, when you buy OP's training, you don't buy the material. You buy the feeling that you are making your organization more productive. You buy the signal to your boss that you are innovative and working to make your organization more productive. And you buy the time and headspace of your engineers, who are thinking, if only for two hours, about making the organization more productive. The latter can be well worth the cost, and the former surely too.
boron1006
22 minutes ago
> A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today.

I’ve been on two failed projects that were entirely AI-generated, and it’s not that agents slow down and you can just send more agents to work on the project for longer; it’s that they become completely unable to make any progress whatsoever, and whatever progress they do make is wrong.
nishantjani10
18 minutes ago
This is the part of the article that did not sit well with me either. Code may be agent-generated, and agents can debug it, but it will always be human-owned.

Unless Anthropic comes in tomorrow and takes ownership of all the code Claude generates, that is not changing.
iamflimflam1
20 minutes ago
Very much like humans when they drown in technical debt. I think the idea that a messy codebase can be magically fixed is laughable.

What I might believe, though, is that agents could make rewrites a lot easier.

“Now we know what we were trying to build - let’s do it properly this time!”

Cthulhu_
17 minutes ago
Potentially, yes, but as with other software, you need to know, AND have (automated) verification of, exactly what it does.

And of course, make the case that it actually needs a rewrite, instead of maintenance. See also second-system effect.

jaccola
29 minutes ago
I think the only thing that matters is whether the people on the team care deeply about the product; whether they care more about the product than their own careers (in the short term). Without that, any metric or way of thinking can and will be gamed.

Unfortunately, even with all the management techniques in the world, there are just some projects that are impossible to care about. There’s simply a significantly lower cap on productivity on these projects.

lknuth
31 minutes ago
Making it solely about the extraction of dollars is a great recipe to make something mediocre. See Hollywood or Microslop.

It's like min-maxing a Diablo build where you want the quality of the product to be _just_ above the "acceptable" threshold but no higher, because anything more is wasted money. Then you're free to use all remaining points to spec into revenue.

cmarot
3 minutes ago
Exactly. In addition, sometimes a good piece of software "only" saves you 1% of your time, but that 1% was a terrible burden that induced mental fatigue, made you make bad decisions, etc. It can even make a great engineer stay when they would have left with the previous version.
InfinityByTen
33 minutes ago
When I see someone just throwing a lot of numbers and graphs at me, I sense that they are out to win an argument, not to propose an idea.

Of late, I've come across a lot of ideas from Rory Sutherland, and my conclusion from listening to them is that some people are obsessed with numbers because, to them, numbers are a way to find certainty and win arguments. He calls them "Finance People" (he being a Marketing one). Here's an example:

"Finance people don’t really want to make the company money over time. They just thrive on certainty and predictability. They try to make the world resemble their fantasy of perfect certainty, perfect quantification, perfect measurement.

Here’s the problem. A cost is really quantifiable and really visible. And if you cut a cost, it delivers predictable gains almost instantaneously."

> Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision.

I'd really like to hire the oracle of a PM/analyst who can give me that 2% accurately even 75% of the time, and who can promise that nothing non-linear can come from the exercise.

tweetle_beetle
18 minutes ago
As with most things, isn't the truth somewhere in the middle? True cost/value is very hard to calculate, but we could all benefit by trying a bit harder to get closer to it.

It's all too common to frame the tension as binary: bean counters vs pampered artistes. I've seen it many times and it doesn't lead anywhere useful.

SpicyLemonZest
5 minutes ago
Here I think the truth is pretty far to one side. Most engineering teams work at a level of abstraction where revenue attribution is too vague and approximate to produce meaningful numbers. The company shipped 10 major features last quarter and ARR went up $1m across 4 new contracts using all of them; what is the dollar value of Feature #7? Well, each team is going to internally attribute the entire new revenue to themselves, and I don’t know what any other answer could possibly look like.
sdevonoes
14 minutes ago
I still don't understand what regular people (like the author) gain from selling how wonderful AI is. I get that the folks at Anthropic and OpenAI shove AI down our throats every day, but why nobodies?
csomar
2 minutes ago
He is selling consulting around AI/LLM.
danpalmer
5 minutes ago
> even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today

Citation needed. A human engineer can grok a lot in 10 days, and an agent can spend a lot of tokens in 10 days.

consp
34 minutes ago
The estimated cost number is for very large companies with massive overhead. Dump the management overhead, the HR machine, and other things smaller companies don't have, and this number comes down massively.
petetnt
27 minutes ago
> This does not mean that Slack’s engineering investment was wasted, because Slack also built enterprise sales infrastructure, compliance capabilities, data security practices, and organizational resilience that a fourteen-day prototype does not include.

The LLM-agent-team argument also misses the core point: the engineering investment (which actually encompasses business decisions, design, and much more than just programming) is what got Slack (or any other software product) to where it is now and where it's going in the future. Creating a snapshot of the current state, while maybe not absolutely trivial, is still just a tiny fraction of the progress made over the years.

tgdn
22 minutes ago
I get "This site can’t be reached"
ares623
24 minutes ago
The "author" used someone's vibecoded Slack clone to justify his conclusions. I think he believes that the majority of Slack's value lies in the slick CSS animations.

I do agree with his thesis in the middle, about how the ZIRP decade and the cultures that were born from that period were outrageous and cannot survive the current era. It's a brave new world, and it's not because of AI. It's because there's just not enough money flowing anymore, and what little is left is sucked up by the big boys (AI).

jiusanzhou
46 minutes ago
The 3-5x return threshold is the part most eng leaders never internalize. I've seen teams spend entire quarters on internal tooling that saves maybe 20 minutes per developer per week — nowhere near break-even, let alone a healthy return. The uncomfortable truth is that most prioritization frameworks (RICE, WSJF, etc.) deliberately avoid dollar amounts because nobody wants to see the math on their pet project. Once you attach real costs to sprint decisions, half the roadmap becomes indefensible.
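As a rough sanity check on that break-even claim, here is the arithmetic with made-up but plausible numbers (50 benefiting developers, a $100/hr fully loaded rate, a 4-person team spending one 13-week quarter on the tooling; none of these figures come from the article):

```python
# Hypothetical numbers to illustrate the break-even math; all assumptions, not data.
HOURLY_RATE = 100      # fully loaded cost per engineer-hour, USD
DEVS = 50              # developers who benefit from the internal tooling
WEEKS_PER_YEAR = 48    # working weeks per year

# Cost: a 4-person team spends one quarter (13 weeks at 40 h/week) building it.
build_cost = 4 * 13 * 40 * HOURLY_RATE

# Benefit: each developer saves 20 minutes (1/3 hour) per week.
yearly_savings = DEVS * (20 / 60) * WEEKS_PER_YEAR * HOURLY_RATE

print(f"build cost:     ${build_cost:,.0f}")       # $208,000
print(f"yearly savings: ${yearly_savings:,.0f}")   # $80,000
print(f"first-year return: {yearly_savings / build_cost:.2f}x")  # ~0.38x
```

Under these assumptions the tooling returns roughly 0.38x in its first year, i.e. it doesn't even recoup its own cost, let alone clear a 3-5x bar.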
cpinto
20 minutes ago
You’re absolutely right, but only up to a point. It should be easy to clearly quantify the desired financial outcome of a sprint, but not of its components. I don’t want to spend a single minute figuring out the financial outcome of a single ticket.
SpicyLemonZest
49 minutes ago
> The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

I keep seeing this assumption that "unmanageable" caps out at "kinda hard to reason about", and anyone with experience in large codebases can tell you that's not so. There are software components I own today which require me to routinely explain to junior engineers (and indeed to my own instances of Claude) why their PR is unsound and I won't let them merge it no matter how many tests they add.

snowe2010
31 minutes ago
Yeah this really breaks down when you put the logic up against ANY sort of compliance testing. Ok you don’t meet compliance, your agents have spent weeks on it and they’re just adding more bugs. Now what are you going to do? You have to go into the code yourself. Uh oh.
lynx97
35 minutes ago
Using ‘blind’ to mean ‘ignorant’ is like using any disability label as a synonym for ‘bad’—it turns a real condition into an insult.
Smaug123
32 minutes ago
"Flying blind" is a completely standard idiom originating from flying while blinded by e.g. cloud or darkness. Its meaning is a figurative transplant of a literal description.
lynx97
21 minutes ago
I know it’s an idiom. The point is that it still uses blindness as a stand-in for incompetence and unsafe guessing. Being common doesn’t make it harmless; common just means we’ve normalized it. And you defending it shows that we’ve normalized it to the point where the double meaning is seemingly only apparent to blind people.
anonymous908213
5 minutes ago
It absolutely does not use blindness as a stand-in for incompetence, that is your own outrage-seeking interpretation of it. A neutral interpretation would be that "flying blind" is to "operate without perfect information". It is a simple description of operating conditions, not a derogatory term in any way. Your reply is worded in such a way as to indicate that you think the person you're replying to deserves to be shamed for 'defending' it, but having a disability does not entitle you to browbeat the world into submission and regulate all usage of any words associated with your disability as you see fit. This is quite benign and people are perfectly well within their right to object to somebody trying to police plainly descriptive language.