Trillions spent and big software projects are still failing
150 points | 7 hours ago | 34 comments | spectrum.ieee.org | HN
rossdavidh
30 minutes ago
[-]
It's a great article, until the end where they say what the solution would be. I'm afraid that the solution is: build something small, and use it in production before you add more features. If you need to make a national payroll, you have to use it for a small town with a payroll of 50 people first, get the bugs worked out, then try it with a larger town, then a small city, then a large city, then a province, and then and only then are you ready to try it at a national level. There is no software development process which reliably produces software that works at scale without doing it small, and medium sized, first, and fixing what goes wrong before you go big.
reply
solatic
57 seconds ago
[-]
That's what works for products, not software systems. Gradual growth inevitably results in loads of technical debt that is not paid off as Product adds more feature requests to deliver larger and larger sales contracts. Eventually you want to rewrite to deal with all the technical debt, but nobody has enough confidence to say what is in the codebase that's important to Product and what isn't, so everybody is afraid and frozen.

Scale is separately a Product and Engineering question. You are correct that you cannot scale a Product to delight many users without it first delighting a small group of users. But there are plenty of scaled Engineering systems that were designed from the beginning to reach massive scale. WhatsApp is probably the canonical example of something that was a rather simple Product with very highly scaled Engineering and it's how they were able to grow so much with such a small team.

reply
BirAdam
3 hours ago
[-]
I study and write quite a bit of tech history. IMHO from what I've learned over the last few years of this hobby, the primary issue is quite simple. While hardware folks study and learn from the successes and failures of past hardware, software folks do not. People do not regularly pull apart old systems for learning. Typically, software folks build new and every generation of software developers must relearn the same problems.
reply
malfist
2 hours ago
[-]
I work at $FANG. Every one of our org's big projects goes off the rails at the end, and there's always a mad rush to push developers to solve all the failures of project management in their off hours before the arbitrary deadline arrives.

After every single project, the org comes together to do a retrospective and ask "What can devs do differently next time to keep this from happening again?" The people leading the project take no action items, management doesn't hold itself accountable at all, and neither does product for its late-changing requirements. And so the cycle repeats next time.

I led an effort one time, after a big bug made it to production following one of those crunches, that painted a picture of the root cause: a huge, complicated project handed off to offshore junior devs with no supervision, with the junior devs managing it switched out completely twice during the 8-month project with no handover and no introspection by leadership. My manager's manager killed the document and wouldn't allow publication until I removed any action items that would constrain management.

And thus, the cycle continues to repeat, balanced on the backs of developers.

reply
ajkjk
35 minutes ago
[-]
Of course the reason it works this way is that it works. As much as we'd like accountability to happen on the basis of principle, it actually happens on the basis of practicality. Either the engineers organize their power and demand a new relationship with management, or projects start going so poorly that necessity demands a better working relationship, or nothing changes. There is no 'things get better out of wisdom alone' option; the people who benefit from improvements have to force the hand of the people who can implement them. I don't know if this looks like a union or something else, but my guess is that in large part it's something else: for instance, a sophisticated attempt at building a professional organization that can spread simple standards which organizations can clearly measure themselves against.

I think the reasons this hasn't happened are (a) tech has moved too fast for anyone to credibly say how things should be done for longer than a year or two, and (b) attempts at professional organizations borrowed too much from slower-moving physical engineering and so didn't adapt to (a). But I do think it can be done and would benefit the industry greatly (at the cost of slowing things down in the short term). It requires a very 'agile' sense of standards, though. If standards mean imposing big constraints on development, nobody will pay attention to them.

reply
malfist
11 minutes ago
[-]
I agree wholeheartedly that collective action is how we stop balancing poor management on the backs of engineers, but good luck getting other engineers to see it that way. There's heaps of propaganda out there telling engineers that if they join a union their high salary will go away, even though unions have never been shown to reduce wages.
reply
ajkjk
3 minutes ago
[-]
My hunch is that software engineers are averse to unions because they correctly perceive that unions are a wide angle away from the type of professional organization that would be most beneficial to them. The industry is sufficiently different that the normal union model is just not very good and has a 'square peg, round hole' feeling.

For instance, by and large the role of organizing is not to get more money but rather to reduce indignities... wasted work, lack of forethought, bad management, arbitrary layoffs, etc. So it is much more about governing management with good practices than about keeping wages up; at least for now, wages are generally high anyway.

There are also reasons to defend jobs/wages in the face of e.g. outsourcing... but it's almost like a separate problem. Maybe there needs to be both a union and an uncoupled professional standard, or something?

reply
lazyasciiart
16 minutes ago
[-]
For one project I got so far as to include in the project proposal some outcomes that showed whether or not it was a success: quote from the PM “if it doesn’t do that then we should not have bothered building this”. They objected to even including something so obviously required in the plan.

Waste of my bloody time. Project completed, taking twice as many devs for twice as long, great success, PM promoted. Doesn’t do that basic thing that was the entire point of it. Nobody has ever cared.

Edit to explain why I care: there was a very nice third party utility/helper for our users. We built our own version because “only we can do amazing direct integration with the actual service, which will make it far more useful”. Now we have to support our worse in-house tool, but we never did any amazing direct integration and I guarantee we never will.

reply
SoftTalker
1 hour ago
[-]
Glad to hear that $FANG suffers the same incompetence as every other mid-tier software shop I've ever worked in. Your project progression sounds like any of them. Here I was thinking that $FANG's highly paid developers and project management processes were actually better than average.
reply
jvanderbot
1 hour ago
[-]
They can afford to try a lot, why try better?
reply
game_the0ry
5 minutes ago
[-]
^ This. Not at FAANG, but I am too familiar with this.

This is why software projects fail. We lowly developers always take the blame and management skates. The lack of accountability among decision makers is why things like the UK Post Office scandals happen.

Heads need to be put on pikes. Start with John Roberts, Adam Crozier, Moya Greene, and Paula Vennells.

reply
fishmicrowaver
35 minutes ago
[-]
Reminds me of the military. Senior leaders often have no real idea of what is happening on the ground because the information funneled upward doesn't fit into painting a rosy report. The middle officer ranks don't want to know the truth because it impacts their careers. How can executives even hope to lead their organizations this way?
reply
ndiddy
2 minutes ago
[-]
Well the US has lost every military conflict it's entered for the past 70 years. Since there's been no internal pressure to change methodology, maybe the US military doesn't view winning as necessary.
reply
Sevii
2 hours ago
[-]
For how much power they have over team organization and processes, software middle management has nearly no accountability for outcomes.
reply
AlotOfReading
2 hours ago
[-]
Is it middle management that has no accountability, or executive? Middle and line managers are nearly as targeted by layoff culling as ICs these days in FAANG. The broad processes they're passing down to ICs generally start with someone at director level or higher.
reply
darth_avocado
17 minutes ago
[-]
> For how much power they have over team organization and processes, software middle management has nearly no accountability for outcomes.

Can we also address the fact that "software spend" is distributed disproportionately to management at all levels while the people who actually write the software are nickel-and-dimed? You'd save billions in spend and boost productivity massively if management were bare-bones and held accountable like the rest of the folks.

reply
MichaelZuo
2 hours ago
[-]
The real question is why would smart competent people continue working under management with blatant ulterior motives that negatively affect them?

Why let their own credibility get dragged down for a second time, third time, fourth time, etc…?

The first time is understandable but not afterwards.

reply
darth_avocado
16 minutes ago
[-]
In today’s market it’s mostly because of the lack of other options to earn a livelihood
reply
pixelpoet
2 hours ago
[-]
Astronomical salaries probably have something to do with it.
reply
MichaelZuo
2 hours ago
[-]
Yeah that could convince smart competent people to grind their teeth and take a second chance under the same management.

But I don’t think a self respecting person would do that over and over.

reply
jrochkind1
4 minutes ago
[-]
You may be over-estimating how many people are self-respecting?
reply
raincom
1 hour ago
[-]
When people live in multi-million-dollar homes, self-respect doesn't pay the monthly mortgage.
reply
teeray
28 minutes ago
[-]
So it's really not the astronomical salary, it's the astronomical debt.
reply
mschuster91
47 minutes ago
[-]
Joke is, most of these homes aren't worth anywhere close to their paper value.

Cy Porter's home inspection videos... jeez. How these "builders" are still in business is mind-blowing to me (as a German). Here? Some of that shit he shows would lead to criminal charges for fraud.

reply
raincom
11 minutes ago
[-]
The land is worth more than the structure in these areas.
reply
lazide
1 hour ago
[-]
Depends on the paycheck.

People will do crazy things for just $100. Including literally get fucked in the ass by a stranger.

7 figures? Ho boy. They’ll use way fancier words though for that.

reply
zem
6 minutes ago
[-]
serious answer - you find a team with a good direct manager who handles all the upward interactions themselves, and then you basically work for that manager, rather than for the company.
reply
taeric
1 hour ago
[-]
Did they go off the rails at the end, or did deadlines force acknowledging that the project is not where folks want it to be?

That said, I think I would agree with your main concern there. If the question is "why did the devs make it so that project management didn't work?", it seems silly not to acknowledge why/how project management should have seen the evidence earlier.

reply
ludicrousdispla
1 hour ago
[-]
I was a developer for a bioinformatics software startup in which the very essential 'data import' workflow wasn't defined until the release was in the 'testing' phase.
reply
franktankbank
2 hours ago
[-]
> wouldn't allow publication until I removed any action items that would constrain management.

That's what we call blameless culture lol

reply
01100011
45 seconds ago
[-]
Software folks treat their output as if it's their baby or their art projects.

Hardware folks just follow best practices and physics.

They're different problem spaces though, and having done both I think HW is much simpler and easier to get right. SW is often similar if you're working on a driver or some low-level piece of code. I tried to stay in systems software throughout my career for this reason. I like doing things 'right' and don't have much need to prove to anyone how clever I am.

I've met many SW folks who insist on thinking of themselves as rock stars. I don't think I've ever met a HW engineer with that attitude.

reply
bane
2 hours ago
[-]
I've also considered a side-effect of this. Each generation of software engineers learns to operate on top of the stack of tech that came before them. This becomes their new operating floor. The generations before, when faced with a problem, would have generally achieved a solution "lower" down in the stack (or at their present baseline). But the generations today and in the future will seek to solve the problems they face on top of that base floor because they simply don't understand it.

This leads to higher and higher towers of abstraction that eat up resources while providing little more functionality than if it was solved lower down. This has been further enabled by a long history of rapidly increasing compute capability and vastly increasing memory and storage sizes. Because they are only interacting with these older parts of their systems at the interface level they often don't know that problems were solved years prior, or are capable of being solved efficiently.

I'm starting to see ideas that will probably form into entire pieces of software "written" on top of AI models as the new floor, where the model basically handles all of the mainline computation, control flow, and business logic. What would have required a dozen MHz and 4MB of RAM to run now requires teraflops and gigabytes -- and, being built from a fresh start again, it will fail to learn from any of the lessons learned when it was done 30 years ago and 30 layers down.

reply
seeknotfind
2 hours ago
[-]
Yeah, people tend to add rather than improve. It's possible to add into lower levels without breaking things, but it's hard. Growing up as a programmer, I was taught UNIX philosophy as a golden rule, but there are sharp corners on this one:

To do a new job, build afresh rather than complicate old programs by adding new "features".

reply
RaftPeople
1 hour ago
[-]
> While hardware folks study and learn from the successes and failures of past hardware, software folks do not

I've been managing, designing, building and implementing ERP type software for a long time and in my opinion the issue is typically not the software or tools.

The primary issue I see is lack of qualified people managing large/complex projects because it's a rare skill. To be successful requires lots of experience and the right personality (i.e. low ego, not a person that just enjoys being in charge but rather a problem solver that is constantly seeking a better understanding).

People without the proper experience won't see the landscape in front of them. They will see a nice little walking trail over some hilly terrain that extends for a few miles.

In reality, it's more like the Fellowship of the Ring trying to make it to Mt Doom, but that realization happens slowly.

reply
avemg
1 hour ago
[-]
> In reality, it's more like the Fellowship of the Ring trying to make it to Mt Doom, but that realization happens slowly.

And boy do the people making the decisions NOT want to hear that. You'll be dismissed as a naysayer being overly conservative. If you're in a position where your words have credibility in the org, then you'll constantly be asked "what can we do to make this NOT a quest to the top of Mt Doom?" when the answer is almost always "very little".

reply
Wololooo
33 minutes ago
[-]
Impossible projects with impossible deadlines seem to be the norm, and even when people pull them off miraculously, the lesson learned is not "OK, that worked this time for some reason, but we should not do this again." Then the next people come in and go, "it was done in the past, why can't we do this?"
reply
hackthemack
2 hours ago
[-]
I have a theory that the churn in technology is by design. If a new paradigm, new language, or new framework comes out every so many years, it allows the tech sector to keep hiring new graduates for lower salaries. It gives a thin veneer of "we want to hire the person who has X" when really they just do not want to hire someone with 10 years of experience in tech who may not have picked up X yet.

I do not think it is the only reason. The world is complex, but I do think it factors into why software is not treated like other engineering fields.

reply
jemmyw
1 hour ago
[-]
The problem with that is that it would require a huge amount of coordination for it to be by design. I think it's better to look on it as systemic. Which isn't to say there aren't malign forces contributing.
reply
hackthemack
19 minutes ago
[-]
I agree. Perhaps, "by design" is not the correct phrasing. Many decisions and effects go through a multi weighted graph of complexity (sort of like machine learning).
reply
tra3
42 minutes ago
[-]
Indeed. How does that saying go? Don’t attribute to malice what can be explained by stupidity?

On the other hand, Microsoft and Facebook did collude to keep salaries low. So who knows.

reply
hackthemack
37 seconds ago
[-]
Anyone in tech should read up on https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...

It was more tech companies in collusion than many people realize: (1) Apple and Google, (2) Apple and Adobe, (3) Apple and Pixar, (4) Google and Intel, (5) Google and Intuit, and (6) Lucasfilm and Pixar.

It was settled out of court. One of the plaintiffs was very vocal that the settlement was a travesty of justice. The companies paid less in the settlement than the amount they saved by colluding to keep wages down.

reply
SoftTalker
1 hour ago
[-]
Constantly rewriting the same stuff in endless cycles of new frameworks and languages gives an artificial sense of productivity and justifies its own existence.

If we took the same approach to other engineering, we'd be constantly tearing down houses and rebuilding them just because we have better nails now. It sure would keep a lot of builders employed though.

reply
Hemospectrum
2 minutes ago
[-]
> If we took the same approach to other engineering, we'd be constantly tearing down houses and rebuilding them just because we have better nails now. It sure would keep a lot of builders employed though.

This is almost exactly what happens in some countries.

reply
hackthemack
15 minutes ago
[-]
I agree. But I think the execs just say, "How can we get the most bang for our buck? If we use X, Y, Z technologies, the new hotness, then we will get all the new hordes of hires out there, which will make them happy and has the added benefit of letting us pay them less."
reply
MarcelOlsz
28 minutes ago
[-]
I think this is a downstream effect of there being no real regulation or professional designations in software, which leads to every company and team being wildly different, which leads to no standards, which leaves no time for anything but crunching since there are no barriers restricting your time; so nobody spends time doing much besides shipping constantly.
reply
raincom
33 minutes ago
[-]
Some consequences of NOT learning from prior successes and failures: (a) no more training for the next generation of developers/engineers; (b) fighting for the best developers, which manifests in leetcode grinding; (c) a decrease in cooperation among teammates; etc.
reply
QuercusMax
2 hours ago
[-]
I think part of it is that reading code isn't a skill that most people are taught.

When I was in grad school ages ago, my advisor told me to spend a week reading the source code of the system we were working with (TinyOS), and come back to him when I thought I understood enough to make changes and improvements. I also had a copy of the Linux Core Kernel with Commentary that I perused from time to time.

Being able to dive into an unknown codebase and make sense of where the pieces are put together is a very useful skill that too many people just don't have.

reply
nitwit005
1 hour ago
[-]
Most of the time, there's no need to study anything. Any experienced software engineer can tell you about a project they worked on with no real requirements, management constantly changing their mind, etc.
reply
alangibson
2 hours ago
[-]
"While hardware folks study and learn from the successes and failures of past hardware, software folks do not." Couldn't be further from the truth. Software folks are obsessed with copying what has been shown to work to the point that any advance quickly becomes a cargo cult (see microservices for example).

Once you've worked in both hardware and software engineering you quickly realize that they are only superficially similar. Software is fundamentally philosophy, not physics.

Hardware is constrained by real-world limitations. Software isn't, except in the most extreme cases. The result is that there is not a 'right' way to do any one thing that everyone can converge on. The first airplane wing looks a whole lot like a wing made today, not because the people who designed it were "real engineers" or any such BS, but because that's what nature allows you to do.

reply
jemmyw
1 hour ago
[-]
Software doesn't operate in some magical realm outside of the physical world. It very much is constrained by real world limitations. It runs on the hardware that itself is limited. I wonder if some failures are a result of thinking it doesn't have these limitations?
reply
SoKamil
10 minutes ago
[-]
> It very much is constrained by real world limitations. It runs on the hardware that itself is limited

And yet we scale the shit out of it, shifting limitations further and further. On that scale different problems emerge and there is no single person or even single team that could comprehend this complexity in isolation. You start to encounter problems that have never been solved before.

reply
Sharlin
1 hour ago
[-]
What you and the GP said are not mutually exclusive. Software engineers are quick to drink every new Kool-Aid out there, which is exactly why we’re so damned blind to history and lessons learned before.
reply
ctkhn
2 hours ago
[-]
In my experience, a lot of the time the people who COULD be solving these issues are people who used to code or never have. The actual engineers who might do something like this aren't given authority or scope and you have MBAs or scrum masters in the way of actually solving problems.
reply
mbesto
2 hours ago
[-]
I would boil this down to something else, but possibly related: project requirements are hard. That's it.

> While hardware folks study and learn from the successes and failures of past hardware, software folks do not. People do not regularly pull apart old systems for learning.

For most IT projects, software folks generally can NOT "pull apart" old systems, even if they wanted to.

> Typically, software folks build new and every generation of software developers must relearn the same problems.

Project management has gotten way better today than it was 20 years ago, so some learnings have definitely been passed on.

reply
tristor
3 hours ago
[-]
This is one part of the issue. The other major piece of this that I've seen over more than two decades in industry is that most large projects are started by and run by (but not necessarily the same person) non-technical people who are exercising political power, rather than by technical people who can achieve the desired outcomes. When you put the nexus of power into the hands of non-technical people in a technical endeavor, you end up with outcomes that don't match expectations. Larger-scale projects deeply suffer from "not knowing what we don't know" at the top.
reply
mbesto
2 hours ago
[-]
If this were true all of the time then the fix would be simple - only have technical people in charge. My experience has shown that this (only technical people in charge) doesn't solve the problem.
reply
fragmede
1 hour ago
[-]
If people didn’t work, maybe we should put an LLM in charge instead.
reply
cjbgkagh
3 hours ago
[-]
Sometimes giving people what they want can be bad for them; management wants cheap compliant workers, management gets cheap compliant workers, and then the projects fall apart in easily predictable and preventable ways.

Because such failures are so common, management typically isn't punished when they happen, so it's hard to keep interests aligned. And because many producers are run on a cost-plus basis, there can be a perverse incentive to do a bad job, or at least to avoid doing a good one.

reply
wesammikhail
3 hours ago
[-]
Agree 100%.

I know a lot of people on here will disagree with me saying this, but this is exactly how you get an ecosystem like JavaScript being as fragmented, insecure, and "trend-prone" as the old-school WordPress days. It's the same problems over and over, and every new "generation" of programmers has to relearn the lessons of old.

reply
Salgat
2 hours ago
[-]
The difficulty lies in the fact that in software it is quite cheap to generate very complex designs compared to hardware. For software designs treated similarly to hardware (such as in medical devices or at NASA), you do gain back those benefits, at great expense.
reply
pphysch
2 hours ago
[-]
There are rational explanations for this. When software fails catastrophically, people almost never die (considering how much software crashes every day). When hardware fails catastrophically, people tend to die, or lose a lot of money.

There's also the complexity gap. I don't think giving someone access to the Internet Explorer codebase is necessarily going to help them build a better browser. With millions of moving parts it's impossible to tell what is essential, superfluous, high quality, low quality. Fully understanding that prior art would be a years-long endeavor, no doubt with many insights, but of dubious value.

reply
mstipetic
3 hours ago
[-]
I was so annoyed when I found out about the OTP library and realized we've been reinventing things for 20+ years
reply
jcelerier
2 hours ago
[-]
... are you saying that hardware projects fail less than software ones? Just building a bridge is something that fails with regular occurrence all over the world. Every chip comes with a list of errata longer than my arm.
reply
neilv
1 hour ago
[-]
On some of the infamous large public IT project failures, you just have to look at who gets the contract, how they work, and what their incentives are. (For example, don't hire management consulting partner smooth talkers, and their fleet of low-skilled seat-warmers, to do performative hours billing.)

It's also hard when the team actually cares, but there are skills you can learn. Early in my career, I got into solving some of the barriers to software project management (e.g., requirements analysis and otherwise understanding needs, sustainable architecture, work breakdown, estimation, general coordination, implementation technology).

But once you're a bit comfortable with the art and science of those, big new challenges are more about political and environment reality. It comes down to alignment and competence of: workers, internal team leadership, partners/vendors, customers, and investors/execs.

Discussing this is a little awkward, but maybe start with alignment, since most of the competence challenges are rooted in mis-alignments: never developing nor selecting for the skills that alignment would require.

reply
ChrisMarshallNY
1 hour ago
[-]
> While hardware folks study and learn from the successes and failures of past hardware, software folks do not.

I guess that’s the real problem I have with SV’s endemic ageism.

I was personally offended, when I encountered it, myself, but that’s long past.

I just find it offensive, that experience is ignored, or even shunned.

I started in hardware, and we all had a reverence for our legacy. It did not prevent us from pursuing new/shiny, but we never ignored the lessons of the past.

reply
pork98
1 hour ago
[-]
Why do you find it offensive? It’s not personal. Someone who thought webvan was a great lesson in hubris could not have built an Instacart, right? Even evolution shuns experience, all but throwing most of it out each generation, with a scant few species as exceptions.
reply
Bjartr
1 hour ago
[-]
> Someone who thought webvan was a great lesson in hubris could not have built an Instacart, right?

Not at all. The mistake to learn from in Webvan's case was expanding too quickly and investing in expensive infrastructure all before achieving product-market fit. Not that they delivered groceries.

reply
pkilgore
1 hour ago
[-]
I think you're mistaking the funding and starting of companies with the execution of their vision through software engineering -- the entire point of the article, and the OP.
reply
0xbadcafebee
2 hours ago
[-]
Software projects fail because humans fail. Humans are the drivers of everything in our world. All government, business, culture, etc... it's all just humans. You can have a perfect "process" or "tool" to do a thing, but if the human using it sucks, the result will suck. This means that the people involved are what determines if the thing will succeed or fail. So you have to have the best people, with the best motivations, to have a chance for success.

The only thing that seems to change this is consequences. Take a random person and just ask them to do something, and whether they do it or not is just based on what they personally want. But when there's a law that tells them to do it, and enforcement of consequences if they don't, suddenly that random person is doing what they're supposed to. A motivation to do the right thing. It's still not a guarantee, but more often than not they'll work to avoid the consequences.

Therefore if you want software projects to stop failing, create laws that enforce doing the things in the project to ensure it succeeds. Create consequences big enough that people will actually do what's necessary. Like a law, that says how to build a thing to ensure it works, and how to test it, and then an independent inspection to ensure it was done right. Do that throughout the process, and impose some kind of consequence if those things aren't done. (the more responsibility, the bigger the consequence, so there's motivation commensurate with impact)

That's how we manage other large-scale physical projects. Of course those aren't guaranteed to work; large-scale public works projects often go over-budget and over-time. But I think those have the same flaw, in that there isn't enough of a consequence for each part of the process to encourage humans to do the right thing.

reply
farrelle25
9 minutes ago
[-]
> Software projects fail because humans fail. Humans are the drivers of everything in our world.

Ah finally - I've had to scroll halfway down to find a key reason big software projects fail.

<rant>

I started programming in 1990 with PL/1 on IBM mainframes and over the past 35 years have dipped in and out of the software world. Every project I've seen fail was mainly down to people - egos, clashes, laziness, disinterest, inability to interact with end users, rudeness, lack of motivation, toxic team culture etc etc. It was rarely (never?) a major technical hurdle that scuppered a project. It was people and personalities, clashes and confusion.

</rant>

Of course the converse is also true - big software projects I've seen succeed were down to a few inspired leaders and/or engineers who set the tone. People with emotional intelligence, tact, clear vision, ability to really gather requirements and work with the end users. Leaders who treated their staff with dignity and respect. Of course, most of these projects were bland corporate business data ones... so not technically very challenging. But still big enough software projects.

Geez... don't know why I'm getting so emotional (!) But the hard-core software engineering world is all about people at the end of the day.

reply
beezlebroxxxxxx
2 hours ago
[-]
If software engineers want to be referred to as "engineers" then they should actually learn about engineering failures. The industry and educational pipeline (formal and informal) as a whole is far more invested in butterfly chasing. It's immature in the sense that many people with decades of experience are unwilling to adopt many proven practices in large scale engineering projects because they "get in the way" and because they hold them accountable.
reply
ThaDood
2 hours ago
[-]
So, I'm not a dev nor a project manager, but I found this article very enlightening. At the risk of asking a stupid question and getting an RTFM or a LMGTFY, can anyone provide simple and practical examples of software successes at a big scale? I work at a hospital, so healthcare-specific would be ideal, but I'll take anything.

FWIW I have read The Phoenix Project and it did help me get a better understanding of "Agile" and the DevOps mindset but since it's not something I apply in my work routinely it's hard to keep it fresh.

My goal is to try to plant seeds of success in the small projects I work on and eventually ask questions to get people to think in a similar perspective.

reply
hi_hi
11 minutes ago
[-]
This is a noble and ambitious goal. I feel qualified to provide some pointers, not because I have been instrumental in delivering hugely successful projects, but because I have been involved, in various ways, in many, many failed projects. Take what you will from that :-)

- Define "success" early on. This usually doesn't mean meeting a deadline on time and budget. That is actually the start of the real goal. The real success should be determined months or years later, once the software and processes have been used in a production business environment.

- Pay attention to Conway's Law. Fight it at your peril.

- Beware of the risk of key people. This means if there is a single person who knows everything, you have a risk if they leave or get sick. Redundancy needs to be built into the team, not just the hardware/architecture.

- No one cares about preventing fires from starting. They do care about fighting fires late in the project and looking like a hero. Sometimes you just need to let things burn.

reply
BenoitEssiambre
2 hours ago
[-]
Unix and Linux would be your quintessential examples.

Unix was an effort to take Multics, an operating system that had gotten too modular, and integrate the good parts into a more unified whole (book recommendation: https://www.amazon.com/UNIX-History-Memoir-Brian-Kernighan/d...).

Even though there were some benefits to the modularity of Multics (apparently you could unload and replace hardware in Multics servers without reboot, which was unheard of at the time), it was also its downfall. Multics was eventually deemed over-engineered and too difficult to work with. It couldn't evolve fast enough with the changing technological landscape. Bell Labs' conclusion after the project was shelved was that OSs were too costly and too difficult to design. They told engineers that no one should work on OSs.

Ken Thompson wanted a modern OS so he disregarded these instructions. He used some of the expertise he gained while working on Multics and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder being like "Hey what OS are you using there, can I get a copy?" and the rest is history.

Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.

More here: https://benoitessiambre.com/integration.html

reply
prmph
1 hour ago
[-]
Are you equating success with adoption or use? I would say there is a lot of software that is widely used but is a mess.

What would be a competitor to linux that is also FOSS? If there's none, how do you assess the success or otherwise of Linux?

Assume Linux did not succeed but was still adopted; what would that scenario look like? Is the current situation with it different from that?

reply
gishh
15 minutes ago
[-]
> What would be a competitor to linux that is also FOSS? If there's none, how do you assess the success or otherwise of Linux?

*BSD?

As for large, successful open source software: GCC? LLVM?

reply
spit2wind
18 minutes ago
[-]
I heard Direct File was pretty successful. Something like 94% of users reported it as a positive experience.
reply
shagmin
2 hours ago
[-]
I find it kind of hard to define success or failure. Google search and Facebook are a success right? And they were able to scale up as needed, which can be hard. But the way they started is very different from a government agency or massive corporation trying to orchestrate it from scratch. I don't know if you'd be familiar with this, but maybe healthcare.gov is a good example... it was notoriously buggy, but after some time and a lot of intense pressure it was dealt with.
reply
fragmede
1 hour ago
[-]
The untold story is of landing software projects at Google. Google has landed countless software projects internally in order for Google.com to continue working, and the story of those will never reach the light of day, except in back-room conversations never to be shared publicly. How did they go from internal platform product version one to version two? It's an amazing feat of engineering that can't be shown to the public, which is a loss for humanity, honestly, but capitalism isn't going to have it any other way.
reply
SoftTalker
1 hour ago
[-]
Are you saying this from firsthand experience? Because it sounds like the sort of myth that Google would like you to believe. Much more believable is that their process is as broken and chaotic as most software projects are, they are just so big that they manage to have some successes regardless. Survivorship bias. A broken clock is still right twice a day.
reply
fragmede
1 hour ago
[-]
I was an SRE on their Internet traffic team for three years, from 2020 til 2023. The move from Sisyphus to Legislator is something I wish the world could see documented in a museum, like the moving of the Cape Hatteras Lighthouse.
reply
serial_dev
1 hour ago
[-]
This is what I’ve been thinking about when I talk to other people in software development when they can’t stop talking about how efficient they are with AI… yet they didn’t ship anything in their startup, or side project, or in a corporate setting, the project is still bug riddled, the performance is poor, now there code quality suffers too as people barely read what Cursor (etc) are spitting out.

I have “magical moments” with these tools, sometimes they solve bugs and implement features in 5 minutes that I couldn’t do in a day… at the same time, quite often they are completely useless and cause you to waste time explaining things that you could probably just code yourself much faster.

reply
mdavid626
3 hours ago
[-]
It’s so “nice” to know, that trillions spent on AI not only won’t make this better, but it’ll make it significantly worse.
reply
keeda
51 minutes ago
[-]
Not really, by most indications AI seems to be an amplifier more than anything else. If you have strong discipline and quality control processes it amplifies your throughput, but if you don't, it amplifies your problems. (E.g. see the DORA 2025 report.)

So basically things will still go where they were always going to go, just a lot faster. That's not necessarily a bad thing.

reply
mdavid626
36 minutes ago
[-]
Yes, AI can help, but it won’t. That’s my point.

In practice, it will make people even less care or pay attention. These big disasters will be written by people without any skills using AI.

reply
fransje26
1 hour ago
[-]
"Worse" won't even start to describe the economical crisis we will be in once the bubble bursts.

And although that, in itself, should be scary enough, it is nothing compared to the political tsunami and unrest it will bring in its wake.

Most of the Western world is already on shaky political ground, flirting with the extreme-right. The US is even worse, with a pathologically incompetent administration of sociopaths, fully incapable of coming up with the measures necessary to slow down the train of doom careening out of control towards the proverbial cliff of societal collapse.

If the societal tensions are already close to breaking point now, in a period of relative economic prosperity, I cannot begin to imagine what they will be like once the next financial crash hits. Especially one in the multiple trillions of dollars.

They say that humanity progresses through episodes of turmoil and crisis. Now that we literally have all the knowledge of the world at our fingertips, maybe it is time to progress past this inadequate primeval advancement mechanism, and to truly enter an enlightened age where progress is made from understanding, instead of crises.

Unfortunately, it looks like it's going to take monumental changes to stop the parasites and the sociopaths from making a quick buck at the expense of humanity.

reply
darepublic
11 minutes ago
[-]
Is it a failure if we ship the project a year late? What if everyone involved would have predicted exactly that outcome?
reply
SatvikBeri
31 minutes ago
[-]
Do non-software projects succeed at a higher rate in any industry? I get the impression that projects everywhere go over time, over budget, and frequently get canceled.
reply
bigbuppo
3 hours ago
[-]
As someone who has seen technological solutions applied when they make no sense, I think the next revolution in business processes will be de-computerization. The trend has probably already started thanks to one of the major cloud outages.
reply
stock_toaster
1 hour ago
[-]
> de-computerization

I would think cloud-disconnectedness (eg. computers without cloud hosted services) would come far before de-computerization.

reply
John23832
2 hours ago
[-]
I often see big money put behind software projects, but the money then makes stakeholders feel entitled to get in the way.
reply
shevy-java
51 minutes ago
[-]
I spent way less - and they still fail!
reply
parasubvert
2 hours ago
[-]
Working on AI that helps manage IT shops and learns from failure & success might be better, for both results and culture, than most IT management roles - a profession that (painting with an absurdly broad brush) tends to attract a lot of miserable creatures.
reply
dmix
3 hours ago
[-]
So in the 1990s Canada failed to deliver a payroll system for which they paid Accenture Canada $70M.

Then in 2010s they spent $185M on a customized version of IBM's PeopleSoft that was managed directly by a government agency https://en.wikipedia.org/wiki/Phoenix_pay_system

And now in 2020s they are going to spend $385M integrating an existing SaaS made by https://en.wikipedia.org/wiki/Dayforce

That's probably one of the worst and longest software failures in history.

reply
bryanlarsen
3 hours ago
[-]
Oh, it's much more interesting than that. Phoenix started as an attempt to create a gun registry. Ottawa had a bunch of civil servants that'd be reasonably competent at overseeing such a thing, but the government decided that it wanted to build it in Miramichi, New Brunswick. The relevant people refused to move to Miramichi, so the project was built using IBM contractors and newbies. The resulting fiasco was highly predictable.

Then when Harper came in he killed the registry mostly for ideological reasons.

But then he didn't want to destroy a bunch of jobs in Miramichi, so he gave them another project to turn into a fiasco.

reply
ZeroConcerns
3 hours ago
[-]
Yup, and with an equal amount of mindblowing-units-of-money spent, infrastructure projects all around me are still failing as well, or at least being modified (read: downsized), delayed and/or budget-inflated beyond recognition.

So, what's the point here, exactly? "Only licensed engineers as codified by (local!) law are allowed to do projects?" Nah, can't be it, their track record still has too many failures, sometimes even spectacularly explosive and/or implosive ones.

"Any public project should only follow Best Practices"? Sure... "And only make The People feel good"... Incoherent!

Ehhm, so, yeah, maybe things are just complicated, and we should focus more on the amount of effort we're prepared to put in, the competency (c.q. pay grade) of the staff we're willing to assign, and exactly how long we're willing to wait prior to conceding defeat?

reply
graemep
1 hour ago
[-]
One of the problems is scale.

Large-scale systems tend to fail: large, centralised, centrally managed systems with big budgets, large numbers of people who need to coordinate, and lots of people with an interest in the project pushing and lobbying for different things.

Multiple smaller systems is usually a better approach, where possible. Not possible for things like transport infrastructure, but often possible for software.

reply
AlexandrB
1 hour ago
[-]
> Not possible for things like transport infrastructure

It depends what you define as a system. Arguably a lot of transport infrastructure is a bunch of small systems linked with well-understood interfaces (e.g. everyone agrees on the gauge of rail that's going to be installed and the voltage in the wires).

Consider how construction works in practice. There are hundreds or thousands of workers working on different parts of the overall project and each of them makes small decisions as part of their work to achieve the goal. For example, the electrical wiring of a single train station is its own self-contained system. It's necessary for the station to work, but it doesn't really depend on how the electrical system is installed in the next station in the line. The electricians installing the wiring make a bunch of tiny decisions about how and where the wires are run that are beyond the ability of someone to specify centrally - but thanks to well known best practices and standards, everything works when hooked up together.

reply
sebastos
2 hours ago
[-]
Nailed it, but I fear this wisdom will be easily passed over by someone who doesn't already intuit it from years of experience. Like the Isla de Muerta: wisdom that can only be found if you already know where it is.
reply
827a
45 minutes ago
[-]
Slightly related but unpopular opinion I have: I think software, broadly, today is the highest quality it's ever been. People love to hate on some specific issues concerning how the Windows file explorer takes 900ms to open instead of 150ms, or how sometimes an iOS 26 liquid glass animation is a bit janky... we're complaining about so much minutia instead of seeing the whole forest.

I trust my phone to work so much that it is now the single, non-redundant source for keys to my apartment, keys to my car, and payment method. Phones could only even hope to do all of these things as of like ~4 years ago, and only as of ~this year do I feel confident enough to not even carry redundancies. My phone has never breached that trust so critically that I feel I need to.

Of course, this article talks about new software projects. And I think the truth and reason of the matter lies in this asymmetry: Android/iOS are not new. Giving an engineering team agency and a well-defined mandate that spans a long period of time oftentimes produces fantastic software. If that mandate often changes; or if it is unclear in the first place; or if there are middlemen stakeholders involved; you run the risk of things turning sideways. The failure of large software systems is, rarely, an engineering problem.

But, of course, it sometimes is. It took us ~30-40 years of abstraction/foundation building to get to the pretty darn good software we have today. It'll take another 30-40 years to add one or two more nines of reliability. And that's ok; I think we're trending in the right direction, and we're learning. Unless we start getting AI involved; then it might take 50-60 years :)

reply
runningmike
49 minutes ago
[-]
There is no such thing as 'simplicity science' that can be directly applied when dealing with IT problems. However, many insights of complexity science are applicable to solving real-world IT problems. People love simple solutions. However, simple is a scam: https://nocomplexity.com/simple-is-a-scam/

There are no generic, simple solutions for complex IT challenges. But there are ground rules for finding and implementing simple solutions. I have created a playbook to prevent IT disasters, "The art and science towards simpler IT solutions": see https://nocomplexity.com/documents/reports/SimplifyIT.pdf

reply
MattRogish
1 hour ago
[-]
The lesson from “big software projects are still failing” isn’t that we need better methodologies, better project management, or stricter controls. The lesson is "don't do big software projects".

Software is not the same as building in the physical world where we get economies of scale.

Building 1,000 bridges will make the cost of the next incremental bridge cheaper due to a zillion factors, even if Bridge #1 is built from sticks (we'll learn standards, stable fundamental engineering principles, predictable failure modes, etc.); we'll eventually reach a stable, repeatable, scalable approach to building bridges. They will very rarely (in modernity) catastrophically fail (yes, Tacoma Narrows happened, but in properly functioning societies it's rare).

Nobody will say "I want to build a bridge upside-down, out of paper clips, that can withstand a 747 driving over it," because that's physically impossible. But nothing's impossible in software.

Software isn't scalable in this way. It's not scalable because it doesn't have hard constraints (like the laws of physics) - so anything goes and can be in scope; and since writing and integrating large amounts of code is a communication exercise, it suffers from diseconomies of scale.

Customers want the software to do exactly what they want and - within reason - no laws of physics are violated if you move a button or implement some business process.

Because everyone wants to keep working the way they want to work, no software project (even if it sounds the same) is the same. Your company's bespoke accounting software will be different than mine, even if we are direct competitors in the same market. Our business processes are different, org structures are different, sales processes are different, etc. So we all build different accounting software, even if the fundamentals (GAAP, double-entry bookkeeping, etc.) are shared.

It's also the same reason why enterprise software sucks - do you think that a startup building expense management starts off being a giant mess of garbage? No! It starts off simple and clean and beautiful, because their initial customer base (startups) are beggars and cannot be choosers, so they adapt their process to the tool. But then larger companies come along with dissimilar requirements, and Expense Management SaaS Co. wins those deals by changing the product to work with whatever oddball requirements they have, and so on, until the product essentially is a bunch of config options and workflows that you have to build yourself.

(Interestingly, I think these products become asymptotically stuck - any feature you add or remove will make some of your customers happy and some of your customers mad, so the product can never get "better" globally).

We can have all the retrospectives and learnings we want but the goal - "Build big software" - is intractable, and as long as we keep trying to do that, we will inevitably fail. This is not a systems problem that we can fix.

The lesson is: "never build big software".

(Small software is stuff like Bezos' two pizza team w/APIs etc. - many small things make a big thing)

reply
corpMaverick
1 minute ago
[-]
I agree with you on "don't do big software projects." Especially, do not rapidly scale them out to hundreds of people. You have to scale them more organically, ensuring that every person added is a net gain. They think that adding more people will reduce the time.

I am surprised by the lack of creativity when doing these projects. Why don't they start 5 small projects building the same thing and let them work for a year? At the end of the year you cancel one of the projects, increasing the funding of the other four. You can do that every year based on the results. It may look like a waste, but it will significantly increase your chances of succeeding.

reply
stonemetal12
1 hour ago
[-]
>Building 1,000 bridges will make the cost of the next incremental bridge cheaper due to a zillion factors, even if Bridge #1 is built from sticks (we'll learn standards, stable fundamental engineering principles, predictable failure modes, etc.); we'll eventually reach a stable, repeatable, scalable approach to building bridges. They will very rarely (in modernity) catastrophically fail (yes, Tacoma Narrows happened, but in properly functioning societies it's rare).

Build 1000 JSON parsers and tell me if the next one isn't cheaper to develop with "(we'll learn standards, stable fundamental engineering principles, predictable failure modes, etc.)"

>Software isn't scalable in this way. It's not scalable because it doesn't have hard constraints (like the laws of physics)

Uh, maybe fewer, but "none" is way too far. Try getting 2 billion integer operations per second out of a 286; see the 500-mile email, big data storage, etc. Physical limits are everywhere.

>It's also the same reason why enterprise software sucks.

The reason enterprise software sucks is the lack of introspection and learning from the garbage that went before.

reply
nacozarina
1 hour ago
[-]
managing software requirements and the corresponding changes to user/group/process behaviors is by far the hardest part of software development, and it is a task no one knows how to scale.

absent understanding, large companies engage in cargo cult behaviors: they create a sensible org chart, produce a Gantt chart, have the coders start whacking code, and presumably in 9 months a baby comes out.

every time, ugly baby

reply
oldandboring
1 hour ago
[-]
Almost nobody who works in software development is a licensed professional engineer. Many are even self-taught, and that includes both ICs and managers. I'm not saying this is direct causation but I do think it odd that we are so utterly dependent on software for so many critical things and yet we basically YOLO its development compared to what we expect of the people who design our bridges, our chemicals, our airplanes, etc.
reply
keeda
1 hour ago
[-]
Licensing and the perceived rigor it signifies is irrelevant to whether something can be considered "professional engineering." Engineering exists at the intersection of applied science, business and economics. So most software projects can be YOLO'd simply because the economics permit it, but there are others where the high costs necessitate more rigor.

For instance, software in safety-critical systems is highly rigorously developed. However that level of investment does not make sense for run-of-the-mill internal LOB CRUD apps which constitute the vast majority of the dark matter of the software universe.

Software engineering is also nothing special when it comes to various failure modes, because you'll find similar examples in other engineering disciplines.

I commented about this at length a few days ago: https://news.ycombinator.com/item?id=45849304

reply
amai
1 hour ago
[-]
To stop failing, we could use AI to replace managers, not software developers.
reply
an0malous
38 minutes ago
[-]
No need to waste GPUs, a simple bash script that alternates between asking for status updates and randomly changing requirements would do
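
A minimal, tongue-in-cheek sketch of what that could look like (purely hypothetical; the script name, the hourly interval, and the requirement list are all made up):

    #!/usr/bin/env bash
    # pointy_haired_bot.sh - alternate between status pings and requirement churn
    requirements=("make it pop" "add blockchain" "support IE11" "pivot to AI")
    while true; do
      echo "Any updates? Just checking in."
      sleep 3600  # wait an hour before moving the goalposts
      echo "New requirement: ${requirements[RANDOM % ${#requirements[@]}]}"
      sleep 3600
    done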
reply
JohnMakin
3 hours ago
[-]
> "Why worry about something that isn’t going to happen?”

Lots to break down in this article other than this initial quotation, but I find a lot of parallels in failing software projects, this attitude, and my recent hyper-fixation (seems to spark up again every few years), the sinking of the Titanic.

It was a combination of failures like this. Why was the captain going full speed ahead into a known ice field? Well, the boat can't sink, and there (may have been) organizational pressure to arrive at a certain time in New York (aka, the imaginary deadline must be met). Why weren't there enough life jackets and boats for crew and passengers? Well, the boat can't sink anyway, so why worry about something that isn't going to happen? Why train the crew on how to deploy the life rafts and on proper emergency procedures? Same reason. Why didn't the SS Californian rescue the ship? Well, the 3rd-party Titanic telegraph operators were under immense pressure to send telegrams to NY, and the chatter about the ice field got on their nerves, so they mostly ignored it (misaligned priorities). If even a little caution and forward thinking had been used, the death toll would have been drastically lower, if not nearly nonexistent. It took 2 hours to sink, which is plenty of time to evacuate a boat of that size.

Same with software projects - they often fail over a period of multiple years, and if you go back and look at how they went wrong, there are usually numerous points where decisions could have reversed course; yet often the opposite happens, and management digs in even more. Project timelines are optimistic to the point of delusion and don't build failures or setbacks into schedules or roadmaps at all. I had to rescue one of these projects several years ago, and it took a toll on me that I'm pretty sure I carry to this day; I'm wildly cynical of "project management" as it relates to IT/devops.

reply
parados
36 minutes ago
[-]
> and my recent hyper-fixation (seems to spark up again every few years), the sinking of the Titanic.

But the rest of your comment reveals nothing more novel than what anyone would find after watching James Cameron's movie multiple times.

I suggest you go to the original inquiries (congressional in the US, Board of Trade in the UK). There is a wealth of subtle lessons there.

Hint: Look at the Admiralty Manual of Seamanship that was current at that time and their recommendations when faced with an iceberg.

Hint: Look at the Board of Trade (UK) experiments with the turning behaviour of the sister ship. In particular of interest is the engine layout of the Titanic and the attempt by the crew, inexperienced with the ship, to avoid the iceberg. This was critical to the outcome.

Hint: Look at the behaviour of Captain Rostron. Lots of lessons there.

reply
mschuster91
37 minutes ago
[-]
No big surprise. Taking a shitty process and "digitalizing" it will, in the best case, lead to the same shitty process just on computers; in the worst case, everything collapses.
reply
franktankbank
3 hours ago
[-]
> Phoenix project executives believed they could deliver a modernized payment system, customizing PeopleSoft’s off-the-shelf payroll package to follow 80,000 pay rules spanning 105 collective agreements with federal public-service unions.

Somehow I come away skeptical of the supposedly inevitable conclusion that Phoenix was doomed to fail, and suspect instead that they were hamstrung by architecture constraints dictated by assholes.

reply
QuercusMax
1 hour ago
[-]
Wasn't the Agile movement kicked off by a group of people writing payroll software for Chrysler?

https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...

Payroll systems seem to be massively complicated beasts.

reply
array_key_first
5 minutes ago
[-]
Arbitrary payroll is absurdly complicated. The trick is not to make it arbitrary: keep the set of things you handle small, and always have backdoors for manually pushing data through payroll.
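
A minimal sketch of that shape, with hypothetical names throughout (Employee, PayRule, ManualAdjustment, run_payroll): a small, closed list of rules plus an explicit, logged manual-adjustment path, so the odd cases go through a human instead of becoming rule 80,001.

    # Sketch only: a deliberately small rule set plus a manual backdoor.
    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class Employee:
        name: str
        base_pay: float
        overtime_hours: float = 0.0

    # A "rule" is a function from employee to a pay amount; keep this list short.
    PayRule = Callable[[Employee], float]

    def base_salary(e: Employee) -> float:
        return e.base_pay

    def overtime(e: Employee) -> float:
        return e.overtime_hours * 50.0  # flat illustrative rate

    RULES: List[PayRule] = [base_salary, overtime]

    @dataclass
    class ManualAdjustment:
        # The backdoor: a human-entered, audited line item, not a new rule.
        reason: str
        amount: float

    def run_payroll(e: Employee, adjustments: Sequence[ManualAdjustment] = ()) -> float:
        total = sum(rule(e) for rule in RULES)
        total += sum(a.amount for a in adjustments)
        return total

    print(run_payroll(Employee("Ada", 4000.0, overtime_hours=3),
                      [ManualAdjustment("retroactive acting pay", 250.0)]))
    # 4000 + 150 + 250 = 4400.0

The point is not the arithmetic; it is that the exception is a first-class, logged object rather than yet another rule.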
reply
franktankbank
1 hour ago
[-]
You don't want to get me started on Agile.
reply
ruralfam
3 hours ago
[-]
My reaction also. 80K payroll rules!!! With a quick prompt I got a figure of about 350K Canadian federal public service employees (sorry if not correct).
reply
dmix
3 hours ago
[-]
Sounds like they put zero effort into simplifying those rules the first time around.

Now, in the new project, they've put together a committee to attempt it:

> The main objective of this committee also includes simplifying the pay rules for public servants, in order to reduce the complexity of the development of Phoenix's replacement. This complexity of the current pay rules is a result of "negotiated rules for pay and benefits over 60 years that are specific to each of over 80 occupational groups in the public service." making it difficult to develop a single solution which can handle each occupational groups specific needs.

reply
stackskipton
2 hours ago
[-]
I have worked on government payroll systems; simplifying those rules is almost impossible from a political PoV. They are generally a combination of weird laws, court cases, union contracts, and more.

Any time you think about touching them, the people who get those salaries come out in droves and no one else cares, so the government has every incentive to leave them alone.

reply
tehjoker
2 hours ago
[-]
You could simplify them if you made sure the people getting them got overall more money ;) The government doesn't want to do that though.
reply
franktankbank
2 hours ago
[-]
Oh great a committee!
reply
AndrewDucker
2 hours ago
[-]
Committees are how you discover what the problems are and agree on solutions.

No single person is going to understand all of the history and legality involved, or be able to represent the people on all sides of this mess.

Yes, this means discussion, investigation, almost certainly months of effort to find something that works, and lots of compromise. That's how adults deal with complex situations.

reply
supportengineer
3 hours ago
[-]
The purpose of a system is what it does.

1. Enable grift to cronies

2. Promo-driven culture

3. Resume-oriented software architecture

reply
add-sub-mul-div
2 hours ago
[-]
An endless succession of new tools, methodologies, and roles, yet failure persists, because success is rooted in good judgment, wisdom, and common sense.
reply
lawlessone
1 hour ago
[-]
Every improvement will be moderated by increased demands from management, crunch, pressure to release, "good enough", the extra library that monetizes/spies on the customer, etc.

In the same way that hardware improvements are quickly gobbled up by more demanding software.

The people doing the programming will also be further removed technically. I can do Python, Java, Kotlin. I can do a little C++, less C, and a lot less assembly.

reply
AtlasBarfed
29 minutes ago
[-]
Software was failing and mismanaged.

So we added a language and cultural barrier, a 12-hour offset, and thousands of miles of separation with outsourcing.

Software was failing and mismanaged.

So now we take the above failures and tack on an AI "prompt engineering" barrier (performed by the same outsourced labor).

And on top of that, all the engineers who know what they are doing are devalued by the market, and the newer engineers will be AI-braindead.

Everything will be fixed!

reply
apercu
2 hours ago
[-]
Hot take: It's not technical problems causing these projects to fail.

It's leadership and accountability (well, the lack of them).

reply
AnimalMuppet
34 minutes ago
[-]
And that often takes a particular form: The requirements never converge, or at least never converge on anything realistically buildable.
reply
x0x0
1 hour ago
[-]
The article is kind of dumb. E.g., it hangs its hat on the Phoenix payroll system, which

> Phoenix project executives believed they could deliver a modernized payment system, customizing PeopleSoft’s off-the-shelf payroll package to follow 80,000 pay rules spanning 105 collective agreements with federal public-service unions. It also was attempting to implement 34 human-resource system interfaces across 101 government agencies and departments required for sharing employee data.

So basically people -- none of them in IT, but rather working for the government -- built something extraordinarily complex (80k rules!) and then acted as if it were unforeseeable that anything downstream would be at least equally complex. And then the article blames IT in general, when this data point tells us that replacing a business process that used to require (per [1]) 2,000 pay advisors to perform was always going to be complex. All while working in an organization that has shit the bed so thoroughly that paying its employees requires 2k people: for an organization of 290k, that's roughly 0.7 percent of headcount spent just on paying employees!

IT is complex, but incompetent people and incompetent orgs do not magically become competent when undertaking IT projects.

Also, shouting the word "computer" at extraordinarily complex things, like you're playing D&D and it's a spell, does not make them simple.

[1] https://www.oag-bvg.gc.ca/internet/English/parl_oag_201711_0...

reply
mariopt
3 hours ago
[-]
> IT projects suffer from enough management hallucinations and delusions without AI adding to them.

Software is also incredibly hard; the human mind can understand physical space very well, but once we're deep into abstractions it simply struggles to keep up.

It is easier to explain to virtually anyone how to build a house from scratch than how to build a mobile app or an Excel workbook.

reply
apercu
2 hours ago
[-]
I came to the opposite conclusion. Technology is pretty easy; people are hard, and the business culture we have fostered over the last 40 years gets in the way of success.
reply
tehjoker
2 hours ago
[-]
Easy: just imagine a 1 GB array as a square roughly 2.5 mm on a side in RAM (assuming a DRAM cell is about 10 nm). Now it's physical.
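
Back-of-envelope, with assumptions the comment does not state (a common 6F^2 DRAM cell layout at F = 10 nm, and 1 GB read as 2^30 bytes):

    # Rough check of the "1 GB as a physical square" picture.
    F = 10e-9                   # assumed feature size, in metres
    cell_area = 6 * F ** 2      # assumed 6F^2 DRAM cell layout
    bits = 8 * 2 ** 30          # 1 GiB in bits
    side_mm = (bits * cell_area) ** 0.5 * 1e3
    print(f"~{side_mm:.1f} mm per side")  # about 2.3 mm, same ballpark as 2.5 mm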
reply
jmyeet
2 hours ago
[-]
This has dot-com bubble written all over it. But there are some deeper issues.

First, we as a society should really be scrutinizing what we invest in. Trillions of dollars could end homelessness as a rounding error.

Second, real people are going to be punished for this as the layoffs go into overdrive, people lose their houses and people struggle to have enough to eat.

Third, the ultimate goal of all this investment is to displace people from the labor pool. People are annoying. They demand things like fair pay, safe working conditions and sick leave.

Who will buy the results of all this AI if there’s no one left with a job?

Lastly, the externalities of all this investment are indefensible. For example, air and water pollution and rising utility prices.

We’re barreling towards a future with a few thousand wealthy people where everyone else lives in worker housing, owns nothing, and is the next incarnation of brick-kiln workers on wealthy estates.

reply
ctoth
2 hours ago
[-]
Systemically, how would you solve homelessness if I gave you a trillion dollars?
reply
jddj
2 hours ago
[-]
A trillion in a money market fund @ 5% is 50B/year.

Over the course of a few years (so as to not drive up the price of politicians too quickly) one could buy the top N politicians from most countries. From there on out your options are many.

After a decade or so you can probably have your trillion back.

reply
tonyedgecombe
2 hours ago
[-]
The article isn't really about AI (for a change).
reply
exabrial
2 hours ago
[-]
The biggest reason is developer ego. Devs see their code as artwork, an extension of themselves, so it's really hard to have critical conversations about small things, and they erupt into holy wars. Offhand:

* Formatting

* Style

* Conventions

* Patterns

* Using the latest frameworks or whatever is en vogue

I think where I've seen results delivered effectively and consistently is where a universal style is enforced, which removes the individualism from the codebase. Some devs will not thrive in that environment, but it makes the code a means to an end rather than the end in itself.
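
As a concrete illustration of "universal style enforced" (my example, assuming a Python codebase and the black formatter; any enforced formatter plays the same role), the whole debate can be reduced to a CI gate:

    # ci_format_gate.py -- sketch of a CI step that fails when formatting drifts.
    # Assumes the 'black' formatter is installed in the CI environment.
    import subprocess
    import sys

    result = subprocess.run(["black", "--check", "."])
    sys.exit(result.returncode)  # non-zero exit -> the build fails, debate over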

reply
AlotOfReading
2 hours ago
[-]
As far as I can see in the modern tech industry landscape, virtually everyone has adopted style guides and automatic formatting/linting. Modern languages like Go even bake those decisions into the language itself.

I'd consider managing that stuff essentially table-stakes in big orgs these days. It doesn't stop projects from failing in highly expensive and visible ways.

reply
ctoth
2 hours ago
[-]
The UK Post Office lied and made people kill themselves ... because of dev ego?
reply
exabrial
2 hours ago
[-]
Ironically, the downvotes pretty much prove this is exactly correct.
reply
parasubvert
2 hours ago
[-]
Eh, you're not wrong, but management failures tend to be a bigger issue. On the hierarchy of ways software projects fail, developer ego is kind of upper-middle of the pack rather than top. Delusional, ignorant, or sadistic leadership tends to be higher.
reply