Laws of Software Engineering
1139 points
1 day ago
| 113 comments
| lawsofsoftwareengineering.com
| HN
GuB-42
1 day ago
[-]
> Premature optimization is the root of all evil.

There are few principles of software engineering that I hate more than this one, though SOLID is close.

It is important to understand that it comes from a 1974 paper; computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. That is still done today in very specific applications, but nowadays performance is mostly about architectural choices, and those have to be given consideration right from the start. In 1974, these architectural choices weren't choices; the hardware didn't let you do things differently.

Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs": an accidentally quadratic algorithm, work done in a loop that doesn't need to be, etc. Once you have dealt with those, that's when you notice that you spend 90% of the time in abstractions and it is too late to change them, so you add caching, parallelism, and so on, making your code more complicated and still slower than if you had thought about performance at the start.

Today, late optimization is just as bad as premature optimization, if not more so.
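A concrete example of the kind of "performance bug" that profiling catches: a C sketch of the classic accidentally-quadratic pattern, where calling strlen in a loop condition re-scans the string on every iteration (function names here are illustrative).

```c
#include <string.h>

/* Accidentally quadratic: strlen() walks the whole string on every
 * iteration of the loop condition, turning an O(n) pass into O(n^2).
 * A profiler points at this kind of hotspot immediately. */
size_t count_spaces_quadratic(const char *s) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++)   /* strlen runs n times */
        if (s[i] == ' ') n++;
    return n;
}

/* Same result, linear time: the length is computed once. */
size_t count_spaces_linear(const char *s) {
    size_t n = 0, len = strlen(s);           /* strlen runs once */
    for (size_t i = 0; i < len; i++)
        if (s[i] == ' ') n++;
    return n;
}
```

Both functions return the same answer; only the time complexity differs, which is exactly why this class of bug survives until someone measures.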

reply
austin-cheney
1 day ago
[-]
The most misunderstood statement in all of programming by a wide margin.

I really encourage people to read the Donald Knuth essay that features this sentiment. Pro tip: You can skip to the very end of the article to get to this sentiment without losing context.

Here ya go: https://dl.acm.org/doi/10.1145/356635.356640

Basically: don't spend unnecessary effort increasing performance in an unmeasured way before it's necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment. I have seen people take this to some bizarre alternate insanity of their own creation as a law to never measure anything, typically because the given developer cannot measure things.

reply
iamflimflam1
1 day ago
[-]
> I have seen people take this to some bizarre alternate insanity of their own creation as a law to never measure anything, typically because the given developer cannot measure things.

Similar to the "code should be self documenting - ergo: We don't write any comments, ever"

reply
f1shy
1 day ago
[-]
It is incredible to me how many "developers", even "10-year senior developers", have no idea how to use a debugger and/or profiler. I've even met some who asked "what is a profiler?" I hope I'm not insulting anybody, but to me it's like going to an "experienced mechanic" who doesn't know what a screwdriver is.
reply
materielle
1 day ago
[-]
It’s because in most enterprise contexts:

1) Most bugs are integration bugs. Whereby multiple systems are glued together but there’s something about the API contract that the various developers in each system don’t understand.

2) Most performance issues are architectural. Unnecessary round trips, doing work synchronously, fetching too much data.

Debuggers and profilers don’t really help with those problems.

I personally know how to use those tools and I do for personal projects. It just doesn’t come up in my enterprise job.

reply
ludston
1 day ago
[-]
If you don't have personal examples of using a profiler to diagnose an issue like "too many round trips" and identify where those round trips are coming from, then you've never inherited a complex performance problem before.
reply
gf000
1 day ago
[-]
Doesn't really change the picture. If you don't know the basics of a car, then you absolutely shouldn't be driving in traffic either.
reply
munch-o-man
21 hours ago
[-]
Yeah, but that analogy is sort of false. A better analogy... though it would make you look absurd... would be "if you don't know how to take apart and reassemble the engine of a vehicle, you shouldn't be allowed to drive it on the road". You get a driver's license if you can remember a few common-sense facts and spend a bit of monitored time behind the wheel without doing anything absurdly illegal or injuring/killing somebody.
reply
hrimfaxi
23 hours ago
[-]
You don't use like Datadog or something at your enterprise job?
reply
rustystump
1 day ago
[-]
That is surprising. They have come up in every enterprise job i have had. Debuggers and profilers absolutely do help although for distributed systems they are called something else.
reply
didgetmaster
1 day ago
[-]
I once interviewed at Microsoft. The hiring manager asked me how I would go about programming a break point if I were writing a debugger. I started to explain how I would have to swap out an instruction to put an INT 3 in the code and then replace it when the breakpoint would hit.

He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.
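For the curious, the mechanism described above is easy to sketch. A real debugger patches the target process's memory (on Linux via ptrace with PTRACE_PEEKTEXT/PTRACE_POKETEXT); this illustrative C sketch just performs the byte swap on a local buffer:

```c
#include <stdint.h>
#include <stddef.h>

/* x86 encodes INT 3 as the single-byte opcode 0xCC. A debugger sets a
 * software breakpoint by saving the instruction byte at the target
 * address and overwriting it with 0xCC; when the CPU hits it, the
 * breakpoint interrupt fires and the debugger restores the original
 * byte before resuming execution. */
#define INT3_OPCODE 0xCC

typedef struct {
    size_t offset;   /* where the breakpoint was planted */
    uint8_t saved;   /* the original instruction byte */
} breakpoint_t;

breakpoint_t set_breakpoint(uint8_t *code, size_t offset) {
    breakpoint_t bp = { offset, code[offset] };
    code[offset] = INT3_OPCODE;   /* swap in the trap instruction */
    return bp;
}

void clear_breakpoint(uint8_t *code, breakpoint_t bp) {
    code[bp.offset] = bp.saved;   /* put the real instruction back */
}
```

In a real debugger the same swap happens in the inferior's address space rather than a local array, but the save/patch/restore dance is identical.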

reply
kitd
22 hours ago
[-]
Did you get the job ... or were you overqualified?
reply
didgetmaster
9 hours ago
[-]
I guess I was overqualified. Didn't get the job.
reply
rustystump
1 day ago
[-]
What is an INT 3?
reply
lock1
1 day ago
[-]
CPU interrupt for breakpoint, https://wiki.osdev.org/Interrupt_Vector_Table
reply
afpx
1 day ago
[-]
The last time I interviewed (around 10 years ago) I was surprised when 9 of the 10 senior developers didn't know how many bits were in basic elementary types.

(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)

reply
WalterBright
1 day ago
[-]
> 9 of the 10 senior developers didn't know how many bits were in basic elementary types

That's likely thanks to C, which goes to great pains not to specify the size of the basic types. For example, on 64-bit targets, "long" is 32 bits on Windows and 64 bits almost everywhere else.

The net result of that is I never use C "long", instead using "int" and "long long".

This mess is why D has 32 bit ints and 64 bit longs, whether it's a 32 bit machine or a 64 bit machine. The result was we haven't had porting problems with integer sizes.
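A quick way to see the ambiguity on your own machine: a small C sketch (the comments assume a typical LP64 system such as 64-bit Linux or macOS; 64-bit Windows is LLP64).

```c
#include <stdio.h>
#include <limits.h>
#include <stdint.h>

/* C only guarantees minimum widths: int is at least 16 bits, long at
 * least 32, long long at least 64. The <stdint.h> typedefs are exact
 * everywhere, which is why "use int32_t/int64_t" is the usual advice. */
static void print_integer_widths(void) {
    printf("int:       %zu bits\n", sizeof(int) * CHAR_BIT);       /* usually 32 */
    printf("long:      %zu bits\n", sizeof(long) * CHAR_BIT);      /* 64 on LP64, 32 on LLP64 */
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT); /* at least 64 */
    printf("int64_t:   %zu bits\n", sizeof(int64_t) * CHAR_BIT);   /* exactly 64, everywhere */
}
```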

reply
switchbak
1 day ago
[-]
It's substantially worse on the JVM. One's intuition from C just fails when you have to think about references vs primitives, and the overhead of those (with or without compressed OOPs).

I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.

reply
Quarrelsome
1 day ago
[-]
Conversely I've met many folks who come into managed environments and piss away time trying to wrangle the managed system into how they think it should work, instead of accepting that clever people wrote it and guidelines when followed result in acceptable outcomes.

The sort of insane stuff I've seen on the dotnet repo where people are trying to tear apart the entire type system just because they think they've cracked some secret performance code.

reply
andai
1 day ago
[-]
>on the dotnet repo

You mean the .net compiler/runtime itself? I haven't looked at it, but isn't that the one place you'd expect to see weirdly low-level C# code?

reply
gf000
23 hours ago
[-]
In what way is it worse? The range of values they can contain is well-specified.

And you have a frame with an operands stack where you should be able to store at least a 32-bit value. `double` would just fill 2 adjacent slots.

And references are just pointers (possibly not using the whole of the value as an address, but as flags for e.g. the GC) pointing to objects, whose internal structure is implementation detail, but usually having a header and the fields (that can again be reference types).

Pretty standard stuff, heap allocating stuff is pretty common in C as well.

And unlike C, it will run the exact same way on every platform.

reply
awesome_dude
1 day ago
[-]
My favourite JVM trivia, although I openly admit I don't know if it's still true, is the fact that the size of a boolean is not defined.

If you ask a typical grad the size of a bool they will inevitably say one bit, but CPUs, RAM, etc. don't work like that; they typically expect word-sized chunks of memory, meaning that a one-bit boolean becomes a word-sized chunk, assuming it hasn't been packed.
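The same effect is easy to observe in C, where the situation is analogous: a bool carries one bit of information but occupies at least one addressable byte, and struct alignment can pad it further. A minimal sketch (exact padding is ABI-dependent):

```c
#include <stdbool.h>

/* A bool can never occupy less than one addressable byte, and alignment
 * inflates it further inside a struct: on typical 64-bit ABIs, 7 bytes
 * of padding follow 'flag' so that 'd' lands on an 8-byte boundary,
 * making the struct 16 bytes rather than the 9 bytes of actual payload. */
struct padded {
    bool flag;   /* 1 byte of payload */
    double d;    /* 8-byte alignment forces padding before this field */
};
```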

reply
KnuthIsGod
1 day ago
[-]
". While it represents one bit of information, it is typically implemented as 1 byte in arrays, and often 4 bytes (an int) or more as a standalone variable on the stack "
reply
afpx
1 day ago
[-]
That's a reasonable answer. But, I meant they seemed to have little understanding or interest. I don't interview much, and I'm probably a poor interviewer. But, I guess I was expecting some discussion.
reply
WalterBright
1 day ago
[-]
I ran into some comp sci graduates in the early 80's who did not know what a "register" was.

To be fair, though, I come up short on a lot of things comp sci graduates know.

It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.

reply
sas224dbm
1 day ago
[-]
Oooh, saw Andrei's name pop up and remembered his books on C++ back in the day... I ran into a systems engineer a while ago who, during a tech review, asked why some data size wasn't 1000 instead of 1024... like, err??
reply
m463
1 day ago
[-]
Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).
reply
WalterBright
1 day ago
[-]
Microsoft tried valiantly to make Win16 code portable to Win32, and Win32 to Win64. But it failed miserably, apparently because the programmers had never ported 16 bit C to 32 bit C, etc., and picked all the wrong abstractions.
reply
AdieuToLogic
1 day ago
[-]
> Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).

And yet even more of a fun time with porting pointer code was going from the various x86 memory models[0] to 32-bit. Depending on the program, the pain was either near, far, or huge... :-D

0 - https://en.wikipedia.org/wiki/X86_memory_models

reply
andai
1 day ago
[-]
Why did they design it like that? It must have seemed like a good idea at the time.
reply
jandrewrogers
1 day ago
[-]
In ancient computing times, which is when C was birthed, the size of integers at the hardware level and their representation was much more diverse than it is today. The register bit-width was almost arbitrary, not the tidy powers of 2 that everyone is accustomed to today.

The integer representation wasn't always two's complement in the early days of computing, so you couldn't even assume that. C++ only required integer representations to be two's complement as of C++20, since the last architectures that don't work this way had effectively been dead for decades.

In that context, an 'int' was supposed to be the native word size of an integer on a given architecture. A long time ago, 'int' was an abstraction over the dozen different bit-widths used in real hardware. In that context, it was an aid to portability.

reply
andai
18 hours ago
[-]
Was it possible to write a program taking into account this diversity, and have it work properly?
reply
WalterBright
1 day ago
[-]
C is a portable language, in that programs will likely compile successfully on a different architecture. Unfortunately, that doesn't mean they will run properly, as the semantics are not portable.
reply
LPisGood
1 day ago
[-]
So what’s the point of having portable syntax, but not portable semantics?
reply
WalterBright
1 day ago
[-]
C certainly gives the illusion of portability. I recall a fellow who worked on DSP programming, where chars and shorts and ints and longs were all 32 bits. He said C was great because that would compile.

I suggested to him that he'd have a hard time finding any existing C code that ran correctly on it. After all, how are you going to write a byte to memory if you've only got 32 bit operations?

Anyhow, after 20 years of programming C, I took what I learned and applied it to D. The integral types are specified sizes, and 2's complement.

One might ask, what about 16 bit machines? Instead of trying to define how this would work in official D, I suggested a variant of D where the language rules were adapted to 16 bits. This is not objectively worse than what C does, and it works fine, and the advantage is there is no false pretense of portability.

reply
ekidd
1 day ago
[-]
I mean, as a senior developer, the number of bits in an "int" is "who the hell knows, because it has changed a bunch of times during my career, and that's what stdint.h is for." And let's not even talk about machines with 32-bit "char" types, which I actually had to program for once.

If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.

The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.

reply
SAI_Peregrinus
1 day ago
[-]
How many bits are in an `int` in C? What do you mean "at least 16", that's ridiculous, nobody would write a language that leaves the number of bits in basic elementary types partially specified‽
reply
bluGill
1 day ago
[-]
It is a good idea - most of the time you don't care, and on slower systems a large int is harmful since the system can't handle that much and it costs performance - go to a faster system with larger ints when you need them.
reply
estimator7292
1 day ago
[-]
On the one hand, in today's world asking how many bits is in an int is exactly as answerable as "how long is a piece of rope"

On the other, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.

reply
jandrewrogers
1 day ago
[-]
An 'int' is also 64 bits on some platforms.
reply
fragmede
1 day ago
[-]
It's the wrong question. How many bits is uint64 is a much better question, if we're at a place where that's relevant.
reply
i_am_a_peasant
1 day ago
[-]
I had one tell me all ints are 16 bits, and then they said 0xffff is a 32-bit number.
reply
jamesfinlayson
1 day ago
[-]
Maybe I'm wrong but I suspect this might be partly due to the rise of Docker which makes attaching a debugger/profiler harder but also partly due to the existence of products like NewRelic which are like a hands-off version of a debugger and profiler.

I haven't used a debugger much at work for years because it's all Docker (I know it's possible but lots of hoops to jump through, plus my current job has everything in AWS i.e. no local dev).

reply
inglor_cz
1 day ago
[-]
On the other hand, I had to debug a PHP app in Docker using XDebug and it was mostly painless. Or, to be more precise, no more painful than debugging it on local Wamp/Xampp.
reply
alexjplant
1 day ago
[-]
> "code should be self documenting

It should be to the greatest extent possible. Strive to write literate code before writing a comment. Comments should be how and why, not what.

> - ergo: We don't write any comments, ever"

Indeed this does not logically follow. Writing fluent, idiomatic code with real names for symbols and obvious control flow beats writing brain teasers riddled with comments that are necessary because of the difficulty in parsing a 15-line statement with triply-nested closures and single-letter variable names. There's a wide middle ground where comments are leveraged, not made out of necessity.

reply
Sharlin
1 day ago
[-]
You misunderstood the GP - they were criticizing the way some programmers use "code should be self-documenting" as an excuse when they actually mean "I’m too lazy to write comments even when I really should". Just like "premature optimization is bad" may in fact mean something like "I never bothered to learn how to measure and reason about performance"
reply
alexjplant
1 day ago
[-]
Updated my comment to refine my rhetorical intent. Thank you for the call-out.
reply
wombatpm
1 day ago
[-]
At a minimum they should comment their GOTO’s
reply
p0nce
1 day ago
[-]
Laziness in moral clothing.
reply
msla
1 day ago
[-]
> Similar to the "code should be self documenting - ergo: We don't write any comments, ever"

My counterpoint: Code can be self-documenting, reality isn't. You can have a perfectly clear method that does something nobody will ever understand unless you have plenty of documentation about why that specific thing needs to be done, and why it can't be simpler. Like having special-casing for DST in Arizona, which no other state seems to need:

https://en.wikipedia.org/wiki/Time_in_the_United_States

reply
pc86
1 day ago
[-]
This isn't a counterpoint, it's just additional (and barely relevant) information.
reply
msla
1 day ago
[-]
It's a counterpoint to the maxim, not the post I'm replying to.
reply
switchbak
1 day ago
[-]
Documenting it in a way that ensures it satisfies the example case would be preferred. You know, like with a test.
reply
msla
19 hours ago
[-]
"Why is this person testing that Arizona does such bizarre things with time? Surely no actual state is like that! Such complexity! Take it out!"
reply
rustystump
1 day ago
[-]
Language conventions aside, I have rarely found comments to be helpful, and more often they have lied to me. AI makes this both worse and better.

I know it may be hard for me to understand the need for writing in English what is obvious (to me) in code. I also know I have read a stupid amount of code.

My rule is simple, if the comment repeats verbatim the name of a variable declaration or function name, it has to go. Anything else we can talk about.

reply
hinkley
1 day ago
[-]
Because it reads like permission not to think and for a group of supposed intellectuals we spend a lot of fucking time trying not to think.

Even 'grug brained' isn't about not thinking, it's about keeping capacity in reserve for when the shit hits the fan. Proper Grug Brain is fully compatible with Kernighan's Law.

reply
rkaregaran
1 day ago
[-]
(this is the correct answer, parent needs to understand this better)
reply
Sammi
1 day ago
[-]
In particular I've seen way too many people use this term as an excuse to write obviously poor performing code. That's not what Knuth said. He never said it's ok to write obviously bad code.

I'm still salty about that time a colleague suggested adding a 500 kb general purpose js library to a webapp that was already taking 12 seconds on initial load, in order to fix a tiny corner case, when we could have written our own micro utility in 20 lines. I had to spend so much time advocating to management for my choice to spend time writing that utility myself, because of that kind of garbage opinion that is way too acceptable in our industry today. The insufferable bastard kept saying I had to do measurements in order to make sure I wasn't prematurely optimizing. Guy adding 500 kb of js when you need 1 kb of it is obviously a horrible idea, especially when you're already way over the performance budget. Asshat. I'm still salty he got so much airtime for that shitty opinion of his and that I had to spend so much energy defending myself.

reply
jcgrillo
1 day ago
[-]
Reminds me of a codebase that was littered with SQL injection opportunities because doing it right would have been "premature optimization" since it was "just" a research spike and not customer facing. Guess what happened when it got promoted to a customer facing product?
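For anyone unfamiliar with why this matters: building SQL by string concatenation lets user input rewrite the query, which is exactly what parameterized statements (e.g. sqlite3_prepare_v2 plus sqlite3_bind_text in SQLite) prevent. A minimal C sketch of the unsafe pattern, with an illustrative `users` table and the classic tautology payload:

```c
#include <stdio.h>

/* UNSAFE: splices attacker-controlled text straight into the SQL text.
 * With input like  x' OR '1'='1  the WHERE clause becomes always true
 * and every row comes back. A parameterized statement would keep the
 * input as data instead of letting it become query syntax. */
void build_query_unsafe(char *buf, size_t n, const char *name) {
    snprintf(buf, n, "SELECT * FROM users WHERE name = '%s'", name);
}
```

Running it with the payload shows the injected tautology sitting inside the final query string, which is why "we'll fix it later" codebases like the one above are ticking time bombs.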
reply
Shorel
1 day ago
[-]
Now that's a stupid argument. I'm with you. Removing SQL injection has little if anything to do with performance, so avoiding it is not an optimization. I guess we will get more of this with the vibe coding craze.
reply
fragmede
1 day ago
[-]
We'll see. It's easy enough to ask Claude to red team and attack the system given the codebase and see what holes it finds to patch up. It's good enough now to find blatantly obvious shit like an SQL injection.
reply
Quarrelsome
1 day ago
[-]
tbf that's not their fault, as long as they were open about the flaws. Business should not have promoted it to a customer facing product. That's just org failure.
reply
jcgrillo
1 day ago
[-]
I disagree. If you merge code to main you immediately lose all control over how it will be used later. You shouldn't ever ship something you're not comfortable with, or unprepared to stake your professional reputation on. To do so is profoundly unethical. In a functioning engineering culture individuals who behave that way would be personally legally liable for that decision. Real professions--doctors, engineers, etc.--have a coherent concept of malpractice, and the legal teeth to back it up. We need that for software too, if we're actually engineers.
reply
Quarrelsome
1 day ago
[-]
Profoundly unethical? Ok so wtf is this formatting in your comment. You DARE comment, online where people can see, where you start a new sentence with two dashes "--". What are you thinking? Where's the professionalism? Imagine someone took that sentence and put it on the front of the biggest magazine in the world. You'd LOOK LIKE A FOOL.

OR, perhaps it's the case that different contexts have different levels of effort. Running a spike can be an important way to promote new ideas across an org and show how things can be done differently. It can be a political tool that has positive impact, because there's a lot more to a business than simply writing good code. However, if your org is horrible then it can backfire in the way that was described. Maybe the business is too aggressive and tramples on dev, maybe dev doesn't have a spine, maybe nobody spoke up about what a fucking disaster it was going to be, maybe they did and nobody listened. Those are all organisational issues, akin to an exploitable code base but embedded into the org instead of the code.

These issues are not the direct fault of the spike, its the fault of the org, just like the idiot that took your poorly formatted comment and put it on the front page of Vogue.

reply
jcgrillo
1 day ago
[-]
Grammatical errors, formatting mistakes, or bad writing in general aren't something the magazine publisher can be held liable for, it may be embarrassing but it's not illegal or unethical. Publishing outright falsehoods about someone is though--we call that defamation. Knowingly shipping a broken, insecure system isn't all that different. Of course the people who came along later and chucked it into prod without actually reviewing it were also negligent, but that doesn't render the first guy blameless.
reply
Quarrelsome
1 day ago
[-]
If it was only supposed to be a spike then it does render the first guy somewhat blameless. Especially if the org was made aware of the issues, which I imagine they were if someone had raised the issue of the exploits in the code base.

I mean, I could take a toddler's tricycle and try to take it onto the motorway. Can we blame the toy company for that? It has wheels, it goes forward, it's basically a car, right? In the same way a spike is basically something we can ship right now.

reply
f1shy
1 day ago
[-]
That is the gist of the left-pad story, isn't it?
reply
sandeepkd
1 day ago
[-]
This is a crucial detail that almost everyone misses when skimming the topic at surface level. The implication is that this statement/law is most often referenced to shut down architecture designs/discussions.
reply
dimitrios1
1 day ago
[-]
Even more so. I like the Rob Pike restatement of this principle; it really makes it crystal clear:

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is."

What's more, in my personal experience, I've seen speed hacks cause incorrect behavior on more than one occasion.

reply
YZF
1 day ago
[-]
This is true but doesn't help.

Parent is talking about building software that is inherently non-performant due to abstractions or architecture with the wrong assumption that it can be optimized later if needed.

The analogy is trying to convert a garbage truck into a race car. A race car is built as a race car. You don't start building a garbage truck and then optimize it on the race course. There are obvious principles and understanding that first go into the building of a race car, assuming one is needed, and the optimization happens from that basis in testing on and off the track.

reply
dimitrios1
18 hours ago
[-]
Ha! -- Allow me to introduce you to the US Diesel Truckin Nationals! Here are some dump trucks drag racing https://www.youtube.com/watch?v=aqxpOPeImkw
reply
YZF
4 hours ago
[-]
lol. TIL. But they're not racing Formula 1.
reply
red_admiral
20 hours ago
[-]
Knuth certainly writes better than Dijkstra, even if he lost the "goto" argument in the end.
reply
tshaddox
1 day ago
[-]
> Basically, don't spend unnecessary effort increasing performance in an unmeasured way before its necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment.

Which is pretty close to just saying "don't do anything unless you have a good reason for doing it."

reply
pelasaco
20 hours ago
[-]
Computer scientist here. I love Donald Knuth, but he never maintained production systems :)

I’m being a bit provocative here, just to make two points:

a) Software development back in the day, especially when it comes to service, reach, security, etc., was completely different from today. Black Friday, millions of users, SLAs, 24-hour service... these didn’t exist back then.

b) Because of so many conditions - some mentioned in point (a) - prematurity ends when the code is live in production. End.

reply
giancarlostoro
1 day ago
[-]
> except for those 10% of situations where you know in advance that crucial performance is absolutely necessary

Yeah like, NOT indexing any fields in a database, that'll become a problem very quickly. ;)

reply
austin-cheney
1 day ago
[-]
Those 10% situations will be identified by the business requirements. Everything else is an unmeasured assumption of priorities in conflict with the stated priorities. For the remaining 90% of situations high performance is always important but not as important as working software.
reply
ElectricalUnion
1 day ago
[-]
These days, what I see as "premature database optimization" is non-DBAs, with no query plans, EXPLAINs, or profiling, sprinkling lots of useless single-column indexes that don't cover the actual columns used in joins and WHEREs, confusing the query planner and making the database MUCH slower and therefore more deadlock-prone instead.
reply
tombert
1 day ago
[-]
The biggest issue I have with premature optimization is stuff that really doesn't matter.

For example, in Java I usually use ConcurrentHashMap, even in contexts where a regular HashMap might be OK. My reasoning is simple: I might want to use it in a multithreaded context eventually, and the performance differences really aren't that much for most things; uncontended locks in Java are nearly free.

I've gotten pull requests rejected because regular HashMaps are "faster", and then the comments on the PR ends up with people bickering about when to use it.

In that case, does it actually matter? Even if HashMap is technically "faster", it's not much faster, and maybe instead we should focus on the thing that's likely to actually make a noticeable difference like the forty extra separate blocking calls to PostgreSQL or web requests?

So that's the premature optimization that I think is evil. I think it's perfectly fine at the algorithm level to optimize early.

reply
xorcist
1 day ago
[-]
I can fully understand "bickering" about someone sprinkling their favourite data type over a codebase which consistently used a standard data type before. The argument that it might be multithreaded in the future does not hold if the rest of the code base clearly was not written with that in mind. That could even be counterproductive, should someone get the misguided idea that it was ready for it.

Make a (very) good argument and suggest a realistic path to change the whole codebase, but don't create inconsistency just because it is "better". It is not.

reply
tombert
1 day ago
[-]
I don’t expose it at the boundaries, just within functions. With everything outside of the function I take a Map interface in and/or return the Map out.

It makes no difference to the outside code.

reply
jpollock
1 day ago
[-]
You are communicating with future readers of the code. The presence of ConcurrentHashMap will lead future engineers into believing the code is threadsafe. This isn't true, and believing it is dangerous.
reply
tombert
1 day ago
[-]
No, they'll believe that that specific map is thread safe.
reply
xorcist
1 day ago
[-]
That's really not much better. No function is an island, as a wise man almost said.
reply
tombert
1 day ago
[-]
It’s actually a lot better. That’s literally the whole point of interfaces and polymorphism: to make it so the outside does not care about the implementation.
reply
boonzeet
1 day ago
[-]
Ironically to your point, I think adding a ConcurrentHashMap because it might be multithreaded eventually IS premature optimisation.

The work can be done in the future to migrate to ConcurrentHashMap when multithreading support is actually added. There's no sense in laying groundwork for unplanned, unimplemented features - that is premature optimisation in a nutshell.

reply
ratherbefuddled
19 hours ago
[-]
Typing ten extra characters is an investment that pays off vs the overhead of doing any piece of work in the future. There's no real downside here, it doesn't make the code more complex, it gives less surprising results.

I think you are conflating YAGNI and premature optimisation and neither apply in this case.

reply
franga2000
22 hours ago
[-]
The point of the premature saying is to avoid additional effort in the name of unnecessary optimization, not to avoid optimization itself. Using a thing that's already right there because it might be better in some cases and is no more effort than the alternatives is not a problem.
reply
tombert
18 hours ago
[-]
I would agree if I were adding a dependency to Gradle or something. In this case it’s just something that conforms to the same interface as the other Map and is built into every Java for the last 20+ years. The max amount of effort I am wasting here is the ten extra characters I used to type “Concurrent”.

My point was that even if it is not optimal, there’s really no utility in bickering about it because it doesn’t really change anything. The savings from changing it to regular hashmap will, generously, be on the order of nanoseconds in most cases, and so people getting their panties in a bunch over it seems like a waste of time.

reply
brabel
1 day ago
[-]
I think you are in the wrong. But my reason is that when you support concurrency, every access and modification must be checked more carefully. By using a concurrent Map you are making me review the code as if it must support concurrency which is much harder. So I say don’t signal you want to support concurrency if you don’t need it.
reply
hunterpayne
1 day ago
[-]
1) You don't need ConcurrentHashMap to make code thread-safe. It's the most extreme option, one that implies you need a thread-safe iterator too.

2) Locks are cheap

3) I seriously doubt that the difference between a Map and a ConcurrentHashMap is measurable in your app

Which means that both, the comments on your PRs are irrelevant and you are still going too far in your thread-safety. So you are both wrong.

What you are right about is to focus on network calls.

reply
tombert
1 day ago
[-]
Locks are cheap performance-wise (or at least they can be) but they’re easy to screw up and they can be difficult to performance test.

ConcurrentHashMap has the advantage of hiding the locking from me and, more importantly, of being correct, and it still uses the same Map interface, so if it's eventually used downstream somewhere, stuff like `compute` will work and be thread safe without me having to manage mutexes.

The argument I am making is that it is literally no extra work to use the ConcurrentHashMap, and in my benchmarks with JMH it doesn't perform significantly worse in a single-threaded context. It seems silly to use a regular HashMap just to save a nanosecond in most cases.
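To make it concrete, here's a toy sketch (made-up example, not my actual code): through the plain Map interface, ConcurrentHashMap's `merge`/`compute` are atomic, so concurrent use downstream just works, with no explicit locks.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same Map interface; only the construction site names the concrete type.
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        // merge is atomic on ConcurrentHashMap, so concurrent increments
        // don't lose updates -- no explicit locking needed.
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // A plain HashMap could lose updates (or worse) under this access pattern.
        System.out.println(counts.get("hits")); // 20000
    }
}
```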

reply
nixon_why69
21 hours ago
[-]
As soon as you have a couple of those ConcurrentHashMaps in scope with relationships between their elements, your concurrency story is screwed unless you've really thought everything out. I'd encourage using just plain HashMap to signal that there's no thread safety so you know to rethink everything if/when it comes up.
reply
toast0
1 day ago
[-]
I only use mature optimizations, so I'm good.

Thinking about the overall design, how it's likely to be used, and what the performance and other requirements are before aggregating the frameworks of the day is mature optimization.

Then you build things in a reasonable way and see if you need to do more for performance. It's fun to do more, but most of the time, building things with a thought about performance gets you where you need to be.

The "I don't need to think about performance at all" camp has a real hard time making things better later. For most things, cycle counting upfront isn't useful, but thinking about how data will be accessed and such can easily make a huge difference. Things like bulk load vs one-at-a-time load are enormous if you're loading lots of things, but if you'll never load lots of things, either works.
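A toy illustration of the data-access point (timings are machine- and JIT-dependent, so this is a sketch, not a benchmark): the same sum over a 2000x2000 array, traversed row-major (sequential memory) versus column-major (strided). The access pattern, not the arithmetic, usually dominates.

```java
public class AccessPattern {
    public static void main(String[] args) {
        int n = 2_000;
        long[][] grid = new long[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                grid[i][j] = i + j;

        // Row-major: walks each inner array sequentially, cache-friendly.
        long t0 = System.nanoTime();
        long rowSum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                rowSum += grid[i][j];
        long rowNs = System.nanoTime() - t0;

        // Column-major: same work, but strides across rows, cache-hostile.
        t0 = System.nanoTime();
        long colSum = 0;
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                colSum += grid[i][j];
        long colNs = System.nanoTime() - t0;

        System.out.println(rowSum == colSum); // true -- identical result
        System.out.println("row-major ns: " + rowNs + ", column-major ns: " + colNs);
    }
}
```

Both loops compute the same answer; only the traversal order differs, which is exactly the kind of decision that's hard to retrofit later.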

Thinking about concurrency, parallelism, and distributed systems stuff before you build is also pretty mature. It's hard to change some of that after you've started.

reply
Shorel
1 day ago
[-]
That first sentence.

I want it in a t-shirt. On billboards. Everywhere :)

reply
NikolaosC
1 day ago
[-]
Spent 6 months last year ripping out an abstraction layer that made every request 40ms slower. We profiled, found the hot path, and couldn't fix it without a rewrite. The "optimize later" school never tells you that "later" sometimes means never.
reply
tombert
1 day ago
[-]
I'd say it usually means "never".

I also find it a bit annoying that most people just make shit up about stuff that is "faster". Instead of measuring and/or looking at the compiled bytecode/assembly, people just repeat tribal knowledge about stuff that is "faster" with no justification. I find that this is common among senior-level people at BigCos especially.

When I was working in .NET land, someone kept telling me that "switch statements are faster" than their equivalent "if" statements, so I wrote a very straightforward test comparing both, and used dotpeek to show that they compile to the exact same thing. The person still insisted that switch is "faster", I guess because he had a professor tell him this one time (probably with more appropriate context) and took whatever the professor said as gospel.

reply
bluGill
1 day ago
[-]
I've seen a lot of requests to optimize code where we can measure that the "optimal" version saves a few nanoseconds. I just deleted some "optimal" code that took a lot of mutexes and so was only faster when there was no contention; in real-world multi-writer situations the easy code wins. (Shared memory vs local socket for IPC.)
reply
tombert
1 day ago
[-]
I don't write a lot of super low level stuff, so maybe things are different there, but at least in the normal user space level I've found it pretty rare that explicit mutexes ever beat the performance of an (in my opinion) easier design using queues and/or something like ZeroMQ.

Generally I've found that the penalty, even without contention, is pretty minimal, and it almost always wins under contention.

reply
bluGill
1 day ago
[-]
To be fair, the code in question was written many years ago, before anyone I know had heard of ZeroMQ (it existed but wasn't well known). It would be possible to optimize the mutexes out, I'm sure, but the big problem wasn't speed, it was the complexity of code that I now maintain. Since local sockets are easier and faster, I'm safe deleting the more complex code that should never have been written.
reply
tananaev
1 day ago
[-]
With modern tools it should be pretty easy to build scalable solutions. I take premature optimization as going out of your way to optimize something that's already reasonable. Not that you should write really bad code as a starting point.
reply
Sammi
1 day ago
[-]
The problem is that this term gets misused to say the opposite of what it was intended for.

It's particularly the kind of people who like to say "hur hur don't prematurely optimize" that don't bother writing decent software to begin with and use the term as an excuse to write poor performing code.

Instead of optimizing their code, these people end up making excuses so they can pessimize it instead.

reply
Shorel
1 day ago
[-]
To me that's the people who write desktop software in Electron. Hate that.
reply
bartread
1 day ago
[-]
I believe use of Electron is known as premature deoptimisation and if it had been a thing when Knuth coined the original phrase I'm sure he would have come up with that term too. Use of Electron to deliver software is popular and works but that doesn't make it any less of an abomination.

I'm actually considering, for the first time since 2013/14 when I worked on a Visual Studio extension, creating a piece of desktop software - and a piece of cross-platform desktop software at that. Given that Microsoft's desktop story has descended into a chaotic mishmash of somewhat conflicting stories, and given it will be a cold day in hell before I choose Electron as the solution to any problem I might have, most likely I will roll with Qt + Rust, or at least Qt + something.

20-odd years ago I might have opted for Java + Swing because I'd done a lot of it and, in fairness to Swing, it's not a bad UI toolkit and widget set. These days I simply prefer the svelte footprint and lower resource requirements of a native binary - ideally statically linked too, but I'll live with the dynamic linking Qt's licensing necessitates.

reply
Shorel
20 hours ago
[-]
I am testing Slint for exactly the same thing. An alternative to Qt/wxWidgets.

It is written in Rust.

reply
bartread
16 hours ago
[-]
Slint? Interesting: I will have to investigate.
reply
Sammi
1 day ago
[-]
I'm going to put a huge `depends` on this one. If it's a small widget you click on once in a while, but it stays loaded as a full Electron app all the time, then yeah, it's terrible. If it's a full-time, front-and-center application like VS Code, then Electron earns its keep. The more often you interact with the app, the more willing you will be to put up with the headroom that Electron requires in order to get the real benefits that Electron provides.
reply
pydry
1 day ago
[-]
In my career Ive seen about 1000 instances of somebody trying to optimize something prematurely.

Usually those people also have a good old whinge about the premature optimization quote being wrong or misinterpreted, and about general attitudes to software efficiency.

Not once have I ever seen somebody try to derail a process of "ascertain speed is an issue that should be tackled" -> "profile" -> fix the hot path.

reply
YZF
1 day ago
[-]
In my career I've seen endless examples of hopelessly badly designed software where no amount of optimization can turn it into anything other than a piece of garbage. Slow, bloated and inefficient.

Waiting until you can ascertain an issue is too late for bad software. The technical term is polishing a turd.

Not that what you're describing doesn't happen, people trying to make something irrelevant fast, but that's not the big problem we face as an industry. The problem is bad software.

reply
wredcoll
1 day ago
[-]
Hurray, bad software is bad, we did it! Just don't write bad code!
reply
YZF
1 day ago
[-]
Before you write code you design (and/or architect) a system (formally or informally).

There's too little appreciation today for a well designed system. And the "premature optimization" line is often used to justify not thinking about things because, hey, that's premature. Just throw something together.

reply
wredcoll
1 day ago
[-]
Like everything else there's nuance and a range of appropriate behaviors. It's probably worth spending some time beforehand designing the next mars rover's software but it's real easy to get, say, the design of an ai based program editor wrong if you aren't getting user feedback.
reply
YZF
4 hours ago
[-]
Getting feedback from users for a product is important as well. Those are somewhat orthogonal concepts. I'm not proposing analysis paralysis or no prototyping but I am saying there are some things that if you didn't consider in advance can become huge issues down the road. There are examples (e.g. Facebook or the Google crawler) where very successful products started with something not great and then were able to fix that later but I would argue most of the very successful products and platforms (software or not) have had some non-negligible thinking/planning upfront.
reply
Jensson
1 day ago
[-]
> Not once have I ever seen somebody try to derail a process of "ascertain speed is an issue that should be tackled" -> "profile" -> fix the hot path.

Many things need to be optimized before you can easily profile them, so at this stage it's already too late and your software will forever be slow.

reply
hunterpayne
1 day ago
[-]
"Not once have I ever seen somebody try to derail a process of "ascertain speed is an issue that should be tackled" -> "profile" -> fix the hot path."

That's because your boss will never in a thousand years hire the type of dev who can do that. And even if you did, there would be team members who would fight those fixes tooth and nail. And yes, I have a very cynical view of some devs, but they earned it through some of the pettiest behavior I have ever seen.

reply
wredcoll
1 day ago
[-]
Man, what are you talking about? Speed optimizations after shipping software happens constantly in everything from video games in C to ui libraries in js and everything in between.

People write some code, test it, ship it, then get some ideas that its too slow and make it faster.

The nice thing about doing it with shipped code is you can actually measure where time is spent instead of guessing.

reply
cstoner
1 day ago
[-]
Yeah, I interpret "premature optimization" as taking a request that takes 500ms and focusing on saving a couple ms by refactoring logic to avoid a SQL JOIN or something.

Your users are not going to notice. Sure, it's faster but it's not focused on the problem.

reply
davedx
1 day ago
[-]
"today, performance is mostly about architectural choices, and it has to be given consideration right from the start"

This doesn't make sense. Why is performance (via architectural choices) more important today than then?

You can build a snappy app today by using boring technology and following some sensible best practices. You have to work pretty hard to need PREMATURE OPTIMIZATION on a project -- note the premature there

reply
jandrewrogers
1 day ago
[-]
The big thing that changed is that almost all software performance today is bandwidth-bound at the limit. Not computation-bound. This transition was first noticed in supercomputing around 25 years ago.

Optimization of bandwidth-bound code is almost purely architectural in nature. Most of our software best practices date from a time when everything was computation-bound such that architecture could be ignored with few bad effects.

reply
materielle
1 day ago
[-]
Because you can naively iterate through a million items faster than an additional network round trip would take.

So a lot of code quality debates don't matter for the typical enterprise app. While one dev spends their afternoon shaving 100 nanoseconds off the hot path, a second developer on a deadline adds a poorly thought out round trip that costs 800 milliseconds.

These architectural problems are also more difficult to unwind later since they tend to have cascading effects.
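To put rough numbers on the first point (toy measurement, machine-dependent, and a plain loop rather than a real benchmark harness): iterating a million items is typically well under a millisecond, while a cross-datacenter round trip is tens of milliseconds or more.

```java
public class IterationCost {
    public static void main(String[] args) {
        // A million items in a plain array.
        long[] xs = new long[1_000_000];
        for (int i = 0; i < xs.length; i++) xs[i] = i;

        long start = System.nanoTime();
        long sum = 0;
        for (long x : xs) sum += x;
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(sum); // 499999500000
        // Generous bound: even without JIT warmup this loop finishes far
        // faster than a single cross-datacenter network round trip.
        System.out.println(elapsedMs < 100);
    }
}
```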

reply
sov
10 hours ago
[-]
"Don't prematurely optimize" is talking about optimization SEPARATE from architecture (and about how premature optimization, as Knuth describes it, gets in the way of correct architecture). Nowadays, almost any optimization purely distinct from architecture is handled either by CPU arch/logic improvements or compiler/interpreter improvements, or is buried in the overwhelming tide of CPU speedups, so TODAY these terms get conflated, as architecture IS typically how we optimize.
reply
f1shy
1 day ago
[-]
I agree. But I have to say, when defining the architecture there are things you already know will be terrible bottlenecks later, and they should be avoided - just like the earlier comment about defining proper indices in a database. Optimization means making something that is already "good" and correct better. There is no excuse for making half-assed, bug-ridden, shitty software under the excuse "optimization is for later": that is technical debt generation. Two very different things.
reply
Nevermark
1 day ago
[-]
> You can build a snappy app today by using boring technology and following some sensible best practices.

If you are building something with similar practical constraints for the Nth time this is definitely true.

You are inheriting “architecture” from your own memory and/or tools/dependencies that are already well fit to the problem area. The architectural performance/model problem already got a lot of thought.

Lots of problems are like that.

But if you are solving a problem where existing tools do a poor job, you better be thinking about performance with any new architecture.

reply
kqr
1 day ago
[-]
In the 1970s computer systems spanned fewer orders of magnitude. Operations generally took somewhere between maybe 1 and 10^8 CPU cycles. Today, the range is closer to 10^-1 to 10^13.
reply
hunterpayne
1 day ago
[-]
Sorry, but you are just wrong. There are very few coding changes you can make to fix performance issues that users will notice. Almost all the optimization changes they might notice are architectural changes. This is because CPU-bound code is very rare, and if you have that case you are probably doing something wrong architecturally. Code being memory bound is what is happening 99.999% of the time, and optimizations to memory-bound code are almost always architectural too. Any coding changes, even to a huge platform's codebase, can probably be found and fixed in a couple of hours. Anything non-architectural that takes longer than that is about as likely to create a noticeable improvement as you are to win the big jackpot in a lottery.
reply
paulddraper
1 day ago
[-]
> Why is performance (via architectural choices) more important today than then?

There were fewer available layers of abstraction.

Whether you wrote in ASM, C, or Pascal, there was a lot less variance than writing in Rust, JavaScript, Python.

reply
ghosty141
1 day ago
[-]
What's the problem with SOLID? It's very very rare that I see a case where going against SOLID leads to better design.
reply
GuB-42
1 day ago
[-]
SOLID tend to encourage premature abstraction, which is a root of evil that is more relevant today than optimization.

SOLID isn't bad, but like the idea of premature optimization, it can easily lead you in the wrong direction. You know how people make fun of enterprise code all the time; that's what you get when you take SOLID too far.

In practice, it tends to lead to a proliferation of interfaces, which is not only bad for performance but also results in code that is hard to follow. When you see a call through an interface, you don't know what code will run unless you know how the object is initialized.

reply
wavemode
1 day ago
[-]
A lot of people make the mistake of thinking that if you just follow SOLID then you have good code, but plenty of teams follow it to the letter and still create complete messes.

The problem is that SOLID on its own does nothing for you. It's a set of (vague) rules, but not a full framework for how to design software. I would even argue that SOLID is actively harmful if used on its own.

Things like Clean Architecture and Domain-Driven Design are a lot closer to being true frameworks for software design, and a lot of their basic principles are actually really good (like the core of the application being made up of objects which perform calculations, validations and business rules with no side effects), but the complexity of those architectures is a problem in itself.

And, even aside from that, I think the industry in general reached a point where people decided that principled object-oriented design is just not worth it. Why spend all this effort worrying about the software remaining maintainable for decades, when we could instead just throw together something that works, then IPO, then rewrite the whole thing once we have money?

reply
sroussey
1 day ago
[-]
In a way, SOLID is premature optimization: you are optimizing abstractions before knowing how the code is used in practice. Lots of code will be written and never changed again, but a minority will see quite a bit of change. Concentrate there. Likewise, you don't need to optimize things that aren't in hot code (usually; experience will tell you that all rules have exceptions, including the exceptions).
reply
ghosty141
1 day ago
[-]
> Lots of code will be written and never changed again, but a minority will see changes quite a bit. Concentrate there

I think the most important principle above all is knowing when not to stick to them.

For example if I know a piece of code is just some "dead end" in the application that almost nothing depends on then there is little point optimizing it (in an architectural and performance sense). But if I'm writing a core part of an application that will have lots of ties to the rest, it totally does make sense keeping an eye on SOLID for example.

I think the real error is taking these at face value and not factoring in the rest of your problem domain. It's way too simple to think SOLID = good, else bad.

reply
dzjkb
1 day ago
[-]
here's a nice critique of SOLID principles:

https://www.tedinski.com/2019/04/02/solid-critique.html

reply
newsoftheday
1 day ago
[-]
They start by indicating people don't understand, “A module should have only one reason to change.”. Reading more of that article, it's clear the author doesn't understand much about software engineering and sounds more like a researcher who just graduated from putting together 2+2.
reply
segmondy
1 day ago
[-]
The great thing about the net is also its biggest problem. Anyone can write a blog, and if it looks nice and sounds polished, they can sway a large group. I roll my eyes so hard at folks who reject SOLID principles and design patterns.
reply
f1shy
1 day ago
[-]
I have seen, way too often, advocates of SOLID and patterns having religious arguments: I don't like it. That being said, I think there is nothing bad in SOLID, as long as it is treated as a set of principles and not religious dogma. About patterns, I cannot say as much positive. They are not bad per se, but I've seen them do a lot of harm. In the Gang of Four book, in the preface I think, it says something like "this list is neither exhaustive, nor complete, and often inadequate". The problem is that every single person I know who was exposed to the book tries to hammer every problem into one pattern (in the sense of [1]). They also insist on putting the name everywhere, like "facade_blabla". IMHO the pattern may be Façade, but putting that through the names of all classes and methods is not good design.

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

reply
ghosty141
1 day ago
[-]
> That being said, I think there is nothing bad in SOLID, as long as treated as principles and not religious dogmas

This should be the header of the website. I think the core of all these arguments is people thinking they ARE laws that must be followed no matter what. And in that case, yeah that won't work.

reply
tracker1
1 day ago
[-]
Something, something, wrong abstractions are worse than no abstractions.

SOLID approaches aren't free... beyond that keeping code closer together by task/area is another approach. I'm not a fan of premature abstraction, and definitely prefer that code that relates to a feature live closer together as opposed to by the type of class or functional domain space.

For that matter, I think it's perfectly fine for a web endpoint handler to make and return a simple database query directly without 8 layers of interfaces/classes in between.

Beyond that, there are other approaches to software development that go beyond typical OOP practices. Something, something, everything looks like a nail.

The issues that I have with SOLID/CLEAN/ONION is that they tend to lead to inscrutable code bases that take an exponentially long amount of time for anyone to come close to learning and understanding... Let alone the decades of cruft and dead code paths that nobody bothered to clean up along the way.

The longest lived applications I've ever experienced tend to be either the simplest, easiest to replace or the most byzantine complex monstrosities... and I know which I'd rather work on and support. After three decades I tend to prioritize KISS/YAGNI over anything else... not that there aren't times where certain patterns are needed, so much as that there are more times where they aren't.

I've worked on one, singular, application in three decades where the abstractions that tend to proliferate in SOLID/CLEAN/ONION actually made sense: it was a commercial application deployed to various govt agencies that had to support MS-SQL, Oracle, and DB2 backends. Every other time I've seen an excess of database and interface abstractions, the problem would have been better solved in other, less performance-impacting ways. If you only have a single concrete implementation of an interface, you probably don't need that interface... you can inherit/override the class directly for testing.

And don't get me started on keeping unit tests in a completely separate project... .Net actually makes it painful to put your tests with your implementation code. It's one of my few actual critiques about the framework itself, not just how it's used/abused.

reply
gavmor
1 day ago
[-]
This doesn't seem to be a critique of the principles so much as a critique of their phrasing.

Even his "critique" of Demeter is, essentially, that it focuses on an inconsequential aspect of dysfunction (method chaining), which I consider to be just one smell that leads to the larger principle, which (and we apparently both agree on this) is interface design.

reply
someguyiguess
1 day ago
[-]
It only applies to the object oriented programming paradigm
reply
mrkeen
1 day ago
[-]
Negative.

The only part of SOLID that is perhaps OO-only is Liskov Substitution.

L is still a good idea, but without object-inheritance, there's less chance of shooting yourself in the foot.

reply
kortex
1 day ago
[-]
I go by a philosophy that Liskov Substitution is reeeally about referential transparency. I don't care about parent/child classes, I care about interfaces and implementations, and structural subtyping. Fix that, and it's great.
reply
marcosdumay
1 day ago
[-]
That's understating the problem. It mandates OOP.

If you follow SOLID, you'll write OOP only, with always present inheritance chains, factories for everything, and no clear relation between parameters and the procedures that use them.

reply
Exoristos
1 day ago
[-]
This is only superficially true. Here's a fair discussion that could serve as a counterpoint: https://medium.com/@ignatovich.dm/applying-solid-principles-...
reply
paulddraper
1 day ago
[-]
It causes excessive abstraction, and more verbose code.

L and I are both pretty reasonable.

But S and D can easily be taken to excess.

And O seems to suggest OO-style polymorphism instead of ADTs.

reply
ghosty141
1 day ago
[-]
This is similar to my view. All these "laws" should always be used as guidance, not as actual laws. Same with O: I think it's good advice to design software so that adding features that are orthogonal to other features doesn't require modifying much code.

That's how I view it. You should design your application such that extension involves little modifying of existing code as long as it's not necessary from a behavior or architectural standpoint.

reply
SAI_Peregrinus
1 day ago
[-]
Of course you can do that & still make a mess. E.g. by deciding that all your behavior will be "configurable" by coding inside strings in a YAML file, and what YAML files you load at runtime determine which features you get. Sure, they might conflict, but that's the fault of whoever wrote that "configuration" YAML. (Replace YAML with XML for a previous era version of this bad idea).
reply
jnpnj
1 day ago
[-]
It's a very interesting topic. Even when designing a system, how to modularize, it's healthy to wait until the whole context is in sight. It's a bit of a black art, too early or too late you pay some price.
reply
ozim
1 day ago
[-]
Sounds like we agree.

Bunch of stuff is done for us. Using Postgres and having correct indexes is not premature optimization, just basic stuff to be covered.

Having a double loop is quadratic though. Parallelism is super fun because it might actually make everything slower instead of faster.
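The double-loop case, sketched (made-up example): a nested-loop duplicate check versus a single pass with a set. Both give the same answer; one does ~n²/2 comparisons, the other n.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // Accidentally quadratic: for each element, scan the rest of the list.
    static boolean hasDuplicateSlow(List<Integer> xs) {
        for (int i = 0; i < xs.size(); i++) {
            for (int j = i + 1; j < xs.size(); j++) {
                if (xs.get(i).equals(xs.get(j))) return true;
            }
        }
        return false;
    }

    // Linear: one pass with a set; add() returns false on a repeat.
    static boolean hasDuplicateFast(List<Integer> xs) {
        Set<Integer> seen = new HashSet<>();
        for (Integer x : xs) {
            if (!seen.add(x)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) xs.add(i);
        xs.add(42); // introduce one duplicate

        System.out.println(hasDuplicateSlow(xs)); // true, via ~n^2/2 comparisons
        System.out.println(hasDuplicateFast(xs)); // true, via a single pass
    }
}
```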

reply
xnx
1 day ago
[-]
> making your code more complicated and still slower than if you thought about performance at the start.

Not if your optimization for performance is some Rube Goldberg assemblage of microservices and a laundry list of AWS services.

reply
causal
1 day ago
[-]
Exactly. Today "premature optimization" almost always means unnecessary infra or abstractions or some other complexity- not DS&A choices.
reply
pcblues
1 day ago
[-]
I still don't understand microservices for anything short of a NAG of four level architecture.
reply
SJC_Hacker
1 day ago
[-]
Maybe it's just my domain, but most perf problems I've seen relate to architecture and infrastructure decisions, and not really code or algorithms. Especially the "microservice" mantra: splitting simple functions out into containers, using k8s, all in the name of "scalability", then being surprised when a simple API call takes several seconds due to all the cruft it has to go through.
reply
benrutter
1 day ago
[-]
> Today, late optimization is just as bad as premature optimization, if not more so.

I agree but I don't think this discredits the "premature optimisation is the root of all evil" thing, aside from the fact that it's a heavy exaggeration.

The trouble is, people read it as "don't optimise" which is an incredibly bad decision.

Especially in the data world though, I've seen lots of teams really struggle with problems caused by using technologies they don't need (normally spark or kubernetes or both) just because they might need them later.

I think that type of pitfall is what the original quote is warning against.

reply
miki123211
19 hours ago
[-]
IMHO "root of all evil" sits there with YAGNI. A useful way to think about the world (over-optimizing early is often as bad as over-abstracting, and just as common), but a principle which shouldn't be taken to an extreme.

I think a more useful mental model is one-way versus two-way decisions (local optimizations can be applied later; the architecture has to be right from the start), as well as expected risks and payoffs.

reply
cogman10
1 day ago
[-]
Completely agreed here [1].

And as I point out, what Knuth was talking about in terms of optimization was things like loop unrolling and function inlining. Not picking the right datastructure or algorithm for the problem.

I mean, FFS, his entire book was about exploring and picking the right datastructures and algorithms for problems.

[1] https://news.ycombinator.com/item?id=47849194

reply
konovalov-nk
10 hours ago
[-]
Recently I stumbled upon this crazy thing: https://github.com/anomalyco/opencode/issues/9676#issuecomme...

The `ripgrep` tool call in opencode added a `--follow` override, which allowed it to traverse symlinks. And you can imagine what happens when symlinks point to ancestors. My 9950X3D CPU was at 100% load all the time until I did a bit of research and added config to forbid this at the `rg` level.

So I'd refine it as "Premature optimization is the root of all evil. Blatant ignorance of it is even worse."

reply
mrzhangbo
21 hours ago
[-]
I agree. The main reason for slow development is excessive optimization and over-design.
reply
vanguardanon
1 day ago
[-]
Is this less relevant today, when, if you do go down the wrong architectural path, it is much cheaper to rewrite everything in something better?
reply
throwaway5752
1 day ago
[-]
"Premature optimization is the root of all evil"

Decades in, this is the worst of all of them. Misused by laziness or malice, and nowhere near specific enough.

The graveyard of companies boxed in by past poor decisions is sprawling. And the people who made those early poor decisions bounce around the field talking about their "successful track record" of globally poor but locally good architectural decisions that others have had to clean up.

It touches on a real problem, though, but it should be stricken from the record and replaced with a much better principle. "Design to the problem you have today and the problems you'll have in 6 months if you succeed. Don't design to the problems you'll have next year if it means you won't succeed in 6 months" doesn't roll off the tongue.

reply
tracker1
1 day ago
[-]
On your last bit, I definitely agree... personally I've leaned more and more into KISS above all else... simple things that are easy to replace are easily replaced only when you need to. Similarly, I also tend to push for initial implementations of many/most things in a scripted language first, mostly for flexibility/simplicity to get a process "right" before worrying about a lot of other things.

One thing that came out of the no-sql/new-sql trends of the past decade and a half is that joins are the enemy of performance at scale. It really helps to know and compromise on db normalization, in ways such as leaning on JSON/XML for non-critical column data as opposed to 1:1/child tables/joins a lot of the time. For that matter, pure performance and vertical scale have pulled a lot of options back from the brink of microservice death-by-a-million-paper-cuts.

reply
firemelt
23 hours ago
[-]
lol, I never saw it that way

it's one of my favorite quotes

premature optimization nowadays looks like choosing microservices when a monolith works just fine

reply
chrismarlow9
1 day ago
[-]
A better rule is "complicate only if necessary"
reply
dec0dedab0de
1 day ago
[-]
I like it as a way to remind myself to not get caught up in the minutiae.
reply
enraged_camel
1 day ago
[-]
>> Today, late optimization is just as bad as premature optimization, if not more so.

You are right about the origin of and the circumstances surrounding the quote, but I disagree with the conclusion you've drawn.

I've seen engineers waste days, even weeks, reaching for microservices before product-market fit is even found, adding caching layers without measuring and validating bottlenecks, adding sharding pre-emptively, adding materialized views when regular tables suffice, paying for edge-rendering for a dashboard used almost entirely by users in a single state, standing up Kubernetes for an internal application used by just two departments, or building custom in-house rate limiters and job queues when Sidekiq or similar solutions would cover the next two years.

One company I consulted for designed and optimized for an order of magnitude more users than were in the total addressable market for their industry! Of that, they ultimately managed to hit only 3.5%.

All of this was driven by imagined scale rather than real measurements. And every one of those choices carried a long tail: cache invalidation bugs, distributed transactions, deployment orchestration, hydration mismatches, dependency array footguns, and a codebase that became permanently harder to change. Meanwhile the actual bottlenecks were things like N+1 queries or missing indexes that nobody looked at because attention went elsewhere.

reply
cstoner
1 day ago
[-]
Thank you for posting this. I disagreed with OP but couldn't _quite_ find the words to describe why. Your post covers what I was trying to say.

I was quite literally asked to implement an in-memory cache to avoid a "full table scan" caused by a join to a small DB table recently. Our architect saw "full table scans" in our database stats and assumed that must mean a performance problem. I feel like he thought he was making a data-driven profiling decision, but he seemed to misunderstand that for a small table a full scan is faster than an index lookup. That whole table is already in RAM in the DB anyway.

So now we have a complex Redis PubSub cache invalidation strategy to save maybe a ms or two.

I would believe that we have performance problems in this chunk of code, and it's possible an in-memory cache may "fix" the issue, but if it does, then the root of the problem was more likely an N+1 query (that an in-memory cache bandaids over). By focusing on the cache, we now have a much more complex chunk of code to maintain than if we had just tracked down the N+1 query and fixed _that_.
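For anyone who hasn't run into it, a toy sketch (invented tables, sqlite3) of the N+1 shape and the single join that replaces it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 1, 'c');
""")

# N+1 shape: one query for the posts, then one extra query per post.
posts = conn.execute("SELECT id, author_id, title FROM posts").fetchall()
n_plus_1 = [
    (title, conn.execute("SELECT name FROM authors WHERE id = ?", (aid,)).fetchone()[0])
    for _, aid, title in posts
]

# The fix: a single join, no cache layer required.
joined = conn.execute("""
    SELECT p.title, a.name
    FROM posts p JOIN authors a ON a.id = p.author_id
    ORDER BY p.id
""").fetchall()
print(joined)  # [('a', 'ann'), ('b', 'bob'), ('c', 'ann')]
```

An ORM hides the loop on the left inside innocent-looking attribute access, which is why the pattern so often goes unnoticed until someone profiles the query log.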

reply
Esophagus4
1 day ago
[-]
> All of this was driven by imagined scale rather than real measurements

Yes. When I was a young engineer, I was asked to design something for a scale we didn’t even get close to achieving. Eventual consistency this, event driven conflict resolution that… The service never even went live because by the time we designed it, everyone realized it was a waste of time.

I learned it makes no sense to waste time designing for zillions of users that might never come. It’s more important to have an architecture that can evolve as needs change rather than one that can see years into the future (that may never come).

reply
pelasaco
21 hours ago
[-]
The question is always, "When is it no longer premature?" Normally, it means the system is already in production, users are suffering, and maintaining and supporting it is a nightmare. Then we engineers say, "We have to spend N sprints on that to pay the technical debt."
reply
kgwxd
1 day ago
[-]
All aphorisms are awful.
reply
tonymet
1 day ago
[-]
The wheel is a premature optimization to someone who never figured out how to build one.
reply
tehjoker
1 day ago
[-]
I would venture that this statement is not true for library authors. Performance is a large factor in competitive advantage, especially in domains like image analysis or processing large corpuses of text etc.

In these domains, algorithm selection and fine-tuning hot spots pay off significantly. You must hit minimum speeds to make your application viable.

reply
jollyllama
1 day ago
[-]
You ARE going to need it.
reply
dorkitude
1 day ago
[-]
Found the overbuilder!
reply
theLiminator
1 day ago
[-]
Yeah, I hate that quote too. People take it so out of context and even ignore engineering for reasonable performance out of the box.

I don't blame Knuth, he's talking about focusing on micro-optimizations, but a lot of devs nowadays don't even care to get basic performance right.

reply
rustystump
1 day ago
[-]
I have worked on projects that had 5 layers of buffering/caching, all implementing complicated eviction strategies, all in a browser client. There were 4 caching layers when I started; I added the 5th, caching frequent network data to disk, which massively improved performance.

This is too true. However, often you don't fully know the shape of the domain until you swing at it and fail.

reply
dartharva
1 day ago
[-]
Strange, this is actually one of the most important things I learnt the hard way as an analyst. I had the misfortune of being forced to do local-level data engineering, without cloud ETL, after being tasked with extensive BI on heavy data volumes (I don't have an engineering education). Had someone told me this simple statement ahead of time, it would have saved me a LOT of pain and effort.
reply
m3kw9
1 day ago
[-]
If you've done enough premature optimization, you know it's usually wasted effort, detrimental, and a bad trade.
reply
CyberDildonics
1 day ago
[-]
Unfortunately people do keep repeating it to excuse the fact that they don't know how to optimize in the first place.

Anyone who has done optimization even a little knows that it isn't very difficult, but you do need to plan and architect for it so you don't have to restructure your whole program to get it to run well.

Mostly it's just rationalization, people don't know the skill so they pretend it's not worth doing and their users suffer for it.

If software and websites were even reasonably optimized, people could use a computer as powerful as a Raspberry Pi 5 (except for high-res video) for most of what they do day to day.

reply
snarfy
1 day ago
[-]
I definitely hate SOLID more.
reply
Aaargh20318
1 day ago
[-]
I’m missing Curly’s Law: https://blog.codinghorror.com/curlys-law-do-one-thing/

“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”

reply
inetknght
1 day ago
[-]
> It must not be both a floor polish and a dessert topping.

I worked as a janitor for four years near a restaurant, so I know a little bit about floor polishing and dessert toppings. This law might be a little less universal than you think. There are plenty of people who would happily try out floor polish as a dessert topping if they're told it'll get them high.

reply
otterley
1 day ago
[-]
It’s a reference to a very old SNL sketch called “shimmer”. https://www.youtube.com/shorts/03lLPUYkpYM

It probably won’t be up very long but it’s a classic.

reply
dhosek
1 day ago
[-]
Of course for maximum confusion, there’s Gen X me, making this reference to my Gen Alpha kids who have absolutely no idea what I’m talking about.

I’m still waiting for the moment in the ice cream shop when I can ask them, “sugar or plain?” https://mediaburn.org/videos/sugar-or-plain/

reply
gpderetta
1 day ago
[-]
ah, I thought it was an Ubik reference!
reply
inetknght
1 day ago
[-]
Hah, nice!
reply
rapnie
1 day ago
[-]
Borax is an example of a substance that is simultaneously used for skin care, household cleaning, as soldering flux, and as ant killer. But I guess it is a constant with variable effects. It's hard to find in local shops anymore.
reply
m463
1 day ago
[-]
Reminds me of WD-40 ("Water Displacement, 40th formula").

Not used that often to displace water, though.

reply
otterley
1 day ago
[-]
Really? It's at practically every hardware store I've been to (US, things may be different elsewhere).
reply
rapnie
1 day ago
[-]
Yes, Netherlands, think it is quite different in retail in comparison.
reply
normie3000
1 day ago
[-]
White vinegar is a fierce cleaning product and a salad dressing. Although to me it has the disadvantage of tasting like a cleaning product.
reply
aworks
1 day ago
[-]
I worked for a while as a janitor in a college dorm. Not an easy job, but it definitely revealed a side of humanity I might not have otherwise seen. Especially the clean-out after students left for the year.
reply
rapnie
1 day ago
[-]
We had a large green plant growing in an unused fridge. Fungus yes, but this was a new experience. As students we learned a lot.
reply
inetknght
1 day ago
[-]
> it definitely revealed a side of humanity I might not have otherwise seen

It definitely revealed a lot of falsehoods and stereotypes.

reply
js8
1 day ago
[-]
I thought that you were about to write: "as a janitor in a restaurant, the dessert topping is sometimes used as a floor polish".
reply
inetknght
1 day ago
[-]
Something as expensive as dessert toppings would only be used as floor polish by the people who truly were high... and only if they could do it without the boss knowing what they were doing.
reply
CyberDildonics
1 day ago
[-]
> floor polish as a dessert topping if they're told it'll get them high

I think that would be called a drug, not a dessert topping.

reply
m463
1 day ago
[-]
As a computer person, I hated math/physics/science because of the one-letter (greek) variable names.

Of course some of that osmosizes back via lisp and APL.

reply
tsoukase
14 hours ago
[-]
That was the exact reason I liked math. Maybe because I am Greek.
reply
shermantanktop
1 day ago
[-]
Oh! I didn’t have a name for this one, but it’s a lesson I’ve learned. E.g. if variable x is non-zero, then variable y will be set to zero. Don’t check variable y to find out whether x is zero. And definitely don’t repurpose y for some other function in cases where it’s otherwise unused.
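A tiny sketch of that lesson (invented example): test the fact you actually care about, rather than inferring it from another variable that merely happens to correlate with it.

```python
def scale(values, factor):
    # In this code, offset happens to be zero whenever factor is nonzero.
    offset = 0 if factor != 0 else 10

    # Fragile: `if offset != 0:` would work today, but only by coincidence;
    # it infers "factor is zero" from a variable that means something else.

    # Clear: test the condition you actually mean.
    if factor == 0:
        return [v + offset for v in values]
    return [v * factor for v in values]

print(scale([1, 2], 2))  # [2, 4]
print(scale([1, 2], 0))  # [11, 12]
```

The coupled version breaks silently the day someone changes how `offset` is computed; the explicit test doesn't.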
reply
pc86
1 day ago
[-]
For modern-day web developers there are very, very, very few things that fall into the "if you do this you should probably be escorted out of the building on the first offense" category, but "reusing a variable because it's 'not being used'" might be on that list. I can maybe see the argument in very low-memory embedded systems or similar (systems where I'm not even qualified to come up with examples), but not in anything that regularly shows up on HN, for example.
reply
estimator7292
1 day ago
[-]
Shellac is both a floor polish and a food additive, particularly in candy.

Used to be, anyway. Modern alternatives are much better. It's still used as glue in wind instruments though.

reply
ipnon
1 day ago
[-]
I usually invoke this by naming with POSIWID.
reply
sdeiley
1 day ago
[-]
Eh. I love absl::StatusOr<T> at Google
reply
conartist6
1 day ago
[-]
Remember that these "laws" contain so many internal contradictions that when they're all listed out like this, you can just pick one that justifies what you want to justify. The hard part is knowing which law to break when, and why.
reply
jimmypk
1 day ago
[-]
Postel's Law vs. Hyrum's Law is the canonical example. Postel says be liberal in what you accept — but Hyrum's Law says every observable behavior of your API will eventually be depended on by someone. So if you're lenient about accepting malformed input and silently correcting it, you create a user base that depends on that lenient behavior. Tightening it later is a breaking change even if it was never documented. Being liberal is how you get the Hyrum surface area.

The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
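A minimal sketch of that heuristic (all names hypothetical): strict parsing at a boundary you control, tolerant parsing at a boundary where you can't force clients to upgrade.

```python
def parse_internal_config(raw: dict) -> dict:
    # Internal boundary we control: unknown keys are an error, so typos
    # and stale settings surface immediately instead of becoming load-bearing.
    allowed = {"host", "port"}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return {"host": raw.get("host", "localhost"), "port": int(raw.get("port", 8080))}


def parse_external_event(raw: dict) -> dict:
    # External boundary: clients we can't upgrade may send extra fields.
    # Copy out only the fields we understand and ignore the rest.
    return {"id": raw["id"], "kind": raw.get("kind", "unknown")}
```

Note the asymmetry: tightening `parse_external_event` later is a breaking change for whoever came to rely on the leniency (Hyrum), while `parse_internal_config` can be loosened at any time without breaking anyone.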

reply
physicles
1 day ago
[-]
I’m one of those that have thrown out Postel’s law entirely. Maybe the issue is that it never defines “strict”, “liberal”, and “accept”. But at least for public APIs, it never made sense to me.

If I accidentally accept bad input and later want to fix that, I could break long-time API users and cause a lot of human suffering. If my input parsing is too strict, someone who wants more liberal parsing will complain, and I can choose to add it before that interaction becomes load-bearing (or update my docs and convince them they are wrong).

The stark asymmetry says it all.

Of course, old clients that can’t be upgraded have veto power over any changes that could break them. But that’s just backwards compatibility, not Postel’s Law.

Source: I’m on a team that maintains a public API used by thousands of people for nearly 10 years. Small potatoes in internet land but big enough that if you cause your users pain, you feel it.

reply
zffr
1 day ago
[-]
One example where I think the law does make sense is for website URL paths.

Over time the paths may change, and this can break existing links. IMO websites should continue to accept old paths and redirect to the new equivalents. Eventually the redirects can be removed when their usage drops low enough.

reply
zaphar
1 day ago
[-]
I probably use a different interpretation of Postel's law. I try not to "break" for anything I might receive, where break means "crash, silently corrupt data, and so on". But that just means I usually return an error to the sender. Is this what Postel meant? I have no idea.
reply
ragnese
1 day ago
[-]
I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.

So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.

reply
kortex
1 day ago
[-]
I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:

A QA engineer walks into a bar. She orders a beer. She orders 2 beers.

She orders 0 beers.

She orders -1 beers.

She orders a lizard.

She orders a NULLPTR.

She tries to leave without paying.

Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.

The bar explodes.

It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.

I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is try to squeeze it into as strict of a structure with as many invariants as possible, and failing to do so, return an error.

It's not about perfection, but it is predictable.
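A minimal "parse, don't validate" sketch of the bar's order handling (hypothetical types): external input either becomes a structure whose invariants hold, or fails with one predictable error.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Order:
    beers: int  # invariant: beers >= 0, guaranteed at construction time


def parse_order(raw: object) -> Order:
    # Squeeze arbitrary external input into a strict structure, or fail
    # loudly -- no lizards, no NULLPTRs, no negative beers.
    if not isinstance(raw, dict):
        raise ValueError(f"expected an object, got {type(raw).__name__}")
    beers = raw.get("beers")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(beers, int) or isinstance(beers, bool) or beers < 0:
        raise ValueError(f"invalid beer count: {beers!r}")
    return Order(beers=beers)
```

Everything downstream of `parse_order` can assume a well-formed `Order` and never has to re-check, which is exactly the predictability being described.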

reply
ragnese
16 hours ago
[-]
Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.

I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.

And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled on choosing a word for it here).

reply
zaphar
21 hours ago
[-]
Right even for senior engineers this can be hard to get right in practice. Parse, don't validate is certainly one approach to the problem. Choosing languages that force you to get it right is another.
reply
ryandrake
1 day ago
[-]
Yea, I interpret it as the same thing: On invalid input, don't crash or give the caller a root shell or whatever, but definitely don't swallow it silently. If the input is malformed, it should error and stop. NOT try to read the user's mind and conjure up some kind of "expected" output.
reply
zaphar
1 day ago
[-]
I think perhaps a better wording of the law would be: "Be prepared to be sent almost anything. But be specific about what you will send yourself".
reply
hnfong
21 hours ago
[-]
Isn't there an (admittedly messed up) possibility where some crazy API user depends on the strict behavior and expects your API to return an error, but after the upgrade the parsing check is relaxed and that user's code breaks?
reply
buu700
1 day ago
[-]
I've always liked Postel's law as a general philosophy toward life. But yeah, it's definitely become a little dated in the software world, if it was ever a good idea at all.
reply
nothrabannosir
1 day ago
[-]
I used to see far more references to Postel’s law in the 00s and early 10s. In the last decade, that has noticeably shifted towards hyrum’s law. I think it’s a shift in zeitgeist.
reply
throwaway173738
1 day ago
[-]
I look at Postel’s law more as advice on how to parse input. At some point you’re going to have to upgrade a client or a server to add a new field. If you’ve been strict, then you’ve created a big coordination problem, because the new field is a breaking change. But if you’re liberal, then your systems ignore components of the input that they don’t recognize. And that lets you avoid a fully coordinated update.
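Under that reading, a sketch (invented message format) of why a reader that ignores unrecognized fields avoids the coordinated-update problem:

```python
import json

# Version 1 of the reader knows only these fields.
KNOWN_FIELDS = {"user", "amount"}


def decode_v1(payload: str) -> dict:
    # Tolerant reader: keep the fields we understand, drop the rest.
    # A newer sender can add fields without breaking old readers.
    raw = json.loads(payload)
    return {k: v for k, v in raw.items() if k in KNOWN_FIELDS}


# A v2 sender has started including a new "currency" field.
v2_message = json.dumps({"user": "ann", "amount": 5, "currency": "EUR"})
print(decode_v1(v2_message))  # {'user': 'ann', 'amount': 5}
```

This is essentially the forward-compatibility rule that formats like Protocol Buffers bake in: unknown fields are skipped, so senders and receivers never need a lockstep upgrade.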
reply
ragnese
1 day ago
[-]
Yes, that's what Postel's Law is about. And that's the whole point of contrasting it with Hyrum's Law, no?

Hyrum's Law is pointing out that sometimes the new field is a breaking change in the liberal scenario as well, because if you used to just ignore the field before and now you don't, your client that was including it before will see a change in behavior now. At least by being strict, (not accepting empty arrays, extra fields, empty strings, incorrect types that can be coerced, etc), you know that expanding the domain of valid inputs won't conflict with some unexpected-but-previously-papered-over stuff that current clients are sending.

reply
zahlman
1 day ago
[-]
I've always thought of Hyrum's Law more as a Murphy-style warning than as actionable advice.
reply
dmoy
1 day ago
[-]
Yea hyrum's law is like an observation of what happens when you personally try to make 200,000+ distinct commits to google3 to fix things that are broken (as hyrum did, mind you this is before LLMs), and people come climbing out of the woodwork to yell at you with fistfuls of xkcd/1172
reply
astrobe_
1 day ago
[-]
This reminds me of a comment I read here a long time ago about XML and how DTDs were supposed to let you be strict. In reality, the person said, if the other end sending you broken XML is a big corp that refuses to fix it, then you have no choice but to accept it.

Bottom line: it's all a matter of balance of powers. If you're the smaller guy in the equation, you'll be "Postel'ed" anyway.

Yet Postel's law is still in the "the road to hell is paved with good intentions" category, for the reason you explain very well (AKA XKCD #1172 "Workflow"). Wikipedia even lists a couple of major critics about it [1].

[1] https://en.wikipedia.org/wiki/Robustness_principle

reply
jimbokun
1 day ago
[-]
Would be tempted to stick a proxy in there that checks if the data is malformed in the expected way, and if so converts it to the valid form before forwarding to the real service.
reply
someguyiguess
1 day ago
[-]
I propose we add your law: Jimmy’s Law
reply
AussieWog93
1 day ago
[-]
DRY is my pet example of this.

I've seen CompSci guys especially (I'm EEE background, we have our own problems but this ain't one of them) launch conceptual complexity into the stratosphere just so that they could avoid writing two separate functions that do similar things.

reply
busfahrer
1 day ago
[-]
I think I remember a Carmack tweet where he mentioned in most cases he only considers it once he reaches three duplicates
reply
michaelcampbell
1 day ago
[-]
The "Rule of 3" is a pretty well known rule of thumb; I suspect Carmack would admit it predates him by a fair bit.
reply
mcv
1 day ago
[-]
I once heard of a counter-principle called WET: Write Everything Twice.
reply
whattheheckheck
1 day ago
[-]
Why 3? What is this, baseball?

Take the 5 Rings approach.

The purpose of the blade is to cut down your opponent.

The purpose of software is to provide value to the customer.

It's the only thing that matters.

You can also philosophize why people with blades needed to cut down their opponents along with why we have to provide value to the customer but thats beyond the scope of this comment

reply
marcosdumay
1 day ago
[-]
Why baseball? You don't use the number 3 in any other context?

If you write a lot of code, the odds of something repeating in another place just by coincidence are quite large. But the odds of the specific code that repeated once repeating again are almost none.

That's a basic rule from probability that appears in all kinds of contexts.

Anyway, both DRY and WET assume the developers are some kind of ignorant automaton that can't ever know the goal of their code. You should know whether things are repeating by coincidence or not.

reply
jimbokun
1 day ago
[-]
Rule of 3 will greatly benefit the customer over time as it lessens the probability of bugs and makes adding new features faster.
reply
ta20240528
1 day ago
[-]
"The purpose of software is to provide value to the customer."

Partially correct. The purpose of your software to its owners is also to provide future value to customers competitively.

What we have learnt is that software needs to be engineered: designed and structured.

reply
nradov
1 day ago
[-]
And yet some of the software most valuable to customers was thrown together haphazardly with nothing resembling real engineering.
reply
shermantanktop
1 day ago
[-]
If you get lucky doing that you might regret it. Especially with non-technical management.

Making software is a back-of-house function, in restaurant terms. Nobody out there sees it happen, nobody knows what good looks like, but when a kitchen goes badly wrong, the restaurant eventually closes.

reply
galbar
1 day ago
[-]
These projects quickly reach a point where evolving them further is too costly and risky, to the point that the org owning them will choose to stop development and do a re-implementation which, despite being a very costly and risky endeavor, ends up being the better choice.

This is a very costly way of developing software.

reply
nradov
1 day ago
[-]
It's easy to say that organizations should do it right the first time, in terms of applying proper engineering practices. But they often didn't have the time, capital, and skillset to do that. Not ideal, but that's often how things work in the real world and it will never change.
reply
datadrivenangel
1 day ago
[-]
Organizations should do it not catastrophically wrongly, especially once a core design / concept is mostly solidified. Putting a little time into reliability and guardrails prevents a huge amount of downside.

I've been at organizations that don't think engineers should write tests because it takes too much time and slows them down...

reply
lamasery
1 day ago
[-]
Plenty of businesses or products within businesses stagnate and fail because their software got too expensive to maintain and extend. Not infrequently, this happens before it even sees a public release. Any business that can't draw startup-type levels of investment to throw effectively infinite amounts of Other People's Money at those kinds of problems, risks that end if they allow their software to get too messed-up.

The "who gives a shit, we'll just rewrite it at 100x the cost" approach to quality is very particular to the software startup business model, and doesn't work elsewhere.

reply
TheGRS
1 day ago
[-]
One is one. Two is a coincidence. And three is a trend. That's my personal head canon.
reply
D-Coder
1 day ago
[-]
"Once is happenstance. Twice is coincidence. The third time it's enemy action." — Auric Goldfinger
reply
jimbokun
1 day ago
[-]
Another law for the list!
reply
ericmcer
1 day ago
[-]
DRY and KISS were right next to each other which I thought was funny.
reply
aworks
1 day ago
[-]
I worked for a company that also had hardware engineers writing RTL. Our software architect spent years helping that team reuse/automate/modularize their code. At a minimum, it's still just text files with syntax, despite rather different semantics.
reply
zahlman
1 day ago
[-]
I've heard that story a few times (ironically enough) but can't say I've seen a good example. When was over-architecture motivated by an attempt to reduce duplication? Why was it effective in that goal, let alone necessary?
reply
mosburger
1 day ago
[-]
I think there is often tension between DRY and "thing should do only one thing." E.g., I've found myself guilty of DRYing up a function, but the use is slightly different in a couple places, so... I know, I'll just add a flag/additional function argument. And you keep doing that and soon you have a messed up function with lots of conditional logic.

The key is to avoid the temptation to DRY when things are only slightly different and find a balance between reuse and "one function/class should only do one thing."
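A sketch of that failure mode (invented example): the "DRY" version accretes a flag per use site, while the split version keeps each caller's intent obvious.

```python
# The tempting "DRY" version: one function, growing a flag per caller.
def format_name(user, formal=False, with_title=False):
    name = (f"{user['last']}, {user['first']}" if formal
            else f"{user['first']} {user['last']}")
    if with_title and user.get("title"):
        name = f"{user['title']} {name}"
    return name


# The alternative: small functions that each do one thing.
def display_name(user):
    return f"{user['first']} {user['last']}"


def index_name(user):
    return f"{user['last']}, {user['first']}"


user = {"first": "Ada", "last": "Lovelace", "title": "Countess"}
print(format_name(user, formal=True))  # Lovelace, Ada
print(display_name(user))              # Ada Lovelace
```

Two flags already give four call shapes to reason about; a third or fourth flag and the conditional logic inside `format_name` outweighs whatever duplication it saved.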

reply
physicles
1 day ago
[-]
For sure. I feel I need all of my experience to discern the difference between “slightly different, and should be combined” and “slightly different, and you’ll regret it if you combine them.”

One of my favorite things as a software engineer is when you see the third example of a thing, it shows you the problem from a different angle, and you can finally see the perfect abstraction that was hiding there the whole time.

reply
dasil003
1 day ago
[-]
Buy me a beer and I can tell you some very poignant stories. The best ones are where there is a legitimate abstraction that could be great, assuming A) everyone who had to interact with the abstraction had the expertise to use it, B) the details of the product requirements conformed to the high level technical vision, now and forever, and C) migrating from the current state to the new system could be done in a bounded amount of time.

My view is over-engineering comes from the innate desire of engineers to understand and master complexity. But all software is a liability, every decision a tradeoff that prunes future possibilities. So really you want to make things as simple as possible to solve the problem at hand as that will give you more optionality on how to evolve later.

reply
caminante
1 day ago
[-]
IMHO, it comes down to awareness/probability about the need to future proof or add defensive behavior.

The spectrum is [YAGNI ---- DRY]

A little less abstract: designing a UX comes to mind. It's one thing to make something workable for you, but to make it for others is way harder.

reply
onionisafruit
1 day ago
[-]
I’ll give a simplified example of something I have at work right now. The program moves data from the old system to the new system. It started out moving a couple of simple data types that were basically the same thing by different names. It was a great candidate for reusing a method. Then a third type was introduced that required a little extra processing in the middle. We updated the method with a flag to do that extra processing. One at a time, we added 20 more data types that each had slightly different needs. Now the formerly simple method is a beast with several arguments that change the flow enough that there are probably just a few lines that get run for all the types. If we didn’t happen to start with two similar types we probably wouldn’t have built this spaghetti monster.
reply
zahlman
19 hours ago
[-]
> Then a third type was introduced that required a little extra processing in the middle. We updated the method with

A callback to do the processing?

> a flag

Oh.

> Now... several arguments... probably just a few lines that get run for all the types

Yeah, that does tend to be where it leads when new parameters are thought of in terms of requesting special treatment, rather than providing more tools.

Yes, yes, "the complexity has to go somewhere". But it doesn't all have to get heaped into the same pile, or mashed together with the common bits.

reply
markburns
1 day ago
[-]
I saw a fancy HTML table generator that had so many parameters and flags and bells and whistles that it took IIRC hundreds of lines of code to save writing a similar amount of HTML in a handful of different places.

Yes, the initial HTML looked similar in these few places, but the resulting usages of the abstraction did not look similar at all.

But it took a very long time reading each place a table existed and quite a bit longer working out how to get it to generate the small amount of HTML you wanted to generate for a new case.

Definitely would have opted for repetition in this particular scenario.

reply
pydry
1 day ago
[-]
DRY is misunderstood. It's definitely a fundamental aspect of code quality; it's just one of about four, and maximizing it to the exclusion of the others is where things go wrong. Usually it comes at the expense of loose coupling (which is equally fundamental).

The goal ought to be to aim for a local optimum across all of these qualities.

Some people just want to toss DRY away entirely, though, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.

reply
layer8
1 day ago
[-]
DRY is misnamed. I prefer stating it as SPOT — Single Point Of Truth. Another way to state it is this: If, when one instance changes in the future, the other instance should change identically, then make it a single instance. That’s really the only DRY criterion.
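A small sketch of that criterion (hypothetical upload limit): the size check and the error message would have to change identically in the future, so both derive from one definition.

```python
# Single point of truth: if the limit changes, both uses change with it.
MAX_UPLOAD_MB = 25


def upload_allowed(size_bytes: int) -> bool:
    return size_bytes <= MAX_UPLOAD_MB * 1024 * 1024


def upload_error() -> str:
    # Derived from the same constant, so the message can never drift
    # out of sync with the actual check.
    return f"File exceeds the {MAX_UPLOAD_MB} MB limit"


print(upload_allowed(10 * 1024 * 1024))  # True
print(upload_error())                    # File exceeds the 25 MB limit
```

By the same criterion, two pieces of code that merely look alike today but would change for different reasons should stay separate.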
reply
xnorswap
1 day ago
[-]
I like this a lot more, because it captures whether two things are necessarily the same or just happen to be currently the same.

A common "failure" of DRY is coupling together two things that only happened to bear similarity while they were both new, and then being unable to pick them apart properly later.

reply
CodesInChaos
1 day ago
[-]
> then being unable to pick them apart properly later.

Which is often caused by the "midlayer mistake" https://lwn.net/Articles/336262/

reply
Silamoth
1 day ago
[-]
That’s how I understand it as well. It’s not about an abstract ideal of duplication but about making your life easier and your software less buggy. If you have to manually change something in 5 different places, there’s a good chance you’ll forget one of those places at some point and introduce a bug.
reply
mosburger
1 day ago
[-]
I said this elsewhere in the comments, but I think there's sort of a fundamental tension that shows up sometimes between DRY and "a function/class should only do one thing." E.g., there might be two places in your code that do almost identical things, so there's a temptation to say "I know! I'll make a common function, I'll just need to add a flag/extra argument..." and if you keep doing that you end up with messy "DRY" functions with tons of conditional logic that tries to do too much.

Yeah there are ways to avoid this and you need to strike balances, but sometimes you have to be careful and resist the temptation to DRY everything up 'cuz you might just make it brittler (pun intended).

reply
gavmor
1 day ago
[-]
Yes, and there are many different kinds of truth, so when two arise together—we can call this connascence—we can categorize how these instances overlap: Connascence of Name, Connascence of Algorithm, etc.
reply
mcv
1 day ago
[-]
That's how I understood it. If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.
reply
mjr00
1 day ago
[-]
> If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.

The tricky part is that sometimes "a new thing" is really "four new things" disguised as one. A database table is a great example because it's a failure mode I've seen many times. A developer has to do it once and they have to add what they perceive as the same thing four times: the database table itself, the internal DB->code translation e.g. ORM mapping, the API definition, and maybe a CRUD UI widget. The developer thinks, "oh, this isn't DRY" and looks to tools like Alembic and PostGREST or Postgraphile to handle this end-to-end; now you only need to write to one place when adding a database table, great!

It works great at first, then more complex requirements come down: the database gets some virtual generated columns which shouldn't be exposed in code, the API shouldn't return certain fields, the UI needs to work off denormalized views. Suddenly what appeared to be the same thing four times is now four different things, except there's a framework in place which treats these four things as one, and the challenge is now decoupling them.

Thankfully most good modern frameworks have escape valves for when your requirements get more complicated, but a lot of older ones[0] really locked you in and it became a nightmare to deal with.

[0] really old versions of Entity Framework being the best/worst example.

reply
mcv
1 day ago
[-]
I believe that was the point of Ruby on Rails: you really just had to create the class, and the framework would create the table and handle the ORM. Or maybe you still had to write the migration; it's been a while. That was pretty spectacular in its dedication to DRY, but also pretty extreme.

But the code I'm talking about is really adding the same thing in 4 different places: the constant itself, adding it to a type, adding it to a list, and there was something else. It made it very easy to forget one step.

reply
pydry
1 day ago
[-]
Renaming it doesnt change the nature of the problem.

There should often be two points of truth because having one would increase the coupling cost more than the benefits that would be derived from deduplication.

reply
iwontberude
1 day ago
[-]
Why do they have to be so smart but so annoying at the same time?
reply
davedx
1 day ago
[-]
As a very senior SWE with a decent amount of eng decision making responsibility these days I still find I get so much mileage out of KISS and YAGNI that I never really think about any other laws.

So much SWE is overengineering. Just like this website to be honest. You don't get away with all that bullshit in other eng professions where your BoM and labour costs are material.

reply
ryandrake
1 day ago
[-]
I always thought it would be great if every line of software (or choose some other unit of complexity) added to the unit cost of creating each copy, like every bolt or gasket adds a few cents to the BOM of a piece of hardware. That fictional world would have simple, fast, higher-quality software programs.
reply
blandflakes
1 day ago
[-]
This was also true of Amazon's Leadership Principles. They are pretty reasonable guidelines, but in a debate, it really came down to which one you could most reasonably weaponize in favor of your argument, even to the detriment of several others.

Which maybe is also fine, I dunno :)

reply
rustyhancock
1 day ago
[-]
It's because they are heuristics intended to be applied by knowledgeable and experienced humans.

It can be quite hard to explain when a student asks why you did something a particular way. The truthful answer is that it felt like the right way to go about it.

With some thought you can explain it partly - really justify the decision that was made subconsciously.

If they're asking about a conscious decision, that's rarely much more helpful than having to say that's what the regulations or guidelines say.

Where they really learn is seeing those edge cases and gray areas.

reply
jolt42
1 day ago
[-]
I'll propose this as the only unbreakable law: "everything in moderation", which I feel implies any law is breakable; now this is sounding like the barber's paradox. What else does anyone propose as unbreakable?
reply
alok-g
1 day ago
[-]
>> everything in moderation

Saying this is like saying 'pick the optimum point' without saying anything about how to find it. This cannot be a law; it is just the definition of an optimum.

Note that the optimum point need not sit somewhere in the middle or 'inside', like an interior maximum. It could very well lie on an extreme of the domain (the input variable space).

reply
zdc1
1 day ago
[-]
Counterpoint: "everything in moderation, including moderation"
reply
xnx
14 hours ago
[-]
> so many internal contradictions that when they're all listed out like this, you can just pick one that justifies what you want to justify.

More like the Bible of Software Engineering then

reply
rapnie
1 day ago
[-]
I like alternatives to formal IT lawfare, like CUPID [0] properties for Joyful coding by Dan North, as alternative to SOLID principles.

[0] https://cupid.dev/

reply
heresie-dabord
20 hours ago
[-]
> these "laws" contain so many internal contradictions

Also notice how many of the so-called _software laws_ are actually statements about human behaviour and people-problems.

Confirmation bias, Dunning-Kruger Effect, Sunk-Cost Fallacy, Ringlemann Effect, Price's Law, Putt's Law, Conway's Law, Brook's Law, Peter Principle, Hanlon's Razor, Amara's Law...

Of the 59 "laws", only a small number are guiding principles specifically about planning and software.

Human behaviour is hard to change -- the same dysfunction can be seen everywhere. As a fundamental principle, you need to use the right/best tool for the job; you will know when you are using the wrong tool/solution because you'll spend a significant amount of time trying to correct/mask the unwanted consequences.

And if you enter a shop where many tools are wrong... consider going to work in a different shop.

reply
mday27
1 day ago
[-]
Seems to me like there's also a divide between observational laws (e.g. Hyrum's Law just says "this seems to be true") and prescriptive laws (e.g. Knuth's Law, which is really a statement about how you ought to behave)
reply
ghm2180
1 day ago
[-]
This is doubly true in Machine Learning Engineering. Knowing which methods to avoid is just as important as knowing what might work well and why. Importantly, a bunch of data science techniques (and I use data science in the sense of making critical team/org decisions) matter just as much, for which you should understand a bit of statistics, not only data-driven ML.
reply
Silamoth
1 day ago
[-]
Statistics is absolutely fundamental to data science. But I’m not sure this relates to the above idea of “laws” being internally contradictory?
reply
httpz
1 day ago
[-]
That's why we call it software engineering, not software science.
reply
diehunde
1 day ago
[-]
I guess that's why confirmation bias is also listed?
reply
ChrisMarshallNY
1 day ago
[-]
Great point.

Sort of like a real code of law.

reply
ericmcer
1 day ago
[-]
Most of them felt contradictory and kind of antiquated.

Reading through the list mostly made me feel sad. You can't help but interpret these through the modern lens of AI-assisted coding. Then you wonder if learning and following (some of) these for the last 20 years is going to make you a janitor for a bunch of AI slop, or force you into a coding style where these rules are meaningless, or make you entirely irrelevant.

reply
deaux
1 day ago
[-]
Laws of Software Engineering (2026 Update)

- Every website will be vibecoded using Claude Opus

This will result in the following:

- The background color will be a shade of cream, to properly represent Anthropic

- There will be excessive use of different fonts and weights on the same page, as if designed by a freshman design student who just learned about typography

- There will be an excess of cards in different styles, a noteworthy number of which have a colored, rounded border either on hover or by default on exactly one side of the card

reply
shimman
1 day ago
[-]
Yeah, the dude vibe coding the site also likely vibe coded the book. Instant pass. Also, looking at this person's coding history (mostly cheat sheets and road maps), I have zero confidence that anything in the book is worthy of note.
reply
dabedee
1 day ago
[-]
- The domain will be a long title with a dot com at the end.
reply
ai_fry_ur_brain
20 hours ago
[-]
If I think you vibe coded your website, I'm not using your product or reading your blog, and I will bad-mouth you at every opportunity.
reply
komi013
19 hours ago
[-]
user name checks out
reply
dataviz1000
1 day ago
[-]
I did not see Boyd’s Law of Iteration [0]

"In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."

Boyd invented the OODA loop.

[0]https://blog.codinghorror.com/boyds-law-of-iteration/

reply
Silamoth
1 day ago
[-]
That’s such a good one! I wish more people understood this. It seems management and business types always want some upfront plan. And I get it, to an extent. But we’ve learned this isn’t a very effective way to build software. You can’t think of all possible problems ahead of time, especially the first time around. Refactoring to solve problems with a flexible architecture is better than designing yourself into a rigid architecture that can’t adapt as you learn the problem space.
reply
computerdork
1 day ago
[-]
That is a good one: iterative development is in general superior to overly deliberate and overly careful development.

And what a great and very subtle example with the fighter jet control sticks. This reminds me of a build-time issue I once had. Way back in college, I did really poorly on a final programming project because I didn't realize you were supposed to swap out a component they had you write with a mock component that was provided for you. Hard to explain, but they wanted you to write this component to show you could; once you did, you weren't supposed to use it, because it was extremely slow to build. So they also gave you a mock version to use when working on the code of your main system.

Using my full component killed my build time, as it took 10 minutes to build instead of a few seconds, and it was the one school programming project I couldn't finish before the deadline. It was super stressful. A very painful lesson, but ever since I have always found ways to shorten my build times.

reply
AnimalMuppet
1 day ago
[-]
> iterative development is in general superior to overly deliberate and overly careful development.

I reserve the right to become smarter as I learn stuff. That means that I reserve the right to produce better designs as I learn stuff. Want me to produce better designs? Let me learn stuff. Therefore, let me iterate a few times.

reply
computerdork
1 day ago
[-]
agreed:)
reply
devsda
1 day ago
[-]
I think taking Boyd's law to the extreme is how some folks end up with sprints lasting 1 week or less.
reply
martinclayton
1 day ago
[-]
This resonates with "the bitter lesson" somehow, interesting...

...On reading more it seems of use primarily in adversarial situations, so not-so-much resonant.

reply
lqstuart
1 day ago
[-]
What do you call the law that you violate when you vibe code an entire website for "List of 'laws' of software engineering" instead of just creating a Wikipedia page for it
reply
Bratmon
1 day ago
[-]
"Creating a Wikipedia page" is a weird suggestion. In 2026, it's actually not possible to create a Wikipedia page unless you're already a deep expert in Wikipedia culture.

(Wikipedia nerds often say "No, anyone can create a page as long as they follow the 137 guidelines!" This is a prank: Wikipedia admins will delete your article no matter how many guidelines it follows)

reply
zelphirkalt
23 hours ago
[-]
Even some 15 years ago it was impossible to add web links to communities, even though other web links to similar communities were already in the web links section, because some people weaponized the wiki as a moat.
reply
tjohnell
1 day ago
[-]
Slop’s Law

If it can be slopped, it will be slopped.

reply
niccl
1 day ago
[-]
Sturgeon's Law (2026): 99% of everything is crap or slop
reply
meken
1 day ago
[-]
I love Kernighan’s Law:

> "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"

reply
zelphirkalt
23 hours ago
[-]
Hm, I find this one to be very dubious. When you make an effort to write correct code, you think hard. When you debug, it's more like just looking at the execution of what you have thought of before and thinking "OK where did I go wrong this time? Show me in the process." and it is usually much easier to see why something is wrong or at least at which step it breaks.
reply
hatsix
1 day ago
[-]
I know it's not software-engineering-only, but Chesterton's Fence is often the first 'law' I teach interns and new hires: https://fs.blog/chestertons-fence/
reply
jrmiii
11 hours ago
[-]
Found your comment by searching to see if anyone mentioned it. Really key in legacy systems.
reply
ericmcer
1 day ago
[-]
They have the "Law of Unintended Consequences" on this list, which describes the same phenomenon.

I always liked the fence story better though.

reply
sltr
20 hours ago
[-]
The corollary of Chesterton's Fence is also valuable: don't go putting up unnecessary fences, because others won't be able to take them down.
reply
computerdork
1 day ago
[-]
This is one of my biggest principles too, "think before you do."
reply
RivieraKid
1 day ago
[-]
Not a law but a design principle that I've found to be one of the most useful, and also one of the least known:

Structure code so that in an ideal case, removing a functionality should be as simple as deleting a directory or file.

reply
layer8
1 day ago
[-]
Functionalities aren’t necessarily orthogonal to each other; features tend to interact with one another. “Avoid coupling between unrelated functionalities” would be more realistic.
reply
danparsonson
1 day ago
[-]
Features arise out of the composition of fundamental units of the system, they're not normally first class units themselves. Can you give an example?
reply
RivieraKid
1 day ago
[-]
For example using nested .gitignore files vs using one root .gitignore file. I guess this principle is related to this one:

Imagine the code as a graph with nodes and edges. The nodes should be grouped in a way that when you display the graph with grouped nodes, you see few edges between groups. Removing a group means that you need to cut maybe 3 edges, not 30. I.e. you don't want something where every component has a line to every other component.

Also when working on a feature - modifying / adding / removing, ideally you want to only look at an isolated group, with minimal links to the rest of the code.
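One way to make the "few edges between groups" idea concrete is to count the edges a deletion would have to cut; a toy sketch (module and group names made up):

```python
def cross_group_edges(edges, group_of):
    """Count dependency edges whose endpoints live in different groups.

    A group with few cross edges is cheap to delete or rework;
    one with many is entangled with the rest of the system.
    """
    return sum(1 for a, b in edges if group_of[a] != group_of[b])

# Hypothetical module dependency edges and their grouping.
edges = [("auth", "db"), ("auth", "ui"),
         ("billing", "db"), ("billing", "invoice")]
group_of = {"auth": "accounts", "ui": "accounts",
            "billing": "payments", "invoice": "payments",
            "db": "infra"}
```

Here only the two edges into "db" cross group boundaries, so removing either feature group means cutting one edge, not rewiring the whole graph.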

reply
ActivePattern
1 day ago
[-]
You're describing "modularity" or "loose coupling" in code. But it rarely implies you can just delete a file or directory. It usually just means that a change in one component requires minimal changes to other components -- i.e. the diff is kept small.
reply
kijin
1 day ago
[-]
What's the smallest unit of functionality to which your principle applies?

For example, each comment on HN has a line on top that contains buttons like "parent", "prev", "next", "flag", "favorite", etc. depending on context. Suppose I might one day want to remove the "flag" functionality. Should each button be its own file? What about the "comment header" template file that references each of those button files?

reply
jpitz
1 day ago
[-]
I think that if you continue along the logical progression of the parent poster, then maybe the smaller units of functionality would be represented by simple ranges of lines of text. Given that, deleting a single button would ideally mean a single contiguous deletion from a file, versus deleting many disparate lines.
reply
sverhagen
1 day ago
[-]
Maybe the buttons shouldn't be their own files, but the backend functionality certainly could be. I don't do this, but I like the idea.
reply
balamatom
18 hours ago
[-]
The `comment_header` template would iterate over the files in `comment_header.d/*`, which would, admittedly, need forced sorted naming:

    100_parent.template
    150_context.template
    200_prev_next.template
    300_flag.template
    350_favorite.template

Looks odd with the numbering, no?

But then you get the added benefit of being able to refer to them by number, just "100" or "300", without having to glue humanlang inflection, declension, and punctuation onto identifiers that happen to be words...

Some places where you can see this pattern: BASIC's explicit line numbering; non-systemd init systems.

reply
MarkLowenstein
1 day ago
[-]
YES! "Deletability"
reply
voiceofunreason
1 day ago
[-]
reply
skydhash
1 day ago
[-]
Nowadays I tend towards the C idiom of having few files and not a deep structure, and away from the one-class-one-file style of Java. Fewer files to rename when refactoring, and fewer files to open when trying to understand an implementation.
reply
dhosek
1 day ago
[-]
One advantage of more smaller files is that merge conflicts become less common. I would guess that at least half of the trivial merge conflicts I see are two unrelated commits which both add some header or other definition at the top of the file, but because both are inserted at line 17, git looks at it and says, “I give up.”

This in itself might not be enough to justify it, but fewer files will lead to more merge challenges in a collaborative environment. (I’d also note that more small files speed up incremental compilation, since unchanged code is less likely to get recompiled, which is one reason why when I do JVM dev I never really think about compilation time: my IDE can recompile everything quickly in the background without my noticing.)

reply
skydhash
1 day ago
[-]
For the first point, such merge conflicts are so trivial to fix that it barely takes time.

You’ve got a point on incremental compilation. But fewer files (done well) are not really a challenge, as everything is self-contained. It makes it easier to discern orthogonal features because the dependency graph is clearer. With multiple files you often find that similar things are assumed to be identical and used as such. Then it’s a big refactor when you try to split them, especially if they are foundational.

reply
pkasting
1 day ago
[-]
This list is missing my personal law, Kasting's Law:

Asking "who wrote this stupid code?" will retroactively travel back in time and cause it to have been you.

reply
omoikane
1 day ago
[-]
"I'm casting around in my head for someone to blame, and it's just... me, keeps coming back at me."

- Jeremy Clarkson (Top Gear, series 14 episode 5)

reply
pkasting
1 day ago
[-]
(Tangent: What a pleasure to see your username again; will always think of your readability help fondly!)
reply
omoikane
1 day ago
[-]
Great to see you too, it has been a pleasure working with you :)
reply
davery22
1 day ago
[-]
A few extra from my own notes-

- Shirky Principle: Institutions will try to preserve the problem to which they are the solution

- Chesterton's Fence: Changes should not be made until the reasoning behind the current state of affairs is understood

- Rule of Three: Refactoring given only two instances of similar code risks selecting a poor abstraction that becomes harder to maintain than the initial duplication

reply
t43562
1 day ago
[-]
The Conservation of Complexity (Tesler's Law) seems immediately insightful to me just as a sentence:

  "Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated."
But then the explanation seems to me to devolve into a trite suggestion not to burden your users. This doesn't interest me, because whatever you're doing, users need the level of complexity they need and no more, and making it less turns your application into an inflexible toy. So this is all, to a degree, obvious.

I think it's more useful to remember when you're refactoring that if you try to make one bit of a system simpler then you often just make another part more complex. Why write something twice to end up with it being just as bad the other way round?

reply
fenomas
1 day ago
[-]
Nice to have these all collected nicely and sharable. For the amusement of HN let me add one I've become known for at my current work, for saying to juniors who are overly worried about DRY:

> Fen's law: copy-paste is free; abstractions are expensive.

edit: I should add, this is aimed at situations like when you need a new function that's very similar to one you already have, and juniors often assume it's bad to copy-paste so they add a parameter to the existing function so it abstracts both cases. And my point is: wait, consider the cost of the abstraction, are the two use cases likely to diverge later, do they have the same business owner, etc.

reply
ndr
1 day ago
[-]
Same vibe, different angle:

> 11. Abstractions don’t remove complexity. They move it to the day you’re on call.

Source: https://addyosmani.com/blog/21-lessons/

reply
Symmetry
1 day ago
[-]
"Any problem in computer science can be solved with another level of indirection...except for the problem of too many levels of indirection."
reply
austin-cheney
1 day ago
[-]
My own personal law is:

When it comes to frameworks (any framework), any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of "easy".

It is more impactful than it sounds, because it implicitly points to a distinction in ultimate goals: the selfish developer versus the product they are developing. It is also worth pointing out that before software frameworks were a thing, the term framework just identified a defined set of overlapping abstract business principles to achieve a desired state. Software frameworks, on the other hand, provide a library that dictates a design convention rather than the desired operating state.

reply
nashashmi
1 day ago
[-]
Yii frameworks were full of that jargon.

I had a hard time learning the whole MVC concept.

reply
biscuits1
1 day ago
[-]
Today, I was presented with Claude's decision to include numerous goto statements in a new implementation. I thought deeply about their manual removal; years of software laws went against what I saw. But then, I realized it wouldn't matter anymore.

Then I committed the code and let the second AI review it. It too had no problem with goto's.

Claude's Law: The code that is written by the agent is the most correct way to write it.

reply
voiceofunreason
1 day ago
[-]
Render therefore unto Caesar the things which are Caesar's; and unto Claude the things that are Claude's.
reply
ozgrakkurt
1 day ago
[-]
For anyone reading this. Learn software engineering from people that do software engineering. Just read textbooks which are written by people that actually do things
reply
ghm2180
1 day ago
[-]
Any recommendations? I read Designing Data-Intensive Applications (DDIA), which was really good. But it is by Martin Kleppmann, who as I understand it is an academic. Reading PEPs is also nice, as it lets one understand the motivations and the "why should I care" behind feature X.
reply
devgoncalo
1 day ago
[-]
reply
WillAdams
1 day ago
[-]
Ousterhout's _A Philosophy of Software Design_ (mentioned elsethread) would be mine.
reply
ozgrakkurt
1 day ago
[-]
reply
pratikdeoghare
1 day ago
[-]
https://norvig.github.io/paip-lisp/#/

Really great book even if don’t care about lisp or ai.

reply
azath92
1 day ago
[-]
The Python Cookbook is good, and Fluent Python is more about principles than application (obviously both are Python-specific). I also like A Philosophy of Software Design: a tiny little book that uses a simple example (a class that implements a text editor) to talk about complexity; it's not actually about making a text editor at all.
reply
newsoftheday
1 day ago
[-]
I also recommend avoiding anyone who rails against SOLID or OOP.
reply
hunterpayne
1 day ago
[-]
This is the best comment on this article but it was deleted for some reason.

"The meta-law of software engineering: All laws of software engineering will be immediately misinterpreted and mindlessly applied in a way that would horrify their originators. Now that we can observe the behaviour of LLMs that are missing key context, we can understand why."

Or, you can't boil down decades of wisdom and experience into a pithy, 1 sentence quote.

reply
Kinrany
1 day ago
[-]
SOLID being included immediately makes me have zero expectation of the list being curated by someone with good taste.
reply
newsoftheday
1 day ago
[-]
The few on this page today who object to SOLID seem likely to me to be functional programmers who have never understood software engineering principles in the first place.
reply
sov
1 day ago
[-]
Weird take--SOLID, to me (I work in embedded but have done basically everything), represents a system of design principles that mean well and are probably fine in a heavily OO environment 80% of the time, but resoundingly end up a prime example of the Pareto principle.
reply
sjducb
23 hours ago
[-]
I like the Single Responsibility part of SOLID; it makes the code much easier to reason about. The Liskov Substitution Principle is also super important: subclasses should be fully substitutable for the parent class.

The I and O of solid are fine and don’t cause too many problems.

But I agree that Dependency Inversion is a recipe for loads of pointless interfaces that are only ever used once. I strongly prefer the traditional pattern where high level modules depend on lower level modules, it’s much simpler.

I think it would be better as SOIL not SOLID.

Maybe they could have replaced the D with DRY…

reply
detectivestory
1 day ago
[-]
I'm seeing some hate for SOLID in these comments and I am a little surprised. While I don't think it should ever be used religiously, I would much rather work on a team that understood the principles than one that didn't.
reply
causal
1 day ago
[-]
I think it's probably pointing toward the general harm that thinking only in objects has done to programming as a practice
reply
AtNightWeCode
1 day ago
[-]
I think the baseline issue is that code can be trash and still comply with SOLID. Therefore people get frustrated over it, getting PRs rejected and so on.

I think it is better to have real requirements like: The code needs to be testable in a simple way.

reply
heap_perms
1 day ago
[-]
That's interesting, what makes you think that? Not long ago, I was working on my degree in Computer Science (Software Engineering), and we were heavily drilled on this principle. Even then, I found it amusing how all the professors were huge fanboys of SOLID. It was very dogmatic.
reply
zelphirkalt
22 hours ago
[-]
Professors... They are likely knowledgeable about the abstract things in computer science, but when it comes to actually writing code and giving guidance on it, I would only trust the ones that have a background of getting deep into the code and actually making things. For example, when I was studying I experienced a variety of professors:

One was a Python core developer who knew many languages and could "compile in his head" what some code would become in assembly. I would trust that one.

One criticized my C code for having multiple procedures, because that meant keeping track of more pointers, and told me it would be better as one long procedure, lol, without ever considering readability. He was likely also simply wrong, since the compiler probably inlines such calls anyway. That one taught a math lecture and used C. Needless to say, I wouldn't trust him when it comes to writing good code.

Then I had a math/physics guy who wrote Java 5 (or earlier) style code in the Java 8 era. He didn't use generics at all, cast to Object instead, and so on. He also explained that he used a bit shift in a for-loop variable update because it was faster than multiplying by 2. Yeah, I also wouldn't trust that one to give advice on writing good code. It taught me to be very skeptical of mathematicians writing code, unless they have a proven track record of software development skills. This kind of person is the reason mathematicians and physicists should be supported by an actual software developer who writes their code, and not be too ignorant or arrogant to consider hiring one.

I also had one professor who taught a math lecture so badly that it was hard to follow; even his writing on the blackboard was illegible. He had another lecture that was mostly talk about Internet and web concepts, delivered in one of the most grating accents imaginable, almost comical. I wouldn't trust that one to give advice on writing good code either.

reply
ryanshrott
1 day ago
[-]
People use the premature optimization principle in exactly the wrong way these days. Knuth's full quote is, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." That 97%/3% split is the whole point.

People bring it up to argue for never thinking about performance, which flips the intent on its head. The real takeaway is that you need to spot that critical 3% early enough to build around it, and that means doing some optimization thinking up front, not none at all.
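In practice, "optimization thinking" starts with measurement; a throwaway helper like this (my own sketch, not from the thread or from Knuth) is usually enough to locate the critical 3%:

```python
import cProfile
import io
import pstats

def profile_top(fn, *args, n=5):
    """Run fn under cProfile and return a report of the n most
    time-consuming entries -- a cheap way to find the 'critical 3%'
    before optimizing anything."""
    pr = cProfile.Profile()
    pr.enable()
    fn(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(n)
    return buf.getvalue()
```

Anything that doesn't show up near the top of the report is, per the quote, part of the 97% best left alone.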

reply
dassh
1 day ago
[-]
Calling them 'laws' is always a bit of a stretch. They are more like useful heuristics. The real engineering part is knowing exactly when to break them.
reply
someguyiguess
1 day ago
[-]
Especially since a lot of them are written in a tongue-in-cheek way. And many are contradictory.
reply
r0ze-at-hn
1 day ago
[-]
Love the detail sub-pages. Over 20 years I collected a little list of specific laws, or really observations (https://metamagic.substack.com/p/software-laws), and thought about turning each into a detailed blog post, but it has been more fun chatting with other engineers, showing them the page, and watching as they scan the list and inevitably tell me a great story. For example, I could do a full writeup on the math behind this one, but it is way more fun hearing the stories about trying and failing to get a second rewrite:

9. Most software will get at most one major rewrite in its lifetime.

reply
merge_software
1 day ago
[-]
> YAGNI (You Aren't Gonna Need It)

This one is listed as design, but it could just as easily count as architecture. I'm guessing a lot of developers have worked on scaling with Lambda functions or a complex IaC setup when a simple API running on a small VPS would have done the trick, at least until enough people are using the application for it to be considered profitable.

reply
hinkley
1 day ago
[-]
When we dockerized a service, the p95 time in testing notched up a noticeable amount. I was already juggling so much other work at that point that, for shits and giggles, I tried vertically scaling: halving the cluster size and doubling the cores per server. That zeroed out the p95 delta.

OPS gave me shit about it, and I was like kiss my ass, the cluster costs EXACTLY the same and deployments are 25% faster.

I think people forget that in the cloud, bigger servers don't really cost more until you get crazy about it.

reply
TheGRS
1 day ago
[-]
You know, I mention this stuff all the time in various meetings and discussions. I read a lot on Hacker News and just have years of accumulated knowledge from the various colleagues I've worked with. It's nice to have a little reference sheet.
reply
asmodeuslucifer
1 day ago
[-]
I learned about Dunbar's number (~150) in an anthropology class: it's the size of a community in which everyone knows each other's identities and roles.

You can ask someone to write down the name of everyone they can think of, real or fictional, living or dead, and most people will not make it to 250.

Some individuals, like professional gossip columnists or some politicians, can remember as many as 1,000 people.

reply
OkayPhysicist
1 day ago
[-]
Dunbar's number is largely pseudoscience. Dunbar just took the group size of gorillas, scaled it up to humans based on brain size, and then published, without bothering with silly little details like actually testing human group sizes.
reply
4dregress
1 day ago
[-]
I like to replace the bus factor with the Lottery Factor.

I actually had a colleague run over by a bus on the way to work in London. He was very lucky and made a full recovery.

His head was poking out under the main exit of the bus.

reply
mojuba
1 day ago
[-]
> Get it working correctly first, then make it fast, then make it pretty.

Or develop a skill to make it correct, fast and pretty in one or two approaches.

reply
theandrewbailey
1 day ago
[-]
Modern SaaS: make it "pretty", then make it work, then make it "pretty" again in the next release. Make fast? Never.
reply
AussieWog93
1 day ago
[-]
I recently had success with a problem I was having by basically doing the following:

- Write a correct, pretty implementation

- Beat Claude Code with a stick for 20 minutes until it generated a fragile, unmaintainable mess that still happened to produce the same result but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophising gets you really far)

- Pull across the concepts and timesaves from Claude's mess into the pretty code.

Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.

reply
mojuba
1 day ago
[-]
However, a correct, pretty and fast solution may exist that neither of you have found yet.

But yes, the scope and breadth of their knowledge goes far beyond what a human brain can handle. How many relevant facts can you hold in your mind when solving a problem? 5? 12? An LLM can take thousands of relevant facts into account at the same time, and that's their superhuman ability.

reply
tmoertel
1 day ago
[-]
One that is missing is Ousterhout’s rule for decomposing complexity:

    complexity(system) =
        sum(complexity(component) * time_spent_working_in(component)
            for component in system).
The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.
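A toy illustration of the sum (the component complexities and hours below are invented numbers): pushing the hairy parts into a component you rarely revisit shrinks the total, even though the raw complexity is unchanged:

```python
# Ousterhout-style decomposition: the same total component complexity
# "costs" less when the complex parts are rarely visited.

def system_complexity(components):
    """components: list of (complexity, hours_spent_working_in) pairs."""
    return sum(c * t for c, t in components)

# Complexity spread evenly through code you edit all the time:
spread = [(8, 50), (8, 50)]

# Same ballpark of raw complexity, but encapsulated in a stable library
# you almost never open; day-to-day work sees only a thin interface:
encapsulated = [(15, 5), (1, 95)]

print(system_complexity(spread))        # 800
print(system_complexity(encapsulated))  # 170
```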
reply
stingraycharles
1 day ago
[-]
That’s not some kind of law, though. And I’m also not sure whether it even makes sense; complexity is not a function of time spent working on something.
reply
tmoertel
1 day ago
[-]
First, few of the laws on that site are actual laws in the physics or mathematics sense. They are more guiding principles.

> complexity is not a function of time spent working on something.

But the complexity you observe is a function of your exposure to that complexity.

The notion of complexity exists to quantify the degree of struggle required to achieve some end. Ousterhout’s observation is that if you can move complexity into components far away from where you must do your work to achieve your ends, you no longer need to struggle with that complexity, and thus it effectively is not there anymore.

reply
wduquette
1 day ago
[-]
And in addition, the time you spend making a component work properly is absolutely a function of its complexity. Once you get it right, package it up neatly with a clean interface and a nice box, and leave it alone. Where "getting it right" means getting it to a state where you can "leave it alone".
reply
CuriouslyC
1 day ago
[-]
I think the intent is that if you can cleanly encapsulate some complexity so that people working on stuff that uses it don't have to understand anything beyond a simple interface, that complexity "doesn't exist" for all intents and purposes. Obviously this isn't universal, but a fair percentage of programmers these days don't understand the hardware they're programming against due to the layers of abstractions over them, so it's not crazy either.
reply
Brian_K_White
1 day ago
[-]
It's showing that all the complexity in the components are someone else's problem. Your only complexity is your own top layer and your interface with the components.
reply
skydhash
1 day ago
[-]
Only when the problem has been resolved well enough for your use cases. Like using an http client instead of dealing with parsing http messages or using a GUI toolkit instead of manipulating raw pixels.

That’s pretty much what good design is about. You solve a foundational problem and now no one else needs to think about it (including you, when working on some other parts).

reply
someguyiguess
1 day ago
[-]
Unless you are the one having to maintain that library. Then it just migrates the complexity to another location.
reply
tmoertel
1 day ago
[-]
> Unless you are the one having to maintain that library. Then it just migrates the complexity to another location.

No, it’s a win, even then.

Say you are writing an operating system, and one of the fundamental data structures you use all over the place is a concurrency-safe linked list.

Option 1 is to manipulate the relevant instances of the linked list directly—whenever you need to insert, append, iterate over, or delete from any list from any subsystem in your operating system. So you’ll have low-level list-related lock and pointer operations spread throughout the entire code base. Each one of these operations requires you to struggle with the list at the abstraction level and at the implementation level.

Option 2 is to factor out the linked-list operations and isolate them in a small library. Yes, you must still struggle with the list at the abstraction and implementation levels in this one small library, but everywhere else the complexity has been reduced to having to struggle with the abstraction level only, and the abstraction is a small set of straightforward operations that is easy to wrap your head around.
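A minimal sketch of Option 2, in Python rather than kernel C (the class and method names here are my invention for illustration, not from Ousterhout): all the lock handling lives in one small module, and every caller sees only a tiny abstraction:

```python
import threading

class ConcurrentList:
    """Option 2: the lock and pointer juggling lives here, once.
    Callers never touch a lock; they see a small, easy-to-hold API."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def insert(self, index, item):
        with self._lock:
            self._items.insert(index, item)

    def remove(self, item):
        with self._lock:
            self._items.remove(item)

    def snapshot(self):
        # Hand back a copy so callers can iterate without holding the lock.
        with self._lock:
            return list(self._items)
```

Every subsystem now calls `append`/`remove`/`snapshot`, and the struggle with the implementation happens in exactly one file.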

The sole difference between the options, as you wrote, is that “the complexity just migrates to another location.” But which would you rather maintain?

That was Ousterhout’s point.

reply
ChrisMarshallNY
1 day ago
[-]
Great stuff!

Where's Chesterton's Fence?

https://en.wiktionary.org/wiki/Chesterton%27s_fence

[EDIT: Ninja'd a couple of times. +1 for Shirky's principle]

reply
serious_angel
1 day ago
[-]
Great! Do principles fit? If so, considering the presence of "Bus Factor", I believe "Chesterton's Fence" should be listed, too.
reply
regular_trash
1 day ago
[-]
Hot take - I hate YAGNI. My personal pet peeve is when someone says YAGNI to a structure in the code they perceive as "more complex than they would have done it".

Sure, don't add hooks for things you don't immediately need. But if you are reasonably sure a feature is going to be required at some point, it doesn't hurt to organize and structure your code in a way that makes those hooks easy to add later on.

Worst case scenario, you are wrong and have to refactor significantly to accommodate some other feature you didn't envision. But odds are you have to do that anyway if you abide by YAGNI as dogma.

The number of times I've heard YAGNI as a reason not to modularize code is insane. There needs to be a law that well-intentioned developers will constantly misuse and misunderstand the ideas behind these heuristics in surprising ways.

reply
traderj0e
1 day ago
[-]
YAGNI isn't really a law, it's just something you say when you think you ain't gonna need it. You could be wrong, and you actually gonna need it.
reply
AtNightWeCode
1 day ago
[-]
It is misused if anything. YAGNI is about functionality. What to add or not. But it has become an excuse for being lazy. Same people that interpreted the line “Working software over comprehensive documentation” as no need for documentation.
reply
traderj0e
1 day ago
[-]
YAGNI is usually about modularization, often in response to Java-style OOP obsession. Like you don't need to define some big protocol that's only ever going to have one implementation.
reply
regular_trash
1 day ago
[-]
Well this is not the context I had in mind. I'm thinking of the many times I've had to break apart 3kloc react components to reuse some part just because someone decided modularity didn't matter
reply
traderj0e
13 hours ago
[-]
I mean YAGNI is usually about modularization in general, so yeah a React component would be included in that, it's not limited to just OOP. 3K loc is probably well beyond the point where it should've been split up.
reply
sigma5
1 day ago
[-]
I would also add Little's law for throughput calculation https://en.wikipedia.org/wiki/Little%27s_law
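For anyone who hasn't used it: Little's law says L = λW (mean items in the system = arrival rate × mean time in the system), which makes back-of-the-envelope capacity math trivial. A sketch with made-up numbers:

```python
# Little's law: L = lambda * W
# mean items in system = arrival rate * mean time each item spends in it.

def mean_in_flight(arrival_rate_per_s, mean_latency_s):
    return arrival_rate_per_s * mean_latency_s

# e.g. 200 req/s at 50 ms mean latency keeps about 10 requests in flight,
# so a worker pool much smaller than that will queue.
print(mean_in_flight(200, 0.050))
```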
reply
renticulous
1 day ago
[-]
Linus's Law : "Given enough eyeballs, all bugs are shallow".

Applies to opensource. But it also means that code reviews are a good thing. Seniors can guide juniors to coax them to write better code.

reply
WillAdams
1 day ago
[-]
Visual list of well-known aphorisms and so forth.

A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout

https://www.goodreads.com/en/book/show/39996759-a-philosophy...

(and of course Brook's Law is from _The Mythical Man Month_)

Curious if folks have recommendations for books which are not as well-known which cover these, other than the _Laws of Software Engineering_ book which the site is an advertisement for.....

reply
Symmetry
1 day ago
[-]
On my laptop I have a yin-yang with DRY and YAGNI replacing the dots.
reply
someguyiguess
1 day ago
[-]
That is exactly what I would expect from someone with your username. Bravo.
reply
netdevphoenix
1 day ago
[-]
"This site was paused as it reached its usage limits. Please contact the site owner for more information."

I wish AWS/Azure had this functionality.

reply
milanm081
1 day ago
[-]
Fixed
reply
CGMthrowaway
11 hours ago
[-]
This website seems heavily inspired by https://lawsofux.com/
reply
Sergey777
1 day ago
[-]
A lot of these “laws” seem obvious individually, but what’s interesting is how often we still ignore them in practice.

Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.

I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.

reply
tfrancisl
1 day ago
[-]
Remember, just because people repeated something enough times for it to make this list does not mean it's true. There may be some truth in most of these, but none of them are "laws". They are aphorisms: punchy one-liners that try to distill something as complex as human interaction and software design.
reply
emmelaich
1 day ago
[-]
Not sure that Linus would actually agree with Linus' law. So it's a bad name. Call it the ESR observation or something else.

Separately, I'd add Rust's API design principles, though it's more of an adjunct with things in common. https://gist.github.com/mjball/9cd028ac793ae8b351df1379f1e72...

reply
macintux
1 day ago
[-]
Some similarly-titled (but less tidily-presented) posts that have appeared on HN in the past, none of which generated any discussion:

* https://martynassubonis.substack.com/p/5-empirical-laws-of-s...

* https://newsletter.manager.dev/p/the-unwritten-laws-of-softw..., which linked to:

* https://newsletter.manager.dev/p/the-13-software-engineering...

reply
dgb23
1 day ago
[-]
I like this collection. It's nicely presented and at least at a glance it adds some useful context to each item.

While browsing it, I of course found one that I disagree with:

Testing Pyramid: https://lawsofsoftwareengineering.com/laws/testing-pyramid/

I think this is backwards.

Another commenter WillAdams has mentioned A Philosophy of Software Design (which should really be called A Set of Heuristics for Software Design) and one of the key concepts there are small (general) interfaces and deep implementations.

A similar heuristic also comes up in Elements of Clojure (Zachary Tellman) as well, where he talks about "principled components and adaptive systems".

The general idea: You should greatly care about the interfaces, where your stuff connects together and is used by others. The leverage of a component is inversely proportional to the size of that interface and proportional to the size of its implementation.

I think the way that connects to testing is that architecturally granular tests (down the stack) are a bit like pouring molasses into the implementation, rather than focusing on what actually matters, which is what users care about: the interface.

Now of course we as developers are the users of our own code, and we produce building blocks that we then use to compose entire programs. Having example tests for those building blocks is convenient and necessary to some degree.

However, what I want to push back on is the implied idea of having to hack apart or keep apart pieces so we can test them with small tests (per method, function etc.) instead of taking the time to figure out what the surface areas should be and then testing those.

If you need hyper granular tests while you're assembling pieces, then write them (or better: use a REPL if you can), but you don't need to keep them around once your code comes together and you start to design contracts and surface areas that can be used by you or others.

reply
nazgul17
1 day ago
[-]
I think the general wisdom in that scenario is to keep them around until they get in the way. Let them provide a bit of value until they start being a cost.
reply
yodsanklai
19 hours ago
[-]
Many of these laws are trade-offs to be assessed on a case by case basis. Everybody has a different subjective view and you need to be willing to compromise otherwise you'll be very sad working with other people.
reply
noduerme
1 day ago
[-]
I'd like to propose a corollary to Gall's Law. Actually, it's a self-proving tautology already contained within the term "lifecycle": any system that lasts longer than a single lifecycle oscillates between (reducing to) simplicity and (adding) complexity.

My bet is on the long arc of the universe trending toward complexity... but in spite of all this, I don't think all this complexity arises from a simple set of rules, and I don't think Gall's law holds true. The further we look at the rule-set for the universe, the less it appears to be reducible to three or four predictable mechanics.

reply
jt2190
18 hours ago
[-]
I think this site doesn’t capture Gall’s Law correctly, and your observations are closer to the original.

Gall notes that the universe naturally trends toward complexity and unintended consequences and therefore complex designs should be assumed to already be full of these unintended consequence “bugs”. He proposes that systems should be designed:

- with less scope to reduce unintended consequences,

- with less rigidity to allow for workarounds when unintended consequences arise, and

- to take advantage of “momentum” to reduce the energy required to use the system correctly. In other words, make the right thing the easy thing, remembering that the easiest thing to do is nothing, thus systems will halt if operators get too busy with other tasks.

reply
wesselbindt
1 day ago
[-]
Two of my main CAP theorem pet peeves happen on this page:

- Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.

- Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.

A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.

reply
traderj0e
1 day ago
[-]
In practice a quorum mechanism gets you close enough to picking all three in CAP, doesn't it? But it's still useful to teach.
reply
urxvtcd
1 day ago
[-]
Well, ackchyually, you can not pick P, it's just not cheap. You could imagine a network behaving like a motherboard really.
reply
nopointttt
1 day ago
[-]
The one I keep coming back to is "code you didn't write is code you can't debug." Every fancy dep I grabbed to save an afternoon ended up costing me weeks later when something upstream broke in some way I had no mental model for. LLM generated code has the same problem now. Looks fine until you hit a case it doesn't cover and you're trying to reverse engineer what you let it write.
reply
galaxyLogic
1 day ago
[-]
The Law of Leaky Abstractions. What is a "leaky" abstraction? How does it "leak"?

I wonder if it should be called "Law of Leaky Metaphors" instead. Metaphor is not the same thing as Abstraction. I can understand a "leaky metaphor" as something that does not quite make it, at least not in all aspects. But what would be a good EXAMPLE of a Leaky Abstraction?

reply
0xpgm
1 day ago
[-]
An extension to Zawinski's Law, every web service attempts to expand until it becomes a social network.
reply
bogomog
19 hours ago
[-]
A modernization, really.
reply
milanm081
1 day ago
[-]
For years, I kept notes on patterns I saw in software projects, architecture, planning, estimation, testing, scaling, and team design.

I pulled those notes into a browsable site with 56 laws, grouped into categories, with short explanations and separate pages for each one.

The goal was simple: put these ideas in one place, so when you want a quick refresher on Conway’s Law, Brooks’s Law, Gall’s Law, Hyrum’s Law, or Goodhart’s Law, you can find it fast.

I’m more interested in criticism than praise. Which laws are missing, which ones feel overstated, and which ones get misused most often in real work?

reply
g051051
20 hours ago
[-]
> When I first started, I was enamored with technology and programming and computer science. I’m over it.

Wow, that is incredibly sad to hear. I'm 40+ years in, and still love all of that.

reply
cientifico
1 day ago
[-]
There is one missing that I have been using as my primary one for the last 5 years.

The UX pyramid but applied to DX.

It basically states that you should not focus on making something significantly enjoyable or convenient before you have something that is usable, reliable, and at least remotely functional.

https://www.google.com/search?q=ux+pyramid

reply
fomosocial
19 hours ago
[-]
Been in Software Engineering for over 2 years now and I'm only hearing most of these named laws for the first time. Am I cooked?
reply
milanm081
17 hours ago
[-]
Thank you all for the valuable feedback! I'll review each comment and improve what I can on the website.
reply
sreitshamer
20 hours ago
[-]
My favorite is "Software development isn't a code production activity, it's a knowledge acquisition activity."
reply
computerdork
1 day ago
[-]
Don't see a really important one in my opinion: Refactor legacy code, don't rewrite it. All that cruft you see are bug fixes.

Because rewriting old, complex code is way more time-consuming than you think it'll be. You have to rebuild not only the same features, but all the fixes for the corner cases that your system ran into in the past.

Have seen this myself. A large team spent an entire year of wasted effort on a clean rewrite of a key system (the shopping cart at a high-volume website) that never worked... ...although, in the age of AI, I wonder if a rewrite would be easier than in the past. Still, guessing even then, it'd be better to have the AI refactor it first as a basis for reworking the code, as opposed to the AI doing a clean rewrite from scratch.

reply
namenotrequired
1 day ago
[-]
The “second system effect” page more or less covers this
reply
computerdork
1 day ago
[-]
Ah, I think there is overlap, but it's still not the same in my opinion. Having read it just now, the second-system effect seems to be more about not getting overly ambitious in the redesign. What the guideline I mentioned is saying is "don't rewrite, refactor."

As you probably know, there is a tendency when new developers join a team to hate the old legacy code - one of the toughest skills is being able to read someone else's code - so they ask their managers to throw it away and rewrite it. This is rarely worth it and often results in a lot of time being spent recreating fixes for old bugs and corner cases. Much better use of time to try refactoring the existing code first.

Although, can see why you mentioned it from the initial example that I gave (on that rewrite of the shopping cart) which is also covered by the "second system effect." Yeah, thinking back, have seen this too. Overdesign can get really out of hand and becomes really annoying to wade through all that unnecessary complexity whenever you need to make a change.

reply
toolslive
1 day ago
[-]
maybe add: "the universe is winning" (in the design department). Full quote: "software engineers try to build "idiot-proof" systems, while the universe creates "bigger and better idiots" to break them. So far, the universe is winning"
reply
hpincket
1 day ago
[-]
Time to mention my tongue-in-cheek law:

> When describing phenomena in the social world

> Software Engineers gravitate towards eponymous 'laws'.

https://pincketslaw.com/

reply
bpavuk
1 day ago
[-]
> This site was paused as it reached its usage limits. Please contact the site owner for more information.

ha, someone needs to email Netlify...

reply
lifeisstillgood
1 day ago
[-]
Just throwing one of my favourites in:

As JFK never said:

“””We do these things, not because they are easy,

But because we thought they would be easy”””

reply
hintymad
1 day ago
[-]
With the current AI wave, a fun question to ask is: which of these laws do people think no longer apply.
reply
JensRantil
1 day ago
[-]
reply
superxpro12
1 day ago
[-]
reply
compiler-guy
1 day ago
[-]
As true and amusing as the Wadsworth constant is, it has very little to do with software engineering.
reply
arnorhs
1 day ago
[-]
Since the site is down, you can use the archive.org link:

https://web.archive.org/web/20260421113202/https://lawsofsof...

reply
milanm081
1 day ago
[-]
Fixed
reply
darccio
1 day ago
[-]
I wonder if other professions/fields have such an ingrained tendency to create laws/aphorisms. I'm biased as a software engineer, but it seems to me that this is more common in computer science than elsewhere.
reply
brendaninnis
10 hours ago
[-]
The Dunning-Kruger Effect card is so bad here. The effect is not that your perceived ability is higher the less skilled you are, but that the gap between your actual ability and perceived ability is larger the less skilled you are (and becomes negative when highly skilled). This is so commonly misstated this way that I think most people have this misunderstanding.

Also they used the graph for the Gartner Hype Cycle as the icon, not the Dunning-Kruger graph.

reply
kwar13
20 hours ago
[-]
Half of these are not about software engineering and just general management principles.
reply
satansdeer
1 day ago
[-]
It's proverbs, not laws

Most of them are also wrong

reply
xorcist
1 day ago
[-]
It's strange to see many non-software related "laws" here, such as the Dilbert Principle, but not Internet cornerstones such as Godwin's Law.
reply
quantum_state
1 day ago
[-]
Unfortunately, violating any of these laws doesn't seem to have immediate consequences. That's why the IT industry is in ruin.
reply
ebonnafoux
1 day ago
[-]
There is a small typo in The Ninety-Ninety Rule

> The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

It should be 90% code - 10% time / 10% code - 90% time

reply
Edman274
1 day ago
[-]
It sounds like you are unfamiliar with the idea that software engineering efforts can be underestimated at the outset. The humorous observation here is that the total is 180 percent, which means it took longer than expected, which is very common.
reply
ebonnafoux
1 day ago
[-]
Oh OK, that is something I learned today.
reply
grahar64
1 day ago
[-]
Some of these laws are like Gravity, inevitable things you can fight but will always exist e.g. increasing complexity. Some of them are laws that if you break people will yell at you or at least respect you less, e.g. leave it cleaner than when you found it.
reply
stingraycharles
1 day ago
[-]
Lots of them are also only vaguely related to software engineering, e.g. Peter Principle.

It’s not a great list. The good old c2.com has many more, better ones.

reply
layer8
1 day ago
[-]
Physical laws vs human laws.
reply
mchl-mumo
1 day ago
[-]
I find myself guilty of giving overambitious timelines even when I try to take that into account.
reply
vpol
1 day ago
[-]
reply
herodotus
1 day ago
[-]
Knuth's Optimization Principle: The computer scientist Rod Burstall had a pithy way of saying this: "Efficiency is the enemy of clarity"
reply
voidifremoved
22 hours ago
[-]
Nice site, but missing the Law of Conservation of Misery.
reply
d--b
1 day ago
[-]
It's missing:

> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

reply
tgv
1 day ago
[-]
Shouldn't it also be able to read email? I think that was a law too.

Anyway, the list seems like something AI scraped and has a strong bias towards "gotcha" comments from the likes of reddit.

reply
invalidSyntax
1 day ago
[-]
I just wish this was a requirement to get a job. Everyone needs to know this.
reply
pcblues
1 day ago
[-]
In the 25 odd years I developed software, I learnt all the rules the hard way.

Relax. You will make all the mistakes because the laws don't make sense until you trip over them :)

Comment your code? Yep. Helped me ten years later working on the same codebase.

You can't read a book about best practices and then apply them as if wisdom is something you can be told :)

It is like telling kids, "If you do this you will hurt yourself" YMMV but it won't :)

reply
Antibabelic
1 day ago
[-]
Software engineering is voodoo masquerading as science. Most of these "laws" are just things some guys said and people thought "sounds sensible". When will we have "laws" that have been extensively tested experimentally in controlled conditions, or "laws" that will have you in jail for violating them? Like "you WILL be held responsible for compromised user data"?
reply
horsawlarway
1 day ago
[-]
At least for your last point... ideally never.

Look, I understand the intent you have, and I also understand the frustration at the lack of care with which many companies have acted with regards to personal data. I get it, I'm also frustrated.

But (it's a big but)...

Your suggestion is that we hold people legally responsible and culpable for losing a confrontation against another motivated, capable, and malicious party.

That's... a seriously, seriously, different standard than holding someone responsible for something like not following best practices, or good policy.

It's the equivalent of killing your general when he loses a battle.

And the problem is that sometimes even good generals lose battles, not because they weren't making an honest effort to win, or being careless, but because they were simply outmatched.

So to be really, really blunt - your proposal basically says that any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data. That's not good policy, period.

reply
Antibabelic
1 day ago
[-]
Incidents happen in the meat world too. Engineers follow established standards to prevent them to the best of their ability. If they don't, they are prosecuted. Nobody has ever suggested putting people in jail for Russia using magic to get access to your emails. However, in the real world, there is no magic. The other party "outmatches" you by exploiting typical flaws in software and hardware, or, far more often, in company employees. Software engineering needs to grow up, have real certification and standards bodies and start being rigorously regulated, unless you want to rely on blind hope that your "general" has been putting an "honest effort" and showing basic competence.
reply
horsawlarway
1 day ago
[-]
We already have similar legal measures in software for following standards. These match very directly to engineering standards in things like construction and architecture. These are clearly understood, ex SOC 2, PCI DSS, GDPR, CCPA, NIST standards, ISO 27001, FISMA... etc... Delve is an example (LITERALLY RIGHT NOW!) of these laws being applied.

What we don't do in engineering is hold the engineer responsible when Russia bombs the bridge.

What you're suggesting is that we hold the software engineer responsible when Russia bombs their software stack (or more realistically, just plants an engineer on the team and leaks security info, like NK has been doing).

Basically - I'm saying you're both wrong about lacking standards, and also suggesting a policy that punishes without regard for circumstance. I'm not saying you're wrong to be mad about general disregard for user data, but I'm saying your "simple and clear" solution is bad.

... something something... for every complex problem there is an answer that is clear, simple, and wrong.

France killed their generals for losing. It was terrible policy then and it's terrible policy now.

reply
fineIllregister
1 day ago
[-]
We have HIPAA in the US for health care data. There have been no disastrous consequences to holding people and organizations responsible for breaches.
reply
horsawlarway
1 day ago
[-]
Sure, and in cases of negligence this is fine. The law even explicitly scales the punishment based on perceived negligence and almost always is only prosecuted in cases where the standards expectations aren't followed.

Ex - MMG for 2026 was prosecuted because:

- They failed to notify in response to a breach.

- They failed to complete proper risk analysis as required by HIPAA

They paid 10k in fines.

It wasn't just "they had a data breach" (OP's proposal); it was "they failed to follow standards, which led to a data breach, where they then acted negligently"

In the same way that we don't punish an architect if their building falls over. We punish them if the building falls over because they failed to follow expected standards.

reply
jcgrillo
1 day ago
[-]
Buildings don't just fall over, and security breaches don't just happen. These things happen when people fuck up. In the architecture world we hold individuals responsible for not fucking up--not the architect, but instead the licensed engineer who signs and seals the structural aspects of a plan. In the software world we do not.
reply
datadrivenangel
1 day ago
[-]
It's important to occasionally execute or imprison a general to motivate the remaining generals. Rarely though.
reply
jcgrillo
1 day ago
[-]
> any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data

No. Not the company, holding companies responsible doesn't do much. The engineer who signed off on the system needs to be held personally liable for its safety. If you're a licensed civil engineer and you sign off on a bridge that collapses, you're liable. That's how the real world works, it should be the same for software.

reply
horsawlarway
1 day ago
[-]
Define "safety".
reply
jcgrillo
1 day ago
[-]
Obviously if someone dies or is injured a safety violation has occurred. But other examples include things like data protection failures--if for example your system violates GDPR or similar constraints it is unsafe. If your system accidentally breaks tenancy constraints (sends one user's data to another user) it is unsafe. If your system allows a user to escalate privileges it is unsafe.

These kinds of failures are not inevitable. We can build sociotechnical systems and practices that prevent them, but until we're held liable--until there's sufficient selection pressure to erode the "move fast and break shit" culture--we'll continue to act negligently.

reply
horsawlarway
1 day ago
[-]
None of those are what OP proposed. Frankly, we also cover many of these practices just fine. What do you think SOC 2 type 2 and ISO 27001 are?

It seems like your issue is that we don't hold all companies to those standards. But I'm personally ok with that. In the same way I don't think residential homes should be following commercial construction standards.

reply
jcgrillo
1 day ago
[-]
> None of those are what OP proposed.

That doesn't worry me overly much.

> What do you think SOC 2 type 2 and ISO 27001 are?

They're compliance frameworks that have little to no consequences when they're violated, except for some nebulous "loss of trust" or maybe in extreme cases some financial penalties. The problem is the expectation value of the violation penalty isn't sufficient to change behavior. Companies still ship code which violates these things all the time.

> It seems like your issue is that we don't hold all companies to those standards.

Yes, and my issue is that we don't hold engineers personally liable for negligent work.

> I don't think residential homes should be following commercial construction standards.

Sure, there are different gradations of safety standards, but often residential construction plans require sign-off by a professional engineer. In the case when an engineer negligently signs off on an unsafe plan, that engineer is liable. Should be exactly the same situation in software.

reply
jaggederest
1 day ago
[-]
TANSTAAFL was always one of my favorites - there ain't no such thing as a free lunch
reply
bronlund
1 day ago
[-]
Pure gold :) I'm missing one though: "You can never underestimate an end user."
reply
Waterluvian
1 day ago
[-]
I think it would be cool to have these shown at random as my phone’s “screensaver”
reply
smikhanov
1 day ago
[-]
Oh dear, not again: https://lawsofsoftwareengineering.com/laws/brooks-law/

This one belongs to history books, not to the list of contemporary best practices.

reply
traderj0e
1 day ago
[-]
Any time someone quotes a law named after some random person, it looks like a stuffy "I know something you don't." Amdahl is probably the only name here that deserves it, and it's a real law. I'd be fine if Eric Brewer put his name on CAP too, also a real law.

YAGNI and "you will ship the org chart" are the two most commonly useful things to remember, but they aren't laws.

reply
HoldOnAMinute
1 day ago
[-]
I like it, but this could have been a tab delimited text file.
reply
yesitcan
1 day ago
[-]
None of these things matter anymore. All you need is vibe.
reply
serious_angel
1 day ago
[-]
Sorry, sometimes I do wish Hacker News had a down-vote button...
reply
datadrivenangel
1 day ago
[-]
Come back after 500 karma.
reply
Divergence42
1 day ago
[-]
Looks a lot different in a 1 person agent based company.
reply
satansdeer
1 day ago
[-]
It's proverbs, not laws
reply
sunkeeh
1 day ago
[-]
Good luck following the Dilbert Principle xD

Just because some things were observed frequently during a certain period doesn't mean they're a "Law" or even a "Principle"; they're merely trends.

reply
alsetmusic
1 day ago
[-]
Their statement of Dunning-Kruger is so oversimplified that it misdefines it:

> The less you know about something, the more confident you tend to be.

From the first line on the wiki article:

> systematic tendency of people with low ability in a specific area to give overly positive assessments of this ability.

Or, said another way, the more you know about something, the more complexities you're aware of and the better assessments you can make about topics involving it. At least, that's how I understand it in a nutshell, without explaining the experiments run and the observations that led to the findings.

reply
cogman10
1 day ago
[-]
Uhh, I knew I wasn't going to like this one when I read it.

> Premature Optimization (Knuth's Optimization Principle)

> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.

This example is the exact example I'd choose where people wrongly and almost obstinately apply the "premature optimization" principles.

I'm not saying that you should write a custom hash table whenever you need to search. However, I am saying that there's a 99% chance your language has a built-in hash table in its standard library.

The code to use that data structure versus an array is nearly identical and not the least bit hard to read or understand.

And the reason you should just do the optimization is that when I've had to fix performance problems, it's almost always been because people put in nested linear searches, turning what could have been O(n) into O(n^3).
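
A minimal sketch of that pattern (hypothetical data and names, Python used purely for illustration):

```python
def matches_nested(users, orders):
    # Nested linear search: O(len(users) * len(orders)).
    return [u for u in users if any(o == u for o in orders)]

def matches_hashed(users, orders):
    # Build the hash set once, then each membership test is O(1) on
    # average: O(len(users) + len(orders)) overall.
    order_set = set(orders)
    return [u for u in users if u in order_set]

users = list(range(1000))
orders = list(range(500, 1500))
assert matches_nested(users, orders) == matches_hashed(users, orders)
```

Same shape of code, wildly different scaling.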

But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design".

When Knuth wrote about not doing premature optimizations, he was living in an era when compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid method call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the ones compilers today do by default.
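
For a feel of what that meant in practice, here's a hand-unrolled loop sketched in Python (illustrative only; in a compiled language the compiler now performs this transformation for you):

```python
def total(xs):
    # The straightforward version.
    s = 0
    for x in xs:
        s += x
    return s

def total_unrolled(xs):
    # Hand-unrolled: process four elements per iteration, then mop up
    # the remainder. Faster on a 1970s compiler, uglier everywhere.
    s = 0
    i, n = 0, len(xs)
    while i + 4 <= n:
        s += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:
        s += xs[i]
        i += 1
    return s

assert total(list(range(10))) == total_unrolled(list(range(10))) == 45
```

The readability cost is the point: this is the micro-level tinkering the quote warns against, not choosing sane data structures.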

I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.

reply
krust
1 day ago
[-]
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.

To be fair, a linear search through an array is, most of the time, faster than a hash table for sufficiently small data sizes.
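
That's easy to sanity-check with a throwaway micro-benchmark (a sketch; the container size here is an arbitrary assumption and the numbers you get are machine- and runtime-dependent):

```python
import timeit

small = list(range(8))
small_set = set(small)

# Look up the worst-case element for the linear scan (the last one).
linear = timeit.timeit(lambda: 7 in small, number=100_000)
hashed = timeit.timeit(lambda: 7 in small_set, number=100_000)

# At this size the two are typically in the same ballpark; grow the
# container a few orders of magnitude and the set wins decisively.
print(f"linear: {linear:.4f}s  hashed: {hashed:.4f}s")
```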

reply
cogman10
1 day ago
[-]
I'd say that this is premature optimization.

It doesn't take long for hash or tree lookups to start outperforming linear search and, for small datasets, it's not frequently the case that the search itself is a performance bottleneck.

reply
bigfishrunning
1 day ago
[-]
Yeah, for every Knuth there are 10000 copies of schlemiel the painter
reply
bofia
1 day ago
[-]
It would be nice to see what overlaps
reply
Divergence42
1 day ago
[-]
Fascinating, and I agree with many of the laws. In a 1-person, agent-only company this hits a bit different.
reply
matt765
1 day ago
[-]
I love this
reply
exiguus
1 day ago
[-]
This website should be a json file
reply
serious_angel
1 day ago
[-]
reply
exiguus
16 hours ago
[-]
Love it :)
reply
clauderx
1 day ago
[-]
Ah yes my favorite - Conway's Law is just a fancy way of saying "your architecture is whatever your political mess of a org chart accidentally produced, and everyone calls it 'design' afterward to avoid fixing it."
reply
amelius
1 day ago
[-]
No laws related to AI?
reply
0xbadcafebee
1 day ago
[-]
A law of physics is inviolable.... A law of software engineering is a hot take.

Here's another law: the law of Vibe Engineering. Whatever you feel like, as long as you vibe with it, is software engineering.

reply
lenerdenator
1 day ago
[-]
"No matter how adept and talented you are at your craft with respect to both technical and business matters, people involved in finance will think they know better."

That one's free.

reply
James_K
1 day ago
[-]
I feel that Postel's law probably holds up the worst of these. While being liberal in the data you accept can seem good for your own application, the broader social effect is negative: it elevates misconceptions about the standard into informal standards of their own, to which new apps may be forced to conform. Ultimately, being strict about the input you allow can turn out better in the long run, not to mention more secure.
reply
duc_minh
1 day ago
[-]
Is it just me seeing the following?

Site not available This site was paused as it reached its usage limits. Please contact the site owner for more information.

reply
rtrigoso
1 day ago
[-]
not just you, I am getting the same error
reply
asdfman123
1 day ago
[-]
> Leave the code better than you found it

In most places, people don't follow this rule, because following it means either working an extra 10-20 hours a week to keep things clean or getting stuck at mid-level for not making enough impact.

I choose the second option. But I see people who utterly trash the codebase get ahead.

reply
blauditore
1 day ago
[-]
Many of the "teams" laws are BS, especially the ones about promotions and management. I've never been a manager or a high-level executive, but it's not that all of them are non-technical or bad managers; it's just that the combination of both skills is rare.
reply
AtNightWeCode
1 day ago
[-]
"Polishing a turd" is missing. Making something slightly better when it should be removed. Like running Flash apps in 2026.
reply
contingencies
1 day ago
[-]
reply
eranation
1 day ago
[-]
Good list. Missing for me

- NIH

- GIGO

- Rule of 3

reply
kittikitti
1 day ago
[-]
This is really good and comprehensive, thanks for sharing!
reply
samuelknight
1 day ago
[-]
I like the website. Simple and snappy.
reply
IshKebab
1 day ago
[-]
Calling these "laws" is a really really bad idea.
reply
_dain_
1 day ago
[-]
I have a lot of issues with this one:

https://lawsofsoftwareengineering.com/laws/premature-optimiz...

It leaves out this part from Knuth:

>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.

Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.

Moreover:

>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.

I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.

For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.

Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.

reply
hliyan
1 day ago
[-]
I absolutely cannot stand people who recite this quote but have no knowledge of the sentences that come before and after it: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
reply
randusername
1 day ago
[-]
Don't forget:

- The customer is always right in matters of taste

- Jack of all trades, master of none, but oftentimes better than a master of one

- Curiosity killed the cat, but satisfaction brought it back

- A few bad apples spoil the barrel

- Great minds think alike, though fools seldom differ

Even "pull yourself up by your bootstraps" was originally meant to highlight the absurd futility of a situation.

reply
dgb23
1 day ago
[-]
> In many cases, simpler code is faster, and fast code makes for simpler systems. (...)

I wholeheartedly agree with you here. You mentioned a few architectural/backend issues that emerge from bad performance and introduce unnecessary complexity.

But this also happens in UI: Optimistic updates, client side caching, bundling/transpiling, codesplitting etc.

This is what happens when people always answer performance problems by adding stuff rather than removing stuff.

reply
cogman10
1 day ago
[-]
> I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.

Just a little historic context will tell you what Knuth was talking about.

Compilers in the era of Knuth were extremely dumb. You didn't get things like automatic method inlining or loop unrolling; you had to do that stuff by hand. And yes, it would give you faster code, but it also made that code uglier.

The modern equivalent would be seeing code working with floating points and jumping to SIMD intrinsics or inline assembly because the compiler did a bad job (or you presume it did) with the floating point math.

That is such a rare case that I find the premature optimization quote to be wrong essentially every time it's deployed. It always seems to be an excuse to deploy linear searches and to avoid using (or learning?) language data structures which solve problems very cleanly in less code and much less time (and sometimes with less memory).

reply
rapatel0
1 day ago
[-]
The list is great but the explanations are clearly AI slop.

"Before SpaceX, launching rockets was costly because industry practice used expensive materials and discarded rockets after one use. Elon Musk applied first-principles thinking: What is a rocket made of? Mainly aluminum, titanium, copper, and carbon fiber. Raw material costs were a fraction of finished rocket prices. From that insight, SpaceX decided to build rockets from scratch and make them reusable."

Everything, including humans, is made of cheap materials, but that doesn't convey the value. The AI got close to the answer with its first sentence (reusability) but it clearly missed the mark.

reply
andreygrehov
1 day ago
[-]
`Copy as markdown` please.
reply
Lapsa
1 day ago
[-]
reminder - there's tech out there capable of reading your mind remotely
reply
garff
1 day ago
[-]
Mad AI slop..
reply
bakkerinho
1 day ago
[-]
> This site was paused as it reached its usage limits. Please contact the site owner for more information.

Law 0: Fix infra.

reply
andrerpena
1 day ago
[-]
This looks like a static website that could be served for free from Cloudflare Pages or Vercel, with a nearly unlimited quota. And still... It's been hugged to death, which is ironic, considering it's a software engineering website :).
reply
mghackerlady
1 day ago
[-]
Hell, something like this probably doesn't even need that. Throw it on a debian box running nginx or apache and you'll probably be set (though, with how hard bots have been scraping recently it might be harder than that)
reply
asdfasgasdgasdg
1 day ago
[-]
Law 1: caching is 90-99% of performance.
reply
arnorhs
1 day ago
[-]
Are you saying performance is 90-99% caching? If so, that is obviously untrue.

If you are saying you _can_ eventually fix 90-99% of performance bottlenecks with caching, that may be true, but it doesn't sound as nice.
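
In the narrow case where the claim does hold (a deterministic function hit repeatedly with the same inputs), the fix can be a one-line memoization, sketched here with Python's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for a slow, deterministic computation.
    return sum(i * i for i in range(n))

expensive(10_000)  # computed once
expensive(10_000)  # served from the cache
```

Caching only papers over the cost when inputs repeat and results don't go stale; it does nothing for a cold path.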

reply
hermaine
1 day ago
[-]
reply
jvanderbot
1 day ago
[-]
Prior probability of a prompt-created website: 50%.

Posterior probability of a prompt-created website: 99%.

reply
kurnik
1 day ago
[-]
So somebody who doesn’t know how to properly host a static website wants to teach me about software engineering. Cool. 99% sure it’s a vibecoded container for AI slop anyway.
reply
the_arun
1 day ago
[-]
Laws are there to be broken.
reply
milanm081
1 day ago
[-]
Fixed, thanks!
reply
esafak
1 day ago
[-]
"Performance doesn't matter!"
reply
threepts
1 day ago
[-]
I believe there should be one more law here, telling you to not believe this baloney and spend your money on Claude tokens.
reply