Hmm, sounds familiar...
Bingo knows everyone's name-o
Papaya & MBS generate session tokens
Wingman checks if users are ready to take it to the next level
Galactus, the all-knowing aggregator, demands a time range stretching to the end of the universe
EKS is deprecated, Omega Star still doesn't support ISO timestamps
The number of programs not supporting ISO 8601, even TODAY (no pun intended), is appalling. For example, git claims compatibility, but isn't actually compliant.
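To make the git complaint concrete: per git's own documentation, --date=iso prints an "ISO 8601-like" format, and only --date=iso-strict is actually compliant. A minimal sketch in Python (the timestamp itself is made up):

    from datetime import datetime

    # What `git log --date=iso` prints: ISO-8601-*like* by git's own admission
    # (space separator, space before the offset, no colon in the offset).
    git_iso = "2024-01-15 10:00:00 +0100"
    # What `git log --date=iso-strict` prints: actual ISO 8601.
    git_iso_strict = "2024-01-15T10:00:00+01:00"

    try:
        datetime.fromisoformat(git_iso)
    except ValueError:
        print("git's default 'iso' date is not ISO 8601")

    print(datetime.fromisoformat(git_iso_strict))  # parses fine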
Beginning in about the 1980s or so, with the rise of PCs and later the internet, the "genius programmer" was lionized and there was a lot of money to be made through programming alone. So systems analysts were slowly done away with and programmers filled that role. These days the systems analyst as a separate profession is, as you say, nearly extinct. The programmers who replaced the analysts applied techniques and philosophies from programming to business information analysis, and that's how we got situations like with Bingo, WNGMAN, and Galactus. Little if any business analysis was done, the program information flows do not mirror the business information flows, and chaos reigns.
In reality, 65% of the work should be in systems analysis and design, well before a single line of code is written. The actual programming takes up maybe 15% of the overall work. And with AI, you can get it down to maybe a tenth of that: using Milt Bryce's PRIDE methodology for systems analysis and development will yield specs that are precise enough to serve as context that an LLM can use to generate the correct code with few errors or hallucinations.
In 30 years in software dev, I have yet to see any significant, detailed, and consistent effort put into design and architecture. Most architects do not design and do not architect.
Senior devs design and architect and then take their design to the architects for *feedback and approvals*.
These senior devs make designs for features and only account for code and systems they've been exposed to.
With an average employment term of 2 years, most are exposed to only a small slice of the system, which limits the depth and correctness of their designs.
And architects mostly approve, sometimes I think without even reading the docs.
At most, you can expect the architects to give generic advice and throw in a few buzzwords.
At large, they feel comfortable and secure in their positions and mostly don't give a shit!
https://www.goodreads.com/en/book/show/39996759-a-philosophy...
Video overview at:
I've been thinking about this a lot. 2~3 years is a long time, long enough to get a pretty good grasp of what a codebase maintained by 50~100 people does in pretty concrete terms, come up with decent improvement ideas, and see at least one or two structural ideas hit production.
If the person then stays 1 or 2 more years they get a chance to refine further, but usually they'll be moved up the ladder, Peter Principle style. If they get a chance to lead these architecture changes, that company has a chance to be on a decent path, technically speaking.
I'm totally with you on the gist of it: architects will usually be a central switchboard, arranging ideas that come from more knowledgeable places. At best, I see their role as guaranteeing consistency and making sure teams don't impede each other's designs.
I feel that's already enough time to rewrite a big part of a subsystem, or to turn the whole thing into shit (depends on the maintainer).
Software today moves quite fast. Two years is sometimes the difference between a new company and a dead company.
But this is exactly the type of generic software design advice the article warns us about! And it mostly results in all the bad software practices we as users know and love remaining unchanged (apparently being consistently "bad" beats being good in at least some areas!)
I've never seen consistency of libraries and even programming languages have a negative impact. Conversely, the situation you describe, or even going out of the way to use $next_lang entirely, is almost always a bad idea.
The consistency of where to place your braces is important within a given code base and teams working on it, but not that important across them, because each one is internally consistent. Conversely, two code bases and teams using two DBs that solve the same problem is likely not a good idea because now you have two types of DBs to maintain. Also, if one team solves a DB-specific problem, say, a performance issue, it might not be obvious how the other team might be able to pick up the results of that work and benefit from it.
So I don't know. I think the answer depends on how you define "consistency", which OP hasn't done very well.
Sometimes there is a reason! Sometimes there isn't a reason, but it might be something we want to move everything over to if it works well and rip out if it doesn't. Sometimes it's just someone who believes that functional programming is Objectively Better, and that's when an architect can say "nope, you don't get to be anti-social."
The best architects will identify some hairy problem that would benefit from those skills and get management to point the engineer in that direction instead.
A system that requires homogeneity to function is limited in the kinds of problems it can solve well. But that shouldn't be an excuse to ignore our coworkers (or the other teams: I've recently been seeing cowboy teams be an even bigger problem than cowboy coders.)
What you describe just sounds "inconsistent AND bad".
Consistency enables velocity. If there is consistency, devs can start to make assumptions: "Auth is here, database is there, this is how we handle ABC." Possible problems show up in reviews by deviating from expectations: "Hey, where's XYZ?", "Why are you querying the database in the constructor?" (there's a sketch of that one below).
Onboarding between teams becomes a lot easier, ramp up time is smaller.
Without consistency, you end up with lots of small pockets of behavior that cause downstream problems for the org as a whole.
Every team needs extra staff to handle load peaks, resulting in a lot of idle devs.
Senior devs can't properly guess where the problematic parts of fixes or features would be. They don't need to know the details, just where things will be _difficult_.
Every feature requires coordination between the teams, with queuing and prioritizing until local staff become available.
Finally, consistency allows classes of bugs to be fixed once. Fix it once and migrate everyone to the new style.
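To illustrate that constructor review comment, a hypothetical sketch (all names invented); the first version hides I/O where nobody expects it, the second follows a "constructors don't do I/O" convention that reviewers can check at a glance:

    # `db` stands in for any client object with a query(sql, *params) method.

    class UserProfileSurprising:
        # Merely constructing the object hits the database, so every caller
        # and every test pays for a query it can't see.
        def __init__(self, db, user_id):
            self.settings = db.query(
                "SELECT * FROM settings WHERE user_id = ?", user_id)

    class UserProfileConsistent:
        # House style: construction is cheap; the query is an explicit,
        # named step that is easy to mock and easy to spot in review.
        def __init__(self, settings):
            self.settings = settings

        @classmethod
        def load(cls, db, user_id):
            return cls(db.query(
                "SELECT * FROM settings WHERE user_id = ?", user_id))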
I think adherence to “consistency is more important than ‘good design’” naturally leads to boiling the ocean refactoring and/or rewrites, which are far riskier endeavors with lower success rates than iterative refactoring of a working system over time.
migrate the rest of the codebase!
Then everyone benefits from the discovery.
If that's difficult, write or find tooling to make that possible.
It's in the "if it hurts, do it more often" school of software dev.
https://martinfowler.com/bliki/FrequencyReducesDifficulty.ht...
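A minimal sketch of the kind of tooling meant here, assuming the simplest migration imaginable: a project-wide rename of a deprecated helper (old_fetch/new_fetch are invented names; a real codemod would use a syntax-aware tool like libcst rather than a regex):

    import re
    from pathlib import Path

    # Toy codemod: rewrite every call site in one pass, so the whole codebase
    # moves together instead of drifting into two coexisting styles.
    OLD, NEW = r"\bold_fetch\(", "new_fetch("

    for path in Path("src").rglob("*.py"):
        text = path.read_text()
        migrated = re.sub(OLD, NEW, text)
        if migrated != text:
            path.write_text(migrated)
            print(f"migrated {path}")

Run it often enough and writing the next one gets cheaper, which is exactly Fowler's point.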
There are absolutely exceptions and nuances. But I think when weighing trade-offs, program makers by and large deeply under-weight consistency.
> software that has "good" and "bad" parts in unpredictable
Software that has only "bad" parts is also very unpredictable.
(Unless "bad" means something other than "bad"; it's hard to keep up with the lingo.)
Your example is just bad code that is unpredictable.
High quality and consistent > Low quality and consistent > Variable quality and inconsistent. If you're going to be the cause of the regression into variable quality and inconsistent you'd better deliver on bringing it back up to high quality and consistent. That's a lot of work that most people aren't cut out for because it's usually not a technical change but a cultural change that's needed. How did a codebase get into the state of being below standards? How are you going to prevent that from happening again? You are unlikely to Pull Request your way out of that situation.
It's true and false at the same time; it depends.
Here's an example: you're maintaining a production system that has been running for years.
There's a flaw in some part of the codebase that has probably been ignored, either because of
1. a bad implementation / hacky approach, or
2. the system outgrowing the implementation.
So you try to "fix" it, but suddenly other internal tools stop working, customers contact support because the behaviour changed on their end, some CI job randomly fails, etc.
Software doesn't exist in a vacuum; complex interactions sometimes prevent "good" code from existing, because that's just reality.
I don't like it either, but it is what it is.
But yes, the map is not the territory, and giving directions is not the same as walking the trail. The actual implementation can deviate from the plan drafted at the beginning of the project. A good explanation is found in Naur's "Programming as Theory Building", where he says the true knowledge of the system lives in the heads of the engineers who worked on it. And that knowledge is not easily transferable.
On the other hand, there are Real Programmers [0] who will happily optimize the already-fast initializer, balk at changing business logic, and write code that, while optimal in some senses, is unnecessarily difficult for a newcomer (even an expert engineer) to understand. These systems have plenty of detail and are difficult to change, but the complexity is non-essential. This is not good engineering.
It's important to resist both extremes. Decision makers ultimately need both intimate knowledge of the details and the broader knowledge to put those details in context.
I think this should also apply to people who come up with or choose the software development methodology for a project. Scrum masters just don't have the same skin in the game that lead engineers do.
Modules with different requirements should not share a single consistent codebase. Testing strategy, application architecture, even naming should differ across different modules.
Structural engineering (construction engineering generally) does work like that. Following the analogy, the engineers draw; they don't lay bricks. But all the best engineers have probably been site supervisors at some point, have watched bricks being laid, have spoken to the bricklayers, etc. Construction methods change, but they don't change as quickly as software engineering methods. There is also a very material and applicable "reality" constraint. Most structural engineers' knowledge and heuristics remain valid over long periods of time. The software engineers' body of knowledge can change 52 times in a year. To completely stretch the analogy: the site conditions for construction engineering are better known than the site conditions for a large software project. In the latter case the site itself can be adjusted more easily, and more materially, by the engineering itself, i.e. the ground can move under your feet. Site conditioning on steroids!
Ultimately, that's why I agree fully with the piece. Generic advice may be helpful, but it always applies to some generic site conditions that are less relevant in practice.
I understand I’m replying against the spirit of your point, but the IEEE has actually published one and it seems to get updated very slowly.
https://www.computer.org/education/bodies-of-knowledge/softw...
I don't view that as a failure of abstraction as a design principle as much as it is a pitfall of using the wrong abstraction. Using the right abstraction requires on the ground knowledge, and if nobody communicates that up the chain, well, you get the tree swing cartoon.
The problem is that doing it like that is much too expensive and too slow for most businesses.
"Business" runs the same calculations. I'd posit that, as a practical matter, most businesses don't want "good" software; they want "good enough" software.
A lot of this is because while a "good" business is waiting for the "good" software to be written, some crappy business has already written the crappy software and sold it to all the customers you were depending on. In general, customers are very bad at telling good software from bad, and typically buy whatever looks flashy or whatever the salespeople bribe them hardest for.
Nah, those changes are only on the surface, at the most shallow level.
There's always new techniques, materials and tools in structural engineering as well.
Foundations take a lifetime to change.
Very strongly disagree.
There are limitless methods of solving problems with software (due to very few physical constraints) and there are an enormous number of different measures of whether it's "good" or "bad".
It's both the blessing and curse of software.
There are too many decisions, technical details, and active changes to have someone come in and give direction from on high at intervals.
Maybe at the beginning it could sort of make sense, but projects have to evolve, and more often than not they discover something important early in the implementation or when adding "easy" features. If someone is good at software design, you may need them even more at that point; but they may easily be detrimental if they are not closely involved and following the rest of the project's details.
Good software designers are facilitators. They don't tell people how to build software, but say "not like that" by making the technical requirements clear. They enable design to constantly change as the needs change.
It has been a long time since I've been at a company willing to actually employ someone in that role. They require that their most senior engineers be focused on writing code themselves, at the expense of the team- and skill-building necessary for quality software.
Instead we get bullshit like "team topologies" or frameworks that are more about how the company wants to manage teams than they are about how well the software works. We get "design documents" that are considered more important than working code. Even the senior engineers that are around aren't allowed to say "no" if it is going to interfere with some junior project manager's imagined deadline.
Software companies are penny-wise and pound foolish, resulting in shittastic spaghetti messes with microservice meatballs.
When you have done this many times you absolutely can design a large application without touching the code. This is part planning and risk analysis experience and part architecture experience. You absolutely need a lot of experience creating large applications multiple times and going through that organizational grind but prior experience in management and writing high level plans is extremely helpful.
When it comes to extending an existing application it really comes down to how well the base application was planned to begin with. In most cases the base application developers have no idea, because they either outsourced the planning to some external artifact or simply pushed through it one line at a time and never looked back. Either way the person writing the extension will be more concerned with the corresponding service data and accessibility than conformance to the base application code if it is not well documented and not well tested in a test automation scheme.
I've personally yet to have a situation where that comes up. And every application I've ever worked on has its architecture evolve over time, as behavior changes and new domain concepts are identified.
There are recurring patterns (one might even call them Design Patterns), but by the time we've internalized them we have even less need for up-front planning. Why write the doc when you can just implement the code?
One does not need to be a programmer in order to be a great systems analyst/architect. Matter of fact it's the opposite: great analysts are good with people, and have a strong intuitive grasp of what people need in order to effectively run the business. Leaving that to programmers is a recipe for disaster, as without documentation of existing business systems and requirements and a solid design, programmers will happily build the wrong thing.
Imagine if you worked for an online retailer like Amazon, and you were assigned to architect a change so you can add free sample items into customers' orders. Take a moment to think about how you'd architect such a system, and what requirements you'd anticipate fulfilling. In the next paragraph, I'll tell you what the requirements are. Or you can skip the next paragraph, the size of which should tell you the requirements are more complex than they seem.
The samples must be items in the basket, so the warehouse knows to pick them. They must be added at the moment of checkout, because that's when the order contents and weight can change. Often a customer should receive a sample only once, even if they check out multiple orders, so a record should be kept of which customers have already been allocated a given sample. It should be possible to assign a customer the same sample multiple times, in which case they should receive it once per order until they've received the assigned number. Some samples go out of stock regularly, so the sample items should not be visible to the customer when they view their order on the website, but if shipped it should appear on their receipt to assure them they haven't been charged for it. Samples should never be charged for, even if their barcode is identical to something we normally charge for. If the warehouse is unable to ship the sample, the customer should not receive a missing-item apology or a separate shipment, and the record saying that customer has had that sample already should be decremented. If the warehouse can't ship anything except the sample, the entire order should be delayed/cancelled, never shipping the sample alone. If a customer ordered three of an item and was assigned one sample item with the same barcode but the warehouse only had three items with that barcode in stock, something sensible should happen. One key type of "sample" is first-time-customer gifts; internal documentation should explain that if the first order a customer places is on 14-day delivery and their second order is on faster delivery and arrives first, the first-order gift will be in the second order to arrive, but that's expected because it's assigned at checkout. If the first-order-checked-out is cancelled, either by the customer or the warehouse, the new-customer gift should be added to the next order they check out. Some customers will want to opt out of free samples; those who do should not be assigned any samples. But the free sample system is also used by customer services to give out token apology gifts to customers whose orders have had problems; customers who've been promised a gift should receive it even if they've opted out of free samples.
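Even a thin, hypothetical slice of the state involved (every name below is invented) shows how much of the above is bookkeeping rather than any one clever component:

    from dataclasses import dataclass

    @dataclass
    class SampleAssignment:
        """One customer's allocation of one sample SKU (hypothetical model)."""
        customer_id: str
        sample_sku: str
        remaining: int          # "once per order until they've received the assigned number"
        is_apology_gift: bool   # customer-services gifts override the opt-out

    def add_at_checkout(a: SampleAssignment, customer_opted_out: bool) -> bool:
        # Samples join the basket at checkout, when contents and weight are final.
        if a.remaining <= 0:
            return False
        if customer_opted_out and not a.is_apology_gift:
            return False  # opt-out covers marketing samples, not apology gifts
        a.remaining -= 1    # record it so the same sample isn't assigned again
        return True

    def on_ship_failure(a: SampleAssignment) -> None:
        # Warehouse couldn't pick it: no apology, no separate shipment; just
        # let the sample ride along with a future order instead.
        a.remaining += 1

And this still ignores the identical-barcode stock conflict, the first-order gift race, and the cancellation path.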
No reasonable person can design such a system upfront, because things like 'opt-out mechanism sometimes shouldn't opt you out' and 'more than one definition of a customer's first order' do not occur to reasonable people.
This thought process does use some knowledge of online retail, but not really that much. It's mostly patterns of system decomposition and good engineering.
Edit: the point of the article itself stands, if the codebase is in no shape to have these free samples built as I described then my input is useless, other than to consider working toward that architectural goal.