For us, an accurate delivery date on a six-month project was mandatory. CX needed it so they could start onboarding high-priority customers. Marketing needed it so they could plan advertising collateral and make promises at conventions. Product needed it to understand what the Q3 roadmap should contain. Sales needed it to close deals. I was fortunate to work in a business where I respected the heads of these departments, which, believe it or not, should be the norm.
The challenge wasn't estimation - it's quite doable to break a large project down into a series of sprints (basically a sprint/waterfall hybrid). Delays usually came from unexpected sources, like reacting to a must-have interruption or critical bugs. Those you cannot estimate for, but you can collaborate on a solution: trim features, push the date, bring in extra help, or crunch. Whatever the decision, making sure to work with the other departments as collaborators was always beneficial.
Things rarely went to plan, but as soon as any blip occurred, there'd be plans to trim scope, crunch more, or push the date, with many months of notice.
Then I joined my first web SaaS startup and I think we didn't hit a single deadline in the entire time I worked there. Everyone thought that was fine and normal. Interestingly enough, I'm not convinced that's why we failed, but it was a huge culture shock.
Former Test Engineer here. It was always fun when everyone else’s deadline slipped but ours stayed the same. Had to still ship on the same date even if I didn’t have silicon until much later than originally planned.
A side effect is: no, there aren't. Allow me to explain that catty remark.
The experienced pros have figured out how to arrange their affairs so that delivery of software doesn't matter, i.e., is someone else's problem. The software either arrives or it doesn't.
For instance, my job is in technology development for "hardware" that depends on elaborate support software. I make sure that the hardware I'm working on has an API that I can code against to run the tests that I need. My department has gone all-in on vibe coding.
Customers aren't waiting because the mantra of all users is: "Never change anything," and they can demand continued support of the old software. New hardware with old software counts as "revenue" so the managers are happy.
There are problems with all of these. The company knows they can sell X units of the product for $Y (often X is a bad guess, but sometimes it has a statistical range - I'll ignore this for space reasons, but it is important!). X times Y equals gross revenue. If the total cost to build the feature eats too much of that, the feature shouldn't be built at all.
If you trim features, that affects either the number you can sell or the price you can sell for (sometimes both).
If you push the date, that also affects things - some customers will buy from a competitor (if possible - and the later date makes it more likely a competitor releases with that feature first).
Bringing in extra help means total costs go up. And worse, if you bring them in too late, that will slow down delivery.
Crunch is easiest - but that burns out your people and so is often a bad answer long term.
This is why COMPANIES NEED ACCURATE ESTIMATES. They are not optional to running a company. That they are impossible does not change the need. We pretend they are possible because you cannot run a company without them - and mostly we get by. However, they are a fundamental requirement.
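To make that concrete, here's a toy version of the arithmetic above - every number is made up, but it shows why each lever moves the answer:

```python
# Toy feature economics: all numbers are hypothetical.
units, price = 10_000, 50.0   # X units at $Y each
cost = 300_000                # estimated cost to build the feature

def worth_building(units, price, cost):
    # Gross revenue must exceed the cost to build.
    return units * price - cost > 0

print(worth_building(units, price, cost))        # True: $500k revenue vs $300k cost
print(worth_building(units * 0.5, price, cost))  # trim features -> fewer buyers: False
print(worth_building(units, price, cost * 1.8))  # extra help -> higher cost: False
```

Every lever in the list above moves one of those three inputs, which is why none of the decisions can be made without an estimate.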
I *HATE* estimating roadmaps, because it feels unfair. I'm happy to estimate a sprint.
Estimating in software is very hard, but that's not a good reason to give up on getting better at it
Saying "I don't know" is arguably more honest, even if it's not useful for budgets or planning.
I completely agree. That's why I chose that example: they're also awful at it, especially in North America these days. But any contractor that tried to put in a bid claiming "it'll be done when it's done and cost what it costs" would not be considered professionally competent enough to be awarded a multi-million dollar budget.
But you're pretty spot on, as 'professionally acceptable' indeed means politically acceptable most of the time. Being honest and admitting one's limits is often unacceptable.
Usually management backs off if they have a good understanding of the impact a change will make. I can only give a good estimate of impact if I have a solid grip on the current scope of work and deadlines. I've found management to be super reasonable when they actually understand the cost of a feature change.
When there's clear communication and management decides a change is important to the product, then great: we have a clear timeline of scope drift, and we can point to it if our team is ever pulled up on delays.
In practice, developers have to "handle" the people requesting hard deadlines. Introduce padding into the estimate to account for the unexpected. Be very specific about milestones to avoid expectations of the impossible. Communicate missed milestones proactively - and there will be missed milestones. The padding buys you a date you can feel safe about. Sometimes you'll cause unnecessary crunch so that a deadline you fought for gets met. Other times, you'll need to negotiate what to drop.
But an accurate breakdown of a project amounts to executing that project. Everything else is approximation and prone to error.
- If it's an internal project (like migrating from one vendor to another, with no user impact) then it takes as long as I can convince my boss it is reasonable to take.
- If it's a project with user impact (like adding a new feature) then it takes as long as the estimated ROI remains positive.
- If it's a project that requires coordination with external parties (like a client or a partner), then the sales team gets to pick the delivery date, and the engineering team gets to lie about what constitutes an MVP to fit that date.
This is a cop-out. Just because you can’t do it, doesn’t mean it’s impossible :)
There are many types of research and prototyping project that are not strongly estimable, even just to p50.
But plenty can be estimated more accurately. If you are building a feature that's similar to something you built before, then it's very possible to give accurate estimates at, say, p80 or p90 confidence.
You just need to recognize that there is always some possibility of uncovering a bug or dependency issue or infra problem that delays you, and this uncertainty compounds over longer time horizons.
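A quick simulation shows how that compounding plays out (the 20% blowup chance and 1x-4x multiplier are assumptions for illustration, not data):

```python
import random

# 20 tasks, each estimated at 1 unit of time; each has an assumed 20% chance
# of a 1x-4x blowup (bug, dependency issue, infra problem).
def task_duration():
    return 1.0 + (random.uniform(1.0, 4.0) if random.random() < 0.2 else 0.0)

totals = sorted(sum(task_duration() for _ in range(20)) for _ in range(10_000))
print(f"p50={totals[5_000]:.1f}  p80={totals[8_000]:.1f}  p90={totals[9_000]:.1f}")
# The naive plan says 20; p80 and p90 land well above it, and the gap widens
# as the number of tasks (the time horizon) grows.
```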
The author even gestures in this direction:
> sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service
So really what we should take from this is that the author is capable of estimating hours-long tasks reliably. theptip reports being able to reliably estimate weeks-long tasks. And theptip has worked with rare engineers who can somehow, magically, deliver an Eng-year of effort across multiple team members within 10% of budget.
So rather than claim it’s impossible, perhaps a better claim is that it’s a very hard skill, and pretty rare?
(IMO also it requires quite a lot of time investment, and that’s not always valuable, eg startups usually aren’t willing to implement the heavyweight process/rituals required to be accurate.)
As a person who has never encountered a complex software project that could be accurately estimated, I am a bit skeptical.
The author did give examples of when estimation is possible: easy projects with very short time horizons (less than a couple of days, I'd say).
I'd love to hear some examples of more complex software projects that can be estimated within a reasonable variance.
However, I think it should also be acknowledged that the point of the article seems to be in a different direction: it _doesn't really matter_ that you have a good time estimate, because asking for an estimate is just a somewhat strange way for the management chain to approach you and then tell you how much time you have to deliver.
They would much rather confidently repeat a date that is totally unfounded rubbish which will have to be rolled back later, because then they can blame the engineering team for not delivering to their estimate.
Having a promised date lets you keep the opportunity going and in some cases can even let you sign them there and then - you sign them under the condition that feature X will be in the app by date Y. That's waaaay better for business, even if it's tougher for engineers.
I’ve seen enough instances of work being done for a specific customer that doesn’t then result in the customer signing up (or - once they see they can postpone signing the big contract by continuing to ask for “just one more crucial feature”, they continue to do so) to ever fall for this again.
In Australia, an SDE + overhead costs say $1,500 per work day, so 4 engineers for a month (~21 work days) is 4 × 21 × $1,500 ≈ $126k. The money has to be allocated from budgets and planned for etc. Dev effort affects the financial viability and competitiveness of projects.
I feel like many employees have a kind of blind spot around this? Like for most other situations, money is a thing to be thought about and carefully accounted for, BUT in the specific case where it's their own days of effort, those don't feel like money.
Also, the software itself presumably has some impact or outcome and quite often dates can matter for that. Especially if there are external commitments.
1. Think about what product/system you want built.
2. Think about how much you're willing to invest to get it (time and money).
3. Cap your time and money spend based on (2).
4. Let the team start building and demo progress regularly to get a sense of whether they'll actually be able to deliver a good enough version of (1) within time/budget.
If it's not going well, kill the project (there needs to be some provision in the contract/agreement/etc. for this). If it's going well, keep it going.
If you cannot (or refuse to) estimate cost or probability of success in a timebox you have no way to figure out ROI.
To rationally allocate money to something, someone has to do the estimate.
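As a toy example (every figure below is assumed), both inputs on the right-hand side are estimates - without them, there is nothing to plug in:

```python
# Expected ROI = (probability of success * value - cost) / cost.
p_success, value, cost = 0.6, 500_000, 120_000   # all hypothetical
expected_roi = (p_success * value - cost) / cost
print(f"expected ROI: {expected_roi:.0%}")  # 150%; a negative number means "don't fund"
```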
I agree there is less uncertainty in plumbing - but not none. My brother runs a plumbing company and they do lose money on jobs sometimes, even with considerable margin. Also when I've needed to get n quotes, the variation was usually considerable.
I think one big situational difference is that my brother is to some extent "on the hook" for quotes (variations / exclusions / assumptions aside) and the consequences are fairly direct.
Whereas as an employee giving an estimate to another department, hey you do your best but there are realistically zero consequences for being wrong. Like maybe there is some reputational cost? But either me or that manager is likely to be gone in a few years, and anyway, it's all the company's money...
If you had a deadline - say thanksgiving or something - and you asked “will the work be done by then” and the answer was “I’m not going to tell you” would you hire the person?
The no estimates movement has been incredibly damaging for Software Engineering.
"I'd like to have my roof reshingled, but with glass tiles and it should be in the basement, and once you are half way I'll change my mind on everything and btw, I'm replacing your crew every three days".
When I’ve engaged with a contractor for remodeling, I usually have some vague idea like “we should do something about this porch and deck and we’d like it to look nice.”
The contractor then talks to you about _requirements_, _options_, and _costs_. They then charge for architectural plans and the option to proceed with a budget and rough timeline.
Then they discover problems (perhaps “legacy construction”) and the scope creeps a bit.
And often the timeline slips by weeks or months for no discernible reason.
Which sounds exactly like a lot of software projects. But half of your house is torn up so you can’t easily cut scope.
Though the "I'm replacing your crew every three days" does cut a little too close to the bone...
I guess a fair analogy would be if the home owner just said "Make my home great and easy to use" by Thanksgiving without too many details, and between now and Thanksgiving refined this vision continuously - literally changing the color choice halfway through, or after fully painting a wall... then it's really hard to commit.
If a home owner has a very specific list of things with no on-the-job adjustments, then usually you can estimate (most home contract work is like this).
All software requests are somewhere between the former and the latter, most often leaning towards the former.
Where I've seen issues is when there is a big disconnect and they don't hear about problems until it's way too late.
I've made your argument before, but realistically, much of the world revolves around timelines and it's unreasonable to expect otherwise.
When will you recover from your injury so you can play the world cup?
When will this product arrive that I need for my child's birthday?
When will my car be repaired, that I need for a trip?
How soon before our competitors can we deliver this feature?
"It'll be done when it's done" is very unsatisfying in a lot or situations, if not downright unacceptable.
But the problem is, the sales world has its own reality. The reality there is that "we don't know when" really is unacceptable, and "unacceptable" takes the form of lost sales and lost money.
So we have these two realities that do not fit well together. How do we make them fit? In almost every company I've been in, the answer is, badly.
The only way estimates can be real is if the company has done enough things that are like the work in question. Then you can make realistic (rough) estimates of unknown work. But even then, if you assign work that we know how to do to a team that doesn't know how to do it, your estimates are bogus.
It's what we software engineers like to tell ourselves because it cuts us slack and shifts the blame to others for budget and time overruns. But maybe it's also our fault and we can do better?
And the usual argument - "it's not like physical engineering, software is always building something new" - doesn't hold, because that's only true for a minority of projects. Most projects that fail or overrun their limits are pretty vanilla, minor variations of existing stuff. Sometimes it's just deploying packaged software with minor tweaks for your company (and you must know this often tends to fail or overrun deadlines too, amazingly).
I know another "engineering" area where overruns are unacceptable to me and I don't cut people slack (in the sense it's me who complains): home building/renovation contractors. I know I'm infuriated whenever they pull deadlines out of their asses, and then never meet them for no clear reason. I know I'm upset when they stumble over the slightest setbacks, and they always fail to plan for them (e.g. "we didn't expect this pipe to run through here", even though they've done countless renovations... everything is always a surprise to them). I know I'm infuriated when they adopt the attitude of "it'll be done when it's done" (though usually they simply lie about upfront deadlines/budgets).
Maybe that's how others see us from outside software engineering. We always blame others, we never give realistic deadlines, we always act surprised with setbacks.
One thing that I'd like to understand then is _why_... Why doesn't management use a more direct way of saying it? Instead of asking for estimates, why don't they say: we have until date X, what can we do? Is it just some American way of being polite? I am sincerely curious :)
I've worked on multiple teams at completely different companies years apart that had the same weird rules around "story points" for JIRA: Fibonacci numbers only, but also anything higher than 5 needs to be broken into subtasks. In practice, this just means 1-5, except not 4. I have never been able to figure out why anyone thought this actually made any practical sense, or whether this apparently is either common enough to have been picked up by both teams or if I managed to somehow encounter two parallel instances of these rules developing organically.
Is it going to take more than two days?
Is it going to take more than two weeks?
Is it going to take more than two months?
Is it going to take more than two years?
If you can answer these questions, you can estimate using a confidence interval.
If the estimate is too wide, break it down into smaller chunks, and re-estimate.
If you can't break it down further, decide whether it's worth spending time to gather information needed to narrow the estimate or break it down. If not, scrap the project.
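A sketch of that loop, with hypothetical tasks and day ranges (naively summing subtask intervals, which is conservative since independent errors partly cancel):

```python
def estimate(task):
    low, high = task["range"]  # optimistic / pessimistic, in days
    if high > 4 * low and task.get("subtasks"):  # interval too wide: break it down
        parts = [estimate(sub) for sub in task["subtasks"]]
        return sum(p[0] for p in parts), sum(p[1] for p in parts)
    return low, high

project = {
    "range": (10, 250),        # "more than two weeks, less than two years"
    "subtasks": [
        {"range": (3, 8)},     # schema: more than two days, less than two weeks
        {"range": (5, 15)},    # API
        {"range": (10, 30)},   # UI
    ],
}
print(estimate(project))       # (18, 53): a usable interval instead of (10, 250)
```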
Step 1: Customer <-> Sales/Product (i.e., CEO). Step 2: Product <-> Direct to Engineering (i.e., CTO)
The latency between Step 1 and Step 2 is 10 minutes. The CEO leaves the meeting, takes a piss, and calls the CTO.
- Simple features take a day: CTO-to-implementation latency depends on how hands-on the CTO is. In good startups the CTO is the coder. Most features will make their way into the product in days.
- Complex features take a few days: this is a tug of war between the CTO and CEO, and indirectly the customer. The CTO will push back and try to strike a balance with the CEO, while the CEO works with the customer to find out what is acceptable. Again, latency is measured in days.
Big companies cannot do this and will stifle your growth as an engineer. Get out there and challenge yourselves.
> estimates are not by or for engineering teams.
It's surprising the nuance and variety of how management decisions are made in different orgs, a lot depends on personalities, power dynamics and business conditions that the average engineer has almost no exposure to.
When you're asked for an estimate, you've got to understand who's asking and why. It got to the point in an org I worked for once that the VP had to explicitly put a moratorium on engineers giving estimates because those estimates were being taken by non-technical stakeholders of various stripes and put into decks where they were remixed and rehashed and used as fodder for resourcing tradeoff discussions at the VP and executive level in such a way as to be completely nonsensical and useless. Of course these tradeoff discussions were important, but the way to have them was not to go to some hapless engineer, pull an overly precise estimate based on a bunch of tacit assumptions that would never bear out in reality, and then hoist that information up 4 levels of management to be shown to leadership with a completely different set of assumptions and context. Garbage in, garbage out.
These days I think of engineering level of effort as something that is encapsulated as primarily an internal discussion for engineering. Outwardly the discussion should primarily be about scope and deadlines. Of course deadlines have their own pitfalls and nuance, but there is no better reality check for every stakeholder—a deadline is an unambiguous constraint that is hard to misinterpret. Sometimes engineers complain about arbitrary deadlines, and there are legitimate complaints if they are passed down without any due diligence or at least a credible gut check from competent folks, but on balance I think a deadline helps engineering more than it hurts as it allows us to demand product decisions, designs, and other dependencies land in a timely fashion. It also prevents over-engineering and second system syndrome, which is just as dangerous a form of scope creep as anything product managers cook up when the time horizon is long and there is no sense of urgency to ship.
This is so real. Sometimes when you get an unreasonably big feature request, it turns out that somebody didn't know how to express their request correctly, and management exaggerated it.
Your manager, VP, customer, whoever already has a schedule. Find out what it is and work backwards.
I always tell my teams just skip the middlemen and think of estimates as time from the jump. It's just easier that way. As soon as an estimate leaves an engineer's mouth, it is eagerly translated into time by everyone else at the business. That is all anyone else cares about. Better said - that is all anyone else can understand. We humans all have a shared and unambiguous frame of reference for what 1 hour is, or what 1 day is. That isn't true of any other unit of software estimation. It doesn't matter that what one engineer can accomplish in 1 hour or 1 day is different from the next. The same is true no matter what you're measuring in. You can still use buffers with time. If you insist on not thinking of your labor in terms of hours spent, you can map time ranges to eg. points along the Fibonacci sequence. That is still a useful way to estimate because it is certainly true as software complexity goes up, the time spent on it will be growing non-linearly.
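If a team insists on points, a mapping like this (purely illustrative - the ranges are mine, not a standard) at least keeps the time grounding explicit:

```python
# Hypothetical Fibonacci-points-to-time mapping; the ranges widen
# non-linearly because time grows non-linearly with complexity.
POINTS_TO_TIME = {
    1: "a few hours",
    2: "about a day",
    3: "2-3 days",
    5: "about a week",
    8: "about two weeks",
    13: "a month or more - break it down before estimating",
}
```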
Is that a problem? Well, how good are they now?
I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.
Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.
On my team we think in terms of deliverables and commitments: "I can commit to deliver this by that date under these circumstances."
This mitigated the diverse ways people think about the same work.
My favourite parts:
> My job is to figure out the set of software approaches that match that estimate. […]
> Many engineers find this approach distasteful. […]
> If you refuse to estimate, you’re forcing someone less technical to estimate for you.
Even after many years, I still find it distasteful sometimes but I have to remind myself what everyone gets paid for at the end of the day.
Estimation can be done. It's a skillset issue. Yet the broad consensus seems to be that it can't be done, that it's somehow inherently impossible.
Here are the fallacies I think underwrite this consensus:
1. "Software projects spend most of their time grappling with unknown problems." False.
The majority of industry projects—and the time spent on them—are not novel for developers with significant experience. Whether it's building a low-latency transactional system, a frontend/UX, or a data processing platform, there is extensive precedent. The subsystems that deliver business value are well understood, and experienced devs have built versions of them before.
For example, if you're an experienced frontend dev who's worked in React and earlier MVC frameworks, moving to Svelte is not an "unknown problem." Building a user flow in Svelte should take roughly the same time as building it in React. Experience transfers.
2. "You can't estimate tasks until you know the specifics involved." Also false.
Even tasks like "learn Svelte" or "design an Apache Beam job" (which may include learning Beam) are estimable based on history. The time it took you to learn one framework is almost always an upper bound for learning another similar one.
In practice, I've had repeatable success estimating properly scoped sub-deliverables as three basic items: (1) design, (2) implement, (3) test.
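As a sketch, with hypothetical sub-deliverables and hour figures - the point being that each cell is small enough to argue about on its own:

```python
plan = {
    "checkout flow": {"design": 8, "implement": 24, "test": 12},
    "beam job":      {"design": 12, "implement": 32, "test": 16},
}
for name, phases in plan.items():
    print(f"{name}: {sum(phases.values())}h")
print(f"total: {sum(sum(p.values()) for p in plan.values())}h")  # 104h
```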
3. Estimation is divorced from execution.
When people talk about estimation, there's almost always an implicit model: (1) estimate the work, (2) "wait" for execution, (3) miss the estimate, and (4) conclude that estimation doesn't work.
Of course this fails. Estimates must be married to execution beat by beat. You should know after the first day whether you've missed your first target and by how much—and adjust immediately.
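Day by day, that check can be as simple as this (plan and actuals are hypothetical):

```python
plan = [8, 8, 8, 8, 8]  # planned hours of scoped work completed per day
actuals = [5]           # end of day one

behind = sum(plan[: len(actuals)]) - sum(actuals)
if behind > 0:
    print(f"{behind}h behind after day {len(actuals)}: adjust now, "
          "don't wait for the deadline")
```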
Some argue this is what padding is for (say, 20%). Well-meaning, but that's still a "wait and hope" mindset.
Padding time doesn't work. Padding scope does. Scope padding gives you real execution-time choices to actively manage delivery day by day.
At execution time, you have levers: unblock velocity, bring in temporary help, or remove scope. The key is that you're actively aiming at the delivery date. You will never hit estimates if you're not actively invested in hitting them, and you'll never improve at estimating if you don't operate this way. Which brings me to:
4. "Estimation is not a skillset."
This fallacy is woven into much of the discourse. Estimation is often treated as a naïve exercise—list tasks, guess durations, watch it fail. But estimation is a practicable skill that improves with repetition.
It's hard to practice in teams because everyone has to believe estimation can work, and often most of the room doesn't. That makes alignment difficult, and early failures get interpreted as proof of impossibility rather than part of skill development.
Any skill fails the first N times. Unfortunately, stakeholders are rarely tolerant of failure, even though failure is necessary for improvement. I was lucky early in my career to be on a team that repeatedly practiced active estimation and execution, and we got meaningfully better at it over time.
The only reasonable way to estimate something is in work hours. Everything else is severely misguided.
Also, if you don't follow up, any estimate is meaningless.
It's also important to gather consensus among the team and understand if/why work hour estimates differ between individuals on the same body of work or tasks. I'd go so far as to say that a majority of project planning, scoping, and derisking can be figured out during an honest discussion about work hour estimates.
Story points are too open to interpretation and have no meaningful grounding besides the latent work hours that need to go into them.
This is exactly how all good art is done. There's an old French saying: "une toile exige un mur" - a canvas demands a wall.
Quality, speed, scope: choose two. For example, a large feature set can be built quickly, but it will be of poor quality.
Note that cost is somewhat orthogonal, throwing money at a problem does not necessarily improve the tradeoff, indeed sometimes it can make things worse.
Timelines can be estimated approximately.
I’ve never had a construction project finish exactly on time, but that doesn’t mean estimates are unwise.
1. When you consider planning, testing, documentation, etc. it takes 4 hours to change a single line of code.
2. To make good estimates, study the problem carefully, allow for every possibility, and make the estimate in great detail. Then take that number and multiply by 2. Then double that number.
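With made-up numbers: a carefully built 40-hour estimate becomes 40 × 2 = 80, then doubled again to 160 hours - a 4x multiplier on even your most detailed plan.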
A few days? At least a week.
A week? A month.
A month? A year.
A year? Uh... decade or never...
It's wildly pessimistic but not as inaccurate as I'd like.
First impressions with these are that they give really long estimates.
Also, due to coding agents, you can have them completely implement several different approaches and find a lot of unknown unknowns up front.
I was building a mobile app and couldn’t figure out whether I wanted to do two native apps or one RN/Expo app. I had two different agents do each one fully vibe coded and then tell me all the issues they hit (specific to my app, not general differences). Helped a ton.
I asked it to estimate a timeline for a feature in my hobby project and it confidently replied, “4.5 weeks to code completion”.
Less than 4 hours later, the feature was done. I asked it to compare this against its initial estimate and it replied, “Right on schedule!”
I have completely given up on using it to estimate anything that actually matters.
That's really useful for some tasks, like regurgitating code to perform a specific function, but it's basically useless for jobs like estimating schedules.