I think about this a lot because it’s true of any complex system or argument, not just software.
> The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.
(All from his Turing Award lecture, "The Emperor's Old Clothes": https://www.labouseur.com/projects/codeReckon/papers/The-Emp...)
The software I like best was not written by "teams"
I prefer small programs written by individuals that generally violate memes like "software is never finished" and "all software has bugs"
(End user perspective, not a developer)
I was brought in to finish building the interchange format. The previous guy was not up to snuff. The architect I worked for was (with love) a sarcastic bastard who eventually abdicated about 2 rings of the circus to me. He basically took some of the high level meetings and tapped in when one of us thought I might strangle someone.
Their initial impression was that I was a prize to be fought over like a child in a divorce. But the guy who gives you your data has you by the balls, if he is smart enough to realize it, so it went my way nine times out of ten. It was a lot of work threading that needle (I’ve never changed the semantics of a library so hard without changing the syntax), but it worked out for everyone. By the time we were done, the gap between the way things worked and the way they each wanted them to work was on the order of twenty lines of code on their end, which I essentially spoonfed them, so they didn’t have a lot of standing to complain. And our three teams always delivered within 15% of estimates, which was about half of anyone else’s error bar, so we slowly accreted responsibilities.
I ended up as principal on that project (during a hiring/promotional freeze on that title; I felt bad about leaving within a year because someone had pulled strings for that, but I stayed until I was sure the house wouldn’t burn down after I left, and I didn’t have to do that). I must have said “compromise means nobody gets their way” about twenty times in or between meetings.
A committee forms when there's widespread disagreement on goals or priorities - representing stakeholders who can't agree. The cost is slower decisions and compromise solutions. The benefit is avoiding tyranny of a single vision that ignores real needs.
Tony might be my favorite computer scientist.
https://www.npr.org/sections/13.7/2014/02/03/270680304/this-...
The book is well worth reading.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R Hoare's "The 1980 ACM Turing Award Lecture"
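To see the same discipline alive in a modern language, here's a tiny Rust sketch (my own illustration, not from the lecture): the subscript check stays on in release builds, and an out-of-range index fails loudly instead of reading past the end.

```rust
fn main() {
    // The index comes from outside, so the check must happen at run time.
    let i: usize = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(3);
    let a = [10, 20, 30];
    // Every subscript is checked against the array bounds, in debug and
    // release builds alike; an out-of-range index panics rather than
    // silently producing a wrong answer.
    println!("a[{i}] = {}", a[i]);
}
```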
One senior professor, who was helping out with this, asked Dijkstra what was to be done with his correspondence. The professor, quite renowned himself, relates that Dijkstra told him from his hospital bed to keep the ones with "Tony" and throw out the rest.
The professor adds, with dry wit, that his own correspondence with Dijkstra was in the discard pile too.
The origin of the quote may have more to do with cultural differences between the Dutch and Americans.
That would seem to be your sentiment, not his, based on the link you shared. Rather than being censorious he shared a nice story on the matter.
I can't remember what Oxford did to resolve this, but I think they settled on `C.A.R. Hoare Residence`.
[1] https://www.cs.ox.ac.uk/people/jennifer.watson/tonyhoare.htm...
Anyone else, like me, imagining ML models embodied as Androids attending what amounts to a book club? (I can't quite shake the image of them being little CodeBullets with CRT monitors for heads either.)
The University was correct in saying "nope" to the endless distractions, misery, and overhead of having to deal with that.
My favourite quote of his is “There are two ways of constructing a piece of software: One is to make it so simple that there are obviously no errors, and the other is to make it so complicated that there are no obvious errors.”
While we hope it's not true, if it is, a very deserved RIP.
He famously gave up on making formal methods mainstream, but I believe there will be a comeback quite soon.
On generated code, verification is the bottleneck. He was right, just too early.
(Software) Transactional Memory and other ideas inspired by databases have a much better shot at this.
Sad to think that the TonyHoare process has reached STOP.
RIP.
I repeatedly borrow this quote from his 1980 Turing Award speech, 'The Emperor's Old Clothes'... "At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."
My interpretation is that whether shifting from delegation to programmers, or to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer the consequences.
That was 35ish years ago. I just pulled up the paper now and I can't read the notation anymore... This might be something that I try applying an AI to. Get it to walk me through a paper paragraph-by-paragraph until I get back up to speed.
Retrospective: An Axiomatic Basis For Computer Programming. This was written 30 years after An Axiomatic Basis for Computer Programming to take stock of what was proven right and what was proven wrong - https://cacm.acm.org/opinion/retrospective-an-axiomatic-basi...
How Did Software Get So Reliable Without Proof? More detailed paper on the above theme (pdf) - https://6826.csail.mit.edu/2020/papers/noproof.pdf
An older gentleman stood up and politely mentioned they knew a thing or two.
That was Tony Hoare.
I always liked this presentation. I think it's equally fine to say "invented" something, but I think this fits into his ethos (from what I understand of him.) There are natural phenomena, and it just takes noticing.
RIP Sir Tony.
Several keywords used in many programming languages come from Hoare, who either coined them himself, or he took them from another source, but all later programming language designers took them from Hoare. For example "case", but here only the keyword comes from Hoare, because a better form of the "case" statement had been proposed first by McCarthy many years earlier, under the name "select".
Another example is "class", which Simula 67, and then all object-oriented languages, took from Hoare. However, in this case the keyword was not used first by Hoare, because he took "class", together with "record", from COBOL.
Another keyword popularized by Hoare is "new" (which Hoare took from Wirth, but everybody else took from Hoare), later used by many languages, including C++. In Hoare's proposal, the counterpart of "new" was "destroy"; hence the name "destructor", first used in C++.
The paper "Record Handling", published by C.A.R. Hoare in 1965-11, was a major influence on many programming languages. It prompted significant changes in the IBM PL/I programming language, including the introduction of pointers. It was also the source of many features of the SIMULA 67 and ALGOL 68 languages, from where they spread into many later programming languages.
The programming language "Occam" has been designed mainly as an implementation of the ideas described by Hoare in the "Communicating Sequential Processes" paper published in 1978-08. OpenMP also inherits many of those concepts, and some of them are also in CUDA.
> OpenMP also inherits many of those concepts, and some of them are also in CUDA.
There was a time, 10-15 years ago, when they were super cool. At some point they "diluted" the technical content and the calibre of guests, and they vanished into irrelevance.
Hoare Logic - https://en.wikipedia.org/wiki/Hoare_logic
Hoare Logic + Dijkstra's weakest precondition + Meyer's Design-by-Contract is what should be used to get LLMs to generate proofs along with code, in a correctness-by-construction approach to implementation (a small sketch follows the references below).
References:
Correctness-by-Construction (CbC) - https://www.tu-braunschweig.de/en/isf/research/cbc
A Course on Correctness by Construction - https://wp.software.imdea.org/cbc/
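To make the idea concrete, here is a minimal Rust sketch (entirely my own toy example; `divmod` and its contracts are invented) of a Hoare triple {P} C {Q} expressed as executable pre- and postconditions in the Design-by-Contract style:

```rust
/// Toy Hoare triple: {d != 0}  (q, r) := divmod(n, d)  {d*q + r == n && r < d}
fn divmod(n: u32, d: u32) -> (u32, u32) {
    // Precondition P: the caller must supply a non-zero divisor.
    assert!(d != 0, "precondition violated: d must be non-zero");
    let (q, r) = (n / d, n % d);
    // Postcondition Q: the quotient-remainder law the body establishes.
    debug_assert!(d * q + r == n && r < d);
    (q, r)
}

fn main() {
    assert_eq!(divmod(17, 5), (3, 2));
}
```

The correctness-by-construction idea is to derive the body from P and Q rather than bolt the checks on afterwards; the asserts here are just the runtime shadow of the proof obligations.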
However, they were not just concurrent, but also communicating.
His “billion dollar mistake”:
https://www.infoq.com/presentations/Null-References-The-Bill...
He states around minute 25 that the solution to the problem is to explicitly represent null in the type system, so nullable pointers are explicitly declared as such. But it can be complex to ensure that non-nullable references are always initialized to a non-null value, which is why he chose the easy solution of just letting every reference be nullable.
&ref_that_cannot_be_null
*ref_that_can_be_null
The latter is still a bad idea, even if it isn't the only reference form, and even if it isn't the default, if it lets you do this: ref_that_can_be_null->thing()
Where only things that are, e.g., type T have a `thing` attribute. Nulls are "obviously" not T, but a good number of languages' type systems that permit nullable references, or some form of them, allow treating what is in actuality T|null as if it were just T, usually leading to some form of runtime failure if null is actually used, ranging from UB (in C, C++) to panics/exceptions (Go, Java, C#, TS). It's an error that can be caught by the type system (any number of other languages demonstrate that), and null pointer derefs are one of those bugs that just plague the languages that have them.
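For contrast, a hedged sketch, using Rust's Option (one of those "other languages"), of how distinguishing T from T|null turns the bad deref into a compile-time error; the example itself is made up:

```rust
// `name` may be absent, and its type says so: Option<&str> is T|null
// made explicit, so the compiler rejects using it as a plain &str.
fn greeting(name: Option<&str>) -> String {
    // `name.len()` would not compile here; the null case must be
    // handled before the value can be used.
    match name {
        Some(n) => format!("Hello, {n}!"),
        None => String::from("Hello, whoever you are!"),
    }
}

fn main() {
    println!("{}", greeting(Some("Tony")));
    println!("{}", greeting(None));
}
```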
Optional types were a very valuable invention and the fact that null values have been handled incorrectly in many programming languages or environments is not Hoare's fault.
There are two ways this might happen; both will solve the Billion Dollar Problem, but I think one is the clear winner. The first way is explicit optionality, often retrofitted to languages: for example, in C# the difference between the types Goose and Goose? is that (in a suitable C# project enabling this rule) the first one is always a Goose and the second might be null instead.
The second way is if you have Sum types you can just add "or it's null" to the type.
I think sum types are better because they pass my "three purposes" rule where I can think of not one (Option<T> replaces optionality) or two (Result<T,E> for error handling) but at least three (ControlFlow<B, C> reifies control flow) distinct problems I don't need separate solutions for any more if I have this feature.
If your type system is too weak you suffer the Billion Dollar problem with Hoare's idea and perhaps if this "feature" had never been invented we'd have all migrated to languages with a better type system decades ago.
In my opinion, besides being passed as arguments of functions whose parameters are declared as having the corresponding Optional or Sum type, there is only one other permissible use of values of such types.
Variables of an Optional type shall be allowed in the Boolean expression of an "if" or equivalent conditional statement/expression, while variables of a Sum type shall be allowed in an expression that tests which is the current type in a select/case/switch or whatever is the name used for a conditional statement or expression with multiple branches.
Then in the statements or expressions that are guarded by testing the Optional- or Sum-type variable, that variable shall be used as having the corresponding non-optional type or the type among those possible for a Sum type that has been determined by the test.
This syntax ensures that such variables will not be misused, while also avoiding the excessive and unhelpful verbosity that exists in some languages.
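That discipline is essentially what pattern matching gives you. A small sketch in Rust, whose match/if-let are exactly this test-then-narrow construct (the `Shape` type is invented for the example):

```rust
enum Shape {
    Circle(f64),    // radius
    Rect(f64, f64), // width, height
}

fn area(s: &Shape) -> f64 {
    // The payload is reachable only through the test; inside each
    // branch the value is used as the specific variant's type.
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    let maybe: Option<Shape> = Some(Shape::Rect(3.0, 4.0));
    // The Optional value appears only in the conditional, and within
    // the guarded branch it has the non-optional type, as proposed.
    if let Some(s) = maybe {
        println!("area = {}", area(&s));
    } else {
        println!("no shape");
    }
}
```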
The prime operation and address mapping.
The prime operation defines a certain single‑argument function. Its symbol (a prime mark) is written above and to the left of the argument: 'a = b where a is the argument and b is the result of the operation. This is read as: "prime a equals b" (or "b is the contents of a"). The argument a is called an address, and the function value b is called the contents of the address. The prime function ' defines a mapping from the set of addresses A to the set of contents B, which we will call an address mapping.
Page 36, chapter III https://torba.infoua.net/files/kateryna-yushchenko/Vychislit...
In any case, by 1954 most or all electronic computers already used this.
The only priority questions concern which high-level programming languages were the first to use pointers.
In my opinion the first language having pointers with implicit dereferencing was CPL, published in 1963-08, and the first language having pointers with explicit dereferencing was Euler, published completely in 1966-01, but this feature had already been published in 1965-11. The first mainstream programming language, with a large installed base, which had pointers, was the revised IBM PL/I, starting with its version from 1966-07.
Thanks for the link to the book describing the "Kiev" computer. It seems an interesting computer for the year 1957, but it does not have anything to do with the use of pointers in high-level programming languages.
At the page you indicated there is a description of what appears to be a symbolic assembler. The use of a symbolic assembly language was great progress at that early date, because many of the first computer programs had been written directly in machine language, or with only a minimal translation, e.g. by using mnemonics instead of numeric opcodes.
However, this does not have anything to do with HLL pointers, and means of indicating indirect addressing in an assembly language existed earlier, because they were strictly necessary for any computer that provided indirect addressing in hardware.
In the very first computers, the instructions were also used as pointers, so a program would modify the address field of an instruction, which was equivalent to assigning a new value to a pointer, before re-executing the instruction.
Later, to avoid the re-writing of instructions, both index registers and indirect addressing were introduced. Indirect addressing typically reserved one bit of an address to mark indirection. So when the CPU loaded a word from the memory, if the indirect addressing bit was set, it would interpret the remainder of the word as a new address, from which a new word would be loaded. This would be repeated if the new word also had the indirection bit set.
The assembly languages just had to use some symbol to indicate that the indirection bit must be set, which appears to have been "prime" for "Kiev".
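A toy simulation of that fetch loop, in case the mechanism isn't obvious (Rust, with an invented word format, not any particular machine's):

```rust
// Each memory word is either a direct operand or, with the
// indirection bit set, the address of the next word to fetch.
#[derive(Clone, Copy)]
enum Word {
    Direct(i64),
    Indirect(usize),
}

// Keep fetching while the indirection bit is set, as described above.
fn load(mem: &[Word], mut addr: usize) -> i64 {
    loop {
        match mem[addr] {
            Word::Direct(v) => return v,
            Word::Indirect(next) => addr = next,
        }
    }
}

fn main() {
    let mem = [Word::Indirect(2), Word::Direct(7), Word::Indirect(1)];
    assert_eq!(load(&mem, 0), 7); // 0 -> 2 -> 1 -> operand 7
}
```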
The use of pointers in assembly language does not count as an invention, as it was used since the earliest automatic computers. The use of implicit reference variables, which cannot be manipulated by the programmer, like in FORTRAN IV (1962) does not count as pointers.
The method for forcing another level of evaluation of a variable by using a "$" prefix, which was introduced in SNOBOL in January 1964, and which has been inherited by the UNIX shell and its derivatives does not count as a pointer.
The term "pointer" was introduced in a revision of the IBM PL/I language, which was published in July 1966. In all earlier publications that I have ever seen the term used was "reference", not "pointer".
There are two high-level programming languages that were the first to introduce explicit references (i.e. pointers). One language was Euler, published in January 1966 by Niklaus Wirth and Helmut Weber. However, Hoare knew about this language before its publication, so he mentioned it in his paper from November 1965, where he discussed the use of references (i.e. pointers).
The other language was CPL, which had references already in August 1963. The difference between how CPL used references and how Euler used references is that in Euler pointer dereferencing was explicit, like later in Pascal or in C. On the other hand, in CPL (the ancestor of BCPL), dereferencing a pointer was implicit, so you had to use a special kind of assignment to assign a new value to the pointer itself, instead of assigning to the variable pointed to by the pointer.
Looking now in Wikipedia, I see a claim that Bud Lawson invented pointers in 1964, but there is no information about where he published this or about which high-level programming language used his pointers.
If the pointers of Bud Lawson were of the kind with explicit dereferencing, they would precede by a year the Euler language.
On the other hand, if his pointers were with implicit dereferencing, then they came a year after the British programming language CPL.
Therefore, in the best case for Bud Lawson, he could have invented an explicit dereferencing operator, like the "*" of C, though this would not have been a great invention, because dereferencing operators were already used in assembly languages, they were missing only in high-level languages.
However, the use of references a.k.a. pointers in a high-level programming language has already been published in August 1963, in the article "The main features of CPL", by Barron, Buxton, Hartley, Nixon and Strachey.
Until I see any evidence for this, I consider that any claim about Bud Lawson inventing pointers is wrong. He might have invented pointers in his head, but if he did not publish this and it was not used in a real high-level programming language, whatever he invented is irrelevant.
I see on the Internet a claim that he might have been connected with the pointers of IBM PL/I.
This claim appears to be contradicted by the evidence. If Bud Lawson had invented pointers in 1964, then the preliminary version of PL/I would have had them.
In reality, the December 1964 version of PL/I did not have pointers. Moreover, the first PL/I version used in production, from the middle of 1965, also did not have pointers.
The first PL/I version that added pointers was introduced only in July 1966, well after the widely-known publications of Hoare and of Wirth about pointers. That PL/I version also added other features proposed by Hoare, so there is no doubt that the changes in the language were prompted by those prior publications.
So I think the claim that Bud Lawson invented pointers is certainly wrong. He might have invented something related to pointers, but not in 1964.
PL/I had one original element: pointer dereferencing was indicated by replacing "." with "->". This was later incorporated into the language C, to compensate for its mistake of making "*" a prefix operator.
The "->" operator is the only invention of PL/I related to pointers, so that is something invented by an IBM employee, though I am not aware of any information about who that may have been. In any case, it was not invented in 1964, but in 1966.
Tony Hoare was on my bucket list of people I wanted to meet before I or they die. My grad school advisor always talked of him extremely highly, and while I cannot seem to confirm it, I believe Hoare might have been his PhD advisor.
It's hard to overstate how important Hoare was. CSP and Hoare Logic and UTP are all basically entire fields in their own right. It makes me sad he's gone.
Regardless, he certainly knew Tony Hoare, and spoke extremely highly of him.
David May was my PhD supervisor and always spoke very highly of Sir Tony Hoare.
Edit: I’m also lucky enough to have worked with Geoff Barrett, the guy that completed that formal verification (and went on to do numerous other interesting things). Some people may be interested to learn that this work was the very first formal verification of an FPU - and the famous Intel FPU bug could have been avoided had Intel been using the verification methods that the Inmos and University teams pioneered.
Both of them are legitimately wonderful and intelligent humans that I can only use positive adjectives to describe, but the one I was referring to in this was Jim Woodcock [2]. He had many, many nice things to say about Tony Hoare.
[1] Just so I'm not misleading people, I didn't finish my PhD. No fault at all of the advisor or the school.
Source: David loved to tell some of these stories to us as students at Bristol.
Transputer and Occam were, in this sense, too early. A rebuild now combining more recent developments from Effect Algebras would be very interesting technically. (Commercially there are all sorts of barriers).
On specifically the relationship between Occam and Transputer architecture: http://people.cs.bris.ac.uk/~dave/transputer1984.pdf
Wider reading: http://people.cs.bris.ac.uk/~dave
If I remember correctly he had two immediate ideas, his first was bubble sort, the second turned out to be quicksort.
He was already very frail by then, yet his clarity of mind was undiminished. What came across in that talk, in addition to his technical material, was his humor and warmth.
(Source: TFA)
Makes me think of an anecdote where Dijkstra said that he feared he would only be remembered for his shortest path algorithm.
I genuinely forget he authored quicksort on the regular.
Mr. Hoare gave a talk back during my undergrad, and for some reason, despite being totally checked out of school, I attended; it is one of my formative experiences. AFAICR it was about proving program correctness.
After it finished during the Q&A segment, one student asked him about his opinions about the famous Brooks essay No Silver Bullet and Mr. Hoare's answer was... total confusion. Apparently he had not heard of the concept at all! It could be a lost in translation thing but I don't think so since I remember understanding the phrase "silver bullet" which did not make any sense to me. And now Mr. Hoare and Dr. Brooks are two of my all time computing heroes.
Edit: Oh and he has multiple honorary doctorates (at least 6!), so would be just as much "Dr." too!
He made incoming DPhil (PhD) students a cup of tea individually in his office at the Computing Laboratory. It was a small group, but still I appreciated this personal touch.
I am normally a casual guy but for a giant being a bit more formal (pun intended) seems appropriate. Or maybe I am a nerd through and through :-)
> Another good example of recursion is quicksort, a sorting algorithm developed by C.A.R. Hoare in 1962. Given an array, one element is chosen and the others partitioned in two subsets - those less than the partition element and those greater than or equal to it. The same process is then applied recursively to the two subsets. When a subset has fewer than two elements, it doesn't need any sorting; this stops the recursion.
> Our version of quicksort is not the fastest possible, but it's one of the simplest. We use the middle element of each subarray for partitioning. [...]
It was one of the first few 'serious' algorithms I learnt to implement on my own. More generally, the book had a profound impact on my life. It made me fall in love with computer programming and ultimately choose it as my career. Thanks to K&R, Tony Hoare and the many other giants on whose shoulders I stand.
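In case anyone wants to play with it, here's a loose sketch of the version the K&R passage describes; this is not K&R's actual C (I've used Rust, and a three-way partition so the termination argument is obvious even though the book's partition keeps elements equal to the pivot in one subset):

```rust
fn quicksort(v: &[i32]) -> Vec<i32> {
    if v.len() < 2 {
        return v.to_vec(); // fewer than two elements: nothing to sort
    }
    // Use the middle element for partitioning, as in the book.
    let pivot = v[v.len() / 2];
    let less: Vec<i32> = v.iter().copied().filter(|&x| x < pivot).collect();
    let equal: Vec<i32> = v.iter().copied().filter(|&x| x == pivot).collect();
    let greater: Vec<i32> = v.iter().copied().filter(|&x| x > pivot).collect();
    // Apply the same process recursively to the two sub-arrays.
    let mut out = quicksort(&less);
    out.extend(equal);
    out.extend(quicksort(&greater));
    out
}

fn main() {
    assert_eq!(quicksort(&[9, 1, 8, 2, 7, 3]), vec![1, 2, 3, 7, 8, 9]);
}
```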
In the 60s inventing one single algorithm with 10 lines of code was a thing.
If you did that today nobody would bat an eye.
Today people write game engines, compilers, languages, whole OS and nobody bats an eye cause there are thousands of those.
Quick sort isn't even a thing for leet code interviews anymore because it's not hard enough.
It had intrigued me due to its promise for designing lock-free concurrent systems that can (I think) also be proven to be deadlock-free.
You do this by building a simple concurrent block that is proven to work correctly, and then build bigger ones using the smaller, proven blocks, to create more complex systems.
The way it is designed, processes don't share data and don't take locks. They use synchronized IPC for passing and modifying data. It seemed to be a foundational piece for designing reliable systems that incorporate concurrency.
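Rust's rendezvous channels are one convenient way to play with that model today. A minimal sketch (my example, not from the book): with a bound of 0, each send blocks until the receiver is ready, which is the synchronized, no-shared-data communication CSP prescribes.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Bound 0 = rendezvous: sender and receiver synchronize on each message.
    let (tx, rx) = sync_channel::<u32>(0);

    let producer = thread::spawn(move || {
        for n in 1..=3 {
            tx.send(n).unwrap(); // blocks until the consumer takes n
        }
        // dropping tx closes the channel, ending the consumer's loop
    });

    let consumer = thread::spawn(move || {
        for n in rx {
            println!("received {n}");
        }
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}
```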
> On the topic of films, I wanted to follow up with Tony a quote that I have seen online attributed to him about Hollywood portrayal of geniuses, often especially in relation to Good Will Hunting. A typical example is: "Hollywood's idea of genius is Good Will Hunting: someone who can solve any problem instantly. In reality, geniuses struggle with a single problem for years". Tony agreed with the idea that cinema often misrepresents how ability in abstract fields such as mathematics is learned over countless hours of thought, rather than - as the movies like to make out - imparted, unexplained, to people of 'genius'. However, he was unsure where exactly he had said this or how/why it had gotten onto the internet, and he agreed that online quotes on the subject, attributed to him, may well be erroneous.
Somewhat off-topic, but it's cool hearing this from someone who's contributed so much to the fields of programming and mathematics. It makes me hopeful that my own struggles with math will pay off over time!
His presentation on his billion dollar mistake is something I still regularly share as a fervent believer that using null is an anti-pattern in _most_ cases. https://www.infoq.com/presentations/Null-References-The-Bill...
That said, his contributions greatly outweigh this 'mistake'.
Without things like null pointers, goto, globals, and unsafe modes in modern safe(r) languages, you can get yourself into a corner by over-designing everything, often leading to complex, unmaintainable code.
With judicious use of these anti-patterns you get mostly good/clean design with one or two well documented exceptions.
You just don't need it, and it isn't there as some sort of "escape hatch"; it's more out of stubbornness. Languages which don't have it are fine, and arguably easier to understand because they embrace structure more. I happen to like Rust's `break 'label value`, but there are plenty of ways to solve even the trickier parts of this problem (and of course most languages aren't expression-based and wouldn't need a value there).
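For the curious, this is the construct I mean; a labeled block lets you break out of a nested search with a value (toy example of mine):

```rust
fn main() {
    let grid = [[3, 7], [9, 4]];
    // `break 'label value`: leave the nested search early and yield a
    // value, covering most of what goto-style jumps get used for.
    let found = 'search: {
        for row in &grid {
            for &cell in row {
                if cell > 8 {
                    break 'search Some(cell);
                }
            }
        }
        None
    };
    println!("{found:?}"); // prints Some(9)
}
```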
The OpenJDK HashMap returns null from get(), put() and remove(), among others. Is this just because it hasn't been reviewed enough yet?
Code reviews 'somehow' strip out poorly thought out new uses of escape hatches.
For your example, it would be a use of get, put or remove without checking the result.
He read the Algol 60 report (Naur, McCarthy, Perlis, …)
and that described "recursion"
=> aaah!
SIR_TONY_HOARE = μX • (think → create → give → X)
-- process ran from 1934 to 2026
-- terminated with SKIP
-- no deadlock detected
-- all assertions satisfied
-- trace: ⟨ quicksort, hoare_logic, csp, monitors,
--          dining_philosophers, knighthood, turing_award,
--          billion_dollar_apology, structured_programming,
--          unifying_theories, ... ⟩
-- trace length: ∞

The channel is closed. The process has terminated. The algebra endures.
Virtual HLF 2020 – Scientific Dialogue: Sir C. Antony R. Hoare/Leslie Lamport
When I started university he gave a talk to all the new CompScis which as you can imagine was incredibly inspirational for an aspiring Software Engineer.
Grateful to have had that experience.
RIP
This is how you remind me of what I really am This is how you remind me of what I really am
It's not like you to say sorry I was waitin' on a different story This time I'm mistaken For handing you a heart worth breakin' And I've been wrong, I've been down Been to the bottom of every bottle These five words in my head Scream, "Are we havin' fun yet?"
Yet, yet, yet, no, no Yet, yet, yet, no, no
It's not like you didn't know that I said, "I love you," and I swear I still do And it must have been so bad 'Cause livin' with me must have damn near killed you
And this is how you remind me of what I really am This is how you remind me of what I really am
It's not like you to say sorry I was waitin' on a different story This time I'm mistaken For handing you a heart worth breakin' And I've been wrong, I've been down Been to the bottom of every bottle These five words in my head Scream, "Are we havin' fun yet?"
Yet, yet, yet, no, no Yet, yet, yet, no, no Yet, yet, yet, no, no Yet, yet, yet, no, no
Never made it as a wise man I couldn't cut it as a poor man stealin' And this is how you remind me This is how you remind me
This is how you remind me of what I really am This is how you remind me of what I really am
It's not like you to say sorry I was waitin' on a different story This time I'm mistaken For handing you a heart worth breakin' And I've been wrong, I've been down Been to the bottom of every bottle These five words in my head Scream, "Are we havin' fun yet?"
Yet, yet, are we havin' fun yet? Yet, yet, are we havin' fun yet? Yeah, yeah (These five words in my head scream) Are we havin' fun yet? Yeah, yeah (These five words in my head) No, no
See the "preface" for details of the book - https://dl.acm.org/doi/10.1145/3477355.3477356
Review of the above book - https://www.researchgate.net/publication/365933441_Review_on...
Somebody needs to contact ACM and have them make the above book freely available now; there can be no better epitaph.
2) Tony Hoare's lecture in honour of Edsger Dijkstra (2010); What can we learn from Edsger W. Dijkstra? - https://www.cs.utexas.edu/~EWD/DijkstraMemorialLectures/Tony...
Somebody needs to now write a similar one for Hoare.
Truly one of the absolute greats in the history of Computer Science.
RIP good sir
Can anyone suggest a better approach for a situation like this in the future? What's better than exposing the problem and addressing it with a light solution?
https://blog.ploeh.dk/2015/04/13/less-is-more-language-featu...
Null pointers are a software abstraction, and nowadays we have better abstractions.
(Although Optional/Maybe types are definitely my preference based on the languages I've used)
Turing Award Legend.
Thank you for your work on ALGOL; you were multiple decades ahead of your time.
Rest in peace.
As a non-junior dev I realize how stupid that was.
The second enlightenment is that if you don't understand what the libraries are doing, you will probably ship things that assemble the libraries in unreasonably slow/expensive ways, lacking the intuition for how "hard" the overall operation should be.