Uncertain<T>
245 points
8 hours ago
| 19 comments
| nshipster.com
| HN
AlotOfReading
6 hours ago
[-]
A small note, but GPS is only well-approximated by a circular uncertainty in specific conditions, usually open sky and long-time fixes. The full uncertainty model is much more complicated, hence the profusion of ways to measure error. This becomes important in many of the same situations that would lead you to stop treating the fix as a point location in the first place. To give a concrete example, autonomous vehicles will encounter situations where localization uncertainty is dominated by non-circular multipath effects.

If you go down this road far enough you eventually end up reinventing particle filters and similar.
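To make the "reinventing particle filters" point concrete, here's a toy 1-D particle filter sketch (hypothetical numbers, not any production localization code): represent the position belief as a cloud of samples, move them with a motion model, weight them by a noisy GPS-like fix, and resample.

```python
import math
import random
import statistics

random.seed(0)

N = 2000
true_pos = 10.0
# start with no idea where we are on a 0..20 stretch of road
particles = [random.uniform(0.0, 20.0) for _ in range(N)]

def likelihood(z, x, sigma=1.0):
    # Gaussian measurement model: how plausible is particle x given fix z
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

for _ in range(10):
    true_pos += 1.0  # the vehicle moves one unit per step
    # motion update: move each particle, with some process noise
    particles = [p + 1.0 + random.gauss(0.0, 0.3) for p in particles]
    # measurement update: weight particles by a noisy GPS-like fix
    z = true_pos + random.gauss(0.0, 1.0)
    weights = [likelihood(z, p) for p in particles]
    # resample proportionally to weight
    particles = random.choices(particles, weights=weights, k=N)

# posterior mean; the spread of the particles *is* the uncertainty,
# whatever shape it takes (no circular-error assumption needed)
estimate = statistics.fmean(particles)
```

The nice property is that the particle cloud can represent multimodal, non-circular error (e.g. multipath ambiguity) that a single center-plus-radius fix cannot.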

reply
mikepurvis
5 hours ago
[-]
Vehicle GPS is usually augmented by a lot of additional sensors and assumptions, notably the speedometer, compass, and knowledge that you'll be on one of the roads marked on its map. Not to mention a fast fix, because you can assume you haven't changed position since you last powered on.
reply
monocasa
4 hours ago
[-]
As well as a fast fix because you know what mobile cell or wifi network you're on.
reply
boscillator
7 hours ago
[-]
Does this handle covariance between different variables? For example, the location of the object you're measuring your distance to presumably also has some error in its position, which may be correlated with your position (if, for example, it comes from another GPS operating at a similar time).

Certainly a univariate model in the type system could be useful, but it would be extra powerful (and more correct) if it could handle covariance.
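A quick sketch of why this matters (hypothetical numbers): if two GPS receivers share an error component, the error in their *relative* position is much smaller than what you'd get by treating each receiver's error as independent.

```python
import math
import random

random.seed(1)

n = 20000
shared_sd, indiv_sd = 1.0, 0.5  # shared atmospheric error vs per-receiver noise
diffs = []
for _ in range(n):
    shared = random.gauss(0.0, shared_sd)        # error common to both receivers
    err_a = shared + random.gauss(0.0, indiv_sd)
    err_b = shared + random.gauss(0.0, indiv_sd)
    diffs.append(err_b - err_a)                  # error in the relative position

per_receiver_sd = math.sqrt(shared_sd**2 + indiv_sd**2)    # ~1.12 for each fix
naive_diff_sd = math.sqrt(2) * per_receiver_sd             # ~1.58 if treated as independent
actual_diff_sd = math.sqrt(sum(d * d for d in diffs) / n)  # ~0.71: the shared part cancels
```

A univariate model would report the pessimistic ~1.58; tracking the covariance recovers the much tighter ~0.71.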

reply
evanb
5 hours ago
[-]
If you need to track covariance you might want to play with gvar https://gvar.readthedocs.io/en/latest/ in python.
reply
layer8
6 hours ago
[-]
To properly model quantum mechanics, you’d have to associate a complex-valued wave function with any set of entangled variables you might have.
reply
8note
5 hours ago
[-]
For mechanical engineering drawings to communicate with machinists and the like, we use tolerances,

e.g. 10cm +8mm/-3mm,

to specify the acceptable range, both bigger and smaller.

I'd expect something like "are we there yet" referencing GPS to understand the direction of the error, and which directions of uncertainty are better or worse.

reply
mabster
3 hours ago
[-]
Something that's bugged me about this notation though is that sometimes it means "cannot exceed the bounds" and sometimes it means "only exceeds the bounds 10% of the time"
reply
taneq
2 hours ago
[-]
I don’t think I’ve ever seen mechanical drawings have “90% confidence” dimensions like this. If a part’s too big then it won’t fit, and it’s probably useless.
reply
kevin_thibedeau
2 hours ago
[-]
If a test procedure is verifying all dimensional accuracy, it can be assumed to be a bounding tolerance. If it's a mass production line with less than 100% testing of parts, you'd have to expect that some outliers get by, and the tolerance is something like 3-sigma on a Gaussian.
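For reference, the 3-sigma figure pins down how many outliers slip through under the Gaussian assumption; it drops out of the normal CDF:

```python
import math

# fraction of parts outside a +/- 3-sigma tolerance, assuming Gaussian variation
p_outside = 1.0 - math.erf(3.0 / math.sqrt(2.0))
# about 0.27%, i.e. roughly 2700 out-of-tolerance parts per million before inspection
```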
reply
j2kun
4 hours ago
[-]
This concept has been done many times in the past, under the name "interval arithmetic." Boost has it [1], as does FLINT [2].

What is really curious is why, after being reinvented so many times, it is not more mainstream. I would love to talk to people who have tried using it in production and then decided it was a bad idea (if they exist).

[1]: https://www.boost.org/doc/libs/1_89_0/libs/numeric/interval/... [2]: https://arblib.org/

reply
kccqzy
3 hours ago
[-]
The article says,

> Under the hood, Uncertain<T> models GPS uncertainty using a Rayleigh distribution.

And the Rayleigh distribution is clearly not just an interval with a uniformly random distribution in between. Normal interval arithmetic isn't useful because that uniform random distribution isn't at all a good model for the real world.

Take for example that Boost library you linked. Ask it to compute (-2,2)*(-2,2). It will give (-4,4). A more sensible result might be something like (-2.35, 2.35). The -4 lower bound is only attainable when the multiplicands are -2 and 2, the extremes of the interval; probabilistically, if we assume these are independent random variables, then both hitting their extremes simultaneously should have an even lower probability.
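This is easy to check without Boost: compare the exact interval product with a Monte Carlo estimate under independent uniform inputs (a simplifying assumption; real error distributions are rarely uniform).

```python
import random

random.seed(2)

# exact interval arithmetic: [-2,2] * [-2,2]
lo, hi = -2.0, 2.0
corners = [lo * lo, lo * hi, hi * lo, hi * hi]
interval = (min(corners), max(corners))  # (-4.0, 4.0)

# Monte Carlo, treating both factors as independent U(-2, 2)
n = 200_000
samples = sorted(random.uniform(lo, hi) * random.uniform(lo, hi) for _ in range(n))
q05, q95 = samples[n // 20], samples[-n // 20]  # central 90% interval
# q05, q95 come out near (-2.35, 2.35): most of the interval's (-4, 4) range
# is nearly unreachable in practice
```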

reply
woah
1 hour ago
[-]
Simple types (booleans etc.) are easy to reason about, and any shortcomings are obvious. Trying to model physical uncertainty is difficult and requires different models for different domains. Once you have committed to doing that, it would be much better to use a purpose-built model instead of a library that puts some bell curves behind a pretty API.
reply
orlp
43 minutes ago
[-]
Not sure why this is being upvoted as the article is not describing interval arithmetic. It supports all kinds of uncertainty distributions.
reply
PaulDavisThe1st
1 hour ago
[-]
Several years ago when I discovered some of the historical work on interval arithmetic, I was astounded to find that there was a notable contingent in the 60s that was urging hardware developers to make interval arithmetic be the basic design of new CPUs, and saying quite forcefully that if we simply went with "normal" integers and floating point, we'd be unable to correctly model the world.
reply
skissane
28 minutes ago
[-]
As another commenter pointed out, interval arithmetic's problem is that while it acknowledges the reality of uncertainty, its model of uncertainty is so simplistic that in many applications it is unusable. Making it the standard primitive could mean that apps which don't need to model uncertainty at all are forced to pay the price of doing so, while apps that need a more realistic model of uncertainty are hamstrung by its interactions with another overly simple model. It is one of those ideas which sounds great in theory, but there are good reasons it never succeeded in practice: the space of use cases where explicitly modelling uncertainty is desirable, but where the simplistic model of interval arithmetic is entirely adequate, is rather small. A standard primitive that addresses only a narrow subset of use cases is not a good architecture.
reply
Tarean
4 hours ago
[-]
Interval arithmetic is only a constant factor slower but may simplify at every step. For every operation over numbers there is a unique most precise equivalent op over intervals, because there's a Galois connection. But just because there is a most precise way to represent a set of numbers as an interval doesn't mean the representation is precise.

A computation graph that gets sampled, as here, is much slower but can be accurate. You don't need an abstract domain that loses precision at every step.
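The classic illustration of that precision loss is the dependency problem: intervals forget that two operands may be the same variable, while sampling the computation graph preserves the correlation.

```python
import random

random.seed(3)

# Interval arithmetic forgets that both operands are the *same* x:
x = (-2.0, 2.0)
diff_interval = (x[0] - x[1], x[1] - x[0])  # (-4.0, 4.0), although x - x is always 0

# Sampling the computation graph keeps the dependency: draw x once, use it twice
samples = []
for _ in range(1000):
    v = random.uniform(-2.0, 2.0)
    samples.append(v - v)  # exactly 0.0 every time
```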

reply
bee_rider
4 hours ago
[-]
It would have been sort of interesting if we’d gone down the road of often using interval arithmetic. Constant factor slower, but also the operations are independent. So if it was the conventional way of handling non-integer numbers, I guess we’d have hardware acceleration by now to do it in parallel “for free.”
reply
pklausler
4 hours ago
[-]
Interval arithmetic makes good intuitive sense when the endpoints of the intervals can be represented exactly. Figuring out how to do that, however, is not obvious.
reply
anal_reactor
2 hours ago
[-]
Because reasoning about uncertain values / random variables / intervals / fuzzy logic / whatever is difficult, and the model where things are certain is much easier to process while still modeling reality well enough.
reply
black_knight
5 hours ago
[-]
This seems closely related to this classic Functional Pearl: https://web.engr.oregonstate.edu/~erwig/papers/PFP_JFP06.pdf

It’s so cool!

I always start my introductory course on Haskell with a demo of the Monty Hall problem with the probability monad and using rationals to get the exact probability of winning using the two strategies as a fraction.
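For readers who don't know the pearl, the same exact-probability trick ports to any language; here's a minimal sketch in Python (a hypothetical translation, not the paper's Haskell): a distribution is a list of (probability, value) pairs, `bind` is the monad's `>>=`, and Fractions keep the Monty Hall answer exact.

```python
from fractions import Fraction

def uniform(xs):
    # uniform distribution over a list of outcomes
    p = Fraction(1, len(xs))
    return [(p, x) for x in xs]

def bind(dist, f):
    # monadic bind: run f on each outcome, multiplying path probabilities
    return [(p * q, y) for p, x in dist for q, y in f(x)]

def pure(x):
    return [(Fraction(1), x)]

def monty(switch):
    # player always picks door 1
    def with_car(car):
        goats = [d for d in (2, 3) if d != car]  # doors the host may open
        def with_host(host):
            final = ({1, 2, 3} - {1, host}).pop() if switch else 1
            return pure(final == car)
        return bind(uniform(goats), with_host)
    return bind(uniform([1, 2, 3]), with_car)

def p_win(dist):
    return sum(p for p, won in dist if won)

# exact probabilities of winning, as fractions
assert p_win(monty(switch=True)) == Fraction(2, 3)
assert p_win(monty(switch=False)) == Fraction(1, 3)
```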

reply
contravariant
3 hours ago
[-]
I feel like if you're worried about picking the right abstraction then this is almost certainly the wrong one.
reply
lxe
2 hours ago
[-]
I really like that this leans on computing probabilities instead of forcing everything into closed-form math or classical probability exercises. I’ve always found it way more intuitive to simulate, sample, and work directly with distributions. With a computer, it feels much more natural to uh... compute: you just run the process, look at the results, and reason from there.
reply
layer8
6 hours ago
[-]
Arguably Uncertain should be the default, and you should have to annotate a type as certain T when you are really certain. ;)
reply
nine_k
5 hours ago
[-]
Only for physical measurements. For things like money, you should be pretty certain, often down to exact fractional cents.

It appears that a similar approach is implemented in some modern Fortran libraries.

reply
rictic
4 hours ago
[-]
A person might have mistyped a price, a barcode may have been misread, the unit prices might be correct but the quantity could be mistaken. Modeling uncertainty well isn't just about measurement error from sensors.

I wonder what it'd look like to propagate this kind of uncertainty around. You might want to check the user's input against a representative distribution to see if it's unusual and, depending on the cost of an error vs the friction of asking, double-check the input.
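A crude sketch of that idea (hypothetical data and a simplistic z-score rule, purely illustrative): flag an entered value that sits far outside the recent distribution for the item, and only then pay the friction of asking.

```python
import statistics

def needs_confirmation(value, history, k=3.0):
    """Flag an input more than k standard deviations from recent history."""
    mu = statistics.fmean(history)
    sd = statistics.stdev(history)
    return abs(value - mu) > k * sd

recent_prices = [4.99, 5.25, 5.10, 4.85, 5.05]   # hypothetical history for one item
assert needs_confirmation(50.0, recent_prices)    # likely a misplaced decimal point
assert not needs_confirmation(5.15, recent_prices)
```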

reply
bee_rider
40 minutes ago
[-]
Typos seem like a different type of error from physical tolerances, and one that would be really hard to reason about mathematically.
reply
XorNot
4 hours ago
[-]
Money has the problem that no matter how clever you are, someone will punch all the values into Excel and then complain the numbers don't match.

Or they specify they're paying X per day but want hourly itemized billing... yet it should definitely come out to X per day. (One employer did this, which meant I invoiced them with something like 8 digits of precision due to how it divided, and they refused to accept a line item for mathematical uncertainty aggregates.)

reply
random3
4 hours ago
[-]
Have you ever tried working computationally with money? Forget money, have you worked with floating point? There really isn't anything certain.
reply
nine_k
4 hours ago
[-]
Yes, I worked in a billing department. No, floats are emphatically not suitable for representing money, except the very rounded values in presentations.

Floats try to keep the relative error at bay, so their absolute precision varies greatly. You need to sum them starting with the smallest magnitude, and do many other subtle tricks, to limit rounding errors.
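Two quick demonstrations of both points, using only the standard library: decimal amounts don't round-trip through binary floats, and naive left-to-right summation silently drops small terms next to large ones.

```python
import math
from decimal import Decimal

# binary floats cannot represent most decimal amounts exactly
assert 0.1 + 0.1 + 0.1 != 0.3
assert Decimal("0.10") + Decimal("0.10") + Decimal("0.10") == Decimal("0.30")

# naive float summation loses a small term next to a big one
vals = [1e20, 1.0, -1e20]
assert sum(vals) == 0.0          # the 1.0 is absorbed by 1e20, then cancelled away
assert math.fsum(vals) == 1.0    # compensated summation recovers it
```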

reply
esafak
5 hours ago
[-]
A complement to Optional.
reply
cb321
5 hours ago
[-]
If you are in an even more "approximate" mindset (as opposed to propagating by simulation to get real-world re-sampled skewed distributions, as often happens in experimental physics labs, or at least their undergraduate courses), there is an error-propagation (https://en.wikipedia.org/wiki/Propagation_of_uncertainty) simplification you can do for "small" errors. Translating "root" errors into "downstream" errors is then just simple chain-rule calculus. (There is a Nim library for that at https://github.com/SciNim/Measuremancer that I use at least every week or two, whenever I'm timing anything.)

It usually takes some "finesse" to get your data / measurements into territory where the errors are even small in the first place. So I think it is probably better to do things like this Uncertain<T> for the kinds of long/fat/heavy-tailed and oddly shaped distributions that occur in real-world data { IF the expense doesn't get in your way some other way, that is, as per Senior Engineer in the article }.
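The chain-rule propagation mentioned above can be sketched numerically in a few lines (a generic delta-method helper with central-difference gradients, not Measuremancer's API; the measurement values are made up):

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order (delta-method) error propagation for independent inputs:
    sigma_f^2 = sum_i (df/dx_i * sigma_i)^2, gradients by central differences."""
    grads = []
    for i in range(len(values)):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        grads.append((f(*up) - f(*dn)) / (2.0 * h))
    return math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

# area of a plate measured as 2.0 +/- 0.1 by 3.0 +/- 0.2 (hypothetical numbers)
area = lambda w, l: w * l
sigma_area = propagate(area, [2.0, 3.0], [0.1, 0.2])
# analytic answer: sqrt((3*0.1)^2 + (2*0.2)^2) = 0.5
```

As the comment notes, this first-order shortcut is only trustworthy when the errors are small relative to the curvature of f; for fat-tailed or skewed inputs, sampling wins.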

reply
dcsommer
2 hours ago
[-]
Seems more proper to call it a `ProbabilityDistribution` type. It's a more general and intuitive way to handle the concept.
reply
bee_rider
36 minutes ago
[-]
But the pun, uncertainty.
reply
ngruhn
1 hour ago
[-]
Yeah but the shorter name wins
reply
munchler
5 hours ago
[-]
Is this essentially a programmatic version of fuzzy logic?

https://en.wikipedia.org/wiki/Fuzzy_logic

reply
esafak
5 hours ago
[-]
https://en.wikipedia.org/wiki/Probabilistic_programming more like. It is already a thing; see, for example, https://pyro.ai/
reply
nicois
4 hours ago
[-]
Is there a risk that this will underemphasise some values when the source of error is not independent? For example, the ROI on financial instruments may be inversely correlated to the risk of losing your job. If you associate errors with each, then combine them in a way which loses this relationship, there will be problems.
reply
mackross
7 hours ago
[-]
Always enjoy mattt’s work. Looks like a great library.
reply
krukah
5 hours ago
[-]
Monads are really undefeated. This particular application feels to me akin to wavefunction evolution? Density matrices as probability monads over Hilbert space, with unitary evolution as bind, measurement/collapse as pure/return. I guess everything just seems to rhyme under a category theory lens.
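The monadic structure is small enough to sketch: represent an uncertain value as a sampler, with `pure` injecting a certain value and `bind` chaining dependent distributions (a toy Python sketch of the idea, not the article's Swift implementation).

```python
import random
import statistics

class Uncertain:
    """A value represented by a sampler: the probability monad, by sampling."""
    def __init__(self, sampler):
        self.sample = sampler

    @staticmethod
    def pure(v):
        # return/unit: a value with no uncertainty
        return Uncertain(lambda: v)

    def bind(self, f):
        # >>=: draw from self, then draw from the dependent distribution f gives
        return Uncertain(lambda: f(self.sample()).sample())

    def __add__(self, other):
        return self.bind(lambda a: other.bind(lambda b: Uncertain.pure(a + b)))

random.seed(4)
x = Uncertain(lambda: random.gauss(10.0, 1.0))
y = Uncertain(lambda: random.gauss(5.0, 1.0))
z = x + y  # built from bind; never collapses to one number until sampled

draws = [z.sample() for _ in range(20000)]
mean = statistics.fmean(draws)
spread = statistics.stdev(draws)  # ~sqrt(2), since independent variances add
```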
reply
valcron1000
5 hours ago
[-]
reply
keeganpoppen
2 hours ago
[-]
oh man i had forgotten about this blog from when i orbited the swift ecosystem a bit... it's clearly as great as always! fun post!
reply
droideqa
4 hours ago
[-]
Could this be implemented in Rust or Clojure?

Does Anglican kind of do this?

reply
lloydatkinson
3 hours ago
[-]
Is there complete C# source available for this? I looked over the original paper and it's just snippets.
reply
kittoes
1 hour ago
[-]
reply
Pxtl
1 hour ago
[-]
10 years since commit and no attached documents besides a tiny readme. Pass.
reply
jakubmazanec
6 hours ago
[-]
[flagged]
reply
frizlab
6 hours ago
[-]
> And why does it need to be part of the type system?

As presented in the article, it is indeed just a library.

reply
muxl
6 hours ago
[-]
It was implemented as a generic type in this design because the way uncertainty "pollutes" underlying values maps well onto monads, which are expressed through generics here.
reply
cobbal
6 hours ago
[-]
I don't think inference is part of this at all, frequentist or otherwise.

It's not part of the type system, it's just the giry monad as a library.

reply
geocar
6 hours ago
[-]
> What if I want Bayesian?

Bayes is mentioned on page 46.

> And why does it need to be part of the type system? It could be just a library.

It is a library that defines a type.

It is not a new type system, or an extension to any particularly complicated type system.

> Am I missing something?

Did you read it?

https://www.microsoft.com/en-us/research/wp-content/uploads/...

https://github.com/klipto/Uncertainty/

reply
jakubmazanec
6 hours ago
[-]
> Bayes is mentioned on page 46.

Bayes isn't mentioned in the linked article. But thanks for the links.

reply