If you go down this road far enough you eventually end up reinventing particle filters and similar.
Certainly a univariate model in the type system could be useful, but it would be extra powerful (and more correct) if it could handle covariance, e.g. 10 cm +8 mm/−3 mm for the acceptable range, both bigger and smaller.
I'd expect something like an "are we there yet?" query against GPS to understand the direction of the error, and which directions of uncertainty are better or worse.
What is really curious is why, after being reinvented so many times, it is not more mainstream. I would love to talk to people who have tried using it in production and then decided it was a bad idea (if they exist).
[1]: https://www.boost.org/doc/libs/1_89_0/libs/numeric/interval/... [2]: https://arblib.org/
> Under the hood, Uncertain<T> models GPS uncertainty using a Rayleigh distribution.
And the Rayleigh distribution is clearly not just an interval with a uniformly random distribution in between. Normal interval arithmetic isn't useful because that uniform random distribution isn't at all a good model for the real world.
Take for example that Boost library you linked. Ask it to compute (-2,2)*(-2,2). It will give (-4,4). A more sensible result might be something like (-2.35, 2.35). The -4 lower bound is only attainable when the multiplicands are -2 and 2, i.e. at the extremes of the interval; probabilistically, if we assume these are independent random variables, then both reaching their extreme values simultaneously should have an even lower probability.
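To make that concrete, here's a quick Monte Carlo sketch (the uniform-independence assumption and the choice of a central 90% interval are mine, not Boost's semantics):

```python
import random

random.seed(0)

# Treat each interval as an independent uniform random variable and look
# at where the product actually lands.
N = 200_000
products = sorted(random.uniform(-2, 2) * random.uniform(-2, 2)
                  for _ in range(N))

# Central 90% of the sampled products.
lo, hi = products[int(0.05 * N)], products[int(0.95 * N)]
print(lo, hi)  # close to (-2.35, 2.35); the hard bounds -4 and 4 are almost never approached
```

The interval-arithmetic answer (-4, 4) is sound as a worst-case bound, but nearly all of the probability mass sits well inside it.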
A computation graph which gets sampled like here is much slower but can be accurate. You don't need an abstract domain which loses precision at every step.
It’s so cool!
I always start my introductory course on Haskell with a demo of the Monty Hall problem with the probability monad and using rationals to get the exact probability of winning using the two strategies as a fraction.
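For readers without Haskell at hand, here is a rough Python stand-in for that demo: brute-force enumeration with exact rationals instead of a probability monad (the structure of the code is mine, not the course's):

```python
from fractions import Fraction
from itertools import product

def win_probability(switch: bool) -> Fraction:
    """Exact Monty Hall win probability for the stay/switch strategies."""
    wins, total = 0, 0
    # Enumerate every equally likely (car location, initial pick) pair.
    for car, pick in product(range(3), repeat=2):
        # The host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        final = pick if not switch else next(
            d for d in range(3) if d != pick and d != opened)
        wins += (final == car)
        total += 1
    return Fraction(wins, total)

print(win_probability(switch=True))   # 2/3
print(win_probability(switch=False))  # 1/3
```

Using Fraction gives the exact 2/3 vs 1/3 answer as a fraction, with no floating-point noise.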
It appears that a similar approach is implemented in some modern Fortran libraries.
I wonder what it'd look like to propagate this kind of uncertainty around. You might want to check the user's input against a representative distribution to see if it's unusual and, depending on the cost of an error vs the friction of asking, double-check the input.
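A minimal sketch of that "ask or accept" decision, assuming a normal prior over typical inputs and a 5% tail threshold (both are illustrative choices, not anything from the article):

```python
from statistics import NormalDist

def should_double_check(value: float, prior: NormalDist,
                        threshold: float = 0.05) -> bool:
    """Flag an input for confirmation when it looks unusual under the prior."""
    # Two-sided tail probability of seeing something at least this extreme.
    tail = 2 * min(prior.cdf(value), 1 - prior.cdf(value))
    return tail < threshold

typical_order = NormalDist(mu=100, sigma=50)  # illustrative "representative" data
print(should_double_check(120, typical_order))     # False: unremarkable, accept silently
print(should_double_check(10_000, typical_order))  # True: worth a confirmation prompt
```

The threshold is where the cost-of-error vs. friction-of-asking trade-off would live: a costly mistake justifies a looser threshold and more prompts.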
Or they specify they're paying X per day but want hourly itemized billing... and it should definitely come out to X per day. (This was one employer; it meant I invoiced them with something like 8 digits of precision due to how the division worked out, and they refused to accept a line item for mathematical uncertainty aggregates.)
Floats try to keep the relative error at bay, so their absolute precision varies greatly. You need to sum them starting with the smallest magnitude, and do many other subtle tricks, to limit rounding errors.
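A small demonstration of how naive left-to-right summation loses low-magnitude terms, and how Python's error-compensated `math.fsum` recovers them:

```python
import math

values = [1e16, 1.0, -1e16]

naive = sum(values)        # 1e16 + 1.0 rounds back to 1e16, so the 1.0 is lost
exact = math.fsum(values)  # error-compensated summation keeps the lost bits

print(naive)  # 0.0
print(exact)  # 1.0
```

Sorting by magnitude helps when accumulating many small terms into a large total, but with cancellation like this only a compensated algorithm (fsum, Kahan summation) gets the right answer.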
It usually takes some "finesse" to get your data / measurements into territory where the errors are even small in the first place. So I think something like this Uncertain<T> is probably better suited to the long/fat/heavy-tailed and oddly shaped distributions that occur in real-world data { IF the expense doesn't get in your way some other way, that is, as per the Senior Engineer in the article }.
Does Anglican kind of do this?
As presented in the article, it is indeed just a library.
It's not part of the type system; it's just the Giry monad as a library.
Bayes is mentioned on page 46.
> And why does it need to be part of the type system? It could be just a library.
It is a library that defines a type.
It is not a new type system, or an extension to any particularly complicated type system.
> Am I missing something?
Did you read it?
https://www.microsoft.com/en-us/research/wp-content/uploads/...
Bayes isn't mentioned in the linked article. But thanks for the links.