The thing with computational geometry is that it's usually someone else's geometry, i.e. you have no control over its quality or intention. In other words, whether two points or planes or lines align exactly, or align within 1e-4, is no longer really mathematically interesting, because it's all about the intention of the user: does the user think these planes overlap?
This is why most geometry kernels (see Open CASCADE) sport things like "fuzzy boolean operations" [0] that lean into epsilons. These epsilons mask the error-prone supply chain of the meshes that arrive in your program by allowing some tolerance.
Finally, the remark "There are many ways of solving this problem" is also overly reductive. Everyone reading here should understand that this topic is being actively researched right now in 2026, hence there are currently no blessed solutions; otherwise the research would not be needed. Even more, to some extent this problem is fundamentally unsolvable, depending on what you mean by "solvable": because your input is inexact, not all geometrical operations are topologically valid, hence an "exact" (let alone "correct along some dimension") result cannot be achieved for all combinations of inputs.
[0] https://dev.opencascade.org/content/fuzzy-boolean-operations
They don’t just lean into epsilons: the session context tolerance is used for almost every single point classification operation in geometric kernels, and many primitives carry their own accumulating error component for downstream math.
Even then, the current state of the art (in production kernels) is tolerance expansion, where the kernel goes through up to 7 expansion steps, retrying point classification until it just gives up. Those edge cases were some of the hardest parts of working on a kernel.
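For readers who haven't seen a kernel from the inside, a heavily simplified sketch of the retry loop in Rust (illustrative only, nothing like real kernel code; the name `nearest_within` and the 10x growth factor are made up):

    // `dists` are distances from a query point to candidate faces; return the
    // face the point is considered "on", if any, expanding the tolerance up to
    // 7 times before giving up.
    fn nearest_within(dists: &[f64], base_tol: f64) -> Option<usize> {
        let mut tol = base_tol;
        for _ in 0..7 {
            // take the closest face that falls inside the current tolerance
            let hit = dists
                .iter()
                .enumerate()
                .filter(|(_, d)| d.abs() <= tol)
                .min_by(|(_, x), (_, y)| x.abs().partial_cmp(&y.abs()).unwrap());
            if let Some((i, _)) = hit {
                return Some(i);
            }
            tol *= 10.0; // nothing within tolerance: expand and retry
        }
        None // all expansion steps exhausted: give up
    }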
This is a fundamentally unsolvable problem with floating point math (I worked on both Parasolid and ACIS in the 2000s). Even the ray-box intersection example TFA gives is a long-standing thorn: raytracing is one of the last fallbacks for nasty point classification problems.
The GP wasn't wrong. To "lean in" means to fully commit to, to go all in on (or, equivalently, to go all out on).
The Rust code in the assert_f64_eq macro is:
    if (a >= b && a - b < f64::EPSILON) || (a <= b && b - a < f64::EPSILON)
I'm the author of the Rust assertables crate. It provides floating-point assert macros much as described in the article: https://github.com/SixArm/assertables-rust-crate/blob/main/s...
If there's a way to make it more precise and/or specific and/or faster, or create similar macros with better functionality and/or correctness, that's great.
See the same directory for corresponding assert_* macros for less than, greater than, etc.
It's defined as the difference between 1.0 and the smallest representable number larger than 1.0. More usefully, it's the spacing between adjacent representable floats in the range 1.0 to 2.0.
Because floats get less precise at every integer power of two, it's impossible for two numbers greater than or equal to 2.0 to be epsilon apart. The spacing between 2.0 and the next larger number is 2*epsilon.
That means `abs(a - b) <= epsilon` is equivalent to `a == b` for any a or b greater than or equal to 2.0. And if you use `<` then the limit will be 1.0 instead.
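A minimal demonstration of that cutoff in Rust (using `next_up`, which was stabilized together with `next_down`):

    fn eps_eq(a: f64, b: f64) -> bool {
        (a - b).abs() <= f64::EPSILON
    }

    fn main() {
        let a = 100.0_f64;
        let b = a.next_up(); // the very next representable float above a
        assert!(b - a > f64::EPSILON); // the gap between neighbors already exceeds EPSILON
        assert!(!eps_eq(a, b)); // so eps_eq rejects even adjacent floats here...
        assert!(eps_eq(a, a)); // ...and is effectively just == at this scale
    }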
Epsilon is the wrong tool for the job in 99.9% of cases.
So I'd probably rewrite that code to first find the ULP of the larger of the absolute values of a and b, and then assert that their difference is less than or equal to that.
Edit: Or maybe the smaller of the two absolute values; I haven't totally thought through the consequences. It might not matter, because the ULPs will only differ when the numbers are significantly apart, and then it doesn't matter which one you pick. Perhaps you can just always pick the first number and use its ULP.
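Roughly this, as a sketch (the helper names are mine, not from the crate; finite inputs assumed):

    // The spacing between |x| and the next float up is one ULP of x.
    fn ulp_of(x: f64) -> f64 {
        let ax = x.abs();
        ax.next_up() - ax
    }

    // Equal to within one ULP of the larger magnitude.
    fn ulp_eq(a: f64, b: f64) -> bool {
        (a - b).abs() <= ulp_of(if a.abs() >= b.abs() { a } else { b })
    }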
Multiplying epsilon by the largest number you are dealing with is a strategy that makes using epsilons at least somewhat logical.
epsilons are fine in the case that you actually want to put a static error bound on an equality comparison. numpy's relative errors are better for floats at arbitrary scales (https://numpy.org/doc/stable/reference/generated/numpy.isclo...).
edit: ahh i forgot all about ulps. that is what people often confuse ieee eps with. also, good background material in the necronomicon (https://en.wikipedia.org/wiki/Numerical_Recipes).
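for reference, numpy.isclose's documented formula transcribed to Rust with numpy's default tolerances (note that it's asymmetric in b):

    fn isclose(a: f64, b: f64) -> bool {
        let (rtol, atol) = (1e-5, 1e-8); // numpy's defaults
        (a - b).abs() <= atol + rtol * b.abs()
    }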
I would suggest that "equals" actually is for "exactly equals" as in (a == b). In many pieces of floating point code this is the correct thing to test. Then also add a function for "within range of" so your users can specify an epsilon of interest, using the formula (abs(a - b) < eps). You may also want to support multidimensional quantities by allowing the user to specify a distance metric. You probably also want a relative version of the comparison in addition to an absolute version.
Auto-computing epsilons for an equality check is really hard and depends on the usage, as well as the numerics of the code that is upstream and downstream of the comparison. I don't see how you would do it in an assertion library.
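To sketch the shape such an API could take (names are hypothetical, not from the assertables crate), the multidimensional case just means letting the caller supply the metric:

    fn within<T>(a: &T, b: &T, eps: f64, dist: impl Fn(&T, &T) -> f64) -> bool {
        dist(a, b) < eps
    }

    fn main() {
        // scalar case: absolute difference as the metric
        assert!(within(&0.1_f64, &(0.1 + 1e-12), 1e-9, |a, b| (a - b).abs()));

        // 2-D case: Euclidean distance as the metric
        let (p, q) = ([0.0_f64, 1.0], [0.0_f64, 1.0 + 1e-12]);
        assert!(within(&p, &q, 1e-9, |a, b| {
            a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f64>().sqrt()
        }));
    }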
    // Precise matching:
    assert_f64_eq!(a, 0.1, Steps(2))
    // same as: assert!(a == 0.1.next_down().next_down())

    // Number of digits (after the period) that must match:
    assert_f64_eq!(a, 0.1, Digits(5))

    // Relative error:
    assert_f64_eq!(a, 0.1, Rel(0.5))

The usual pattern is `abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)` to avoid both large-value and near-zero pitfalls.
https://github.com/python/cpython/blob/d61fcf834d197f0113a6a...
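In Rust that pattern is a one-liner mirroring the formula above (CPython's math.isclose additionally special-cases infinities, which this sketch doesn't):

    fn is_close(a: f64, b: f64, rel_tol: f64, abs_tol: f64) -> bool {
        (a - b).abs() <= (rel_tol * a.abs().max(b.abs())).max(abs_tol)
    }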
For my own soft floating-point math library, I expect the value to be off by some percentage, not just off by epsilon. And so I have my own almostSame method [1] which accounts for that and is quite a bit more complex. Actually there are multiple such methods. But well, that's just my own use case.
[1] https://github.com/thomasmueller/bau-lang/blob/main/src/test...
`a.to_bits() == b.to_bits()`
Alternatively, use `partial_cmp` and fall back to bit equality if it returns None.
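A quick illustration of what bit equality buys (and costs) you:

    fn bit_eq(a: f64, b: f64) -> bool {
        a.to_bits() == b.to_bits()
    }

    fn main() {
        assert!(bit_eq(f64::NAN, f64::NAN)); // identical NaN bit patterns compare equal
        assert!(!bit_eq(0.0, -0.0)); // signed zeros differ bitwise...
        assert!(0.0 == -0.0); // ...even though == considers them equal
    }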
https://numpy.org/doc/stable/reference/generated/numpy.allcl...
    if a.abs() + b.abs() >= (a - b).abs() * 2f64.powi(48)
It remains accurate for small and for big numbers. 48 is slightly less than 52.
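A quick check of that at both extremes (the function name is mine):

    fn roughly_eq(a: f64, b: f64) -> bool {
        a.abs() + b.abs() >= (a - b).abs() * 2f64.powi(48)
    }

    fn main() {
        assert!(roughly_eq(1e-20, 1e-20 * (1.0 + 1e-15))); // tiny scale
        assert!(roughly_eq(1e20, 1e20 * (1.0 + 1e-15))); // huge scale
        assert!(!roughly_eq(1.0, 1.0 + 1e-12)); // relative error too large
    }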
If all you're comparing is the result from the same operations, you _may_ be fine using equality, but you should really know that you're never getting a number from an uncontrolled source.
Done some reading. Thanks to the article for waking me up to this fact at least. I didn't realize that the epsilon provided by languages tends to be one that only works around 1.0, and that if you want to use epsilons globally (which the article would say is generally a bad idea) you need to be more dynamic as your ranges, and potential errors, increase.
I only played it rather than modded it, so happy to be corrected or further enlightened, but seems like an interesting problem to have to solve.
Edit: sure enough, it was actually discussed here: https://news.ycombinator.com/item?id=26938812
I'm unconvinced. Doesn't this just replace the need to choose a suitable epsilon with the need to choose the right number of bits to strip, with the latter affording far fewer choices for the degree of "roughness" than the former?
I think I'll just use scaled epsilon... though I've gotten lots of performance wins out of direct bitwise trickery with floats (e.g., fast rounding with mantissa normalization and casting).
This breaks down across the positive/negative boundary, but honestly, that's probably a good property. -0.00001 is not all that similar to +0.00001 despite being close on the number line.
It also requires that the inputs are finite (no INF/NAN), unless you are okay saying that FLT_MAX is roughly equal to infinity.
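For concreteness, the scheme under discussion is roughly this (a reconstruction, not TFA's exact code): mask off the low mantissa bits and compare the rest exactly. Both caveats are visible in it: the sign bit is never masked, and NaN/INF bit patterns are compared like any other.

    fn strip_eq(a: f64, b: f64, bits: u32) -> bool {
        let mask = !0u64 << bits; // zero out the `bits` lowest mantissa bits
        (a.to_bits() & mask) == (b.to_bits() & mask)
    }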
Anything else is basically a nightmare for whoever has to maintain the code in the future.
Also, good luck with e.g. checking if points are aligned to a grid or the like without introducing a concept of epsilon _somewhere_.
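E.g., even the most innocent grid check ends up carrying a tolerance parameter (a trivial sketch):

    fn on_grid(x: f64, spacing: f64, eps: f64) -> bool {
        let nearest = (x / spacing).round() * spacing; // snap to the nearest grid line
        (x - nearest).abs() < eps // and there's the epsilon again
    }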