Also, I’m not sure I understand the speedup. Is it latency or throughput?
Like looking at this example,
https://herbie.uwplse.org/demo/b070b371a661191752fe37ce0321c...
It is claimed that for the function f(x) = sqrt(x+1) - 1, accuracy is increased from 8.5% to 98% for alternative 5, which has f(x) = 0.5x.
Ok, so for x = 99 the right answer is sqrt(100) - 1 = 9, but 0.5 * 99 = 49.5, which doesn't seem too accurate to me.
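For what it's worth, the rewrite does make sense near x = 0: x/2 is the first-order Taylor expansion of sqrt(1+x) - 1, and that's exactly where the naive form cancels catastrophically. A quick sketch in Python (the claim that Herbie's accuracy figure averages over sampled inputs is my assumption, not something stated in the linked demo):

```python
# Why a tool might suggest 0.5*x for sqrt(x+1) - 1: near x = 0,
# sqrt(1+x) - 1 ≈ x/2 (first-order Taylor expansion), and the naive
# form suffers catastrophic cancellation there. For large x the
# rewrite is wildly wrong, so the reported accuracy presumably
# reflects behavior over a range of sampled inputs, not every input.
import math

def naive(x: float) -> float:
    return math.sqrt(x + 1.0) - 1.0

def rewritten(x: float) -> float:
    return 0.5 * x

# Near zero, the naive form loses all significant digits:
x = 1e-16
print(naive(x))      # 0.0 -- the addend vanishes entirely in x + 1.0
print(rewritten(x))  # 5e-17 -- correct to first order

# For large x, the rewrite is the one that's badly wrong:
x = 99.0
print(naive(x))      # 9.0 (exact)
print(rewritten(x))  # 49.5
```

So the 98% figure only seems defensible if the sampled input range is concentrated near zero, which is why specifying expected variable ranges matters so much.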
What would be cool is if you could somehow have this kind of analysis done automatically for your whole program, where it finds the needle-in-the-haystack expression that can be improved, assuming you gave expected ranges for your variables.
Some responses seem obvious:
* A useful exposition would describe what problems are being addressed and how an improvement is achieved, rather than offering examples without analysis.
* Robust, meaningful accuracy improvements should be made part of the underlying language, not attached to each application program in the form of a special-purpose library.
* If the issue is that people write bad floating-point expressions, a tutorial on writing floating-point code would be a better solution.
* If a programmer thinks a special-purpose library is a meaningful alternative to understanding floating-point code issues, then what stops Herbie from being a source of additional errors?