Taylor series in general are a poor choice, and Pade approximants of Taylor series are equally poor. If you're going to use Pade approximants, they should be of the original function.
I prefer Chebyshev approximation: https://www.embeddedrelated.com/showarticle/152.php which is often close enough to the more complicated Remez algorithm.
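For concreteness, here's a rough sketch of a Chebyshev fit in Python (the degree and interval are my own toy choices, not from the linked article): fit asin on the usual range-reduced interval [-0.5, 0.5] and check the worst-case error against libm.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Degree-9 Chebyshev interpolant to asin on [-0.5, 0.5]; |x| > 0.5 would be
# handled separately via a half-angle identity before hitting the polynomial.
cheb = Chebyshev.interpolate(np.arcsin, 9, domain=[-0.5, 0.5])

xs = np.linspace(-0.5, 0.5, 10001)
max_err = np.max(np.abs(cheb(xs) - np.arcsin(xs)))
print(f"max abs error: {max_err:.1e}")
```

The Chebyshev interpolant's worst-case error is only marginally worse than the true minimax (Remez) polynomial of the same degree, which is why it's often "close enough".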
Regrettably, this is NOT from Hastings 1955. Hastings provides Taylor series and Chebyshev polynomial approximations. The OP's solution is a Padé approximation, and Padé approximants are not covered at all in Hastings.
> This amazing snippet of code was languishing in the docs of dead software, which in turn the original formula was scrawled away in a math textbook from the 60s.
was kind of telling for me. I have some background in this sort of work, and long ago concluded that there's pretty much nothing you can do to improve on existing code unless you have some new hardware or domain constraint, you're just looking for something quick-n-dirty, or you're willing to invest research-paper levels of time and effort. So to see someone call Abramowitz and Stegun "a math textbook from the 60s" is kind of funny. It has a similar level of importance to its field as Knuth's Art of Computer Programming. It's not an obscure text. Yeah, you might forget what all is in it if you don't use it often, but you'd go "oh, of course that would be in there, wouldn't it...."
The SLEEF Vectorized Math Library [1] does this and therefore can usually provide accuracy guarantees for the whole floating point range with a polynomial order lower than theory would predict.
Its asinf function [2] is accurate to 1 ULP for all single precision floats, and is similar to the `asin_cg` from the article, with the main difference being that the sqrt is applied to the input of the polynomial instead of the output.
[1] https://sleef.org/ [2] https://github.com/shibatch/sleef/blob/master/src/libm/sleef...
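A sketch of that kind of range reduction (not SLEEF's actual code; `math.asin` stands in for the fitted polynomial core): for |x| > 0.5 the half-angle identity asin(x) = pi/2 - 2*asin(sqrt((1 - |x|)/2)) feeds the polynomial an argument in [0, 0.5], with the sqrt on the input side.

```python
import math

def asin_reduced(x):
    a = abs(x)
    if a <= 0.5:
        r = math.asin(a)  # "polynomial" region, small arguments only
    else:
        # half-angle reduction: the polynomial sees sqrt((1 - a)/2) <= 0.5
        r = math.pi / 2 - 2 * math.asin(math.sqrt((1 - a) / 2))
    return math.copysign(r, x)

for x in (-1.0, -0.99, -0.3, 0.0, 0.7, 1.0):
    assert abs(asin_reduced(x) - math.asin(x)) < 1e-9
```

Keeping the polynomial's argument small is what lets a low-order fit hold its accuracy over the whole input range.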
In physics grad school, professors would occasionally allude to it, and textbooks would cite it ... pretty often. So it's a thing anyone with postgraduate physics education should know exists, but you wouldn't ever be assigned it.
And, lo and behold, the ASIN implementation is minimax.
float fast_asin(float x) {          /* needs <math.h>, <stdint.h>, <string.h> */
    float x2 = 1.0f - fabsf(x);
    uint32_t i;
    memcpy(&i, &x2, sizeof i);      /* bit-level reinterpret, no strict-aliasing UB */
    i = 0x5f3759df - (i >> 1);      /* fast-inverse-sqrt magic: inv ~ 1/sqrt(x2) */
    float inv;
    memcpy(&inv, &i, sizeof inv);
    return copysignf((float)M_PI_2 - (float)M_PI_2 * (x2 * inv), x);
}
Courtesy of evil floating point bithacks. JK
The forbidden magic
FTFY :P
Counterpoint from the man himself, "avoiding trigonometry":
Impressive that an LLM managed to produce the answer from a 7-year-old Stack Overflow answer all on its own! [1] This would have been the first search result for “fast asin” before this article was published.
So if you’ve got one that’s fast you have them both.
They could be orthogonal improvements, but if I were prioritizing, I'd go for SIMD first.
I searched for asin on Intel's intrinsics guide. They have an AVX-512 intrinsic `_mm512_asin_ps`, but it says "sequence" rather than single-instruction. Presumably the actual sequence they use is in some header file somewhere, but I don't know off-hand where to look, so I don't know how it compares to a SIMDified version of `fast_asin_cg`.
https://www.intel.com/content/www/us/en/docs/intrinsics-guid...
I skimmed the author's source code, and this is where I'd start: https://github.com/define-private-public/PSRayTracing/blob/8...
Instead of an `_objects`, I might try for a `_spheres`, `_boxes`, etc. (Or just `_lists` still using the virtual dispatch but for each list, rather than each object.) The `asin` seems to be used just for spheres. Within my `Spheres::closest_hit` (note plural), I'd work to SIMDify it. (I'd try to SIMDify the others too of course but apparently not with `asin`.) I think it's doable: https://github.com/define-private-public/PSRayTracing/blob/8...
I don't know much about ray tracers either (having only written a super-naive one back in college) but this is the general technique used to speed up games, I believe. Besides enabling SIMD, it's more cache-efficient and minimizes dispatch overhead.
edit: there's also stuff that you can hoist in this impl. Restructuring as SoA isn't strictly necessary to do that, but it might make it more obvious and natural. As an example, this `ray_dir.length_squared()` is the same for the whole list. You'd notice that when iterating over the spheres. https://github.com/define-private-public/PSRayTracing/blob/8...
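To make the SoA idea concrete, here's a hedged sketch (generic numpy, not the repo's actual code; the function name and layout are my own): sphere centers and radii live in contiguous arrays, one ray is tested against all spheres at once, and ray-only quantities like |d|^2 are computed a single time rather than per sphere.

```python
import numpy as np

def closest_hit(origin, direction, centers, radii):
    oc = centers - origin                     # (n, 3): origin -> each center
    a = np.dot(direction, direction)          # hoisted: same for every sphere
    b = -2.0 * (oc @ direction)               # (n,) quadratic coefficients
    c = np.einsum('ij,ij->i', oc, oc) - radii ** 2
    disc = b * b - 4.0 * a * c
    # nearest root where the discriminant is non-negative, else "no hit"
    t = np.where(disc >= 0,
                 (-b - np.sqrt(np.maximum(disc, 0.0))) / (2.0 * a),
                 np.inf)
    t = np.where(t > 1e-6, t, np.inf)         # discard hits behind the origin
    i = int(np.argmin(t))
    return (i, t[i]) if np.isfinite(t[i]) else (None, None)

centers = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 10.0]])
radii = np.array([1.0, 1.0])
hit, t = closest_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]), centers, radii)
# ray along +z: front face of the nearer sphere is at t = 4
```

The same layout is what makes a later SIMD (or GPU) port mechanical: each numpy line maps directly onto a vector instruction over lanes of spheres.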
With "primary" ray-tracing (i.e. camera rays, rays from surfaces to area lights), it's quite easy to batch them up and run SIMD operations on them.
But once you start doing global illumination, with rays bouncing off surfaces in all directions (and with complex materials, with multiple BSDF lobes, where lobes can be chosen stochastically), you start having to put a LOT of effort into sorting and batching rays such that they all (within a batch) hit the same objects or are going in roughly the same direction.
I think it's also more fun sometimes to take existing systems and to try to optimize them given whatever constraints exist. I've had to do that a lot in my day job already.
The value, of course, is contingent on there being a decent number of objects of a given type in the list, rather than a huge number of rays being sent to a small number of objects; I didn't evaluate that. If it's the other way around, the structure would be better flipped, and I don't know how reasonable that is with bounces (which maybe then aren't all being evaluated against the same objects?).
I was curious what the "sequence" would end up being, but my compiler is too old for that intrinsic. Even godbolt didn't help for gcc or clang, but it did reveal that icc produces a call https://godbolt.org/z/a3EsKK4aY
I like throwing away work I've done. Frees up my mental capacity for other work to throw away.
A fantastic amount of collective human thought has been dedicated to function approximations in the last century; Taylor methods are over 200 years old and unlikely to come close to state-of-the-art.
There’s generally not a faster version of inverse trig functions to inline, but it might optimize some other stuff out.
Unrelated to that, I’ve seen implementations (ie julia/base/special/trig) that use a “rational approximation” to asin, did you go down that road at any point?
So this article got me wondering how much accuracy is needed before computing a series beats pre-computed lookup tables and interpolation. Anyone got any relevant experience to share?
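One data point (the table size is my guess, nothing from the thread): a 1024-entry table with linear interpolation over the reduced interval [0, 0.5] already lands well under 1e-6 of libm, since linear-interpolation error shrinks roughly quadratically in the table size.

```python
import math

N = 1024
# Precomputed table: asin sampled uniformly on [0, 0.5]
TABLE = [math.asin(i / (N - 1) * 0.5) for i in range(N)]

def asin_lut(x):
    """Linear-interpolated lookup; only valid for 0 <= x <= 0.5."""
    f = x / 0.5 * (N - 1)
    i = min(int(f), N - 2)
    frac = f - i
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac

max_err = max(abs(asin_lut(k / 10000 * 0.5) - math.asin(k / 10000 * 0.5))
              for k in range(10001))
print(f"max abs error: {max_err:.1e}")
```

Whether this beats a polynomial in practice depends on cache behavior, not just the arithmetic count: the table competes with your working set, while a polynomial lives in registers.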
How much accuracy does ray tracing require?
On a general note, instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs, so replacing one with the other will not confer much benefit, as evidenced by the results. They're all typically implemented using LUTs, and it's hard to beat the performance of an optimized LUT, which is basically a multiplexer connected to some dedicated memory cells in hardware.
For a little toy ray tracer, it is pretty measly. But for a larger corporation (with a professional project) a 4% speed improvement can mean MASSIVE cost savings.
Some of these tiny improvements can also have a cascading effect. Imagine finding a +4%, a +2% somewhere else, +3% in neighboring code, and a bunch of +1%s here and there. Eventually you'll have built up something that is 15-20% faster. Down the road you'll come across those optimizations which can yield the big results too (e.g. the +25%).
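The arithmetic, for what it's worth (the count of +1%s is my assumption): speedups compound multiplicatively, not additively.

```python
# +4%, +2%, +3%, and five scattered +1%s, compounded:
speedup = 1.04 * 1.02 * 1.03 * 1.01 ** 5
print(f"{(speedup - 1) * 100:.1f}% faster")  # → 14.8% faster
```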
If you're alluding to gcc vs fbstring's performance (circa 15:43), then the performance improvement is not because fbstring is faster/simpler, but due to a foundational gcc design decision to always use the heap for string variables. Also, at around 16:40, the speaker concedes that gcc's simpler size() implementation runs significantly faster (3x faster at 0.3 ns) when the test conditions are different.
People have gotten PhDs for smaller optimizations. I know. I've worked with them.
> instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs.
What's the x86-64 opcode for arcsin?
Not required. FPATAN and SQRTS(S|D) are sufficient; the half-angle approach in the article is the recommended way.
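The standard identity behind "atan plus sqrt is enough" (this is the direct form, not the article's half-angle variant): asin(x) = atan(x / sqrt(1 - x^2)) for |x| < 1.

```python
import math

def asin_via_atan(x):
    # valid for |x| < 1; x = +/-1 would divide by zero and needs a special case
    return math.atan(x / math.sqrt(1.0 - x * x))

for x in (-0.9, -0.5, 0.0, 0.3, 0.999):
    assert abs(asin_via_atan(x) - math.asin(x)) < 1e-9
```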
> People have gotten PhDs for smaller optimizations. I know. I've worked with them.
I understand the can, not sure about the should. Not trying to be snarky, we just seem to be producing PhDs with the slimmest of justifications. The bar needs to be higher.
I couldn't disagree more. Sure, making a 4% faster asin isn't going to change the world, but if it makes all callers a teensy bit faster, multiplied by the number of callers using it, then it adds up. Imagine the savings for a hyperscaler if they managed to make a commonly used instruction 4% faster.
x86-64 had instructions for the exponential and logarithmic functions in Xeon Phi, but those instructions have been removed in Skylake Server and the later Intel or AMD CPUs with AVX-512 support.
However, no new instructions for trigonometric functions have been added since the Intel 80387, and those of the 8087 and 80387 are deprecated.
I've spent the past few months improving the performance of some work thing by ~8% and the fun I've been having reminds me of the nineties, when I tried to squeeze every last % of performance out of the 3D graphics engine that I wrote as a hobby.
I could imagine some graphics workloads tend to compute asin() repeatedly with nearby input values. But I'd guess the locality isn't local enough to matter; only eight double-precision floats fit in a cache line.
> if you use smaller data types than double or float. Even dropping interpolation worked fine,
That's kinda tautological, isn't it? Of course reduced precision is acceptable where reduced precision is acceptable... I guess I'm assuming double precision was used for a good reason; it often isn't :)
About dropping the interpolation: yes, you're right of course. I was thinking about speed, and there was no noticeable speed improvement from dropping interpolation. The asin calls are only a small fraction of everything.