Technologies like ARM AFRC and PVRIC4 can only be used on modern flagship devices. Since flagship memory bandwidth isn't particularly strained to begin with, we end up spending a massive amount of effort on optimizations that only benefit a fraction of users. In most cases, teams are simply unwilling to pay that development cost.
The driver behavior of PVRIC4 perfectly encapsulates the current state of mobile GPU development:

1. The API promises support for flexible compression ratios.

2. The driver silently ignores your request and defaults to 1:2 regardless.

3. You only discover this because a PowerVR developer quietly confirmed it in a random comment section.
This is a microcosm of the "texture compression hell" we face. Beyond the mess of format fragmentation, even the driver layer is now fragmented. You can't trust the hardware, and you can't trust the software.
While the test results for ARM AFRC are genuinely impressive—it's not easy to outperform a software encoder in terms of quality—it remains problematic. As long as you cannot guarantee consistent behavior for a single codebase across different vendors, real-time CPU and GPU encoders remain the only pragmatic choice.
For now, hardware compression encoders are "nice-to-haves" rather than reliable infrastructure. I'm curious whether anyone has used AFRC in a production environment. If so, I'd love to know how your fallback strategy was designed.
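To make the fallback question concrete, here is a minimal sketch of one possible policy. Everything in it is hypothetical: the capability flags stand in for whatever your platform layer actually reports (a Vulkan extension query, a compute-feature check, etc.), and the self-test flag is there to guard against the silent-downgrade behavior described above.

```cpp
#include <cassert>

// Hypothetical capability flags a renderer might probe at startup.
// None of these names come from a real API.
struct GpuCaps {
    bool fixed_rate_compression;  // AFRC/PVRIC-style hardware encoder present
    bool fixed_rate_verified;     // runtime self-test confirmed the requested ratio
    bool compute_encoder_ok;      // compute-shader encoder known-good on this GPU
};

enum class EncodePath { Hardware, GpuCompute, Cpu };

// One possible fallback policy: only trust the hardware encoder when a
// runtime self-test confirmed the driver honored the requested ratio
// (guarding against a PVRIC4-style silent 1:2 fallback); otherwise
// prefer a GPU compute encoder, with CPU encoding as the last resort.
EncodePath choose_encode_path(const GpuCaps& caps) {
    if (caps.fixed_rate_compression && caps.fixed_rate_verified)
        return EncodePath::Hardware;
    if (caps.compute_encoder_ok)
        return EncodePath::GpuCompute;
    return EncodePath::Cpu;
}
```

The self-test bit is the important design choice: given the driver behavior described above, advertised support alone is not enough to commit to the hardware path.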
This is in a frustrating state at the moment. CPU compression is way too slow. Some people have demoed on-the-fly GPU compression using a compute shader, but annoyingly there is (or at least was at the time) no way in the GPU APIs to `reinterpret_cast` the compute output as a compressed texture input, meaning the whole thing had to be dragged down to CPU memory and uploaded again.
We hit a weird case on Adreno 530: bizarre GPU instruction-set issues with the compute shader compressor that only manifested on Adreno 53x. We ended up having to add a device-detection path and fall back to CPU compression, which defeated much of the point.
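The device-detection path was essentially a renderer-string blocklist. A sketch of what that can look like, assuming the string comes from `glGetString(GL_RENDERER)` and follows Qualcomm's usual "Adreno (TM) NNN" naming (verify against real devices before relying on this):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Returns true if the renderer string identifies an Adreno 53x GPU.
// Matches "53" followed by one more digit (530, 535, ...) and not
// preceded by a digit, so model numbers like 153x or 630 don't match.
bool is_adreno_53x(const std::string& renderer) {
    if (renderer.find("Adreno") == std::string::npos)
        return false;
    for (size_t i = 0; i + 2 < renderer.size(); ++i) {
        bool starts_53 = renderer[i] == '5' && renderer[i + 1] == '3';
        bool digit_after =
            std::isdigit(static_cast<unsigned char>(renderer[i + 2])) != 0;
        bool digit_before =
            i > 0 && std::isdigit(static_cast<unsigned char>(renderer[i - 1])) != 0;
        if (starts_53 && digit_after && !digit_before)
            return true;
    }
    return false;
}

// Callers would then route around the compute path:
//   if (is_adreno_53x(renderer)) use_cpu_compressor();
```

String-matching `GL_RENDERER` is fragile by nature, which is exactly why needing it defeats much of the point of a GPU encoder in the first place.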
Direct3D called its variants DXTn, later renamed to BCn. From what I recall, Microsoft had some sort of patent licensing deal that implicitly allowed Direct3D implementers to support these formats.
OpenGL had an extension called GL_EXT_texture_compression_S3TC[2].
Under "IP Status", the extension specification explicitly warns that even if you are, say, shipping graphics cards with Direct3D drivers that support S3TC, you may not legally be able to simply turn that feature on in your OpenGL driver.
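Which is why, in practice, apps had to probe for the extension at runtime rather than assume it. A minimal sketch of that check, taking the extension list as a plain string (in a real app it would come from `glGetString(GL_EXTENSIONS)`, or `glGetStringi` on core profiles):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Checks a space-separated OpenGL extension list for an exact token.
// Exact matching matters: a plain substring search would wrongly treat
// "GL_EXT_texture_compression_s3tc_srgb" as a hit for the base extension.
bool has_extension(const std::string& ext_list, const std::string& name) {
    std::istringstream iss(ext_list);
    std::string token;
    while (iss >> token)
        if (token == name)
            return true;
    return false;
}

// Usage: gate the S3TC upload path on the check.
//   if (has_extension(exts, "GL_EXT_texture_compression_s3tc"))
//       upload_dxt_compressed_textures();
```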
[1] https://en.wikipedia.org/wiki/S3_Texture_Compression#Patent
[2] https://registry.khronos.org/OpenGL/extensions/EXT/EXT_textu...