Here's a related discussion on what 'pixelated' should mean from the CSS working group:
https://github.com/w3c/csswg-drafts/issues/5837
(every so often browsers break/rename how nearest-neighbour filtering works. I hope at some point it stabilizes lol - I note that in the linked discussion nobody else seems to care about backwards compatibility ...)
With a 4K display the pixel density is high enough that virtually everything looks good scaled this way, though once you go above SD content you're usually dealing with 720p or 1080p, both of which divide evenly into 2160p anyway.
It's surprising how often I see bad pixel art scaling given how easy it is to fix.
Single bilinear samples can lose information and leave out pixels of the higher-res image; it's essentially a worse triangle filter.
Can you do [A B] -> [A 0.5*(A+B) B] 1.5x upscaling with a triangle filter? (I think this is not possible, but I might be wrong).
A triangle filter also samples too many pixels and makes a blurry mess of pixel-art images/sprites/...
Linear downscaling, under the assumptions of pixel-center mapping and clamp-to-edge, always simplifies into a polyphase filter with position-independent coefficients that uses at most the current input pixel and the previous one; integer upscaling obviously reduces to such a filter too.
Therefore any form of "sharp bilinear" that does not use bilinear upscaling reduces to such a polyphase filter. [A B] -> [A 0.5*(A+B) B] is equivalent to a 2x integer upscale followed by a 0.75x bilinear scale (= 1.5x of the input), and it works on GPUs without fragment shaders too.
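To make that equivalence concrete, here's a small numpy sketch (pixel-center mapping and clamp-to-edge are the assumptions stated above; the helper names are mine):

    import numpy as np

    def nn_upscale(row, factor):
        # integer-factor nearest-neighbour upscale: just repeat each pixel
        return np.repeat(row, factor)

    def bilinear_resample(row, out_len):
        # plain 2-tap linear resample with pixel-center mapping and clamp-to-edge
        x = (np.arange(out_len) + 0.5) * len(row) / out_len - 0.5
        x = np.clip(x, 0, len(row) - 1)
        lo = np.floor(x).astype(int)
        hi = np.minimum(lo + 1, len(row) - 1)
        frac = x - lo
        return (1 - frac) * row[lo] + frac * row[hi]

    A, B = 10.0, 90.0
    src = np.array([A, B])

    # "sharp bilinear" at 1.5x: 2x nearest neighbour, then bilinear down to 3 pixels
    print(bilinear_resample(nn_upscale(src, 2), 3))   # [10. 50. 90.] == [A, 0.5*(A+B), B]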
First, upscaling with a filter kernel (weighted average) doesn't make as much sense, because you aren't weighting multiple pixels to produce a single pixel; you are interpolating. So "upscaling with a triangle filter" isn't something practical.
Second, lots of signal-processing techniques that can technically be applied to pixels on a row-by-row basis don't work well visually and don't make much sense when you're trying to get useful results. This is why something like a Fourier transform is not the backbone of image processing.
Polyphase filtering doesn't make any sense here: you have access to all the data verbatim, and you want to use it all when you upscale or downsample. There is no compression and no analog signal that needs to be sampled.
Third, any filter kernel is going to use the pixels under its width/support. Using 'too many pixels' isn't something that makes sense and isn't the problem; how they are weighted when scaling an image down is what matters. If you want a sharper filter you can always use one. What I actually said was that linearly interpolating samples to downsample an image doesn't make sense and is like using a triangle filter, or half of a triangle filter.
This all seems to be workarounds for what people probably actually want if they are trying to get some sharpness, which is something like a bilateral filter that weights similar pixels more.
> Polyphase filtering (...) There is no compression and no analog signal that needs to be sampled.
The term "polyphase scaling" is used at least by AMD: https://docs.amd.com/r/en-US/pg325-v-multi-scaler/Polyphase-... , that's why I used the term.
> What I actually said was that linearly interpolating samples to downsample an image doesn't make sense and is like using a triangle filter, or half of a triangle filter.
In isolation, yes, it doesn't make sense, but linear downsampling is a mere implementation detail here: "4x nearest neighbor and then downscaling that to the display resolution using bilinear" is an upscaling filter (unless the output resolution is lower) that doesn't discard any pixel of the initial input.
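For reference, a minimal Pillow sketch of that pipeline; the 4x factor and the sizes are just illustrative, and Image.Resampling needs Pillow >= 9.1 (older versions spell it Image.NEAREST / Image.BILINEAR):

    from PIL import Image

    def sharp_bilinear(img, display_size, prescale=4):
        # step 1: integer nearest-neighbour prescale (4x, as described above)
        big = img.resize((img.width * prescale, img.height * prescale),
                         Image.Resampling.NEAREST)
        # step 2: bilinear resize down (or slightly up) to the display resolution
        return big.resize(display_size, Image.Resampling.BILINEAR)

    # hypothetical usage: a 640x480 frame onto a 1600x1200 region
    # out = sharp_bilinear(Image.open("frame.png"), (1600, 1200))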
This looks like a custom video scaler, and the context is custom filtering when one image is only slightly different in dimensions from another.
How does that apply here?
> In isolation, yes, it doesn't make sense, but linear downsampling is a mere implementation detail here
There is no such thing as "linear downsampling". There are box filters, triangle filters and other weighted averages, and then there are more sophisticated weighting schemes that take into account more than just distance.
> that doesn't discard any pixel of the initial input.
It creates more data and then discards it by sampling too sparsely, but the sparse samples get linearly interpolated so you don't notice the aliasing as much. This is not a technically sound way to upscale an image; it can only seem to work if you compare it to a poor enough example.
If you want to see a better example, look at bilateral upscaling, which would weight similar pixels more heavily when doing interpolation and so should keep edges sharper. You can probably see this in motion with the right settings on a recent TV.
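To show what that weighting could look like, here's a toy 1-D sketch; the 2-tap support, the nearest-neighbour reference value and sigma_r are all illustrative choices on my part, not any particular TV's algorithm:

    import numpy as np

    def bilateral_upscale_1d(row, out_len, sigma_r=20.0):
        # spatially this is the usual 2-tap linear weighting, but each neighbour is
        # also down-weighted by how different it is from the nearest source pixel,
        # so dissimilar pixels across an edge contribute less and the edge stays sharp
        x = np.clip((np.arange(out_len) + 0.5) * len(row) / out_len - 0.5, 0, len(row) - 1)
        lo = np.floor(x).astype(int)
        hi = np.minimum(lo + 1, len(row) - 1)
        frac = x - lo
        ref = row[np.where(frac < 0.5, lo, hi)]          # nearest-neighbour reference value
        w_lo = (1 - frac) * np.exp(-(row[lo] - ref) ** 2 / (2 * sigma_r ** 2))
        w_hi = frac * np.exp(-(row[hi] - ref) ** 2 / (2 * sigma_r ** 2))
        return (w_lo * row[lo] + w_hi * row[hi]) / (w_lo + w_hi)

    edge = np.array([0.0, 0.0, 255.0, 255.0])
    # the 0/255 step stays close to 0 and 255; plain bilinear would give
    # roughly 64 and 191 at the two samples straddling the edge
    print(bilateral_upscale_1d(edge, 8))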
"Bilinear downscaling" also doesn't make sense because scaling an image down means doing a weighted average of the multiple pixels going into a single pixel. Pixels being weighted linearly based on distance would be a triangle filter.
> Aliasing is therefore more limited and controlled.
Aliasing doesn't need to happen at all with a reasonable filter width. If someone is interpolating between four pixels, that's a triangle filter with four samples.
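For concreteness, here's a 4-into-1 downscale written as a weighted average, with box weights versus triangle weights (the pixel values are made up for illustration):

    import numpy as np

    px = np.array([12.0, 40.0, 200.0, 250.0])   # four source pixels collapsing into one output pixel

    box = np.full(4, 0.25)                       # box filter: equal weights
    tri = np.array([1.0, 3.0, 3.0, 1.0]) / 8.0   # triangle: weight falls off linearly with distance from the output pixel's centre

    print(px @ box)   # 125.5
    print(px @ tri)   # 122.75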
On that topic, Pillow's so-called bilinear isn't actually bilinear interpolation [1][2] (quick check below, after the links); same with Magick IIRC (but Magick at least gives you -define filter:blur=<value> to counteract this)
[1] https://pillow.readthedocs.io/en/stable/releasenotes/2.7.0.h...
[2] https://github.com/python-pillow/Pillow/blob/main/src/libIma...
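If anyone wants to check [1] for themselves, here's a quick sketch (assuming Pillow >= 9.1 for the Resampling enum): downscale an 8x1 row whose only bright pixel is one that a pure 2-tap bilinear sample would never read, and see that it still contributes.

    import numpy as np
    from PIL import Image

    # 8x1 row: the only bright pixel is at x=0; a pure 2-tap bilinear sample at a
    # 4x downscale would only read source pixels 1-2 and 5-6 and so would drop it
    row = np.zeros((1, 8), dtype=np.uint8)
    row[0, 0] = 255
    img = Image.fromarray(row, mode="L")

    small = img.resize((2, 1), Image.Resampling.BILINEAR)
    print(list(small.getdata()))
    # prints a nonzero first value rather than [0, 0]: Pillow widens the triangle
    # filter to the scale factor, so every source pixel contributes to the output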