For those, some "free FPS" (as in frames per second) is a good thing.
edit: clarified the two FPS
Framegen is also a good fit for low-end hardware like the Steam Deck, which can hit 30 or 45 FPS in stuff like Elden Ring but is far from the 90 Hz max of the OLED model's panel. For a handheld, trading a bit of 720p visual clarity for locked 90 Hz gameplay is worth it if you can get it working.
How about if the two frames are 100% identical?
Does either of these situations differ substantially from what is being discussed, wherein the render pipeline can only produce a new render 45 times per second?
If I understand what you describe, this is generating a frame "in the past", an average of two frames you already have, so not very useful? If you already have frames #1 and #2, you want to guess frame #3, not generate frame #1.5.
The higher the "real frame" rate, the smaller the differences from one frame to the next. That makes those differences easier to predict, and a bad prediction easier to hide. On the other hand, at 10 FPS you have to guess 100 ms worth of changes to the frame, which is a lot to guess, and a lot to hide if the algorithm gets it wrong.
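To make the arithmetic concrete, here's a toy sketch in Python (my own strawman, nothing like what real framegen implementations do): linear extrapolation from the last two frames, plus the "guess window" at various native frame rates.

    import numpy as np

    def extrapolate_next(frame1: np.ndarray, frame2: np.ndarray) -> np.ndarray:
        # Guess frame #3 by continuing the frame1 -> frame2 change,
        # assuming motion stays linear over the whole guess window.
        f1 = frame1.astype(np.float32)
        f2 = frame2.astype(np.float32)
        return np.clip(f2 + (f2 - f1), 0, 255).astype(np.uint8)

    for native_fps in (10, 45, 90):
        print(f"{native_fps} FPS -> guessing {1000 / native_fps:.0f} ms ahead")

The linearity assumption has to hold for 100 ms at 10 FPS but only ~11 ms at 90 FPS, which is the whole point above.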
In my opinion it is quite difficult to provide a definition of "fps" that somehow makes 45-fps-native-with-frame-doubling be counted as 90 but doesn't also make either of the ludicrous examples I presented be counted as 90.
A measure of "FPS effectiveness" sounds interesting: how much detail, change, and information can you discretely convey per second, relative to what the game is continuously generating?
A Nyquist of sorts. Are you just duplicating samples? Are you sampling a high frequency signal (fast motion in the game) at high enough rate (lots of discrete FPS)?
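Spelled out, the analogy is something like this strawman check (my invention, not a standard metric): the sampling theorem needs more than 2x the highest motion frequency in genuinely new samples, and duplicated or interpolated frames don't add samples.

    def nyquist_ok(new_frames_per_sec: float, motion_freq_hz: float) -> bool:
        # Only frames carrying new information count as samples.
        return new_frames_per_sec > 2 * motion_freq_hz

    print(nyquist_ok(45, 30))  # False: 45 real frames/s can't capture 30 Hz motion
    print(nyquist_ok(90, 30))  # True, but only if all 90 are real frames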
"90fps at 95% fidelity" is a meaningful way to describe performance. AFAIK nobody measures this when discussing xess or dlss or fsr.
I've only seen videos, so from a somewhat unrealistic perspective, it seems like an acceptable compromise for low end hardware in particular.
Boosting 120hz to 240hz admittedly seems silly.
It's pointing out the absurdity of treating "45fps plus 1-for-1 frame generation" as if it were in any sense "90fps". It's not, and you aren't hitting a 90Hz refresh rate target any more with it than you were without it. In point of fact, it lowers real FPS, because it consumes resources that would otherwise have been available to the render pipeline.
I wish reviewers in particular would stop saying e.g. "120fps with DLSS FG enabled" and instead call out the original render rate. It makes the discourse very confusing.
At 100 Hz or less, I've yet to experience frame generation in any form that doesn't result in unacceptably floaty input relative to the same system with framegen disabled.
So arguably you never need frame gen for a game, since it only really works when the base frame rate is already pretty good.
There's this article from Unreal on the topic: https://www.unrealengine.com/en-US/tech-blog/game-engines-an...
If you read their proposed solutions, it's quite clear they only have patchy workarounds; the inability to actually pre-compile the needed PSOs and avoid shader and traversal stutter is architectural. These engines also stutter on console, but it's less noticeable there since performance is generally much lower anyway.
XeSS is actually pretty great: I played Talos Principle 2, a UE5 game, on the Steam Deck at 800p/30 FPS thanks to XeSS.
I get that people want more real frames rather than more "fake" frames, but in that case you wouldn't be buying integrated graphics; and if you did end up with an iGPU, you'd be aware of its limits and happy for any improvements arriving via software.
It's like people let their hatred of AI and the LLM bubble blind them, and their brains can't separate good news from bad anymore.
DLSS is also AI and people like it.
People don't like framegen because manufacturers are not honest about it and use it for deceptive hype marketing. Anyone with a brain knows it introduces latency and is only useful if you're already at 40+ FPS; we also know companies will use it to pad benchmarks. NVIDIA themselves claimed the 5070 had 4090 performance because it supports framegen.
Unlike Nvidia, Intel explicitly doesn't use it to pad benchmarks.
Frame generation has access to motion vectors and can predict motion quite well.
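Roughly like this, if you want a picture of it (a crude Python sketch of the general idea, not Intel's or anyone's actual pipeline; the motion-vector layout here is my own assumption):

    import numpy as np

    def warp_half_step(frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
        # frame: (H, W, 3) uint8; motion: (H, W, 2) per-pixel offsets in
        # pixels from this real frame to the next one, as supplied by the
        # engine. Move everything halfway along its vector to synthesize
        # the in-between frame.
        h, w = frame.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_y = np.clip((ys - 0.5 * motion[..., 1]).round().astype(int), 0, h - 1)
        src_x = np.clip((xs - 0.5 * motion[..., 0]).round().astype(int), 0, w - 1)
        # Backward lookup approximates the forward warp; it ignores
        # occlusion/disocclusion, which is where the ML part earns its keep.
        return frame[src_y, src_x]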
I think it's a great thing to have, what's your concern tbh?
It's input lag that defines the experience, not frame time. I am comfortable with 30 FPS (sometimes fewer frames even fits the style of the game, e.g. Dishonored 2, Clair Obscur) as long as the game responds instantaneously.
https://youtu.be/f8piCZz0p-Y?si=OLq9iZUjuRMYKPDo
If you have never heard of it, the basic idea is that you make low FPS feel responsive in first person games by having the mouse motion warp the existing frame independently of when a new frame is actually rendered.
This could be combined with some AI techniques to help sort out the edge artifacts you get from this.
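Reduced to its dumbest form, it's something like this (assuming pure yaw/pitch mouse-look treated as a 2D shift; real reprojection uses the depth buffer and a proper homography):

    import numpy as np

    def reproject(last_frame: np.ndarray, mouse_dx_px: int, mouse_dy_px: int) -> np.ndarray:
        # Shift the most recent rendered frame by the latest mouse delta
        # so the view responds every display refresh, between real renders.
        shifted = np.roll(last_frame, shift=(-mouse_dy_px, -mouse_dx_px), axis=(0, 1))
        # np.roll wraps pixels around the edges; a real implementation
        # would leave a gap there and inpaint/stretch it -- those edge
        # artifacts are exactly what AI could help clean up.
        return shifted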
Completely agree, input lag is the most important thing.
Though, to be honest, with the amounts of UE5 slop out there, I'll probably need to give an unreasonable amount of my money to Nvidia or AMD sooner or later (since many games don't exactly let you turn off Lumen and Nanite).
It's just unfortunate that Intel themselves won't provide the much needed market competition in the form of a B770.
Something like Split Fiction is delightful, Satisfactory is satisfactory with the right settings, Incursion Red River is pushing it, STALKER 2 is barely playable and The Forever Winter is unplayable trash.
All at 1080p, by the way, on progressively lower graphics settings and recent drivers; the latter half can't hit a stable 60 FPS.
I even lowered everything and ran Forever Winter at 10% resolution scale, it still wasn’t a smooth 60, whereas others report better success on different hardware.
It's abysmal because something like War Thunder easily gets hundreds of FPS (with RT off), and the likes of Cyberpunk and KCD also run great. It saddens me how much of a mess UE5 can be; its defaults should be way different.
Maybe this was an issue with performance bugs in the early release version, not with the graphics card. This looks very playable to me:
For readable patterns it's probably fine; for reaction-window timing you're being misled.