Imagine a game with bare-bones graphics and lighting, and an NN that converts it into something pretty. Indie developers could make AA-looking games, and all game developers could devote more effort to design and logic. Artists will still be needed for art direction and possibly fine-tuning, although fewer will be needed per game (and fewer developers too, with AI agents and better tools).
Related, ML also has potential for AI enemies (and allies). Lots of players still prefer multiplayer, in part because humans are more realistic enemies (but also because they want to beat real humans); but multiplayer games struggle because good netcode is nontrivial, servers are expensive, some players are obnoxious, and most games don’t have a consistent enough playerbase.
The Art Of Braid: Creating A Visual Identity For An Unusual Game
https://www.gamedeveloper.com/design/the-art-of-braid-creati...
First, porn.
Second, artificial botting to make your game look active.
Third, hire an art developer in India, VPN them into your AI tool, and fire them when the game is done.
You really should check the prescription on your rose-colored glasses.
MMOs have been using artificial players produced by the developers since at least the early EverQuest days.
The choice space in an MMO isn't that great; it's trivial to make a realistic-acting NPC that mimics player behaviors and hand-wave away its poor language capability as the other player simply not understanding your chosen language.
NCSoft was involved in things like this in the early Lineage days and was fined for it. I would have a real hard time believing this behavior is now uncommon, given how low-hanging the fruit is.
Deck an NPC-Player in the most expensive cash-shop goods and have it stand around in a social area doing emotes but otherwise silent just to make the other players jealous and apt to purchase goods -- self-generated whale-bait.
There's no reason to involve an NN in this one. We've had convincing bots with varied behaviours for ages.
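For illustration, the non-NN whale-bait bot described above really is a few lines of scripting. Everything here is made up for the sketch (the emote table, the `send_emote` hook); a real bot would call the game's own interfaces:

```python
import random
import time

# Sketch of the sort of scripted "player" described above: no NN, just a
# timer and a weighted emote table. Emote names and send_emote are invented
# for illustration; a real bot would hook the game's chat/emote interface.
EMOTES = {"dance": 0.4, "wave": 0.3, "sit": 0.2, "cheer": 0.1}

def send_emote(name: str) -> None:
    print(f"/{name}")  # stand-in for the game's emote command

def loiter_forever(min_idle: float = 20, max_idle: float = 90) -> None:
    while True:
        time.sleep(random.uniform(min_idle, max_idle))  # stand around, silent
        emote, = random.choices(list(EMOTES), weights=list(EMOTES.values()))
        send_emote(emote)
```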
https://community.arm.com/arm-community-blogs/b/mobile-graph...
An upscaling solution mainly targeted at mobile gaming, with an 'AI pipeline' for upscaling graphics (they claim 540p upscaled to 1080p at 4 ms per frame). I'm a bit skeptical because this is a press release for chips that are still in the works, claimed to be releasing in Dec-26 and landing in actual devices after that. So it sounds more like a strategic/political move (perhaps stock-price-related manoeuvring).
An Unreal Engine 5 plugin will allow previewing the upscaled effects, though, which will be nice for game developers.
And there seems to be a lot of hate towards DLSS from the gaming community.
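For scale, here's the claimed 4 ms pass set against a mobile frame budget (my arithmetic, not Arm's):

```python
# Sanity check on the claimed numbers (assumes the 4 ms figure covers the
# full 540p -> 1080p pass and the game targets 60 fps).
frame_budget_ms = 1000 / 60            # ~16.7 ms per frame at 60 fps
upscale_ms = 4.0                       # Arm's claimed upscaling cost
print(f"{upscale_ms / frame_budget_ms:.0%} of the frame budget")  # ~24%
# At 120 fps the same 4 ms pass would be ~48% of the budget, which is why
# the per-frame cost matters so much on mobile.
```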
- GPU compute units (used for LLMs)
- GPU "neural accelerators"/"tensor cores" etc (used for video game anti-aliasing and increasing resolution or frame rate)
- NPUs (not sure what they are actually used for)
And of course models can also be run, without acceleration, on the CPU. It seems Arm believes it makes sense to go a different route for mobile gaming.
Google and Apple have been doing NPUs for a while now.
Extension spaghetti is fine; I'd much rather end up with AI acceleration being handled like Vulkan than suffer a fate like Metal or DirectX.
Same applies to proprietary 3D APIs.
There is a reason why only FOSS devs make such a big fuss about APIs, while professional game studios keep talking at GDC about how to push each piece of hardware to its limits, as they have since the days of 8-bit heterogeneous home game systems.
Neural nets are great for replacing manually-written heuristics or complex function approximations, and 3d rendering is _full_ of these heuristics. Texture compression, light sampling, image denoising/upscaling/antialiasing, etc.
Actual "generative" API in graphics is pretty rare, at least currently. That's more of an artist thing. But there's a lot of really great use cases for small neural networks (think 3-layer MLPs, absolutely nowhere near LLM-levels of size) to approximate expensive or manually-tuned heuristics in existing rendering pipeline, and it just so happens that the GPUs used for rendering also now come with dedicated NPU accelerator things.
> Most of these corner cases can be resolved by providing the model with enough training data without increasing the complexity and cost of the technique. This also enables game developers to train the neural upscalers with their content, resulting in a completely customized solution fine-tuned for the gameplay, performance, or art direction needs of a particular title.
Source: https://developer.arm.com/documentation/111019/latest/
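Roughly what "train the neural upscalers with their content" could mean in practice. A minimal PyTorch sketch, assuming per-title fine-tuning on captured low/high-res frame pairs; the architecture and training loop here are my illustration, not Arm's actual toolchain:

```python
import torch
import torch.nn as nn

# Hypothetical tiny 2x upscaler fine-tuned on a title's own frames.
# The shape below (conv stack + PixelShuffle) is an assumption for
# illustration, not Arm's actual Neural Super Sampling network.
class TinyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # e.g. 540p -> 1080p
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# In practice these would be random crops from frame pairs captured
# in-engine: the same camera rendered at low and high resolution.
low = torch.rand(4, 3, 128, 128)    # stand-in for low-res crops
high = torch.rand(4, 3, 256, 256)   # stand-in for matching high-res crops

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(low), high)
    loss.backward()
    opt.step()
```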
HiSilicon Kirin 970 had an NPU in like 2017. I think almost every performance-oriented Arm chip released in the last 5 years has had some kind of NPU on it.
I suspect they are using Arm here to mean "Arm-the-company-and-brand" not "Arm the architecture", which is both misleading and makes the claim completely meaningless.
In all recent documents issued by the Arm company, "Arm" is used for the architecture, i.e. the architecture variants are named "Armv6", "Armv7", "Armv8", "Armv9".
Samsung Exynos uses AMD RDNA, but I am not even sure whether those GPUs are being used at all. Nvidia seems to have no interest in the market.
- https://github.com/KhronosGroup/Vulkan-Docs/blob/5d386163f25... Adding tensor ops to the shader kernel vocabulary (SPIR-V). Promising.
- https://github.com/KhronosGroup/Vulkan-Docs/blob/5d386163f25... Adding a TensorFlow/NNAPI-like graph API. Good luck.
The novel thing seems to be that they will make it a part of the GPU? Really? Even my Samsung Galaxy S7 (quite a few years old by now) supported Vulkan and ran neural nets pretty well with Vulkan, etc.
Where is the novelty?