In this case, the creator wrote a custom 3D renderer and recreated the models/meshes to get as close an approximation of the N64 experience as possible onto the GBA.
I wouldn't call it a port necessarily ("recreation" seems more apt), but it's closer to that than a demake.
ClassiCube has a WIP GBA port, but according to the commit history it only hits 2 FPS as of now, and it isn't listed in the README yet.
On a related tangent, there's also Fromage, a separate Minecraft Classic clone written for the PS1 (https://chenthread.asie.pl/fromage/).
Still, bravo! I know getting it working and complete is the real goal, and it is commendable.
What were you expecting?
One of the listed features of the PS1 port in the OP article is tessellation to reduce the issues with the PS1 hardware's affine texture mapper. On the GBA you pay a base cost for doing texture mapping manually in software, but you also get opportunities for some minor perspective correction to lessen the worst effects (such as doing perspective correction during the clipping process); a rough sketch of that idea follows.
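As a minimal sketch of the clipping trick (hypothetical code; a real GBA renderer would use fixed-point, not floats): clipping runs in view space, before the perspective divide, so plain linear interpolation of UVs at the clip intersection is already perspective-correct, and every vertex the clipper introduces becomes a free correction point.

    typedef struct { float x, y, z, u, v; } Vert;

    /* Clip the edge a->b against the near plane z = znear and return the
       intersection vertex. Because we are still in view space (no divide
       by z yet), lerping u/v with the same t is exactly correct. */
    Vert clip_near(Vert a, Vert b, float znear) {
        float t = (znear - a.z) / (b.z - a.z);  /* parametric intersection */
        Vert o;
        o.x = a.x + t * (b.x - a.x);
        o.y = a.y + t * (b.y - a.y);
        o.z = znear;
        o.u = a.u + t * (b.u - a.u);  /* perspective-correct "for free" */
        o.v = a.v + t * (b.v - a.v);
        return o;
    }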
I think the resolution makes it particularly rough though.
Also, flat shading (vs., say, Gouraud shading) is orthogonal to the question of texture mapping; it concerns how lighting is calculated across the surface of the polygon. A polygon can be flat shaded and textured, flat shaded and untextured, smoothly shaded and textured, or smoothly shaded and untextured.
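To make the orthogonality concrete, here's a toy inner-loop sketch (illustrative names, nothing from a real renderer): the shading mode only decides where the lighting term comes from, the texturing mode only decides where the base color comes from, and any pairing of the two is valid.

    #include <stdint.h>

    /* Illustrative helper: scale each 8-bit RGB channel by a light factor
       in [0, 1]. */
    static uint32_t modulate(uint32_t rgb, float light) {
        uint32_t r = (uint32_t)(((rgb >> 16) & 0xFF) * light);
        uint32_t g = (uint32_t)(((rgb >> 8) & 0xFF) * light);
        uint32_t b = (uint32_t)((rgb & 0xFF) * light);
        return (r << 16) | (g << 8) | b;
    }

    uint32_t shade_pixel(int flat, int textured,
                         float face_light,    /* flat: one value per polygon */
                         float vertex_light,  /* Gouraud: lerped per pixel */
                         uint32_t texel,      /* sampled texture color */
                         uint32_t poly_color) /* constant polygon color */
    {
        float light = flat ? face_light : vertex_light;
        uint32_t base = textured ? texel : poly_color;
        return modulate(base, light);  /* all four combinations work */
    }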
I still remember gasping when I first saw the basically unattainable (for me) Japanese‑import N64 running Mario 64.
Such an interesting and varied gaming landscape back then; for example, the Wipeout experience on PSX was beyond the N64 port in that particular niche, for its own set of reasons.
From the videos I've watched there are still insane amounts of affine texture warping; is that because it's not enabled, or because 2x is not enough?
I guess they will also need to redo all the level geometry to be more amenable to tessellation... which is probably why many PS1 games had blocky-looking levels.
[1]: https://github.com/spicyjpeg/ps1-bare-metal/blob/main/src/08... - bit of a shameless plug, but notice how the Z coordinates are never sent to the GPU in this example.
I guess the main thing the console brought to the table that made 3D (more) feasible was that the CPU had a multiplication instruction?
Also, even though it didn't handle truly 3D transformations, the rasterizer was built for pumping out texture mapped, Gouraud shaded triangles at an impressive clip for the time. That's not nothing for 3D, compared to an unaccelerated frame buffer or the sprite/tile approach of consoles past.
If you have some basic familiarity with C, you can see both the GTE and the Z bucket sorting of GPU commands in action in the cube example I linked in the parent comment.
[1]: https://psx-spx.consoledev.net/geometrytransformationengineg...
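If you'd rather not dig through the example, the Z bucket sort boils down to something like this sketch (illustrative names and plain C, not the actual ps1-bare-metal code, which builds DMA-linked GPU packets instead):

    #include <stddef.h>

    #define OT_LENGTH 1024   /* number of depth buckets */
    #define MAX_Z     0xFFFF /* assumed maximum average Z after projection */

    typedef struct Prim {
        struct Prim *next;   /* next primitive in the same bucket */
        /* ...GPU command words for the polygon would follow... */
    } Prim;

    static Prim *ot[OT_LENGTH]; /* one list head per depth bucket */

    /* No Z-buffer on the PS1: each polygon is filed under a bucket chosen
       by its average Z, then drawing walks the buckets far-to-near
       (painter's algorithm). */
    void ot_insert(Prim *p, int avg_z) {
        int i = avg_z * (OT_LENGTH - 1) / MAX_Z;
        if (i < 0) i = 0;
        if (i > OT_LENGTH - 1) i = OT_LENGTH - 1;
        p->next = ot[i];
        ot[i] = p;
    }

    void draw_all(void (*submit_to_gpu)(Prim *)) {
        for (int i = OT_LENGTH - 1; i >= 0; i--)       /* back to front */
            for (Prim *p = ot[i]; p != NULL; p = p->next)
                submit_to_gpu(p);
    }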
Unless I'm mistaken, the PS1 just plain doesn't support perspective correction. All texture mapping is done in hardware using a very not-programmable GPU; there'd be no way to do perspective correction, decent frame rate or not, outside of software rendering the whole thing (which would be beyond intractable).
The common workaround for this was, as suggested, tessellation: smaller polygons suffer less from affine texturing (see the sketch below). Of course, that does drive up your poly count.
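To make the artifact concrete, here's a minimal sketch (floats for clarity; the hardware does none of this per pixel) of the two ways to interpolate UVs along a screen-space span. The PS1 GPU only does the affine one; tessellation helps because shorter spans keep the two results closer together.

    typedef struct { float u, v, w; } UVW; /* w = depth at the vertex */

    /* Affine: what the PS1 hardware does - cheap, but wrong under
       perspective, because it lerps u/v directly in screen space. */
    void affine_uv(UVW a, UVW b, float t, float *u, float *v) {
        *u = a.u + t * (b.u - a.u);
        *v = a.v + t * (b.v - a.v);
    }

    /* Perspective-correct: lerp u/w, v/w and 1/w, then divide back.
       This needs a division per pixel, which the PS1 GPU can't do. */
    void correct_uv(UVW a, UVW b, float t, float *u, float *v) {
        float inv_w = 1.0f / a.w + t * (1.0f / b.w - 1.0f / a.w);
        float u_w   = a.u / a.w + t * (b.u / b.w - a.u / a.w);
        float v_w   = a.v / a.w + t * (b.v / b.w - a.v / a.w);
        *u = u_w / inv_w;
        *v = v_w / inv_w;
    }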
I guess you could pretend to have sub-pixel precision on the PS1 if you did it manually? E.g. change the colours 'between pixels', or something like that?
But that would probably get very expensive very soon.
This is also necessary to fix the occasionally stretched textures, as texture coordinates are limited to a fairly small range per polygon on the PS1 (UVs are 8-bit, so a polygon can only span 256 texels).
Maybe it just needs more tessellation or something else is going on, because you're right - even as someone who grew up on the PS1 and is accustomed to early 3D jank, it looks painfully janky.
The first comment is pretty funny:
> Finally, Super Mario 32.
The PlayStation rendered with affine texturing, which made it impossible to get perspective-correct rendering without hacks. The porting team ultimately used a very interesting hack where they would use polygons to render 1-pixel-wide strips, effectively simulating how non-hardware-accelerated (that is, CPU-based/integer) rendering was done on the PC.
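A sketch of what that strip hack looks like (all names hypothetical; this illustrates the technique, not the actual port's code). Depth is constant down a wall column in a Doom-style renderer, so if each strip is only one pixel wide, the CPU can compute a perspective-correct u per column and the hardware's affine mapping within the strip is exact:

    void emit_column_quad(int x, float ytop, float ybot, float u); /* builds the GPU packet */

    void draw_wall_as_strips(int x0, int x1,
                             float inv_w0, float inv_w1, /* 1/depth at wall ends */
                             float u_w0, float u_w1,     /* u/depth at wall ends */
                             float ytop0, float ytop1,
                             float ybot0, float ybot1)
    {
        for (int x = x0; x < x1; x++) {
            float t = (float)(x - x0) / (float)(x1 - x0);
            /* Per-column perspective math, done on the CPU just like the
               PC column renderer did with integer math: */
            float inv_w = inv_w0 + t * (inv_w1 - inv_w0);
            float u     = (u_w0 + t * (u_w1 - u_w0)) / inv_w;
            float ytop  = ytop0 + t * (ytop1 - ytop0);
            float ybot  = ybot0 + t * (ybot1 - ybot0);
            /* The GPU only affine-maps inside a 1-px-wide quad, which is
               exact because depth is constant down the column. */
            emit_column_quad(x, ytop, ybot, u);
        }
    }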
For others who run into the same problem, the file can be accessed via https://fabiensanglard.net/gebbdoom/index.html#:~:text=High%... . (I've highlighted the link to click.)
But even during the PSX era I found it distracting and annoying to look at so I can't say I have any nostalgia for it even now in the way I do for N64-style low-poly 3-D games or good pixel art.
Even in terms of realism, ports to the Dreamcast were better overall, and considering that the latest port of Final Fantasy Tactics does not emulate any of its PS1 limitations, I don't think a lot of people strive for (or even like) the aesthetic.
I guess you can pretend that JRPGs or Resident Evil are visual novels with some action gameplay (or turn-based combat) thrown in?
Huh, I generally see Mega Man Legends cited as an example where the PSX version looks better due to the crisper textures.
Then again, the good games would have been $50 instead of $70, and there would have been a lot more developers willing to pay $0.20 per unit to ship games than $20 per unit for the common 12MB and 16MB ROM chips.
However, I don't know if Ocarina of Time or Majora's Mask would have worked as well without that ability to load entire scenes in < 500ms. Diddy Kong Racing and Indiana Jones & The Infernal Machine relied on the ability to stream data from the cartridge in real time to smoothly transition between scenes/areas. DKR only used it in the overworld AFAIK, but it was still impressive.
Not saying you're wrong, just that I'm glad things turned out the way they did because Ocarina and Majora's Mask likely could not have been done on a Saturn or PS1 beefed up with the N64's GPU.
I could be wrong, and some experienced romhackers could conjure up enough clever optimizations to make a faithful PS1 port of Ocarina of Time that doesn't have noticeable load times, but it would have been the result of years of research with no deadline pressure. I admit I'm just speculating, but not in a presumptuous and baseless way.
They could have used SDRAM and it would have performed so much better, and I believe the cost was around the same.
If you wanted to cut something, cut the antialiasing. While very cool, it is a bit wasted on CRTs. Worst of all, for some reason they have this blur filter which smears the picture horizontally. Luckily it can be deblurred by applying the inverse operation.
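For illustration only (the real VI filter is more involved, so treat this kernel as an assumption): if the blur were a simple running two-tap average, out[0] = in[0] and out[x] = (in[x-1] + in[x]) / 2, then it would be exactly invertible with a left-to-right recurrence:

    /* Undo an assumed two-tap horizontal blur on one row of pixels. */
    void deblur_row(const float *out, float *in, int width) {
        in[0] = out[0];                         /* boundary passes through */
        for (int x = 1; x < width; x++)
            in[x] = 2.0f * out[x] - in[x - 1];  /* invert the running average */
    }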
By the time the N64 launched, SDRAM was better and cheaper, but they decided it was too late to make the switch. Allegedly SGI wanted to make changes but Nintendo refused.
Basically they made the wrong bet and didn't want to change it closer to release.
OK, I also just read that Nintendo basically bet on RAM bandwidth but ignored latency.
A more general lesson: Nintendo bet on cutting edge, speculative technology with RDRAM, instead of concentrating on 'Lateral Thinking with Withered Technology'.
Of course, Nintendo clearly cared about the CPU a lot for marketing purposes (it's in the console's name), but from a purely technological perspective it is wasteful. Most of the actual compute is done on the RSP anyway. So a much smaller CPU would have been a big corner to cut that could have saved enough resources to increase the texture cache to a useful size like 128x128 or so.
It should be noted, though, that the N64 was designed with multitexturing capabilities, which would have helped with the mushy colors had games actually taken advantage of it (but they didn't, for which, here again, the Nintendo SDK is to blame).
Only really in the marketing material. It's a bit like calling a 386 with an arithmetic co-processor an 80-bit machine, when it was still clearly a 32-bit machine by all metrics that matter.
However, I agree in general that the N64 CPU sits idle a lot of the time. It's overspecced compared to the rest of the system.
How? The texture RAM (TMEM) is in the RDP, not in the CPU.
If you sell games for roughly the same amount as before (or even a bit cheaper), you have extra surplus you can use to subsidise the cost of the console a bit.
Effectively, the corner you'd be cutting is worse load times, I guess?
Keep in mind that the above ignores questions of piracy. I don't know what the actual impact of a CD based solution would have been, but I can tell for sure that the officials at Nintendo thought it would have made a difference when they made their decision.
> Nintendo had a hard enough time with preventing piracy and unlicensed games with the NES and SNES [...]
Yes, so I'm not sure that the cartridge drawbacks bought them that much in terms of piracy protection?
I agree that the PS1 had more piracy, but I'm not sure that actually diminished its success?
At least in my corner of the world (Spain), piracy improved its success. Everybody wanted the PSX due to how cheap it was, I think it outsold the N64 10:1.
I just saw Linus Tech Tips' Linus interview Linus Torvalds, and the constant fanboying and bad jokes were just embarrassing and badly hurt the interview. I really wish people like this would turn it way, way down. I think we all love some levity and whimsy, but now those gimmicks are bigger and louder than the actual content.
If you've been watching LTT for any amount of time, it wouldn't be surprising: that's just LTT Linus's nervous, awkward style; he's just a person. The jokes can be cringe as hell, but I thought the video was great. I don't think most nerds would be any different in front of a camera.
But yeah, on a "real" PS1 it would be blockier due to lower res. The main rendering problems should be the same though.
Not if you watch the video on your phone or iPad or laptop!
Actually, even most desktop PC monitors today aren't bigger than the TVs people had back then.
(Of course, TVs now are bigger than TVs back then. And desktop PC monitors are bigger than desktop PC monitors back then.)
In the end, if you rescaled the emulator window down to 320x240 or 640x480 with a 25% scanline filter on an LCD (or 50% on a CRT), the result would be pretty close to what teenagers saw in the late '90s.
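For what it's worth, a scanline filter at those percentages is about as simple as filters get; a toy sketch (hypothetical code, operating on a packed 24-bit RGB buffer):

    #include <stdint.h>

    /* Darken every other row by `amount` (0.25f for the 25% LCD case,
       0.50f for the CRT case mentioned above). */
    void scanline_filter(uint8_t *rgb, int width, int height, int pitch,
                         float amount)
    {
        for (int y = 1; y < height; y += 2)          /* odd rows only */
            for (int i = 0; i < width * 3; i++)
                rgb[y * pitch + i] =
                    (uint8_t)(rgb[y * pitch + i] * (1.0f - amount));
    }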
Though I suspect for interactive use, CRTs might have had better latency?
I did not expect it to happen so soon.
https://www.youtube.com/watch?v=oZcbgNdWL7w - Mario 64 wastes SO MUCH MEMORY
I wonder what someone with PS1 knowledge equivalent to Kaze's N64 knowledge could do on that console, perhaps using Mario 32 as the benchmark.
(Mario 32 = Mario 64 on PS1.)
That said, there is an argument to be made against matching decompilations: while their nature guarantees that they will replicate the exact behavior of the original code, getting them to match often involves fighting the entropy of a 20-to-30-year-old proprietary toolchain, hacks of the "add an empty asm() block exactly here" variety and in some cases fuzzing or even decompiling the compiler itself to better understand how e.g. the linking order is determined. This can be a huge amount of effort that in many cases would be better spent further cleaning up, optimizing and/or documenting the code, particularly if the end goal is to port the game to other platforms.
https://github.com/CharlotteCross1998/awesome-game-decompila...
Edit: whoever did the gameplay video is really good at Mario 64. They were playing toward and reacting to stuff that had rendered very late, if at all.