Will spend more time on it in the coming days. I am quite interested in RISC-V and I think that it might have a bright future ahead.
If any AI expert is reading this now, please use Replit or Lovable or something like that to re-create "Core War" [0] with RISC-V assembly. It would be GREAT.
But then I got to wondering about the OP's statement that they would specifically like someone to create this with AI. It strikes me as both silly (like saying "if you're good at using Visual Studio, could you do this?", since AI tools are just tools now, and those who want to use them don't need to be prompted) and also somehow fundamentally different.
OP, what was on your heart that caused you to phrase it that way?
Now that book is also available in a RISC-V edition, which has a very interesting chapter comparing all the different RISC ISAs and what they do differently (SH, Alpha, SPARC, PA-RISC, POWER, ARM, ...).
However, I've been exploring AArch64 for some time and I think it has some very interesting ideas too. Maybe not as clean as RISC-V, but with very pragmatic design and some choices that make me question whether RISC-V was too conservative in its design.
https://aspire.eecs.berkeley.edu/2017/06/how-close-is-risc-v...
https://thechipletter.substack.com/p/risc-on-a-chip-david-pa...
Not enough people reflect on this, or the fact that it's remarkably hazy where exactly AArch64 came from and what guided the design of it.
Respectfully, the statement in question is partially erroneous and, in far greater measure, profoundly misleading. A distortion draped in fragments of truth remains a falsehood nonetheless.
Whilst AArch64 does retain condition flags, it is not simply a case of «AArch32 stretched to 64-bit», and condition codes are not a «big mistake» for large out-of-order (OoO) cores. AArch64 also provides compare-and-branch forms similar to RISC-V, so the contrast given is a false dichotomy.
Namely:
– «AArch64 came from AArch32» – historically AArch64 was a fresh ARMv8-A ISA design that removed many AArch32 features. It has kept flags, but discarded pervasive per-instruction predication and redesigned much of the encoding and register model;
– «Flags are a big mistake for large OoO» – global flags do create extra dependencies, yet modern cores (x86 and ARM) eliminate most of the cost with techniques such as flag renaming, out-of-order flag generation and instruction forms that avoid setting flags when unnecessary. As high-IPC x86 and ARM cores demonstrate, flags are not an inherent limiter;
– «RISC-V avoids this by having condition-and-branch» – AArch64 also has condition-and-branch style forms that do not use flags, for example:
1) CBZ/CBNZ xN, label – compare register to zero and branch;
2) TBZ/TBNZ xN, #bit, label – test bit and branch.
Compilers freely choose between these and flag-based sequences, depending on what is already available and the code/data flow. Also, many arithmetic operations do not set flags unless explicitly requested, which reduces false flag dependencies.

Last but not least, Apple’s big cores are among the widest, deepest out-of-order designs in production, with very high IPC and excellent branch handling. Their microarchitectures and toolchains make effective use of:
– Flag-free branches where convenient – CBZ/CBNZ, TBZ/TBNZ (see above);
– Flag-setting only when it is free or beneficial – ADDS/SUBS feeding a conditional branch or CSEL;
– Advanced renaming – including flag renaming – which removes most practical downsides of a global NZCV.

It looks a lot like Zeno's paradox of RISC-V implementation.
Tenstorrent's first "Atlantis" Ascalon dev board is going to be a similar µarch to Apple's M1 but running at a lower clock speed. However, all 8 cores are "performance" cores, so it should be in the N150 ballpark in single-core performance and soundly beat it in multi-core.
They are currently saying Q2 2026, which is only 4-7 months from now.
If Apple didn't need Arm then they would have probably found a way of going it alone.
Not insurmountable, as evidenced by recent AMDs. But still a limitation.
That's not saying much, it's basically impossible to remove an instruction. Just because something is easier than impossible doesn't mean that it's easy.
And sure, from a technical perspective, it's quite easy to add new instructions to RISC-V. Anyone can draft up a spec and implement it in their core.
But if you actually want wide-spread adoption of a new instruction, to the point where compilers can actually emit it by default and expect it to run everywhere, that's really, really hard. First you have to prove that this instruction is worthwhile standardizing, then debate the details and actually agree on a spec. Then you have to repeat the process and argue the extension is worth including in the next RVA profile, which is highly contentious.
Then you have to wait. Not just for the first CPUs to support that profile. You have to wait for every single processor that doesn't support that profile to become irrelevant. It might be over a decade before a compiler can safely switch on that instruction by default.
The ORC.B instruction in Zbb was my idea, never done anywhere before as far as anyone has been able to find. I proposed it in late 2019, it was in the ratified spec in late 2021, and implemented in the very popular JH7110 quad core 1.5 GHz SoC in the VisionFive 2 (and many others later on) that was delivered to pre-order customers in Dec 2022 / Jan 2023.
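For anyone wondering what the instruction actually does: orc.b replaces every nonzero byte of the source register with 0xFF and every zero byte with 0x00, which makes "is there a NUL anywhere in this word?" checks, as used in strlen/strcmp-style loops, very cheap. A minimal sketch of that use (the label is made up, not from the spec):

    # Does the 64-bit word in a0 contain a zero byte?  Returns 1 in a0 if so, else 0.
    has_zero_byte:
        orc.b a0, a0        # each byte -> 0xFF if it was nonzero, 0x00 if it was zero
        li    t0, -1        # all-ones pattern
        xor   a0, a0, t0    # nonzero iff some byte of the input was zero
        snez  a0, a0        # collapse to 0/1
        ret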
You might say that's a long time, but that's pretty fast in the microprocessor industry -- just over three years from proposal (by an individual member of RISC-V International) to mass-produced hardware.
Compare that to Arm who published the spec for SVE in 2016 and SVE 2 in 2019. The first time you've been able to buy an SBC with SVE was early 2025 with the Radxa Orion O6.
In contrast, the RISC-V Vector extension (RVV) 1.0 was published in late 2021 and was available on the CanMV-K230 development board in November 2023, just two years later, and in a flood of much more powerful octa-core SpacemiT K1/M1 boards (BPI-F3, Milk-V Jupiter, Sipeed LicheePi 3A, Muse Pi, DC-Roma II laptop) starting around six months after that.
It varies from instruction to instruction, but alternative code paths are expensive, and not well supported by compilers, so new instructions tend to go unused (unless you are compiling code with -march=native).
In one way, RISC-V is lucky. It's not currently that widely deployed anywhere, so RVA23 should be picked up as the default target, and anything included in it will have widespread support.
But RVA23 is kind of pulling the door closed after itself. It will probably become the default target that all binary distributions will target for the next decade, and anything that didn't make it into RVA23 will have a hard time gaining adoption.
Every ISA adds new instructions over time. Exactly the same considerations apply to all of them.
Some Linux distros are still built for the original AMD64 spec published in August 2000, while some now require the x86-64-v2 spec, defined in 2020 but actually met by CPUs from Nehalem and Jaguar on.
The ARMv8-A ecosystem (other than Apple) seems to have been very reluctant to move past the 8.2 spec published in January 2016, even on the hardware side, and no Linux distro I'm aware of requires anything past the original October 2011 ARMv8.0-A spec.
What I'm against is the idea that it's easy to add instructions. Or more the idea that it's a good idea to start with the minimum subset of instructions and add them later as needed.
It seems like a good idea: save yourself some upfront work, and be able to respond to actual real-world needs rather than trying to predict them all in advance. But IMO it just doesn't work in the real world.
The fact that distros get stuck on the older spec is the exact problem that drives me mad, and it's not even their fault. For example, compilers are forced to generate some absolutely horrid ARMv8.0-A exclusive load/store loops when it comes to atomics, yet there are some excellent atomic instructions right there in ARMv8.1-A, which most ARM SoCs support.
But they can't emit them because that code would then fail on the (substantial) minority of SoCs that are stuck on ARMv8.0-A. So those wonderful instructions end up largely unused on ARMv8 android/linux, simply because they arrived 11 years ago instead of 14 years ago.
At least I can use them on my Mac, or any linux code I compile myself.
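For readers who haven't met the two ARM forms, the same contrast exists in RISC-V terms, where the base A extension has both a load-reserved/store-conditional retry loop and single AMO instructions; a rough sketch (RISC-V assembly, not the ARM code in question):

    # atomic add of a1 to the 32-bit word at (a0), two ways

    # retry loop, analogous to the ARMv8.0-A exclusive load/store sequence
    1:  lr.w     t0, (a0)       # load-reserved
        add      t1, t0, a1
        sc.w     t2, t1, (a0)   # store-conditional; nonzero t2 means it failed
        bnez     t2, 1b         # retry on failure

    # single instruction, analogous to the ARMv8.1-A LSE atomics (e.g. LDADD)
        amoadd.w t0, a1, (a0)   # old value returned in t0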
-------
There isn't really a solution. Ecosystems getting stuck on an increasingly outdated baseline is a necessary evil. It has happened to every single ecosystem to some extent or another, and it will happen to the various RISC-V ecosystems too.
I just disagree with the implication that the RISC-V approach was the right approach [1]. I think ARMv8.0-A did a much better job, including almost all the instructions you need in the very first version, if only they had included proper atomics.
[1] That is, not the right approach for creating a modern, commercially relevant ISA. RISC-V was originally intended as more of an academic ISA, so focusing on minimalism and "RISCness" was probably the best approach for that field.
I think RISC-V did pretty well to get everything in RVA23 -- which is more equivalent to ARMv9.0-A than to ARMv8.0-A -- out after RV64GC aka RVA20 in the 2nd half of 2019.
We don't know how long Arm was cooking up ARMv8 in secret before they announced it in 2011. Was it five years? Was it 10? More? It would not surprise me at all if it was kicked off when AMD demonstrated that Itanium was not going to be the only 64 bit future by starting to talk about AMD64 in 1999, publishing the spec in 2001, and shipping Opteron in April 2003 and Athlon64 five months later.
It's pretty hard to do that with an open and community-developed specification. By which I mean impossible.
I can't even imagine the mess if everyone knew RISC-V was being developed from 2015 but no official spec was published until late 2024.
I am sure it would not have the momentum that it has now.
Of course not. You can replace an instruction with a polyfill. This will generally be a lot slower, but it won't break any code if you implement it correctly.
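As a concrete illustration in RISC-V terms (hypothetical, just to show the shape of a polyfill): if a core lacks Zbb, the single min instruction can be replaced by a short baseline sequence that computes the same thing, only slower:

    # with Zbb available:
        min  a0, a0, a1      # a0 = signed minimum of a0 and a1

    # baseline RV32I/RV64I replacement, same result:
        ble  a0, a1, 1f      # if a0 <= a1, keep a0
        mv   a0, a1          # otherwise take a1
    1: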
Also, it seems at least some of the RISC-V ecosystem is willing to be a little bit more aggressive. With Ubuntu making RVA23 its minimum profile, perhaps we will not be waiting a decade for it to become the default. RVA23 was only ratified a year ago.
laughs and/or cries in one of the myriad OISC ISAs
Patterson and Hennessy's "Computer Organization and Design: The Hardware/Software Interface" has had 6 editions (1998, 2003, 2005, 2012, 2014, 2020), and various editions have come in ARM-, MIPS-, and RISC-V-specific versions.
I wrote a TCP socket for RISC-V using the RV64I ISA. You have to know about linker relaxation and how to use it. I have attached some references there.
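For anyone unfamiliar with it: linker relaxation is the RISC-V linker's ability to shrink address-generation sequences once the final layout is known. A typical case, sketched with a made-up symbol msg:

    # what the assembler emits for "take the address of msg":
    1:  auipc a0, %pcrel_hi(msg)       # PC-relative high 20 bits
        addi  a0, a0, %pcrel_lo(1b)    # low 12 bits, referring back to label 1

    # With relaxation enabled (the default), if msg ends up within +/- 2 KiB of the
    # global pointer, the linker drops the auipc and rewrites the addi into a single
    # gp-relative instruction.  Use .option norelax around code that cannot tolerate
    # instructions disappearing (e.g. hand-counted offsets).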
start:
auipc a0, 3
addi a0, a0, 4
The text says that this should result in 0x3004; was this example intended to be

start:
    lui a0, 3
    addi a0, a0, 4

The `auipc/addi` sequence results in 0x3004 + whatever the address of the `auipc` instruction itself is. If the `auipc` is at address 0 then the result will be the same.
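Worked through with a concrete address, assuming purely for illustration that the auipc sits at 0x1000:

    # if the auipc happens to be at address 0x1000:
    auipc a0, 3          # a0 = 0x1000 + (3 << 12) = 0x4000
    addi  a0, a0, 4      # a0 = 0x4004

    # lui ignores the PC entirely:
    lui   a0, 3          # a0 = 3 << 12 = 0x3000
    addi  a0, a0, 4      # a0 = 0x3004, wherever this code is placed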
I have reached “intro to assembly” in my C course this week and had decided on RISC-V to bridge the gap that everyone has different CPUs; x86-64 is a little harder to learn than MIPS32, and MIPS32 isn’t as relevant anymore.
And here’s someone who made my course material for the subject entirely.
Thank you so much.
As a C/C++ dev, I've always thought assembly was much harder. But this interactive content makes assembly clearer.
Once RISC-V has performant silicon implementations (server/desktop/mobile/embedded), the really hard part will be to move the software stack, mostly closed source applications like video games.
And a very big warning: do NOT abuse an assembler's macro preprocessor or you will lock your code to that very assembler. On my side I have been using a simple C pre-processor to keep the door open for any other assembler, at minimal technical cost.
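A minimal sketch of what that can look like (file and macro names are made up); gcc runs the C preprocessor on files with an uppercase .S extension, so plain #define/#include work without touching any assembler-specific macro syntax:

    /* defs.h -- shared constants, usable from C or any assembler */
    #define SYS_EXIT  93            /* Linux riscv64 exit syscall number */
    #define EXIT_OK   0

    /* prog.S -- "gcc -c prog.S" runs cpp first, then the assembler */
    #include "defs.h"

        .globl _start
    _start:
        li   a0, EXIT_OK            /* exit status */
        li   a7, SYS_EXIT           /* syscall number */
        ecall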
The only exception is console games, where the architecture doesn’t matter anyway.
If you put emulation in place (hardware accelerated or not), game devs won't compile for rv64. Look at how Valve's Proton is hurting native elf/linux gaming: game devs hardly build for native elf/linux anymore (though this may be related to the toxic behavior of some glibc and gcc devs), even though their game engine has everything needed for Linux support.
Since Proton/Wine is unreliable over time, this is a disaster.
And there is a lot of BS around that: if some devs support their game on elf/linux via Proton, they will have to QA it on a Linux system anyway, so it does not change that much, even less with a game engine which already has everything for Linux... it only adds the ultra-heavy, complex and therefore bug-inducing Proton/Wine layer... a horrible mess to debug. One unfortunate patch, on the Proton/Wine side or the game side, and compat is gone, and you are toast... and those patches do happen.
Conclusion: the only sane way is PROTON = 0 BUCKS.
I play only F2P games with proton (mostly gachas), no way I'll pay bucks for a game without technical support.
Valve should allow paid games _only_ when they have official elf/linux/Proton support (i.e. the game devs do QA on elf/linux with Valve Proton... which would be nothing short of stupid if their game engine already has elf/linux support built in). Why not let elf/linux users play for free all games which do not have official elf/linux support... well, those which run, and run decently, and only until an unfortunate patch...
It is 0xfffffedd.
> Hmm, we get 0xfffffccd.
No, we didn't. The emulator shows 0xfffffedd, and I've checked it manually. The emulator is right.
There was also some Rust specific OS tutorial somewhere that was written nicely, but I can't find it right now.
Also if you want to get real hardware, there is the neorv32 that has nice documentation: https://stnolting.github.io/neorv32/ https://github.com/stnolting/neorv32
It's a risc-v core written in VHDL
Got myself a https://docs.banana-pi.org/en/BPI-F3/BananaPi_BPI-F3 in May last year for 90€. Tinkered again few weeks ago https://bsky.app/profile/benetou.fr/post/3m2m62st3hk2w
> I'm especially interested in using Rust for this if possible
I didn't tinker with Rust on it but if I had to I'd probably try via https://github.com/dockcross/dockcross/tree/master/linux-ris... and avoid compiling on the device itself, unless it's basically a HelloWorld project.
https://www.kickstarter.com/projects/starfive/visionfive-2-l...
And if someone wants to get fancy and do some HW-SW-Codesign with RISC-V and FPGAs, then there is the PolarFire SoC. A lot cheaper than AMD/Xilinx products and okay-ish to use. Their dev environment feels kinda outdated, but at least it's snappy. Also the documentation feels kinda sparse, but most stuff is documented __somewhere__ - you just gotta find it.
https://www.microchip.com/en-us/development-tool/MPFS-DISCO-...
Fun fact: the dev board costs less than the chip itself. (Apparently that's often the case, but this is the first time I've noticed it.)
The Microchip "Icicle" came out in late 2020 with the largest FPGA in the range, made using pre-production chips. It was several more years before you could buy the chips themselves. Digikey says it's no longer manufactured and they're just running down stocks.
The BeagleV "Fire" is much cheaper ($150) and uses one of the smallest FPGA parts in the range.
GOWIN also has RISC-V FPGA SoCs (Arora V GW5AST series)
Does anyone know of a complete list, machine readable? e.g.
instructions = [{"name": "lui", "description": "load upper immediate", "args": [...]}, ...]
What is that sp? Is it important? Why isn't that at 0x000000? Why isn't that explained? That's when I get lost.
Maybe doing RV32E plus a graphics output would be a good compromise. Sixteen registers is probably enough for any program people are likely to write in this --- and you can tell GCC/LLVM to generate for RV32E if you want to compile C code and paste the asm in. (I'm not sure whether the assembler can actually cope with that)