It's honestly a bit hard to understand why they bothered with this one. No hate for the Milk-V folks; I have 4 Jupiters sitting next to me running in Zig's CI. But hopefully they'll have something RVA23-compliant out soon (SpacemiT K3?).
A handful of developers already have access to SpacemiT K3 hardware, which is indeed RVA23 compliant and already runs Ubuntu 26.04.
geekbench: https://browser.geekbench.com/v6/cpu/16145076
rvv-bench: https://camel-cdr.github.io/rvv-bench-results/spacemit_x100/... (which has instruction throughput measurements and more)
It's slightly faster than a 3 GHz Core 2 Duo in single-threaded scalar performance, but it has 8 cores instead of two and more SIMD performance. There are also 8 additional SpacemiT-A100 cores with 1024-bit wide vectors, which act more like an additional accelerator.
The geekbench score is a bit lower than it should be, because at least three benchmarks are still missing SIMD acceleration on RISC-V (File Compression, Asset Compression, Ray Tracer), and the HTML5 browser test is also missing optimizations.
I'd estimate it should be able to get to the 500 range with comparable optimization to other architectures.
The Milk-V Titan mentioned in the original post is actually slightly faster in scalar performance, but has no RISC-V Vector support at all, which causes its geekbench score to be way lower.
The problem is that you can't migrate threads between cores with different vector length.
The Ubuntu 26.04 image that comes installed lists 16 cores in htop, but you can only run applications on the first 8 (e.g. taskset -c 10 fails). If you query what's running on the A100 cores, you see things like "kworker" processes.
I suspect that it should be possible to write a custom kernel module that runs on the A100s with the current kernel, but I'm not sure.
I expect it will definitely be possible to boot an OS on only the 8 A100 cores.
We'll have to see if they manage to figure out how to add support for explicitly pinning user-mode processes to those cores.
The ideal configuration would be to have everything run only on the X100s, but with an opt-in mechanism to run a program only on an A100 core.
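For reference, explicit pinning from user space would basically just be sched_setaffinity(2). Here is a minimal sketch, assuming the kernel ever exposes the A100 cores as schedulable CPUs; core index 10 is the same one that currently fails with taskset:

    /* Minimal sketch: pin the calling process to one (hypothetical) A100 core. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(10, &set);   /* assumed A100 core index; fails today, like taskset -c 10 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }

        puts("pinned; RVV code from here on would run with the A100's vector length");
        return EXIT_SUCCESS;
    }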
To wit, the Geekbench 6.5.0 RISC-V preview has 3 files, 'geekbench6', 'geekbench_riscv64', and 'geekbench_rv64gcv', presumably executables for the benchmark targeting different supported instruction sets. This makes the score an unreliable narrator of performance, as someone could have run one of the other executables and the posted score would not be genuine. And that's on top of the perennial remark that the benchmark(s) could simply not be optimized for RISC-V.
As far as I understand, the RVA23 requirement is an Ubuntu thing only, and only for the current non-LTS and future releases. The current LTS doesn't have such a requirement, and neither do other distributions that support riscv64, such as Fedora and Debian.
So no, you are not stuck running custom vendor distros because of this, but rather because of the weird device drivers and boot systems that have no mainline support.
It is of course possible that Debian sticks with RV64GC for the long term, but I seriously doubt it. It's just too much performance to leave on the table for a relatively new port, especially when RVA23 will (very) soon be the expected baseline for general-purpose RISC-V systems.
Things are different for CentOS / RHEL where we'll be able to move to RVA23 (and beyond) much more aggressively.
That being said: does it make sense to keep a new but low-performance platform alive? As the platform is new and likely doesn't have many users, wouldn't it make sense to nudge (as in "gently push") users towards a higher-performance platform?
Chances are the low-performance platform will die anyway, and Fedora will not be exploiting the full offering of the high-performance platform.
But the baseline is quite minimal. It's biased towards efficient emulation of the instructions in portable C code. I'm not sure why anyone would target an enterprise distribution at that.
On the other hand, even RVA23 is quite poor at signed overflow checking. Like MIPS before it, RISC-V is a bet that we're going to write software in C-like languages for a long time.
When I tried to measure the impact of -ftrapv in RVA23 and armv9, it was roughly the same: https://news.ycombinator.com/item?id=46228597#46250569
reminder:
unsigned 64-bit:
add: RV: add+bltu Arm: adds+bcc
sub: RV: sub+bltu Arm: subs+bcs
mul: RV: mulhu+mul+beqz Arm: umulh+mul+cbz
unsigned 32-bit:
add: RV: addw+bgeu Arm: adds+bcc
sub: RV: subw+bgeu Arm: subs+bcs
mul: RV: mul+slli+beqz Arm: umul+cmp lsr 32
signed 64-bit:
add: RV: add+slt+slti+beq Arm: adds+bcc
sub: RV: sub+slt+slti+beq Arm: subs+bcs
mul: RV: mulh+mul+srai+beq Arm: smulh+mul+cmp asr 63
signed 32-bit:
add: RV: addw+add+beq Arm: adds+bvc
sub: RV: subw+sub+beq Arm: subs+bvs
mul: RV: mul+sext.w+beq Arm: smul+asr+cmp asr 31

On the other hand it avoids integer flags, which is nice. I doubt it makes a measurable performance impact either way on modern OoO CPUs. There's going to be no data dependence on the extra instructions needed to calculate overflow except for the branch, which will be predicted not-taken, so the other instructions after it will basically always run speculatively in parallel with the overflow-checking instructions.
Even with XNOR (which isn't even part of RVA23, if I recall correctly), the sequence for doing an overflow check is quite messy. On AArch64 and x86-64, it's just the operation followed by a conditional jump: https://godbolt.org/z/968Eb1dh1
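To make the comparison concrete, here is a minimal sketch of the kind of checked operation being measured, using the GCC/Clang __builtin_add_overflow builtin rather than -ftrapv itself (the code in the godbolt link may use a different formulation):

    #include <stdint.h>
    #include <stdlib.h>

    /* Signed 64-bit addition that aborts on overflow, roughly what
     * -ftrapv-style checking amounts to. On RISC-V the overflow test
     * should lower to a few extra ALU ops plus a branch (along the lines
     * of the sequences listed above); on AArch64/x86-64 it's the
     * flag-setting add followed by a single conditional branch. */
    int64_t add_checked(int64_t a, int64_t b)
    {
        int64_t sum;
        if (__builtin_add_overflow(a, b, &sum))  /* true on signed overflow */
            abort();
        return sum;
    }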
But RVA23 doesn't help with the hardware layer - it's going to be exactly the same as ARM SBCs where there's no hardware discovery mechanism and everything has to be hard-coded in the Linux device tree. You still need a custom distro for Raspberry Pi for example.
I believe there has been some progress in getting RISC-V ACPI support, and there's at least the intent of making mconfigptr do something useful - for a while there was a "unified discovery" task group, but it seems like there just wasn't enough manpower behind it and it disbanded.
https://github.com/riscvarchive/configuration-structure/blob...
Are you sure that's still the case? I just checked the Raspberry Pi Imager and I see several "stock" distro options that aren't Raspbian.
Regardless, I take your point that we're reliant on vendors actually doing the upstreaming work for device trees (and drivers). But so far the recognizable players in the RISC-V space do all(?) seem to be doing that, so for now I remain hopeful that we can avoid the Arm mess.
See this for example: https://www.phoronix.com/news/Raspberry-Pi-5-Ethernet-Linux
If you look at the patch series, it's directly adding information about the address of the ethernet device. That's the sort of thing that would be discovered automatically in the x86 world. It wouldn't need to be hard-coded into the kernel for each individual board that is supported.
It means that the UR-DP1000 chip would have been RVA23-compliant if only it had supported the V (Vector) extension. The Vector extension is mandatory in the RVA23 profile.
There are other chips out there even closer to being RVA23-compliant, which have V but lack a couple of scalar extensions. Those missing extensions have been emulated in software using trap handlers, but with a significant performance penalty. V is such a big extension, with many instructions and resource requirements, that I don't think emulating it would be worth the effort.
This is a thing SoC vendors have done before without informing their customers until it's way too late. Quite a few players in that industry really do have shockingly poor ethical standards.
This is the sort of comment that makes people lose faith in HN.
There totally are cases where it's intentional, and no they are not discussed on the internet for obvious reasons. People in the industry will absolutely know what I'm on about.
They will then issue errata later, after millions of devices have been shipped.
Either way, it's currently hard to be excited about RISC-V ITX boards with performance below that of a RPi5. I can go on AliExpress right now and buy a mini itx board with a Ryzen 9 7845HX for the same price.
Are you aware of any actual credible attempts?
I do agree that it takes a lot of work to get something usable, and so I think we are a ways off from mainstream risc-v. I do also think there is a lot more value for low power devices like embedded/IoT or instances where you need special hardware. Facebook uses it to make special video transcoding hardware.
Why would you think that? ARM is not like x86 CPUs where you get the completed devices as a black box. Chinese silicon customers have access to the full design. I guess it's not completely impossible to hide backdoors at that level but it'd be extremely hard and would be a huge reputational risk if they were found.
They also can't really be locked out of ARM since if push comes to shove, Chinese silicon makers would just keep making chips without a license.
Everybody sane will want to move away from them; there is nothing China-specific about it.
The most performant RISC-V implementations are from the US, if I am not mistaken.
Wonder if that hardware can handle an AMD 9070 XT (resizable BAR). If so, we'd need the Steam client recompiled for RISC-V, and some games too... assuming this RISC-V implementation is performant enough (I wish we had trashed ELF before...)
There's a difference between announcement, offering IP for licensing (so you still have to make your own CPUs), shipping CPUs, and having those CPUs in systems that can actually boot something.
https://github.com/geerlingguy/sbc-reviews/issues/62#issueco...
https://github.com/System64fumo/linux/blob/main/hardware/dev...
2.0 GHz isn't a whole lot of performance for a RISC-V system.
tl;dr: 236 vs 666 single-core score
This is why a bunch of RISC-V people won't buy boards without RV Vector instructions.
What is the target audience for these development boards anyway?
sure, prototypes are good. but maybe it shouldn't be sold as a general product, because it implies that the sellers deem it a good product (when it obviously isn't.)
maybe it should be a closed offering, e.g. we're only making 1000, and we're only sending them to select few specialists/reviewers.