Even with the increased pricing, the Cortex X5 / X925 and the upcoming X6 / X930 are still pretty good value. Unless Apple has something big with the A19 / M5, the X6 / X930 should already be competitive with the M4. I just wish they would spend a little more money on R&D for the GPU IP side of things.
Hopefully we will have some more news from Nvidia at Computex 2025.
That said, ARM’s increased license fees are a fantastic argument for RISC-V. Some of the more interesting RISC-V cores are Tenstorrent’s Ascalon and Ventana’s Veyron V2. I am looking forward to them being in competition with ARM’s X925 and X930 designs.
As for Qualcomm, they won the lawsuit ARM filed against them. Changing from ARM to RISC-V would delay their ambition to take market share from Intel and AMD, so they are likely content to continue paying ARM royalties because they have their eyes on a much bigger prize. It also came out during the lawsuit that Qualcomm considers their in-house design team to be saving them billions of dollars in ARM royalty fees, since they only need to pay royalties for the ISA and nothing else when they use their own in-house designs.
https://icircuit.net/benchmarking-raspberry-pi-pico-2/3983
The Hazard3 core was designed by a single person, while the ARM Cortex cores were presumably designed by a team of people. The Hazard3 cores mostly outperform the Cortex-M0+ cores in the older RP2040 and are competitive with the Cortex-M33 cores that share the RP2350 silicon. For integer addition and multiplication, they actually outperform the Cortex-M33 cores. Before you point out that they lost most of the benchmarks against the Cortex-M33 cores, let me clarify that integer addition and multiplication performance matter far more for microcontrollers than the other tests, which is why I consider them to be competitive despite the losses. The Hazard3 cores are open source:
https://github.com/Wren6991/Hazard3
That said, not all RISC-V designs are open source, but some of the open source ones are performance competitive with higher end closed source cores, such as the SonicBoom core from Berkeley:
https://adept.eecs.berkeley.edu/wiki/_media/eop/adept-eop-je...
As for the other problem you cite, the RP2350 has both RISC-V and ARM cores. It is a certainty that if the ARM cores had not been present, the RP2350 would have been cheaper, since less die area would have been needed and ARM license fees would have been avoided.
Just because something is open source does not mean you will not be stung during manufacturing, rather like how Android deployments are not free.
Patents.
Open ISA != all implementations of it are free (although in RISC-V's case, many are).
My point is that if RISC-V takes off people will struggle to do decent implementations of it without stepping on the toes of the people already in the area.
I'd go so far as to say this is the entire SiFive strategy.
So that is merely the entire semiconductor industry patent portfolio that you will have to avoid.
> Patents tend to expire at different times around the world, plus there is the possibility of submarine patents. Without a declaration from Hitachi, adopting any processor design using their ISA is likely considered a legal risk.
If you combine this with your observation that CPU patents tend to be ISA-independent, then surely any widespread commercial deployment of RISC-V requires an assertion from everyone else in the semiconductor industry that they do not in fact own patents on your implementation of it, or it too is likely considered a legal risk.
That or you just hold some things to different standards than others.
You keep trying to spread FUD concerning RISC-V. The issue you are trying to raise is one that, if valid, would prevent anyone from designing a CPU, yet many do so without legal issues. Hence, the issue you raise is invalid (by modus tollens).
If this was a winning strategy those open source implementations of SuperH cores would have been incredibly popular instead of dying in obscurity.
In any case, you should probably stop writing before you shove your foot any deeper into your mouth.
> In any case, you should probably stop writing before you shove your foot any deeper into your mouth.
Apology expected.
As for the J2, its creator does not request licensing fees, but Hitachi might require them. Unlike RISC-V, the creator of SuperH (Hitachi) is not known to have declared the ISA to be royalty free. I am not aware of such a declaration and even if there was, it is irrelevant because there is no reason to use SuperH over RISC-V. Nothing about the J2 supports the FUD you are spreading about RISC-V.
You're absolutely out of line.
> As for the J2, its creator does not request licensing fees, but Hitachi might require them.
"FUD". The whole point of the timing of the release of the J2 was it is based purely on now expired Hitachi patents, so they do not require any licensing fees.
By the way, my comment telling you that you should apologize to the community received an upvote and likely will receive more. You really are wasting people’s time with your anti-RISC-V FUD.
I too was upvoted for asking for your apology.
I will not apologize for stating facts, nor should you, but your random, unnecessary insults are unacceptable.
That's me done with this. You clearly have your opinions, but your behavior has been a discredit to the community you apparently represent.
By the way, having one’s foot in one’s mouth is an idiom meaning you said something wrong, which refers to behavior. It being obvious you are clueless is a reference to your writing, which again, refers to behavior. Saying you should apologize to people for wasting their time is similarly a reference to your behavior, and you invited that criticism by demanding an apology in broken English.
https://hc2024.hotchips.org/assets/program/conference/day2/2...
I personally hope China gets competitive on node size as well. I want GPUs and CPUs to start getting cheaper every generation again; once TSMC got a big lead over Intel/Samsung and Nvidia got a big lead over AMD, prices stopped coming down generation to generation for CPUs and GPUs.
* Loongson is pushing a MIPS derivative forward.
* Sugon is pushing an x86 derivative (originally derived from AMD Zen) forward.
* Zhaoxin is pushing an x86 derivative (derived from VIA’s chips) forward.
There was Shenwei with its Alpha processor derivative, but that effort has not had any announcements in years. However, there is still ARM China. Tianjin Phytium and HiSilicon continue to design ARM cores, presumably under license from ARM China. There are probably others I missed.

There is also substantial RISC-V development outside of China. Some notable ones are:
* SiFive - They are the first company to be in this space and are behind many of the early/current designs.
* Tenstorrent - This company has Jim Keller and people formerly from Apple’s chip design team and others. They have high performance designs up to 8-wide.
* Ventana - They claim to have a high performance core design that is 15-wide.
* AheadComputing - They hired Intel’s Oregon design team to design high performance RISC-V cores after the Royal Core project was cancelled last year.
* The Raspberry Pi foundation - their RP2350 contains Hazard3 RISC-V cores designed by one of their engineers.
* Nvidia - They design RISC-V cores for the microcontrollers in their GPUs, of which the GPU System Processor is the most well known. They ship billions of RISC-V cores each year as part of their GPUs. This is despite using ARM for the high end CPUs that they sell to the community.
* Western Digital - Like Nvidia, they design RISC-V cores for use in their products. They are particularly notable because they released the SweRV Core as open source.
* Meta - They are making in-house RISC-V based chips for AI training/inference.
This is a short list. It would be relatively easy to assemble a list of dozens of companies designing RISC-V cores outside of China if one tried.

https://www.bis.gov/media/documents/general-prohibition-10-g...
China is unlikely to move away from x86 and ARM internally even in a 10 year span. The only way that would happen is if RISC-V convinces the rest of the world to move away from those architectures in such a short span of time. ISA lock-in from legacy software is a deterrent for migration in China just as much as it is in any other country.
By the way, RISC-V is considered a foreign ISA in China, while the MIPS-derived LoongArch is considered (or at least marketed as) a domestic ISA. If the Chinese make a push to use domestic technology, RISC-V would be at a disadvantage, unless it is rebranded like MIPS was.
Right now, AFAIK only Apple is serious about designing their own ARM cores, while there are multiple competing implementations for RISC-V (which are still way behind both ARM and x86, but slooowly making their way).
VERY long-term, I expect RISC-V to become more competitive, unless whoever-owns-ARM-at-the-time adjusts strategy.
Either way, I'm glad to see competition after decades of Intel/x86 dominance.
Even at a higher IP price, their final products are cheaper, faster and competitive. Leaving some money on the table may be a deliberate strategy, but leaving TOO much money on the table is another matter. If Intel's and AMD's pricing is so far above ARM's, there is nothing wrong with increasing the pricing of your highest-performance core.
I would not be surprised if in 2-3 years' time the highest-performance PC CPU / SoC comes from Nvidia with ARM CPU cores rather than x86. But knowing Nvidia, they will charge similar pricing to Intel :D
It is interesting that you should mention MediaTek. They joined the RISC-V Software Ecosystem in May 2023:
It seems reasonable to think that they are considering jumping ship. If they are designing their own in-house CPU cores, it will likely be a while before we see them as part of a MediaTek SoC.
In any case, people do not like added fees. They previously tolerated ARM’s fees because they were low, but now that ARM is raising them, people are interested in alternatives. At least some of ARM’s partners are paying the higher fees for now, but it is an incentive to move to RISC-V, which has no fee for the ISA and either no fee or a low fee for IP cores. For example, the Hazard3 cores that the Raspberry Pi Foundation adopted in the RP2350 did not require them to pay royalty fees to anyone.
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-1...
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-2...
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-3...
I am not sure Huawei would go for RISC-V - they could easily go for their own ISA or an ARM fork.
Cortex-M33 (2016) derives, as you allude to, from ARMv8-M (2015). But yeah, it seems only barely popular, even now.
Having witnessed some of the 90s & aughts computing, I never in a million years would have guessed microcontrollers & power-efficient small chips would see so little change across a decade of time!!
It's because the software ecosystem around them is so incredibly lousy and painful.
Once you get something embedded to work, you never want to touch it again if you can avoid it.
I was really, really, really hoping that the RISC-V folks were going to do better. Alas, the RISC-V ecosystem seems doomed to be repeating the same levels of idiocy.
Meanwhile the RISC-V spec only defines very basic interrupt functionality, with most MCU vendors adding different external interrupt controllers or changing their cores to more closely follow the faster Cortex-M style, where the core itself handles stashing/unstashing registers, exit of interrupt handler on ret, vectoring for external interrupts, ... .
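To make the contrast concrete, here is a minimal bare-metal sketch in C (handler names are illustrative, and the two halves obviously target different GCC toolchains): on Cortex-M the hardware stacks and unstacks the caller-saved registers for you, so a plain C function can sit in the vector table, while on a typical RISC-V MCU the handler itself (via a compiler attribute or an assembly stub) has to save registers, read mcause and return with mret.

    /* Cortex-M: the core pushes r0-r3, r12, lr, pc, xPSR on exception entry
     * and pops them on return, so an ordinary C function works as a
     * vector-table entry. */
    void UART0_IRQHandler(void) {
        /* acknowledge the peripheral, do the work ... */
    }

    /* Typical RISC-V MCU: the core only jumps to the address in mtvec;
     * saving caller-saved registers and returning with mret is the
     * handler's job, so you either use the RISC-V GCC/Clang interrupt
     * attribute or a hand-written assembly wrapper. */
    void __attribute__((interrupt("machine"))) machine_trap_handler(void) {
        unsigned long cause;
        __asm__ volatile ("csrr %0, mcause" : "=r"(cause)); /* why did we trap? */
        /* dispatch on cause: timer, software, external IRQ via PLIC/CLIC ... */
    }

How external interrupts are routed to that handler (PLIC, CLIC, or a vendor-specific controller) is exactly the part the spec leaves open, which is where the fragmentation described above comes from.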
The low knowledge/priority of embedded within RISC-V can be seen in how long it took to specify an extension that only includes multiplication, not division.
Especially for smaller MCUs the debug situation is unfortunate: In ARM-World you can use any CMSIS-DAP debug probe to debug different MCUs over SWD. RISC-V MCUs either have JTAG or a custom pin-reduced variant (as 4 pins for debugging is quite a lot) which is usually only supported by very few debug probes.
RISC-V just standardizes a whole lot less (and not sensibly for small embedded) than ARM.
That said, ARM’s SWD is certainly nice. It appears to be possible to debug the Hazard3 cores in the RP2350 in the same way as the ARM cores:
https://gigazine.net/gsc_news/en/20241004-raspberry-pi-pico-...
The problem was that the initial extension that included multiplication also included division[1]. A lot of small microcontrollers have multiplication hardware but not division hardware.
Thus it would make sense to have a multiplication-only extension.
IIRC the argument was that the CPU should just trap the division instructions and emulate them, but in the embedded world you'll want to know your performance envelopes so better to explicitly know if hardware division is available or not.
[1]: https://docs.openhwgroup.org/projects/cva6-user-manual/01_cv...
First off, that library requires you to fundamentally change the code, by moving some precomputation outside loops.
Of course I can do a similar trick to move the division outside the loop without that library using simple fixed-point math, something which is a very basic optimization technique. So any performance comparison would have to be against that, not the original code.
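As a minimal sketch of that trick in C (illustrative names, assuming 32-bit values and a denominator that stays constant across the loop): precompute a scaled reciprocal once, then each iteration is a multiply and a shift, with a one-step correction to keep the quotient exact.

    #include <stddef.h>
    #include <stdint.h>

    /* Quotient via precomputed reciprocal: recip = floor(2^32 / d).
     * The multiply-shift can undershoot the true quotient by at most 1,
     * so one compare-and-increment makes it exact for all 32-bit n and d > 0. */
    static inline uint32_t div_by_const(uint32_t n, uint32_t d, uint64_t recip) {
        uint32_t q = (uint32_t)(((uint64_t)n * recip) >> 32); /* approximation */
        if (((uint64_t)q + 1) * d <= n)                       /* off-by-one fix */
            q++;
        return q;
    }

    void divide_all(uint32_t *buf, size_t len, uint32_t d) {
        uint64_t recip = (1ULL << 32) / d;  /* the only real division */
        for (size_t i = 0; i < len; i++)
            buf[i] = div_by_const(buf[i], d, recip);
    }

Setting up recip costs one hardware divide, so this only pays off when the same denominator is reused across many iterations; if the denominator changes on every call it buys you nothing.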
It is also much, much slower if your denominator changes for each invocation:
In terms of processor time, pre-computing the proper magic number and shift is on the order of one to three hardware divides, for non-powers of 2.
If you care about a fast hardware divider, then you're much more likely to have such code rather than the trivially-optimized code like the library example.
It is, but (as far as I understood it) they're using ARM SWD IP (which is a fine choice). But since their connection between the SWD IP and the RISC-V DM is custom, you're going to need to adjust your debug probe software quite a bit more than between different Cortex MCUs.
Other vendors with similar issues (for example WCH) build something similar but incompatible, requiring their own debug probe. This is a solved problem for ARM cortex.
This is reaching breaking point entirely because of how powerful modern MCUs are too. You simply cannot develop and maintain software of scale and complexity to exploit those machines using the mainstream practices of the embedded industry.
STM32H5 in 2023 (M33): https://newsroom.st.com/media-center/press-item.html/p4519.h...
GD32F5 in 2024: https://www.gigadevice.com/about/news-and-event/news/gigadev...
STM32N6 in 2025 (M55): https://blog.st.com/stm32n6/
i.e. it takes some time for new chips to hit cost targets, and most applications don’t need the latest chips?
I don’t know a lot about the market, though, and interested to learn more
Microchip SAMA7D65 and SAMA7G54. Allwinner V853 and T113-S3.
It's not like a massive stream of A7's. But even pretty big players don't really seem to have any competitive options to try. The A35 has some adoption. There are an A34 and A32 that I don't see much of; I don't know what they'd bring above the A7. All over a decade old now and barely seen.
To be fair, just this year ARM announced Cortex-A320 which I don't know much about, but might perhaps be a viable new low power chip.
cheap Chinese clones of US IP is a broader problem that affects more than just STM
Cortex-M7 belongs to the biggest-size class of ARM-based microcontrollers. There is one newer replacement for it, Cortex-M85, but for now Cortex-M7 is not completely obsolete, because it is available in various configurations from much more vendors and at lower prices than Cortex-M85.
Cortex-M7 and its successor Cortex-M85 have die sizes and instructions-per-clock performance similar to the Cortex-R8x and Cortex-A5xx cores (Cortex-M5x, Cortex-R5x and Cortex-A3x are smaller and slower cores). However, while the Cortex-M8x and Cortex-R8x cores have short instruction pipelines, suitable for maximum clock frequencies around 1 GHz, the Cortex-A5xx cores have longer instruction pipelines, suitable for maximum clock frequencies around 2 GHz (allowing greater throughput, but also greater worst-case latency).
Unlike Cortex-M7, Cortex-A7 is really completely obsolete. It has been succeeded by Cortex-A53, then by Cortex-A55, then by Cortex-A510, then by Cortex-A520.
For now, Cortex-A55 is the most frequently used among this class of cores and both Cortex-A7 and Cortex-A53 are truly completely obsolete.
Even Cortex-A55 should have been obsolete by now, but the inertia in embedded computers is great, so it will remain for some time the choice for cheap embedded computers where the price of the complete computer must be well under $50 (above that price Cortex-A7x or Intel Atom cores become preferable).
[1] https://www.ti.com/microcontrollers-mcus-processors/arm-base...
On the other hand, for the CPUs intended for cheap embedded computers there are a very large number of companies that offer products with Cortex-A55, or with Cortex-A76 or Cortex-A78, so there is no reason to accept anything older than that.
Texas Instruments is not really representative for embedded microcontrollers or computers, because everything that it offers is based on exceedingly obsolete cores.
Even if we ignore the Chinese companies, which usually have more up-to-date products, there are other companies, like Renesas and NXP, or for smaller microcontrollers Infineon and ST, all of which offer much less ancient chips than TI.
Unfortunately, the US-based companies that are active in the embedded ARM-based computer segment have clearly the most obsolete product lines, with the exception of NVIDIA and Qualcomm, which however target only the higher end of the automotive and embedded markets with expensive products. If you want something made or at least designed in the USA, embedded computers with Intel Atom CPUs are likely to be a better choice than something with an archaic ARM core.
For the Intel Atom cores, Gracemont has similar performance to Cortex-A78, Tremont to Cortex-A76 and Goldmont Plus to Cortex-A75; moreover, unlike the CPUs based on Cortex-A78, which are either scarce or expensive (like Qualcomm QCM6490 or NVIDIA Orin), the CPUs based on Gracemont, i.e. Amston Lake (Atom x7000 series) or Twin Lake (N?50 series), are both cheap and ubiquitous.
The latest Cortex-A7xx cores that implement the Armv9-A ISA are better than any Intel Atom core, but for now they are available only in smartphones from 2022 or more recent or in some servers, not in embedded computers (with the exception of a product with Cortex-A720 offered by some obscure Chinese company).
I have also not seen Cortex-M85 except from Renesas. Cortex-M55 is seldom an alternative to Cortex-M7, because Cortex-M55 is smaller and slower than Cortex-M7 (but faster than the old Cortex-M4 or than the newer Cortex-M33).
Cortex-M55 implements the Helium vector instruction set, so for an application that can use Helium it may match or exceed the speed of a Cortex-M7, in which case it could replace it. Cortex-M55 may also be used to upgrade an old product with Cortex-M7, if the M7 was used only because Cortex-M4 would have been too slow, but the full speed of Cortex-M7 was not really needed.
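For what it's worth, here is a minimal sketch of the kind of loop where Helium helps, using the ACLE MVE intrinsics from arm_mve.h (assumes a Cortex-M55 toolchain, e.g. -mcpu=cortex-m55, and a length that is a multiple of 4 to keep the example short):

    #include <arm_mve.h>
    #include <stdint.h>

    /* Add two uint32 arrays four lanes at a time with MVE/Helium; a plain
     * Cortex-M7 loop would handle one element per iteration instead. */
    void add_u32(uint32_t *dst, const uint32_t *a, const uint32_t *b, int n) {
        for (int i = 0; i < n; i += 4) {            /* n assumed multiple of 4 */
            uint32x4_t va = vld1q_u32(&a[i]);       /* load 4 lanes */
            uint32x4_t vb = vld1q_u32(&b[i]);
            vst1q_u32(&dst[i], vaddq_u32(va, vb));  /* add and store 4 lanes */
        }
    }

Code that is scalar-only, by contrast, will not see that benefit, which is why Helium only closes the gap with Cortex-M7 for workloads that actually vectorize.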
Microchip Technology has a number of RISC-V options.
Of course Boris Johnson (the next PM) __tried to woo ARM back to LSE__ because they realised they fucked up, and of course what huge foreign company would refloat on the LSE when you have NASDAQ, or bother floating on both?
Can you imagine if America had decided to allow Intel or Apple to be sold to a company in another country? Same sentiment.
- Yep I'm a pissed off ex-ARM shareholder forced out by the board's buyout decision and Mrs May waving it through.
After reading the article: I suddenly realize that CPUs will probably no longer pursue making "traditional computing" any faster or more efficient. Instead, everything will be focused on AI processing. There are absolutely no market/hype forces that will prompt investment in "traditional" computing optimization anymore.
I mean, yeah, there's probably three years of planning and execution inertia, but any push to close the gap with Apple by ARM / AMD / Intel is probably dead, and Apple will probably stop innovating the M series.
The thing is, there aren't that many HPC applications for that level of parallelism that aren't better served by GPUs.
Apple has to do something.
I'm not sure whether Intel CPUs can have 196GB of RAM, or whether it is some mobile RAM manufacturing limit, but I really want to have at least 96GB in a notebook or tablet.
For multithreaded applications, where all available threads are active, the advantage in performance per watt of Apple becomes much lower than "2x" and actually much lower than 1.5x, because it is determined mostly by the superior CMOS manufacturing process used by Apple and the influence of the CPU microarchitecture is small.
While the big Apple cores have much better IPC than the competition, i.e. they do more work per clock cycle and can therefore run at lower clock frequencies and lower supply voltages when at most a few cores are active, the performance per die area of such big cores is modest. For a complete chip, the die area is limited, so the best multithreaded performance is obtained with cores that have maximum performance per area, so that more cores can be crammed into a given die area.

The cores with maximum performance per area are cores with intermediate IPC, neither too low nor too high, like ARM Cortex-X4, Intel Skymont or AMD Zen 5 compact. The latter core from AMD has a higher IPC, which would have led to a lower performance per area, but that is compensated by its wider vector execution units. Bigger cores like ARM Cortex-X925 and Intel Lion Cove have very poor performance per area.
Perhaps. But that’s an edge case, very few people run their laptop at 100% for any extended period of time.
What that really means (I think) is they aren’t using the power and cooling available to them in traditional desktop setups. The iMac and the Studio/Mini and yes, even the Mac Pro, are essentially just laptop designs in different cases.
Yes, they (Studio/Pro) can run an Ultra variant (vs Max being the highest on the laptop lines), but the 2x Ultra chip so far has not materialized. Rumors say Apple has tried it but either could not get efficiencies to where they needed to be or ran into other problems connecting 2 Ultras to make a ???.
The current Mac Pro would be hilarious if it wasn’t so sad, it’s just “Mac Studio with expansion slots”. One would expect/hope that the Mac Pro would take advantage of the space in some way (other than just expansion slots, which most people have no use for aside from GPUs which the os can’t/won’t leverage IIRC).
Their most performant chip, the M3 Ultra, is only a bit faster than the 14900K, which you can get for $400 these days.
In notebooks it's been possible for years. A friend of mine had 128GB (4x32GB DDR4) in his laptop about 4-6 years ago already. It was a Dell Precision workstation (2100 euros for the laptop alone, Core i9 CPU, nothing fancy).
Nowadays you can get individual 64GB DDR5 laptop RAM sticks. As long as you can find a laptop with two RAM sockets, you can easily get 128GB of memory in a laptop.
Regarding tablets... it's unlikely to be seen (<edit> in the near future</edit>). Tablet OEMs tip their hats to the general consumer market, where <=16GB of RAM is more than enough (and 96GB of memory would cost more than the rest of the hardware, for no real user/market/sales advantage).