"Qualcomm’s counsel turned Arm’s Piano analogy on its head. Arm compared its ISA to a Piano Keyboard design during the opening statement and used it throughout the trial. It claimed that no matter how big or small the Piano is, the keyboard design remains the same and is covered by its license. Qualcomm’s counsel extended that analogy to show how ridiculous it would be to say that because you designed the keyboard, you own all the pianos in the world. Suggesting that is what Arm is trying to do."
Source: https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-3...
Wouldn't that be similar to the Google v Oracle Java API case except the claim would be even stronger - that all programs making use of the Java API were derivative works of the Java API and thus subject to licensing arrangements with Oracle?
Or similarly, a hypothetical claim by Intel that a compiler such as LLVM is derivative work of the x86 ISA.
That can't possibly be right. What have I misunderstood about this situation?
> "Throughout expert testimony, Arm has been asserting that all Arm-compliant CPUs are derivatives of the Arm instruction set architecture (ISA)."
> "Arm countered with an examination of the similarities in the register-transfer language (RTL) code, which is used in the design of integrated circuits, of the latest Qualcomm Snapdragon Elite processors, the pre-acquisition Nuvia Phoenix processor, and the Arm ISA (commonly referred to as the Arm Arm)."
Were they trying to argue that the RTL is too similar to the pseudocode in the ARM ARM or something?? That is absolutely crazy. (Of course, [when we have a license agreement and] you publish a public specification for the interface, I am going to use it to implement the interface. What do you expect me to do, implement the ARM ISA without looking at the spec?)
edit: Wow, I guess this really is what they were arguing?? Look at the points from Gerard's testimony [^2]. That is absolutely crazy.
[^1]: https://www.forbes.com/sites/tiriasresearch/2024/12/19/arm-s...
[^2]: https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-2...
I really feel like I must have misunderstood something here.
In other words, Nuvia failing to destroy the designs might or might not have been a breach of contract. At least if I understand all of this correctly. But I feel like I must be missing some key details.
I assume Arm has some patents on the ISA [1] and the only way to get a license to them is to sign something that effectively says all your work exists at Arm's sufferance. After that we're just negotiating the price.
[1] You and I hate this but it's probably valid in the US.
Because all Arm ALAs are secret, we do not know if Arm has any right to do such a unilateral cancellation.
It is likely that the ALAs cannot be cancelled without a good reason, like breach of contract, so the cancellation of the Qualcomm ALA must be invalid now, after the trial.
The conflict between Arm and Qualcomm started because the Qualcomm ALA, which Qualcomm says is applicable to any Qualcomm product, specifies much lower royalties than the Nuvia ALA.
This is absolutely normal, because Qualcomm sells a huge number of CPUs for which it has to pay royalties, while Nuvia would have sold a negligible number of CPUs, if any.
Arm receives a much greater revenue based on the Qualcomm ALA than what they would have received from Nuvia.
Therefore the real reason for the conflict is that Qualcomm has stopped using CPU cores designed by Arm, so Arm no longer receives any royalties from Qualcomm for licensing those cores, and those royalties would have been higher than the royalties the ALA specifies for Qualcomm-designed cores.
When Arm gave an architectural license to Nuvia, they did not expect that the cores designed by Nuvia could compete with Arm-designed cores. Nuvia being bought by Qualcomm changed that, and Arm is now attempting to crush any competition for its own cores.
Intel has been lenient toward compiler implementers, but their stance is that emulation of x86 instructions still under patent (e.g., later SSE, AVX512) is infringing if not done under a license agreement. This has had negative implications for, for example, Microsoft's x86 emulation on ARM Windows devices.
(I'm guessing Apple probably did the right thing and ponied up the license fees.)
I thought Qualcomm was way more professional throughout. ARM's case rested on what they think was written in the contract and what they "should" have written in the contract, both of which clearly contradict what Qualcomm showed.
It is more of a lesson for ARM to learn, and now the damage has been done. This also makes me wonder who was pushing this lawsuit. SoftBank?
I also gained more respect for Qualcomm, after what they showed in the Apple vs Qualcomm case and here.
Side note: ARM's designs have caught on. The Cortex X5 is close to Apple's design. We should have news about the X6 soon.
I thought the entire point of this was that Arm was trying to prevent Qualcomm from switching away from products that fall under the TLA. Isn't revenue from TLA fees a huge difference from that of ALA fees?
"I don't think either side had a clear victory or would have had a clear victory if this case is tried again," Noreika told the parties.
After more than nine hours of deliberations over two days, the eight-person jury could not reach a unanimous verdict on the question of whether startup Nuvia breached the terms of its license with Arm.
[1] https://www.reuters.com/legal/us-jury-deadlocked-arm-trial-a...
When your contracts are airtight, you usually want a bench trial. Then the defendant demands a jury.
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-1...
> Arm’s opening statement.. presented with a soft, almost victim-like demeanor. Qualcomm’s statement was more assertive and included many strong facts (e.g., Arm internal communications saying Qualcomm has “Bombproof” ALA). Testimonials were quite informative and revealed many interesting facts, some rumored and others unknown (e.g. Arm considered a fully vertically integrated approach).
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-2...
> The most important discussion was whether processor design and RTL are a derivative of Arm’s technology.. This assertion of derivative seems an overreach and should put a chill down the spine of every Arm customer, especially the ones that have ALA, which include NXP, Infineon, TI, ST Micro, Microchip, Broadcom, Nvidia, MediaTek, Qualcomm, Apple, and Marvell. No matter how much they innovate in processor design and architecture, it can all be deemed Arm’s derivative and, hence, its technology.
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-3...
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-4...
ARM did massive damage to their ecosystem for nothing. There will for sure be consequences of suing your largest customer.
Lots of people that would have defaulted to licensing designs from ARM for whatever chips they have planned will now be considering RISC-V instead. ARM just accelerated the timeline for their biggest future competitor. Genius.
I’ve written about that here: https://benhouston3d.com/blog/risc-v-in-2024-is-slow
They pivoted to embedded shortly after spinning off into a separate company.
Acorn Computers started off much earlier (I owned an Acorn Atom when it was released), which begat the BBC Micro, then the Electron, and then the Archimedes.
At that time ARM was just an architecture owned by Acorn. They created it with VLSI Technology (Acorn's silicon partner), and the first RISC chip was used with the BBC Micro (as a second processor) before the design was pivoted to the Archimedes.
Acorn itself was initially purchased by Olivetti, who eventually sold what remained years later to Morgan Stanley.
The ARM division was spun off as "Advanced RISC Machines" in a deal with both Apple and VLSI Technology, after Olivetti came onto the scene.
It is this company that we now know as Arm Holdings.
So it’s not entirely accurate to claim “they had a full computer system” as that was Acorn Computers, PLC.
Some of the other details you have are wrong too, to the point your comment is really quite misleading.
Anyone wanting an accurate version should check wikipedia: https://en.m.wikipedia.org/wiki/Acorn_Computers
(To be blunt the above comment is like a very bad LLM summary of the Acorn article).
As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM. The people designing their own cores just haven't bothered to do so yet.
RISC-V isn't competitive in 2024, but that doesn't mean that it still won't be competitive in 2030 or 2035. If you were starting a project today at a company like Amazon or Google to develop a fully custom core, would you really stick with ARM - knowing what they tried to do with Qualcomm?
Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.
We still have problems with software not being optimised for Arm these days, which is just astounding given its prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present for x86, and Google has their own Arm-based chips.
Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.
I think the keys to RISC-V in terms of software will be:
LLVM (gives us C, C++, Rust, Zig,etc), this is probably already happening?
JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest)
JVM (Does Oracle have interest at all? Could be a major roadblock unless Google funds it, again depends on Android interest).
So Android on Risc-V could really be a game-changer but Google just backed down a bit recently.
Dotnet (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious market share/funding.
It'll remain a niche, but I do really think Android devices (or something else equally popular, a Chinese home PC?) would be the game-changer to push demand over the top.
Golang's compiler is weak compared to the competition. It's probably not a good demonstration of most ISAs really.
This meant that you had (a) strong demand for ARM apps/libraries, (b) large pool of testers, (c) developers able to port their code without needing additional hardware, (d) developers able to seamlessly test their x86/ARM code side by side.
RISC-V will have none of this.
I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of architecture-specific bugs? You can't convince me that choosing an esoteric, lightly used arch like big-endian PowerPC won't come with bugs related to it that you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.
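To make those architecture-specific bugs concrete, here is a minimal sketch (mine, not from the thread; the names are made up for illustration) of the classic byte-order mistake that works fine on little-endian x86/ARM but silently misreads data on a big-endian target such as BE PowerPC:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Non-portable: reinterprets the bytes in host byte order,
     * so the result depends on the machine's endianness. */
    uint32_t read_u32_naive(const uint8_t *buf) {
        uint32_t v;
        memcpy(&v, buf, sizeof v);
        return v;
    }

    /* Portable: assembles the value explicitly as little-endian,
     * so every architecture gets the same answer. */
    uint32_t read_u32_le(const uint8_t *buf) {
        return (uint32_t)buf[0]
             | (uint32_t)buf[1] << 8
             | (uint32_t)buf[2] << 16
             | (uint32_t)buf[3] << 24;
    }

    int main(void) {
        const uint8_t wire[4] = {0x78, 0x56, 0x34, 0x12}; /* little-endian 0x12345678 */
        printf("naive: 0x%08x  portable: 0x%08x\n",
               (unsigned)read_u32_naive(wire), (unsigned)read_u32_le(wire));
        /* On x86-64/aarch64 both print 0x12345678; on big-endian PowerPC
         * the naive version prints 0x78563412 instead. */
        return 0;
    }

Nothing stops code like the naive version from compiling cleanly on every architecture a distro supports; it just quietly does the wrong thing on the lightly tested ones, which is why "it builds" is not the same as "it's been ported".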
It happened to me: a small project I put on my ARM-based AWS server was not working even though it was compiled for the architecture.
Having a clear software stack that you control plays a key role in this success, right?
Wanting to have the general solution with millions of random off label hardware combinations to support is the challenge.
Edit, link: https://box86.org/2024/08/box64-and-risc-v-in-2024/
The new (but tier-1, like x86-64) Debian port is doing alright[0]. It'll soon pass ppc64 and close in on arm64.
We can't know, and won't until 2030 or 2035. Humans are just not very good when it comes to projecting the future (if the predictions of the 1950s-60s were correct, I would be typing this up from my cozy cosmic dwelling on a Jovian or Saturnian moon, after all).
History has numerous examples where better ISA and CPU designs lost out to a combination of mysterious and other compounding factors that are usually attributed to «market forces» (whatever that means to whomever). The 1980s-90s were the heyday of some of the most brilliant ISA designs, and nearly everyone was confident that a design X or Y would become dominant, or the next best thing, or anywhere in between. Yet we were left with an x86 monopoly for several decades that has only recently turned into a duopoly because of the arrival of ARM into the mainstream, and through a completely unexpected vector: the advent of smartphones. It was not the turn that anyone expected.
And since innovations tend to be product oriented, it is not possible to even design, let alone build, a product around something that does not exist yet. Breaking new ground in CPU design requires the involvement of a large number of driving and very un-mysterious (so to speak) forces, and exorbitant investment (from the product design and manufacturing perspectives) that is available only to the largest incumbents. And even that is not guaranteed, as we have seen with the Itanium architecture.
So unless the incumbents commit and follow through, it is not likely (or at least not obvious) that RISC-V will enter the mainstream; it will rather remain a niche (albeit a viable one). Within the realm of possibility, it can be assessed as «maybe» at this very moment.
I would bet on China making RISC-V the default solution for entry-level and cost-sensitive commodity devices within the next couple of years. It's already happening in the embedded space.
The row with Qualcomm only validates the rationale for fast-iterating companies to lean into RISC-V if they want to meaningfully own any of their processor IP.
The fact that the best ARM cores aren't actually designed by ARM, yet ARM claims them as its IP, is really enough to understand that migrating to RISC-V is eventually going to be on the table as a way to maximize shareholder value.
Lack of a reg+shifted-reg addressing mode, and/or things like BFI/UBFX/TBZ.
The perpetual promise of magic fusion inside the cores has not played out. No core exists, to my knowledge, that fuses more than two instructions at a time. Most of those operations take more than two RISC-V instructions to express; thus no core exists that could fuse them (see the sketch below).
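For anyone who hasn't looked at the actual instruction sequences, here is a minimal sketch (mine, not from the thread) of the reg+shifted-reg point. Exact output depends on the compiler and flags, but the instruction counts are representative:

    /* A simple indexed load and what it roughly compiles to. */
    int load_elem(const int *a, long i) {
        return a[i];
    }

    /* AArch64: one instruction, thanks to reg + shifted-reg addressing:
     *     ldr  w0, [x0, x1, lsl #2]
     *
     * RV64GC without Zba: three instructions; a core would have to fuse
     * all three (more than the two-instruction fusion seen in practice)
     * to match:
     *     slli a1, a1, 2
     *     add  a0, a0, a1
     *     lw   a0, 0(a0)
     *
     * RV64 with the Zba extension: two instructions:
     *     sh2add a0, a1, a0
     *     lw     a0, 0(a0)
     */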
Whether prebuilt distribution binaries support it or not, I can't tell. A quick glance at the Debian and Fedora wiki pages doesn't reveal which profile they target, and I CBA to boot an image in qemu to check. In the worst case they target only RV64GC, so they won't have Zba. Source distributions like Gentoo would not have a problem.
In any case, talking about the current level of extension support is moving the goalposts. You countered "there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM" with "Lack of reg+shifted reg addressing mode", which is an argument about ISA, not implementation.
Ubuntu 25.10 -> RVA23
Ubuntu 26.04 (LTS) -> RVA23
where are they?
On the open-source front, I can right now download an RVA23-supporting RISC-V implementation, simulate the RTL, and have it outperform my current Zen 1 desktop per cycle in scalar code: https://news.ycombinator.com/item?id=41331786 (see the XiangShanV3 numbers)
How long is this supposed to take?
How long until it is accepted that RISC-V looks like a semiconductor nerd snipe of epic proportions, designed to divert energy away from anything that might actually work? If it was not designed for this, it is certainly what it has achieved.
The absolutely essential bitmanip and vector extensions were just ratified at the end of 2021 and the also quite important vector crypto just in 2023.
Somehow I suspect in 10 years there will be a new set of extensions promising to solve all your woes.
Seriously, the way to do this is for someone to just go off and do what it takes to build a good CPU and for the ISA it uses to become the standard. Trying to do this the other way around is asking for trouble, unless sitting in committees for twenty years is actually the whole idea.
Ventana announced their second-gen Veyron 2 core at the beginning of this year and they are releasing a 192-core 4nm chip using it in 2025. They claim Veyron 2 is an 8-wide decoder with a uop cache allowing up to 15-wide issue and a 512-bit vector unit too. In raw numbers, they claim SpecInt per chip is significantly higher than an EPYC 9754 (Zen4) with the same TDP.
We can argue about what things will look like after it launches, but it certainly crushes the idea that RISC-V isn't going to be competing with ARM any time soon.
If Qualcomm switches its instruction decoding over, you'll likely see a dramatic difference.
Also correct: RISC-V is not anywhere near competitive to ARM at the level that Qualcomm operates.
Also, actually searching for the chip in question is impossible.
I highly recommend it, most incisive RISC-V article I've read.
Geekbench indeed does not support the Vector extension, and thus yields very poor results on RISC-V.
RISC-V indeed does not have pipeline reordering.
The Geekbench score thing is a strawman invented to distract from that, no one has mentioned Geekbench except the people arguing it doesn't matter. Everyone agrees, it doesn't matter. So why pound the table about it?
But if you're talking IP, which would be what matters for the argument being made (core IP to use in a new design), here's where we're at (thanks to camel-cdr- on reddit[0]):
(rule of thumb: SPEC2017 * 10 ≈ SPEC2006; see the normalization sketch after the list)
SiFive P870-D: >18 SpecINT2006/GHz, >2 SpecINT2017/GHz
Akeana 5300: 25 SpecINT2006/GHz @ 3GHz
Tenstorrent Ascalon: >18 SpecINT2006/GHz, IIRC they mentioned targeting 18-20 at a high frequency
Some references for comparing:
Apple M1: 21.7 SpecINT2006/GHz, 2.33 SpecINT2017/GHz
Apple M4: 2.6 SpecINT2017/GHz
Zen5 9950x: 1.8 SpecINT2017/GHz
Current license-able RISC-V IP is certainly not slow.
0. https://www.reddit.com/r/hardware/comments/1gpssxy/x8664_pat...
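A minimal sketch (mine, using only the numbers quoted above and the ~10x rule of thumb) that puts everything on one SPECint2017-per-GHz basis, so the licensable RISC-V IP can be compared directly against the Apple and AMD reference points:

    #include <stdio.h>

    /* Normalize the quoted figures to SPECint2017/GHz. Entries quoted only in
     * SPECint2006/GHz are divided by 10 per the rule of thumb; the factor is
     * approximate (the M1 row gives 21.7/10 = 2.17 estimated vs 2.33 measured). */
    struct core { const char *name; double spec2017_per_ghz; int estimated; };

    int main(void) {
        struct core cores[] = {
            {"SiFive P870-D",       18.0 / 10.0, 1}, /* from >18 SPECint2006/GHz */
            {"Akeana 5300",         25.0 / 10.0, 1}, /* from 25 SPECint2006/GHz  */
            {"Tenstorrent Ascalon", 18.0 / 10.0, 1}, /* from >18 SPECint2006/GHz */
            {"Apple M1",            2.33,        0},
            {"Apple M4",            2.6,         0},
            {"Zen5 9950X",          1.8,         0},
        };
        for (size_t i = 0; i < sizeof cores / sizeof cores[0]; i++)
            printf("%-20s %.2f SPECint2017/GHz%s\n", cores[i].name,
                   cores[i].spec2017_per_ghz, cores[i].estimated ? " (est.)" : "");
        return 0;
    }

On that basis the quoted RISC-V IP sits roughly between Zen 5 and Apple M-class territory per clock, which is the point being made: whatever else holds RISC-V back, the licensable cores themselves are no longer slow.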
This is unbelievably understated. If I were Qualcomm, I would put parts of the Nuvia team's expertise to work designing RISC-V applications cores for their various SoC markets.
Software ecosystem either takes lots of time (see ARM) or you need to be in a position to force it (Apple & M chips).
RISC-V is still a long way off from consumer (or server) prime time
No, it's not just around-the-corner but Qualcomm has a role to play here. Not like they should just sit on the sidelines and say "call me when we are RISC-V"
Or is that question irrelevant in light of the other findings, and the legal fight is actually over, with Qualcomm as the clear winner?
ARM should be able to re-file the lawsuit and get financial damages out of Nuvia, which Qualcomm will need to pay. But I doubt the damages will be high enough to bother Qualcomm. I don't think ARM will even bother.
As far as I could tell, this was never about money for ARM. It was about control over their licensees and the products they developed. Control which they could turn into money later.
The Nuvia ALA (architecture license agreement) specified much higher royalties, i.e. a much bigger cut for Arm, than the Qualcomm ALA.
The official reason for the conflict is that Qualcomm says the Qualcomm ALA is applicable to anything made by Qualcomm, while Arm says that for any Qualcomm product that includes a trace of something designed at the former Nuvia company, the Nuvia ALA must be applied.
The real reason for the conflict is that Qualcomm is replacing the CPU cores designed by Arm with CPU cores designed by the former Nuvia team in all their products. Had this not happened, Arm would have received a much greater revenue as a result of Nuvia being bought by Qualcomm, even when applying the Qualcomm ALA.
This is what you use to fund the next generation of said IP. There is no magic.
This is why companies are pushing toward RISC-V so hard. If ARM's ISA were open, then ARM would have to compete with lots of other companies to create the best core implementations possible or disappear.
The number of servers with Arm-based CPUs is growing fast, but they are not sold on the free market, they are produced internally by the big cloud operators.
Only Ampere Computing sells a few Arm-based CPUs, but they have fewer and fewer customers (i.e. mainly Oracle), after almost every big cloud operator has launched their own server CPUs.
So for anyone hoping to enter the market of Arm-based server CPUs the chances of success are extremely small, no matter how good their designs may be.
Meanwhile, Qualcomm's ALA expires in 2033. They will almost certainly have launched RISC-V chips by then specifically because they know their royalties will be going WAY up if they don't make the switch.
So the actual central issue was if Qualcomm had the right to transfer the technology developed under the Nuvia architecture license to the Qualcomm architecture license.
It strikes me as a surprising diversion to this, and I wonder how prepared for this outcome the respective teams were.
The first argument was always the weaker one: the license explicitly banned licence transfers between companies without explicit prior permission. Qualcomm were arguing that the fact they already had a licence counted as prior permission.
But ARM bypassed that argument by just terminating the Nuvia licence, and by the time of the lawsuit Qualcomm was down to the second argument. Which was good; it was the much stronger argument.
Qualcomm has for years sworn they didn't transfer the existing Nuvia tech. It's the Nuvia team, and a from-scratch implementation. Qualcomm was saying pretty strongly they didn't allow any direct Nuvia IP to be imported and pollute the new design.
They saw this argument coming from day 1 & worked to avoid it.
But every time this case comes up, folks immediately revert to the ARM position that it's Nuvia IP being transferred. That alone is taking ARM's side, and seems not to resemble what Qualcomm tried to do.
Edit: I guess it wouldn't be that simple to do
Sometimes it's the entire goal. Leave a company to fill a need it can't solve itself, with the plan to get bought back in later.
Times have changed tho so who knows.
ARM is (edit) figuratively blowing up their ecosystem for no reason; now everyone will be racing to develop RISC-V just to cut out ARM...
With explosives?