AWS introduces Graviton5–the company's most powerful and efficient CPU (14 comments)
Parent comment made a low-quality joke that lacked substance.
[Disclaimer: I'm the CEO of Vantage, the company that maintains the site]
Thank you for maintaining this, I do use it every few months at $DAYJOB and it's quite useful for my capacity/deployment planning.
Or is your value proposition for companies that use a bunch of different cloud providers?
This AWS EC2 site is an open-source project we maintain for the benefit of the community. So it's not directly our business, but it promotes our brand and it's a helpful site that I think should exist. It's very popular and has been around for about 15 years now.
Our main business, hosted on the main domain at https://www.vantage.sh/, is cloud cost management across 25 different providers (AWS, Azure, Datadog, OpenAI, Anthropic, etc.), and the use cases there are about carving up one bill and showing specific costs back to engineers so they can be held accountable, take action, etc. Cloud costs and their impact on companies' margins are a big enough problem for vendors like us to exist, and we're one player in a larger market.
I've been experimenting with FEX on Ampere A1 for x86 game servers, but the performance is not that impressive.
I need a reference point so I can compare it to Intel/AMD and Apple's ARM CPUs.
Otherwise it's just buzzwords and superlatives. I need numbers so I can understand.
The cobbler's children have no shoes.
We actually recently made the decision to staff someone full time on the site just to maintain it for the community. Even the JSON file for the site gets hit hundreds of thousands of times per day...it feels like it's become the de facto source of truth in the community for reliable AWS pricing information, and I believe it's powering a pretty remarkable amount of downstream applications given how much usage it's getting.
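For anyone building one of those downstream applications, here's a minimal sketch of pulling and filtering that JSON; note that the URL and field names below are my assumptions about how the site exposes its data, not a documented API, so check the project's repository for the actual schema.

```python
# Minimal sketch of consuming the site's instance data feed.
# NOTE: the URL and field names below are assumptions, not a documented API;
# check the project's repository for the real schema before relying on them.
import json
import urllib.request

INSTANCES_JSON = "https://instances.vantage.sh/instances.json"  # assumed location

def cheapest_graviton(min_vcpus: int = 8, region: str = "us-east-1"):
    """List instance types whose family name suggests Graviton (a 'g' suffix),
    with at least `min_vcpus` vCPUs, sorted by assumed on-demand Linux price."""
    with urllib.request.urlopen(INSTANCES_JSON) as resp:
        instances = json.load(resp)

    results = []
    for inst in instances:
        itype = inst.get("instance_type", "")
        family = itype.split(".")[0]
        # Heuristic: Graviton families carry a 'g' after the generation digit
        # (c8g, m7g, i4g, c7gn, ...); this is a naming convention, not a guarantee.
        if len(family) < 3 or "g" not in family[2:]:
            continue
        if int(inst.get("vCPU", 0) or 0) < min_vcpus:
            continue
        # Assumed pricing layout: pricing[region]["linux"]["ondemand"].
        price = (
            inst.get("pricing", {}).get(region, {}).get("linux", {}).get("ondemand")
        )
        if price is not None:
            results.append((float(price), itype))
    return sorted(results)

if __name__ == "__main__":
    for price, itype in cheapest_graviton()[:10]:
        print(f"{itype:16s} ${price:.4f}/hr")
```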
We acquired the site almost 5 years ago and want to continue to improve it for the community. If you have any cloud cost management needs, we're also able to help through our main business here: https://www.vantage.sh/
Awesome to see all the comments on it here!
Core for core it pales in comparison to Apple's superlative processors, and falls behind AMD as well.
But...that doesn't matter. You generally buy cloud resources for $/perf, and the Gravitons are far and away ahead on that metric.
While I know this thread will turn into some noisy whack-a-mole bit of nonsense, an easy comparison is the c8g.2xlarge vs the c8i.2xlarge: the former is Graviton 4, the latter Granite Rapids. Otherwise both have 16 GB of memory and 15 Gbps networking, and both are compute-optimized, 8 vCPU machines.
Performance is very similar. Indeed, since you herald the ffmpeg result elsewhere, the Graviton machine beats the Intel device by 16%.
And the Graviton is 17% cheaper.
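To make the arithmetic explicit, using only the relative figures in this thread (roughly 16% faster on that ffmpeg run, roughly 17% cheaper on demand), a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope perf-per-dollar comparison using only the relative
# figures cited above, not absolute prices or benchmark scores.
c8i_perf = 1.00          # normalize c8i.2xlarge (Granite Rapids) throughput to 1
c8g_perf = 1.16          # c8g.2xlarge (Graviton 4) ~16% faster on the ffmpeg run

c8i_price = 1.00         # normalize c8i on-demand price to 1
c8g_price = 1.00 - 0.17  # c8g is ~17% cheaper

ratio = (c8g_perf / c8g_price) / (c8i_perf / c8i_price)
print(f"c8g perf per dollar relative to c8i: {ratio:.2f}x")
# -> roughly 1.40x: about 40% more work per dollar on this particular workload.
```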
Like, this is a ridiculous canard to even go down. Over half of AWS' new machines are Graviton-based, but per your rhetoric they're actually uncompetitive. So I guess no one is using them? Wow, silly Amazon.
c8g passmark score: 1853
c8i passmark score: 3008
I guess the fps column isn't a good representation of single-thread score. Also, looking at the passmark scores for i4i vs i4g, i4g is about 1k and Intel is about 2k, and the more modern Graviton equivalent of i4 is the same price, so...
https://go.runs-on.com/instances/ec2/c8g
https://go.runs-on.com/instances/ec2/c8i
https://go.runs-on.com/instances/ec2/i4g
https://go.runs-on.com/instances/ec2/i4i
Silly Amazon.
See the comment by electroly. They actually know what they're talking about.
See, the FPS score is for the whole machine. The c8g gives you 8 real cores. The c8i gives you 4 real cores plus 4 hyperthreading pseudo-cores. So for those two machines the c8g unequivocally gives you more absolute computing performance, even though the passmark single-thread score (on a single core) of the c8i is better than that of a single core on the c8g. And the c8g comes at a big discount as well.
That's...the point. The Graviton processors are cheaper per core, and lower performance per core, and you make it up in bulk. You get more performance per $ if you're okay with the ARM stack and your software is good with it, and this is basically universally true comparing Graviton instances versus Intel/AMD alternatives.
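A rough way to reconcile the single-thread passmark numbers quoted upthread with the whole-machine picture; the ~1.3x SMT scaling factor is an assumption, and single-thread scores are only a crude proxy for aggregate throughput:

```python
# Crude whole-machine estimate from the single-thread scores quoted upthread.
# Assumptions (not measurements): the c8g's 8 vCPUs are 8 physical cores, the
# c8i's 8 vCPUs are 4 physical cores with SMT, and SMT adds ~30% aggregate throughput.
c8g_single_thread = 1853
c8i_single_thread = 3008

c8g_estimate = c8g_single_thread * 8          # 8 real cores
c8i_estimate = c8i_single_thread * 4 * 1.30   # 4 real cores plus assumed SMT uplift

relative_price_c8g = 0.83                     # ~17% cheaper, from upthread

print(f"c8g aggregate estimate: {c8g_estimate:.0f}")
print(f"c8i aggregate estimate: {c8i_estimate:.0f}")
print(f"c8g perf per dollar vs c8i: {(c8g_estimate / relative_price_c8g) / c8i_estimate:.2f}x")
# The raw-throughput estimates land in the same ballpark, which is why the
# price difference ends up deciding performance per dollar.
```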
You're wrong. Maybe cite some other random nonsense now?
It really is incredible how ARM basically commoditized processors (in a good way).
I would much rather see some kind of mandatory open-market sale of all CPU lines so that in theory you could run Graviton procs in Rackspace, Apple M5 servers in Azure, etc.
Yes and maybe no. They do "cheat" in that internal/managed services often use Graviton where possible. It works out cheaper without the Intel/AMD "tax".
Neoverse V3 is the server version of the Cortex-X4 core, which has been used in a large number of smartphones.
The Neoverse V3 and Cortex-X4 cores are very similar in size and performance to the Intel E-cores Skymont and Darkmont (the E-cores of Arrow Lake and of the upcoming Panther Lake).
Next year Intel will launch a server CPU with Darkmont cores (Clearwater Forest), which will have cores similar to those of this AWS Graviton5, but for now Intel only has the Sierra Forest server CPUs with E-cores (part of the Xeon 6 series), which use much weaker cores than those of the new Graviton5 (i.e. cores equivalent to the Crestmont E-cores of the old Meteor Lake).
AMD Zen 5 CPUs are significantly better for computationally intensive workloads, but for general-purpose applications without great computational demands the cores of Graviton5, and also Intel Skymont/Darkmont, have greater performance per die area and per watt, and therefore lower cost.
That is not entirely accurate. The X4 is a big-core design. All of its predecessors and successors have had >1 mm² of die area. The X4 is already on the smaller end; it was the last Arm design before they went all in chasing Apple's A-series IPC. IIRC it was about 1.5 mm², depending on L2 cache. Intel's E-cores have always been below 1 mm², and again IIRC that die size has always been Intel's design guideline and limit for E-core designs.
The more recent X5/X925 and X6/X930/C1 Ultra (I can no longer remember those names) are double the size of the X4, with the X930/C1 Ultra very close to A19 Pro performance, within ~5%.
I assume they stick with the X4 simply because it offers the best performance per die area, but it is still a 2-3 year old design. On the other hand, I am eagerly waiting for Zen 6c with 256 cores. I can't wait to see the Oxide team using Zen 6c; forget about the cloud, 90%+ of companies could fit their IT resources in a few racks.
The cores now designed by Arm for non-embedded applications come in 4 sizes, of which the smaller 2 sizes correspond to the original "big" and "little" sizes; what was originally the big size has been continued into what are now the medium-to-small cores, and the last such core before the rebranding was Cortex-A725.
Cortex-X4 is of the second size, medium-to-large. Cortex-X925 was the last big core design before Arm changed the branding this year, so several recent smartphones use Cortex-X925 as the big core, Cortex-X4 as the medium-sized core and Cortex-A725 as the small cores, omitting the smallest Cortex-A520 cores.
Cortex-X4 and Intel Skymont have exactly the same size, 1.7 square millimeters with 1 MB of L2 cache (in Dimensity 9400 and Lunar Lake). This is about a third of the area of a big core like an Intel P-core and less than half the area of a Zen 5 compact core (but AMD uses an older, less dense CMOS process; had AMD also used a "3 nm" process the area ratio would not have been so great, and Zen 5 has double the throughput for array operations).
Moreover, Neoverse V3/Cortex-X4 and Intel Skymont/Darkmont have approximately the same number of execution units of each kind in their backends. Only their frontends are very different, which is caused by the different ISAs that must be decoded, AArch64 vs. x86-64.
The last Arm big core before the rebranding, Cortex-X925, was totally unsuitable as a server core, as it had very poor performance per area: double the area of Cortex-X4, but performance greater by only a few tens of percent at most. Therefore the performance per socket of a server CPU implemented with Cortex-X925 would have been much lower than that of a Graviton5, due to the much lower number of cores per socket that could have been achieved.
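To put rough numbers on that per-socket argument (the 1.3x per-core uplift for Cortex-X925 is an illustrative assumption at the top of the "few tens of percent" range, and the die budget is arbitrary):

```python
# Rough per-socket throughput comparison under a fixed silicon budget.
# Inputs are approximations from the discussion above; the X925 per-core uplift
# of 1.3x is an assumed upper bound, not a measured number.
core_area_x4   = 1.7        # mm^2, Cortex-X4/Neoverse V3 class core with 1 MB L2
core_area_x925 = 2 * 1.7    # mm^2, roughly double the X4

perf_per_core_x4   = 1.0
perf_per_core_x925 = 1.3    # "a few tens of percent" better at most

core_budget_mm2 = 170.0     # arbitrary illustrative budget for core area per socket

cores_x4   = core_budget_mm2 // core_area_x4
cores_x925 = core_budget_mm2 // core_area_x925

print(f"X4-class cores per socket:   {cores_x4:.0f}, relative throughput {cores_x4 * perf_per_core_x4:.0f}")
print(f"X925-class cores per socket: {cores_x925:.0f}, relative throughput {cores_x925 * perf_per_core_x925:.1f}")
# With double the area per core but only ~1.3x the per-core performance, the
# X925 socket ends up at roughly 65% of the X4 socket's aggregate throughput.
```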
Cortex-X4 was launched in 2023; it was the big core of the 2024 flagship smartphones and then became the medium core of the 2025 flagship smartphones. Its server variant, Neoverse V3, was launched in 2024 and has been deployed in products only this year, first by NVIDIA (in Thor) and now by AWS.
It is not at all an obsolete core. As I have said, only next year will Intel have a server CPU with E-cores as good as Cortex-X4. We do not yet know any real numbers for the newly announced Arm cores that have replaced Cortex-A520, Cortex-A725, Cortex-X4 and Cortex-X925, so we do not know whether they are really significantly better. The numbers Arm uses in presentations cannot be verified independently, and usually when performance is measured much later in actual products it does not match the optimistic predictions.
The new generation of cores might be measurably better only for computational applications, because they now include matrix execution units, but their first implementation may not be optimal yet, as happened in the past with the first implementation of SVE, when the new cores had worse energy efficiency than the previous generation (something that was corrected by improved implementations later).
I have no connection with them, so I have no idea when these instances will be generally available.
Privileged big customers appear to be already testing them.
It seems obvious to me that AWS is using their market dominance to shift workloads to Graviton.
The competitive advantage right now is in NVIDIA chips, and I guess AWS needs all their free cash to buy those instead of CPUs that offer no competitive advantage.
I believe the main motivator for AWS is efficiency, not performance. Dollars of income per watt spent are much better for them on Graviton.
Not if you are looking at price/performance. AWS could be taking a loss to elevate the product, though; no way to know for sure.
Meanwhile, when AWS announces a new chip, it's probably something they have already been building out in their datacenters.
Don't they still offer free nano EC2s? This is not a better price than $0.