Free interactive tool that shows you how PCIe lanes work on motherboards
221 points | 2 days ago | 15 comments | mobomaps.com
nirav72
1 day ago
[-]
Nice! One suggestion - please add AM4 socket boards. With current memory prices, AM5 with DDR5 is becoming unattainable for some. DDR4 prices are rising as well, but not nearly as badly as DDR5's.
reply
Dylan16807
4 hours ago
[-]
So you're specifically considering the people that would have gone AM5 but are now looking at AM4 at the end of 2025 and into 2026?

Is that a significant number of people? I kind of expect almost everyone that waited this long to sit tight on their current builds and keep waiting until RAM goes back down.

reply
KronisLV
24 minutes ago
[-]
If I needed a new build right now, it would most likely be AM4. Going for the latest gen would just be outside my price range, whereas AM4 still has plenty of good chips and motherboards out there.
reply
dontlaugh
4 hours ago
[-]
Some don’t already have a build.

AM4 has high-end gaming CPUs available, and there are no major limitations yet besides future upgrades.

AM4 parts probably make the best-value gaming build you can get right now.

reply
Dylan16807
3 hours ago
[-]
> Some don’t already have a build.

The number of people that don't have a previous computer and are shopping based on individual parts is even smaller.

> AM4 parts probably make the best-value gaming build you can get right now.

Has the value improved in the last year and a half? The people that would want AM4 have had a long time to buy AM4.

reply
dontlaugh
2 hours ago
[-]
Yes, due to DDR5 prices going up after a period of people upgrading to AM5. There is currently a lot of second hand AM4 DDR4 stock, making the value particularly good.

FWIW I recently upgraded and ended up getting something else, mostly due to the lack of availability of ITX AM4 motherboards. I got one with a soldered high-end laptop CPU; it was much cheaper.

reply
Dylan16807
1 hour ago
[-]
DDR5 price increases don't make AM4 improve, they just make AM5 worse.

Consistent drops from secondhand stock sound good.

reply
dontlaugh
46 minutes ago
[-]
It makes AM5 less attractive and AM4 better value relatively. This situation will end eventually, of course.
reply
matja
15 hours ago
[-]
How can I contribute the data for the boards I own which are not on the site?
reply
rao-v
18 hours ago
[-]
I’ve been struggling to find an AM5 board that can run three MI50s at x4. This is perfect, thank you.

Hmm, are you sure about some of the PCIe slots? I think some marked as x4 get downgraded to x1 on these boards…

Further edit: this may be accurate - how are you getting this / confirming it?
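
One way to confirm what a slot actually negotiates, at least on Linux, is to read the per-device link attributes from sysfs; a rough sketch (lspci -vv reports the same LnkCap/LnkSta information):

    # Sketch: compare each PCI device's negotiated link against its maximum (Linux sysfs).
    # A card sitting "in an x4 slot" that only trained at x1 will show up here.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        try:
            cur_w = (dev / "current_link_width").read_text().strip()
            max_w = (dev / "max_link_width").read_text().strip()
            cur_s = (dev / "current_link_speed").read_text().strip()
            max_s = (dev / "max_link_speed").read_text().strip()
        except OSError:
            continue  # not every PCI function exposes link attributes
        note = "  <-- narrower than the device supports" if cur_w != max_w else ""
        print(f"{dev.name}: x{cur_w} @ {cur_s} (device max x{max_w} @ {max_s}){note}")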

reply
tagyro
17 hours ago
[-]
For disclosure, this was created by "Ronin Wilde" - https://www.youtube.com/watch?v=cgdXj75VSMo

I found it useful and thought others might also like it.

reply
max002
1 hour ago
[-]
So cool xD I think you could turn it into a tool with premium features for people who are learning :)
reply
throw7
16 hours ago
[-]
I wish all manufacturers clearly gave info like this up front. AM4 boards would be nice.
reply
PunchyHamster
16 hours ago
[-]
Yeah, my ASRock has a nice map of every lane and interface and where they connect on the board. Especially important as some devices go through a second I/O expander.
reply
temp0826
13 hours ago
[-]
Probably a good thing SLI fell out of fashion. There are no consumer boards with multiple x16 slots, just a few with two x8 (gated behind a "mode" switch). A few years ago it looked like we were on our way to four full x16 slots. For CUDA/LLM/whatever, does it really matter if the cards are in x1 slots?
reply
cjensen
6 hours ago
[-]
It's the other way around. SLI falling out of fashion is why there are no consumer boards with multiple x16 slots. There's no longer any demand for it on the consumer side, so the CPU vendors only provide lots of PCIe lanes for expensive chips.

On the server side, motherboards with seven x16 slots exist.

reply
Dylan16807
4 hours ago
[-]
I would expect x8 at 5.0 speeds to be plenty for SLI. That's twice as fast as x16 slots were around the end of the SLI era.
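
Back-of-envelope numbers behind that, assuming late-SLI-era cards topped out at PCIe 3.0:

    # Per-direction bandwidth: GT/s * 128/130 (line encoding) / 8 bits, times lane count.
    per_lane = lambda gts: gts * 128 / 130 / 8          # GB/s per lane, Gen3 and newer
    print(f"PCIe 3.0 x16: {per_lane(8) * 16:.1f} GB/s")   # ~15.8 GB/s
    print(f"PCIe 5.0 x8:  {per_lane(32) * 8:.1f} GB/s")   # ~31.5 GB/s, i.e. about 2x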
reply
hoss1474489
12 hours ago
[-]
GPUs in x16 slots are still important for LLM stuff, especially multi-GPU setups, where lots of data needs to move between cards during computation.
reply
Dylan16807
4 hours ago
[-]
Depends on what you're doing. I'm pretty sure the bandwidth needed for inference isn't much.
reply
tryauuum
11 hours ago
[-]
... shouldn't the logic be opposite? "Bad that SLI went out of fashion, there's no way for two GPUs to communicate fast over pcie, and SLI would allow such fast bridge"
reply
wtallis
11 hours ago
[-]
Whether or not SLI remained viable for gaming, Broadcom was going to jack up the prices on PCIe switches to the enterprise-only range. That's the real reason why consumer motherboards don't have more GPU slots. Mainstream consumer CPU sockets never had a wealth of PCIe lanes, there was just a brief span of years where PCIe switches were cheap so high-end consumer boards could offer several x8 or x16 slots (sharing bandwidth in ways that make diagrams like these important).

In previous decades, non-mainstream CPU sockets were also more accessible to consumer budgets; first-gen Threadripper started at only 8 cores, so it was possible to pay extra for more memory channels and IO lanes without also buying an excess of CPU cores. But that had little to do with the popularity or viability of multi-GPU consumer systems.

reply
crote
4 hours ago
[-]
But PCIe switches are now more common than ever. How else do you think those high-end consumer boards are able to provide six M.2 slots?
reply
wtallis
3 hours ago
[-]
PCIe switches with current-generation link speeds and high lane counts are prohibitively expensive and have been absent from consumer motherboards since PCIe gen3 showed up.

The chipsets on consumer motherboards are pretty much PCIe switches plus some SATA and USB controllers, but they're clearly in a different league from anything that's relevant to connecting GPUs. The host interfaces are x4 or occasionally x8, and the downstream side doesn't support links wider than x4, with at most a few of those. The link speeds are often a generation (sometimes two) behind what the CPU's PCIe lanes support. The high-end motherboards for AMD's consumer platform support more SSD slots by daisy-chaining a second chipset off the first; you get more M.2 slots but it's all still sharing a single PCIe gen4 x4 link to the CPU.

In the PCIe gen2 era, it was common to see high-end consumer motherboards include a 48-lane PCIe switch to take x16 from the processor and fan it out to two x16 slots or some combination of x16 and x8 slots. That kind of connectivity has vanished from consumer motherboards, and isn't really common in the server or workstation markets, either. 48-lane and larger PCIe switches exist, but are mostly just used for servers to connect lots of NVMe SSDs.
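
A toy model of what that shared uplink means in practice (the device list and figures below are illustrative assumptions, not the layout of any particular board):

    # Toy model: everything hung off the chipset shares one PCIe 4.0 x4 uplink to the CPU.
    # Numbers are illustrative assumptions, not specs of a real board.
    uplink = 16 * 128 / 130 / 8 * 4          # Gen4 x4, ~7.9 GB/s per direction

    downstream = {                            # hypothetical peak demand in GB/s
        "M.2 SSD #1 (Gen4 x4)": 7.0,
        "M.2 SSD #2 (Gen4 x4)": 7.0,
        "M.2 SSD #3 (Gen4 x4)": 7.0,
        "2.5GbE NIC": 0.3,
        "USB 10Gb/s ports": 1.2,
    }
    demand = sum(downstream.values())
    print(f"uplink {uplink:.1f} GB/s vs. simultaneous demand {demand:.1f} GB/s "
          f"({demand / uplink:.1f}x oversubscribed)")
    # Fine for bursty desktop workloads; nowhere near enough to feed several GPUs or SSDs at once.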

reply
bayindirh
4 hours ago
[-]
Not all PCIe switches are created equal. IIRC, there are two kinds, and only the expensive one works without adding excessive latency, which matters for GPUs.

NVMe traffic is very tame when compared to GPU traffic.

reply
tripdout
13 hours ago
[-]
Very cool. Seeing how almost everything from WiFi to NVMe SSDs (to apparently USB ports, sometimes?) is connected to it, is PCIe the only high-speed interconnect we have for peripherals to communicate with modern CPUs?
reply
stinkbeetle
12 hours ago
[-]
The high-speed signals that come out of mainstream CPU chips are generally DDR, SMP interconnect, and PCIe. Outside of a very few exotic things that use QPI or HT to connect, or exotic storage that might sit on DDR, yes, high-speed off-chip peripherals use PCIe.

NVLink is another one you might have heard of, although it might also fall in the exotic category. I think some systems take AXI off-chip too. So there are various other weird and wonderful things, but none you're likely to have in your PC, I think.

On-chip is another story, you can connect USB or NVMe or GPU "peripherals" using an on-chip interconnect type. But I guess you are asking about off-chip.

reply
crote
4 hours ago
[-]
USB4 needs PCIe because its Thunderbolt part has PCIe tunneling.
reply
baby_souffle
12 hours ago
[-]
> PCIe the only high-speed interconnect we have for peripherals to communicate with modern CPUs?

In a pedantic/technical sense, no. Practically speaking though, yes.

reply
sidewndr46
1 day ago
[-]
Wow, this is great! I don't know how they generate this, but it's really impressive. One of the things that has surprised me is that some older dual-socket workstations have tons of PCIe lanes, but it seems none of them are hooked up to the second CPU.
reply
rkagerer
13 hours ago
[-]
Can anyone recommend a specific, well-made, high-performance motherboard with loads of PCIe lanes and expansion slots, and sensible lane topology?

All the motherboards these days make me feel claustrophobic. My current workstation is pretty old, but feels like it had more expansion capability (relative to its time) than what's on the market today.

reply
Aurornis
13 hours ago
[-]
You’ll have to be more specific about your price range. There are a lot of server and workstation chipsets/platforms that will have a large number of PCIe lanes, but you will pay for them.

I really suggest not seeking a lot of PCIe lanes unless you really need them right now, though. The price premium for a platform with a lot of extra PCIe is very steep once you get past consumer boards. It would be a shame to spend a huge premium on a server board and settle for slower older tech CPUs only to have all of those slots sit empty.

It’s a good idea to add up the PCIe devices you will use and the actual bandwidth they need. You lose very little by running a GPU in a PCIe x8 slot instead of a full x16 slot, for example. A 10G Ethernet card only needs 1 lane of PCIe 4.0. Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers.
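
A minimal sketch of that exercise (the devices and bandwidth figures here are illustrative assumptions, not a recommendation):

    # Sanity-check lane budgets: usable per-direction bandwidth in GB/s for a given
    # generation and lane count (128b/130b encoding, other protocol overhead ignored).
    def pcie_gbps(gt_per_s: float, lanes: int) -> float:
        return gt_per_s * 128 / 130 / 8 * lanes

    checks = [
        ("10GbE NIC, needs ~1.25 GB/s", pcie_gbps(16, 1)),  # one Gen4 lane ~ 1.97 GB/s
        ("Gen4 SSD, peaks ~7 GB/s",     pcie_gbps(16, 2)),  # half its lanes ~ 3.9 GB/s
        ("GPU dropped to x8",           pcie_gbps(16, 8)),  # ~15.8 GB/s, rarely the bottleneck
    ]
    for device, available in checks:
        print(f"{device:30s} slot provides {available:5.2f} GB/s")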

reply
crote
4 hours ago
[-]
For a lot of enthusiasts the problem genuinely is the lane count.

For example, putting a cheap 2nd-hand dual 25G NIC in a DIY server is quite attractive. But those are using PCIe Gen3 - so unless you're giving it 8 lanes it is being bottlenecked.

Same on the storage side: that PCIe Gen4 x4 slot might technically have enough bandwidth for four SSDs in most storage applications, but the board doesn't support bifurcating it.

The total platform bandwidth is plenty, it just isn't available in the way I want to use it.

reply
shadowpho
13 hours ago
[-]
>Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers

Sorta yes, but kinda the other way around: you'll mostly notice it in short, high bursts of I/O. This is mostly the case for people who use them to run remote-mounted VMs.

Nowadays all NVMe drives have a cache on board (DDR3 memory is common), which is how they manage to keep up those high speeds. However, once you exhaust the cache, speeds drop dramatically.

But your point is valid that very few people actually notice a difference.

reply
wtallis
10 hours ago
[-]
You're pretty far off the mark about SSD caching. A majority of consumer SSDs are now DRAMless, and still can exceed PCIe 4.0 x4 bandwidth for sequential transfers. Only a seriously outdated SSD would still be using DDR3; good ones should be using LPDDR4 or maybe DDR4. And when an SSD does have DRAM, it isn't there for the sake of caching your data, it's for caching the drive's internal metadata that tracks the mapping of logical block addresses to physical NAND flash pages.
reply
shadowpho
9 hours ago
[-]
Here’s a page comparing the caches of 8 modern SSDs; notice how they all fall off once the cache is full.

https://pcpartpicker.com/forums/topic/423337-animated-graphs...

reply
wtallis
3 hours ago
[-]
That has nothing to do with DRAM; that would be completely obvious if you stopped to think about the cache sizes implied by writing at 5-6GB/s for tens of seconds before speeds drop. Nobody's putting 100+ GB of DRAM on a single SSD. You get at most 1GB of DRAM per 1TB of NAND.

What those graphs illustrate is SLC caching: writing faster by storing one bit per NAND flash memory cell (imprecisely), then eventually re-packing that data to store three or four memory bits per cell (as is necessary to achieve the drive's nominal capacity). Note that this only directly affects write operations; reading data at several GB/s is possible even for data that's stored in TLC/QLC cells, and can be sustained for the entire capacity of the drive.
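
Rough arithmetic behind that point (the rate and duration are illustrative values read off graphs like those linked above):

    # Rough check: how big would a DRAM write cache have to be to explain those graphs?
    write_rate = 5.5            # GB/s before the drop-off (illustrative)
    duration = 30               # "tens of seconds" of full-speed writes
    print(f"implied cache: ~{write_rate * duration:.0f} GB")     # ~165 GB -- not DRAM

    capacity_tb = 2
    dram = capacity_tb * 1      # rule of thumb: ~1 GB DRAM per 1 TB NAND, when present at all
    print(f"typical DRAM on a {capacity_tb} TB drive: ~{dram} GB")
    # The fast region has to be NAND running in SLC mode, not a DRAM buffer.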

reply
rkagerer
11 hours ago
[-]
Sky's the limit. (Short of hiring a team of engineers to design and fab a one-off board, anyway).

I appreciate your advice. I use the machine for a variety of different tasks, and am looking to accommodate at least two high-end GPUs (one for passthrough to virtual machines for running things like Solidworks), a number of SSDs, and as many PCIe expansion cards as possible. Many of the cards are older-gen, so they could be consolidated onto just a few modern lanes if I could find an external expander with sufficiently generous capacity. Here's a quick inventory of what's in the existing box:

- Mellanox InfiniBand. For high-speed, low-latency networking... these days, probably replaceable with integrated NICs, particularly if they come with RDMA.

- High-performance RAID. I've found dedicated cards offer better features, performance, capacity, resilience and reliability than any of the mobo-integrated garbage I've tried over the years. Things like BBUs/SuperCaps, seamless migration and capacity upgrades, out-of-band monitoring, etc. For example, I've taken my existing mass storage array created on a modest ARC-1231ML 15+ years ago, through several newer generations to an ARC-1883, with many disk and capacity upgrades along the way, but it's still the same array without ever having had to reformat and restore from scratch. Incidentally I've been particularly happy with Areca's hardware, and they've even implemented some features I requested over the years (like the ability to hot-clone a replacement disk for one expected to fail soon then swap in the new one, without having to degrade the array and wait for a lengthy rebuild process that reduces your fault tolerance while hammering all member disks; as well as some other tweaks for better compatibility with tools like Hard Disk Sentinel). I notice they're finally starting to come out with controllers oriented toward SSDs, like a PCIe 5.0 product (https://www.areca.com.tw/products/nvme-1689-8N.html) for up to 8 x4 M.2 SSDs that boasts up to 60 GB/s, which is interesting (though the high-queue-depth random performance still doesn't match directly-plugged drives). I know software-RAID for the solid state stuff is also an option (as is just living without redundancy), but it's been convenient outsourcing the complexity.

- Slim, low-performance accessory GPU for more displays

- A few others this crowd would just laugh at me for (e.g. a PCI I/O card that includes a true parallel port, because nothing is more fun™ for hobbyist stuff and USB-based alternatives were found to have too much abstraction or latency; a SCSI adapter for an archaic piece of vintage hardware I'd love to keep installed permanently but there ain't space, and occasional one-off use stuff like a high-bandwidth digitizer).

The motherboard had 6 PCIe slots, and I've got two more provided by an external PCIe expander (after accounting for the one lost to its own connection). If I could find some kind of expander that took a single PCIe 5.0 slot and turned it into half a dozen PCIe 3.0 slots (some full-width), I'd be set.

I know I'm at the crazy end of how-much-crap-can-you-jam-in-one-PC, but it still seems bizarre to me that newer boards have so many fewer slots yet feel lane-constrained, when between leading-edge SSDs and high-bandwidth GPUs the demand for more lanes is skyrocketing. When I built the previous PC it felt tight but doable... these days it feels like I can barely accommodate the level of graphics and storage I'd like, and by the time I do, there's nothing left for anything else. Granted, it's been a few years since I got my hands dirty with this stuff, so maybe I'm just doing it wrong?

And yes, I've heard of USB... and have a bazillion devices plugged in (including some exotic ones like an LCD display, logic analyzer, and a legit floppy drive that does get used once in a blue moon, like when I need to make a memtest86 boot disk for a vintage PC). I've actually found some motherboards have issues where the USB stack gets flaky once you have too many devices connected (even when using powered hubs to mitigate power constraints).

Ok... go ahead and have at me; tell me I'm old and dusty and I should take my one GPU and one SSD and be happy with them ;-).

reply
michaelt
12 hours ago
[-]
Given that you've said 'workstation', if you've got a spare $5000, a Threadripper Pro comes with 128 PCIe 5.0 lanes.

This means you can get a motherboard like the "Asus Pro WS WRX90E-SAGE SE" which dedicates 104 lanes to seven PCIe slots and 16 lanes to four M.2 slots.

For more like $3000 you can get a non-Pro Threadripper; the "Asus Pro WS TRX50-SAGE" has a more restrained 48 PCIe 5.0 and 32 PCIe 4.0 lanes, meaning the board's five PCIe slots and three M.2 slots have a mixture of speeds and lanes.

The rest of the market seems to think you just want to plug in one huge four-slot GPU and perhaps one other card.

reply
rkagerer
11 hours ago
[-]
Now we're talking - thanks!

(ps. I don't suppose they make a "supersized" version of that board with a gap beside the first one or two GPU slots? So you can install a couple double-width cards without losing the underlying slots? Or a good source for a single-width, high-end GPU like the Inno3D RTX 5090 iChill Frostbite Pro?)

reply
michaelt
3 hours ago
[-]
The card in the bottom slot can overhang the board, if you only need to add a single multi-slot card.

Beyond that point you're probably looking at getting a server. Boards like the X9DRG-OF have eight dual-width slots, and you can get them in a quadruple-power-supply configuration - which you'll want when you add up the power consumption of a bunch of GPUs. Anything rack-mount will be very noisy though.

Another option is a cryptocurrency-mining-style 'open frame' system with PCIe risers. Google that and you'll find some crazy setups enthusiasts have posted to Reddit.

Or you can just run something in the cloud - you can buy a lot of cloud time for $6000.

reply
PeterStuer
5 hours ago
[-]
You would typically use good riser cables for that. Spacing out the motherboard PCB itself is not an effective option.
reply
rkagerer
13 hours ago
[-]
Some builds I kept tabs on:

Let's Encrypt documented their early 2021 whitebox that used 128 PCIe 4.0 lanes, mainly for storage: https://letsencrypt.org/2021/01/21/next-gen-database-servers...

Troy Hunt (HaveIBeenPwned) recently solicited upgrade advice from the internet and settled on an Asus Pro WS TRX50-SAGE WIFI (which doesn't appear to be in the MoboMaps database yet): https://gist.github.com/troyhunt/a6e565981e4769976e9cffb705f...

reply
PeterStuer
6 hours ago
[-]
If you really need lots of PCIe lanes, you are going to be moving up to the TRX50 (or a used TRX40) and its ilk. That's a different price range from your typical enthusiast motherboard, though.
reply
hengheng
10 hours ago
[-]
Look into CXL, Oculink, and riser cables.
reply
gitpusher
15 hours ago
[-]
Whoa. This is so cool and helpful. Too bad my board is Intel. Is there a way to contribute to this?
reply
tagyro
14 hours ago
[-]
I dropped a message to the creator, fingers crossed they open up the motherboard database so we can make contributions.
reply
mifreewil
17 hours ago
[-]
Very nice! Just a note (as the site says in the bottom left): this can vary depending on the CPU you use. It would be nice to be able to select the different supported CPU variants as a future feature.
reply
smcleod
17 hours ago
[-]
That is so incredibly useful. Hardware vendors do such a bad job of properly advertising how many GPUs will actually work, and with what combinations of M.2 slots in use.
reply
consp
2 hours ago
[-]
Bifurcation support is also almost never mentioned, even when the BIOS supports it.
reply
asciii
18 hours ago
[-]
Warning: addicting site :)
reply
NedCode
1 day ago
[-]
Legendary!
reply