If it’s design-level, and tapeout needs to happen for a given chiplet design, why does the article mention different process nodes? And how is this different from any fab's available IP?
Thanks!
The benefit of multiple dies in one package is that on-package wires are denser and shorter, which increases performance. Multiple chiplets on an interposer is even better.
https://imapsource.org/article/128222-enabling-heterogenous-...
I do not think that using various kinds of silicon bridges warrants a change in name. It should be fine to call any such product that uses multiple chips an MCM (multi-chip module). Even for consumer CPUs, MCMs already have a long history, e.g. with AMD Bulldozer variants, or with the earlier Pentium and Athlon that had the L2 cache memory on a separate chip.
The word "chiplet" is relatively new and its appearance is justified for naming a chip whose purpose is to be a part of a MCM, instead of being packaged separately. A modern chiplet should be designed specifically for this purpose, because its I/O buffers need very different characteristics for driving the internal interfaces of the MCM, in comparison with driving interfaces on a normal PCB, i.e. they can be smaller and faster.
And from there, stuff like the VAX 9000's packaging was a very-advanced-for-the-time design that I'd hesitate to call a PCB, unless we're calling all organic packaging PCBs now.
The explanation by 'kurthr provides some insight as to why we don't use the term MCM for them today: the term would imply that they're not as tightly integrated as they actually are.
The best examples of MCMs are probably Intel’s Pentium D and Core 2 Quad. In these older "MCM" designs, the multiple chips were generally fully working chips: they each had their own last-level cache (LLC; L2 in those parts). They also happened to be manufactured on the same lithography node. When a core on Die A needed data that a core on Die B was working on, Die B had to send the data off the CPU entirely, down the motherboard's Front Side Bus (FSB), into system RAM, and then Die A had to retrieve it from RAM.
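To make the cost of that round trip concrete, here's a toy latency model (a minimal sketch in Python; every number is an illustrative order-of-magnitude guess, not a measured figure):

    # Toy model of cross-die sharing on an FSB-era MCM (e.g. Pentium D).
    # All latencies are illustrative guesses, not measurements.
    FSB_HOP_NS = 50      # die <-> northbridge over the front-side bus (assumed)
    DRAM_ACCESS_NS = 70  # one access to system RAM (assumed)
    ON_DIE_CACHE_NS = 5  # hit in the die's own cache (assumed)

    def cross_die_via_fsb_ns():
        """Die B writes the line back to RAM, Die A fetches it: four legs."""
        write_back = FSB_HOP_NS + DRAM_ACCESS_NS  # Die B -> FSB -> RAM
        read_back = DRAM_ACCESS_NS + FSB_HOP_NS   # RAM -> FSB -> Die A
        return write_back + read_back

    print(f"cross-die via FSB+RAM: ~{cross_die_via_fsb_ns():.0f} ns")
    print(f"same die, own cache:   ~{ON_DIE_CACHE_NS} ns")

The point is the shape of the path, not the exact numbers: every cross-die share pays the full bus-and-DRAM round trip.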
IBM POWER4 and POWER5 MCMs did share L3 cache, though.
So the parent was 'wrong' that "chiplets" were ever called MCMs. But right that "chips designed with multiple chiplet-looking things" did used to be called "MCMs".
Today's 'chiplet' term implies that the pieces aren't fully functioning by themselves; they're more like individual "organs". Functionality like I/O, memory controllers, and LLC is split off and manufactured on separate wafers/nodes. In the case of memory controllers that might be a bit confusing, because back in the days of MCMs these were not in the same silicon at all; the memory controller was a separate chip entirely on the motherboard (the northbridge), but I digress.
Also, MCMs lacked the kind of high-bandwidth, low-latency fabric that lets CPUs communicate with each other more directly. For the Pentiums, the substrate was organic (the usual green PCB material) with copper traces routed between the dies. For IBM's parts, it was an advanced ceramic-glass substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (so latency took a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives excellent bandwidth, latency, and thermal performance.
No, chiplets were called MCMs. IBM and others, as you noted, had chip(lets) in MCMs that were not "fully-functioning" by themselves.
> Also, MCMs lacked the kind of high-bandwidth, low-latency fabric that lets CPUs communicate with each other more directly. For the Pentiums, the substrate was organic (the usual green PCB material) with copper traces routed between the dies. For IBM's parts, it was an advanced ceramic-glass substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (so latency took a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives excellent bandwidth, latency, and thermal performance.
This all just smells like revisionist history to make the new name seem consistent with the previous naming.
IBM's MCMs had incredibly high-bandwidth, low-latency interconnects. Core <-> L3 is much more important and latency-critical than core+cache cluster <-> memory or <-> another core+cache cluster, for example. And IBM and others had silicon interposers, TSVs, and other very advanced packaging and interconnection technology decades ago too, e.g.,
https://indico.cern.ch/event/209454/contributions/415011/att...
The real story is much simpler. MCM did not have a great name, particularly in the consumer space, as CPUs, memory controllers, and other components consolidated onto one die, which was (at the time) the superior solution. Then the reticle limit, yield equations, etc., conspired to turn the tables, and more recently multi-chip has come to be superior (for some things), so some bright spark, probably from a marketing department, decided to call them chiplets instead of MCMs. That's about it.
As an aside, funnily enough, IBM used to (and may still), and quite possibly others, call various cookie-cutter blocks in a chip (e.g., a cluster of cores and caches, a memory controller block, or a PCIe block) "chiplets". From https://www.redbooks.ibm.com/redpapers/pdfs/redp5102.pdf: "The most amount of energy can be saved when a whole POWER8 chiplet enters the winkle mode. In this mode, the entire chiplet is turned off, including the L3".
I don't follow; you seem to be using "chiplet" to mean a multi-chip module directly, whereas I consider a "chiplet" to be a component of a multi-chip module. An assembly of multiple chiplets would not itself be "a chiplet" but a multi-chip module. This is also why I don't follow why the term "chiplet" would replace the term "multi-chip module": to me, a multi-chip module is not a chiplet at all; it's merely built from chiplets.
Are chiplets ever more than a single die? Conversely, are there multi-chip modules of only a single die? At least one of these must be true for "chiplet" and "multi-chip module" even to overlap.
Advances in technology and changing economics always shift things around, so maybe chiplets are viable for different things now, or will make sense for smaller production runs, etc., but that doesn't make them so fundamentally different that they shouldn't be classified as MCMs, as the article seems to suggest. It is literally the same thing it always was: multiple chips packaged up together on something that is not a standard PCB but is generally more specialized and higher performing.
Yes, exactly. Take separate unpackaged chips and put them all on one shared substrate (a "silicon interposer") that has wires to connect them. There are a bunch of different technologies for connecting them. You can even stack dies.
https://www.imec-int.com/en/articles/chiplets-piecing-togeth...
I think typically you wouldn't stack logic dies due to power/cooling concerns (though you totally could). But you can definitely stack RAM (both SRAM and DRAM). Stacking is a fairly new process, though, as far as I understand it.
For example, Intel's newer Lunar Lake chip packages together a CPU+iGPU+NPU chiplet and two LPDDR5X RAM chiplets [0][1][2]. If laptop manufacturers want to offer different amounts of RAM, they have to buy a different CPU SKU for 16GB vs. 32GB. Panther Lake, the succeeding generation, reversed this and will support off-package RAM modules, but reasonable people might expect that, in the long term, RAM will generally be on-package for anything that isn't a server/HEDT part.
You won't have to worry about making sure the RAM you buy is on the CPU and motherboard QVL, but you also won't ever buy RAM by itself at all.
Intel's Clearwater Forest Xeon has 12 CPU chiplets, arranged in groups of 4, with each group sitting on top of one of 3 "base" chiplets, plus 2 I/O chiplets, for a total of 17 chiplets, depending on how you count the base chiplets, which are 'just' fabric, memory controllers, and L3 cache (shared directly between all 4 CPU chiplets in a group). [3] (A trivial tally of that count is sketched below, after the links.)
0: https://www.flickr.com/photos/130561288@N04/albums/721777203...
1: (pages 3 and 4) https://www.intel.com/content/www/us/en/content-details/8244...
2: (also pages 3 and 4, though the rest of these slides goes into much more technical detail) https://hc2024.hotchips.org/assets/program/conference/day1/5...
3: page 12 https://hc2025.hotchips.org/assets/program/conference/day1/1...
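And here's the trivial tally promised above, just to show where 17 comes from (Python; the counts are my reading of the Hot Chips slide in [3]):

    # Chiplet tally for Clearwater Forest, per the Hot Chips slide in [3].
    BASE = 3           # 'base' dies: fabric, memory controllers, shared L3
    CPUS_PER_BASE = 4  # each group of 4 CPU dies sits on one base die
    IO = 2             # I/O dies

    cpu = BASE * CPUS_PER_BASE  # 12 CPU chiplets
    total = cpu + BASE + IO     # 17 chiplets overall
    print(f"{cpu} CPU + {BASE} base + {IO} I/O = {total} chiplets")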
On the other hand, multi-die integration can greatly increase memory bandwidth, so integrating memory is a reasonable move.
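A quick back-of-the-envelope on how much bandwidth (Python; peak bandwidth is just transfer rate times bus width, and the figures below are the commonly quoted ones for these parts, used here only as an illustration):

    # Peak DRAM bandwidth = transfer rate (MT/s) * bus width (bytes).
    # LPDDR5X-8533 on a 128-bit bus is what Lunar Lake is commonly quoted
    # at; dual-channel DDR5-5600 is a typical socketed comparison point.
    def peak_gb_per_s(mt_per_s, bus_width_bits):
        return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

    print(f"on-package LPDDR5X-8533, 128-bit: {peak_gb_per_s(8533, 128):.1f} GB/s")
    print(f"socketed DDR5-5600, 2x64-bit:     {peak_gb_per_s(5600, 128):.1f} GB/s")

That's roughly 136 GB/s vs. 90 GB/s, before you even count the latency and power advantages of shorter on-package traces.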
AMD is pretty famous for multi-chip, but they're only recently starting to do really advanced integration, like Sea-of-Wires between chips. So far most of their chips have had big, hot PHYs to send data back and forth, rather than multiple chips that can truly communicate directly with each other.
Interesting days ahead. The computer is on the chip now: a smaller domain of system building, with many of the same trade-offs and design challenges it took to build a box.
Then we can have chiplets.