I thought this was going to mean each stack could talk directly to the controller, since all the stacks rest on an interposer. But actually there's still a logic controller slice at the bottom of the stack, not at a right angle to it.
Instead of HBM microbumps between layers there's a more compact/dense TSV ("fusion bonded via-in-one") system. Intel once more showing their strong chiplet packaging prowess! The claim is that thermals are somehow still much better, in spite of volumetric cell density increasing (from thinner layers). The demo has 8+1 DRAM+controller layers.
https://patents.google.com/patent/US20260040969A1/en
That patent talks about 'Z axis memory' rather than angles, and it's describing inductive coupling through stacks of vias.
he's also got: https://patentsgazette.uspto.gov/week02/OG/html/1542-2/US125...
He obviously likes thinking about stacks of dies.
The newer slides show a more conventionally stacked die arrangement, with yet a third way of connecting the stacks: it at least visually looks different from via-in-one columns, more like tabs between the layers, and it's definitely not wireless. Hard to guess what to expect from such an assortment of material.
Get perpendicular: https://www.dailymotion.com/video/x62mja
The closest thing I can think of that came close to challenging DRAM is HP's memristors, but those really didn't pan out (probably too much power consumption).
Pet peeve: it's a stupid analogy, seeing how wheels kept being improved throughout the millennia with every new technology. The only thing they have in common is being round.
Similarly, DRAM has been improving since the 70s to the point of being barely recognizable, whichever way you look at it.
That said, DIMMs and the whole bus idea is in dire need of getting a new type of bearing.
IBM has been using their own memory bus technology for both their POWER and Z machines. IIRC, it’s somewhat reminiscent of CXL, trading latency for bandwidth and size.
The ultimate shape of DRAM is the same; the main things that have changed are the materials and techniques used to produce it. That makes it very impressive, but nonetheless completely recognizable to someone who was familiar with DRAM in the 70s.
The wheel is the same. Pluck someone from 1000 years ago and they'll correctly identify a modern wheel, even though they've never seen any of the composites that go into it. The wheel's function is identical to how they used it.
Then again, flight itself has obviated—or, rather, introduced—many transit workloads that could be performed by wheeled vehicles, and operates on different principles entirely.
Pretty cool: a sort of graceful (or at least planned) degradation of tires without cascading problems.
https://wccftech.com/intel-showcases-its-zam-memory-prototyp...
The connectors on the side do indeed look like the letter Z. Maybe it disperses the stronger currents across the stack of dies instead of concentrating them.
As for why it's not currently done: likely because stacking is hard enough when everything is uniform. A small deformity in the first layer can spoil the entire chip.
Every time the recipient hypes the shit out of it, of course.
As far as I can tell, Intel more-or-less pioneered the idea of SSDs being the best storage rather than the cheap storage, for instance. The X25-M and X25-E were absurdly good. Then, once the market was established...they pulled out of it.
Not that releasing the GPU would be something super innovative, they already have the B70.
This makes perfect sense given that Intel's target margins are pretty high. They only want to sell advanced tech, not commodities. Once SSDs became commoditized Intel was out.
They knew it wouldn't be profitable enough long term, but it would increase demand for their products.
A popular-science-style backgrounder (can't vouch for the accuracy/relevance - details are very scarce): https://www.geeksforgeeks.org/digital-logic/polymer-memory/
I think what's semi-unfortunate is all the swings and misses, especially the cases where it wasn't necessarily a bad idea but Intel gave up too soon:
- Massively parallel, simple-ish x86 cores à la Xeon Phi. Okay, maybe not the best idea on the surface, but I feel like nowadays there could be more opportunities to reuse parts of that tech (and maybe they do, but are just quiet about it; i.e., GPU acceleration)
- Optane. I think the tech would have gotten cheaper if they had made the licensing terms easier, but maybe I'm missing part of the equation...
- This thing where they keep half-assing the GPU strategy. Imagine if the B70 had launched last year alongside the B60 and B50, before DRAM prices went sideways. Or if they hadn't taken so long to release a >16GB GPU in the first place; that would have built a lot of interest. Instead they finally release a 32GB GPU alongside more bad news for the overall roadmap. The whole situation becomes a jarring rollercoaster that makes everyone worry Intel is going to kill the project the way everything but CPUs gets killed lately.