Sadly not in this case though - the Kioxia drives are interesting, but the fact that Dell has put some in a box is much less so.
the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.
the RAD750 CPU [0], for example, uses a 150nm process node. its successor, the RAD5500 [1], is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial use.
radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.
and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
0: https://en.wikipedia.org/wiki/RAD750
1: https://en.wikipedia.org/wiki/RAD5500
2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...
[1] https://www.pcmag.com/news/amd-chips-are-powering-newest-sta...
[2] https://docs.amd.com/r/en-US/ds955-xqr-versal-ai-edge/Genera...
Some error rate is acceptable for uses which aren't "mission-critical".
It is much worse than that. Even taking the node names at face value [1], that is just one dimension; there are two (or three [2]) dimensions to consider, so the difference would be more like 100x.
Nehalem (2008) was built on a 45nm node at ~3 MTr/mm²; by comparison, TSMC's 3nm-class nodes (N3E/P/X/C, 2023-24) reach about 220 MTr/mm².
Of course that is just one metric, transistor density; there are many other improvements to consider over the last two decades.
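A quick sketch of the arithmetic, using the approximate figures quoted above (node "names" vs published transistor densities), shows why squaring the linear scale matters:

```python
# Node names vs actual published transistor densities (MTr/mm^2).
# All figures are approximate and taken from the comment above.
nehalem_node_nm, nehalem_density = 45, 3      # Intel Nehalem, 2008
n3_node_nm, n3_density = 3, 220               # TSMC N3-class, 2023-24

linear_ratio = nehalem_node_nm / n3_node_nm   # 15x, if node names meant feature size
naive_area_ratio = linear_ratio ** 2          # 225x, squaring the linear scale
actual_ratio = n3_density / nehalem_density   # ~73x, from published densities

print(linear_ratio, naive_area_ratio, round(actual_ratio))  # 15.0 225.0 73
```

The gap between the naive 225x and the measured ~73x is exactly the point of [1]: node names stopped tracking physical feature size long ago.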
[1] Processor node names after all haven't been tied to physical scale for 30 years https://www.eejournal.com/article/no-more-nanometers
[2] The HBM that modern GPUs use already leverages 3D ICs.
Doesn't mean I'm correct. [2]
Or even better, not yeeting it into an environment where it's cooked and cooled every 90 minutes.
Or even better, where it's not absolutely pelted by cosmic rays, enough to obliterate a good GB a day of data.
Or space data centre.
But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?
Now, how heavy a discount you can get, I don't know.
Satan’s NAS!
If we could somehow increase the density further by 5x, we would be able to store 1EB in a single rack.
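For a sanity check on that claim, here's the arithmetic with some assumed numbers (245.76 TB per drive and 40 drives per 2U server are from the article; the fully-populated 42U rack is my assumption):

```python
# Sanity-checking the "1 EB per rack at 5x density" claim.
TB_PER_DRIVE = 245.76        # advertised drive capacity
DRIVES_PER_2U = 40           # per the Dell server in the article
SERVERS_PER_RACK = 42 // 2   # assumed: a 42U rack full of 2U servers

pb_per_rack = TB_PER_DRIVE * DRIVES_PER_2U * SERVERS_PER_RACK / 1_000
print(f"today: {pb_per_rack:.0f} PB/rack, "
      f"at 5x density: {5 * pb_per_rack / 1_000:.2f} EB/rack")
# today: 206 PB/rack, at 5x density: 1.03 EB/rack
```

So ~206 PB per rack today, and a 5x density bump does indeed clear the 1 EB mark.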
The most interesting part to me is the last sentence.
>Scality tells us it’s working on supporting a future nearline-class SSD from Samsung, viewed as an HDD killer, with similar or even larger capacity and a roadmap out to a 1 PB drive.
Finally, an HDD killer. Maybe in another 5-10 years' time. The day when everyone has an SSD NAS / AI cloud at home will come.
So, yes.
These drives will arrive on the secondary market to be snapped up by businesses lower in the food chain. By the time you can find them, they will have been ridden so hard and put away wet that you probably won't want them.
I think developments like this might get many public-sector-focused firms sweating.
There’s probably bulk pricing, but if you bought 40 drives separately, that’s $2,000,000 in storage alone.
So a petabyte will be $600-800k alone, plus a server with enough high-speed PCIe lanes to serve the 40+ drives, definitely $1m+
Could you ever buy it?
I feel like we’re in that season.
The interesting thing here is ~256TB in a single drive, but it's in E3.L form factor.
I have about 160TB on hard drives that I'm waiting to offload onto a single SSD.
But that needs to come with a connector that has adapters to USB-C, so I can attach it to my Macbook Neo.
Hopefully they get it a bit more dense soon and into the 2.5" NVMe form.
I might be waiting forever, because clearly there's nothing coming. Though I'm not sure if it's because it's technically difficult (high power consumption to keep the flash lit?) or something else.
I'm aware that it leaves performance on the table for the chips, and the unit economics probably mean that, for the yield, OEMs would rather make high-performance drives which sell for more.
But a 4-bay NAS with 3.5" SSDs would be silent and theoretically sip power, and there's so much space for chips that you could space them out nicely and get 10+TiB in a drive...
I don't need to touch every cell, I just want something silent and stateless and less power intensive for my time-capsule backups and linux ISOs.
Alas.
Drop that in one of the many USB4-to-PCIe docks and you should be good to go. Pretty fugly, but it ought to just work! I think there are some cheaper models still available under $90, but here's a listing: https://www.dfrobot.com/product-2835.html
I believe a more focused, dedicated USB<->NVMe chip might also work if attached to an EDSFF connector. I didn't look hard and haven't seen any such products yet, but it's mostly mechanical/packaging plus some signal-integrity checks; in the end it wouldn't really be much different from an NVMe adapter. Seems very doable.
Build it! Someone could sell (to quote The Daily Show) literally dozens of said adapters! (Eventually probably many, many more, but there's not a huge second-hand market for EDSFF atm.)
Hopefully one of these 10 PB monsters will be under $2,000 someday, at which point I will pop it in my homelab :)