I haven't personally checked this against market data, FWIW.
And that is the reason why TSMC said they will never go into the DRAM business.
It is also possible to make the same mistakes for different reasons: lack of imagination, conservatism, entrenched interests...
Here's the catch. Because of these constraints, Korean conglomerates don't create as many jobs. The Korean software and services industry is almost non-existent, or heavily confined to Korea.
With this, the "I need to be extremely profitable" burden is somewhat lifted, giving them the freedom to do hard R&D.
And it is still true now, with the last bribery-for-favors scandal dating back to 2018 [1]
[1] https://bruinpoliticalreview.org/articles?post-slug=south-ko...
What? Korea has one of the highest numbers of software jobs per capita among wealthy (let's say top-30 HDI) countries. Please stop claiming this sort of stuff without being familiar with the country.
Your impression that they were at all new to the SSD market is largely due to the fact that SK Hynix operated mainly as a component supplier, and has never pursued promotion of their own retail SSD brand the way Samsung does. Hynix was a major player in the NAND industry before the SSD market as we know it even existed, and has been a major supplier of SSDs to PC OEMs for as long as PC OEMs have been buying SSDs in large volumes.
Note that "SK" does not stand for "South Korea", as one might be led to believe.
From Toyota cars to Sony TVs to TSMC chips to DJI drones. It's been that way for a while.
Off the top of my head, there are only about three manufacturers left, Micron being the only one not mentioned here.
As for who makes the best RAM, it changes from one generation to the next, and also depends on what you consider "best": you might be looking for chips that overclock well in a desktop, or that are least likely to suffer compatibility issues and performance loss when maxing out the capacity in a desktop, or maybe "best" might mean who has the lowest-power LPDDR for your phone.
The DRAM parts made by the big three largely adhere to the same standards (though not necessarily all supporting the same frequencies); the most significant recent exception was GDDR6X, which was essentially an NVIDIA-Micron exclusive partnership. For the most part, it's the latest iteration of DDR (desktops and servers), LPDDR (handhelds and low-power laptops), GDDR (GPUs), and HBM (more expensive GPUs).
This reminds me of HDDs and SSDs, though I've always found RAM to be either generally reliable or obviously bad, while storage can look OK for a while before it fails.
Whether that matters much is debatable. Maybe they get higher yields as a result (since more chips are performant enough to be useful), but JEDEC specs seem pretty generous relative to what you can achieve on consumer platforms, so I somewhat doubt much RAM is thrown out because it's too slow to meet spec.
It's pretty random, though. Back in the DDR4 days, Samsung produced the best memory (B-die) and it wasn't particularly close: near-DDR5 speeds with lower latencies. At the same time, some of their other dies (I guess from other fabs?) were absolutely awful.
Again, I don't know how much that translates into profit though, since the performance user market for RAM is probably a fraction of a fraction of the overall memory market.
Now with DDR5, Hynix's A-die is considered the best option.
Less impactful in DDR5 because they made it a design goal to have more "ranks" by default, though it does still have a small performance benefit.
Gamers only, but that's not a bad selection imho
Pretty cool
For example, a 9950X3D officially supports 2 sticks at DDR5-5600 but 4 sticks at only DDR5-3600. [1]
I had a friend run into this issue on AM5 when he was trying to use 4x32GB DDR5 on his gaming PC.
[1] https://www.amd.com/en/products/processors/desktops/ryzen/90...
Recently I asked for my software developer colleague to be bought a 24 GB MacBook Air instead of 16 GB, and the boss came back with "not everyone needs a super-big machine like yours Jamie!".
They seriously spent contractor time investigating whether 16 GB was "enough" to get by for our app development, over a price difference on one (second-hand) laptop that was negligible compared with the cost of my colleague's time.
When I was using 16 GB I regularly had to watch the spinning beachball waiting for tasks due to memory pressure. Between browsers and VMs, it was nowhere near enough for how I worked. So I knew why I was asking, and I knew the price difference was so small for the company that it was a no-brainer. I gave justifications, but it was seen as over-indulgent.
Helpful footnote on man-machine boundary.
I think my only concern is that I'm not sure how to make sure I'll always have an untainted set of reference material to check against in the post-LLM Internet. We've had LLM hallucinations result in software features. Are we possibly headed towards a world where LLM hallucinations occasionally reshape language and slang?
I feel bad for human translators right now. For various use cases, current-day machine translations and especially LLM translations are sufficient. For those not versed in the world of otaku and video game nerds, one extremely fascinating development of the last few years is the one-shot commission platform Skeb, where people can send various kinds of art commission requests to Japanese artists. They integrate with DeepL to support requests from people who don't speak Japanese fluently, and it seems to generally work very well. (The lower-stakes nature of one-shot art commissions helps a bit here too, but at the least I think communication issues are rarely a huge problem, which is pretty impressive.) And that kicked off before LLMs started to push machine translations even further.
I agree that bidirectional communication is probably going to work a lot better, because people are more likely to be alert to the possibility of translation issues and can confirm understanding interactively.
> It's capable of just dropping out entire paragraphs.
I suspect, though, that issues like this can be fixed by improving how we interface with the LLM for the purposes of translation. (Closed-loop systems that use full LLMs under the hood but output a translation directly, as if they were just translation models, have probably already solved this kind of problem by structuring the prompt carefully and possibly incrementally.)
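To make the "incremental" idea concrete, here is a minimal sketch of one way a wrapper could guard against dropped paragraphs: translate paragraph by paragraph and check that each input chunk produced an output chunk. `translate_chunk` is a hypothetical stand-in for whatever model call the service actually makes; this is an assumption about how such systems might work, not a description of any real product.

```python
def translate_document(text: str, translate_chunk) -> str:
    """Translate paragraph by paragraph so the model cannot silently
    drop a whole paragraph, then verify the count on the way out."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    translated = [translate_chunk(p) for p in paragraphs]
    # Structural check: every source paragraph must map to exactly one output.
    if len(translated) != len(paragraphs):
        raise ValueError("translation lost or merged paragraphs")
    return "\n\n".join(translated)

# Toy stand-in for a real model call (hypothetical).
fake_model = lambda p: f"[translated] {p}"
out = translate_document("First para.\n\nSecond para.", fake_model)
```

The point isn't the stub itself, but that chunking plus a structural check turns a silent omission into a detectable error, which a closed-loop system can then retry.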
(The audio track switcher, which will give you back the original audio, is not available on mobile. Fortunately, if you use NewPipe, it is.)
> real time local translator hardware that we can just plug in our ear when traveling
These are definitely already a thing, popular in Asian countries. https://www.aliexpress.com/item/1005008777097933.html
> Sir, Your reader's reference to German word order reminds me of a UN meeting at which I worked when the German delegate ranted for ages while all French eyes turned to the French interpreter booth. The interpreter witheringly interjected "j'attends le verbe" ("I'm waiting for the verb").
(disclaimer: I worked there)
I love how one can mix languages in one sentence. "Anyway, re-add Tiefkühl and tell me to just check what they have there as TK-Gemüse [frozen vegetables], and a note at the end that I should go to EDEKA someday this or next week"
Maybe in the future they will spit this out as a result of the training data today, e.g. this article.