But in performance work, the speed of RAM relative to computation has dropped so much that it's common wisdom to treat today's cache as the RAM of old (and today's RAM as the disk of old, and so on).
In software performance work it's been all about hitting the cache for a long time. LLMs aren't too amenable to caching though.
CAR is often used in early boot, before the DRAM is initialized. It works because the x86 cache-disable bit only decouples the cache from memory; the CPU will still serve hits from the cache if you primed it with valid cache lines before setting the bit.
So the technique is to mark a particular range of memory as write-back cacheable, prime the cache with valid cache lines for the entire region, and then set the bit to decouple the cache from memory. Now every access to this memory region is a cache hit that doesn't write back to DRAM.
The one downside is that when CAR is on, any cache you don't allocate as memory is wasted. You could allocate only half the cache as RAM to a particular memory region, but the disable bit is global, so the other half would just sit idle.
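For concreteness, here's a rough C-style sketch of the three steps above. It's only an illustration: the base address is a made-up example, real firmware does this in assembly before any stack (or DRAM) exists, and several required steps (cache invalidation, enabling MTRRs in IA32_MTRR_DEF_TYPE, and so on) are omitted.

    #include <stdint.h>

    /* Hypothetical scratch region; must fit inside the cache. */
    #define CAR_BASE            0x00090000u
    #define CAR_SIZE            0x00008000u   /* 32 KiB */
    #define IA32_MTRR_PHYSBASE0 0x200
    #define IA32_MTRR_PHYSMASK0 0x201
    #define MTRR_TYPE_WB        0x06          /* write-back memory type */
    #define MTRR_VALID          (1u << 11)

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" :: "c"(msr),
                         "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    static void cache_as_ram_setup(void)
    {
        /* 1. Mark the range write-back cacheable via a variable MTRR. */
        wrmsr(IA32_MTRR_PHYSBASE0, CAR_BASE | MTRR_TYPE_WB);
        wrmsr(IA32_MTRR_PHYSMASK0,
              (~(uint64_t)(CAR_SIZE - 1) & 0xFFFFFF000ull) | MTRR_VALID);

        /* 2. Prime the cache: touch every 64-byte line so valid lines exist. */
        for (volatile uint8_t *p = (uint8_t *)(uintptr_t)CAR_BASE;
             p < (uint8_t *)(uintptr_t)(CAR_BASE + CAR_SIZE); p += 64)
            *p = 0;

        /* 3. Set CR0.CD: the cache keeps serving hits but no longer fills from
           or writes back to DRAM. The region now behaves as RAM. */
        unsigned long cr0;
        __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
        cr0 |= 1ul << 30;
        __asm__ volatile("mov %0, %%cr0" :: "r"(cr0));
    }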
Also, additional information on instructions costs instruction bandwidth and I-cache space.
That is very context-dependent. In high-performance code having explicit control over caches can be very beneficial. CUDA and similar give you that ability and it is used extensively.
Now, for general "I wrote some code and want the hardware to run it fast with little effort from my side", I agree that transparent caches are the way.
I guess this is one place where it seems possible to allow for compiler annotations without disabling the default heuristics so you could maybe get the best of both.
I can understand why they just decided to bake the cache algorithms into hardware, validate it, and be done with it. I'd love it if a hardware engineer or more well-read fellow could chime in.
And because the abstraction is simple and easy enough to understand that when you do need close control, it's easy to achieve by just writing to the abstraction. Careful control of data layout and nontemporal instructions are almost always all you need.
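For example, a minimal sketch of the nontemporal-store side of that, using the SSE2 streaming-store intrinsic; the helper name and the alignment assumptions are mine, not anything from the thread.

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stddef.h>
    #include <stdint.h>

    /* Write a large buffer with non-temporal (streaming) stores so the output
       bypasses the cache instead of evicting the hot working set.
       Assumes dst is 16-byte aligned and len is a multiple of 16. */
    static void fill_streaming(uint8_t *dst, size_t len, uint8_t value)
    {
        __m128i v = _mm_set1_epi8((char)value);
        for (size_t i = 0; i < len; i += 16)
            _mm_stream_si128((__m128i *)(dst + i), v);  /* non-temporal store */
        _mm_sfence();  /* order the streaming stores before the data is reused */
    }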
https://www.intel.com/content/www/us/en/developer/articles/t...
;-)
Back to native apps without bloated toolkits!
Mail.app is sitting here using 137 MB of RAM. Outlook 1270 MB.
My main machine has 16 GB of RAM and I don't think I've ever seen it go over 4 GB, and that was when I had a 200 GB mmap'ed sparse array.
my issue is that my company won't issue laptops with more than 16 GB of RAM
guess i'm not virtualizing anything...
It will help reduce e-waste, and to the end user there won't be a difference. A machine from 5 years ago feels just as fast as a brand new machine.
Big corporations often trash IT equipment that's only 3-4 years old. And there is no recycling, etc. Very sad.
Where are these luxurious big corporations that give their employees nice new equipment? :(
Except you can't install Windows 11 on it, and the org has to trash it anyway to keep up with security requirements (I know people in that line of work, they're all angry about it)
Also RAMsan will have a renaissance then? :-D
I don't think that ever happened. Using a relatively sparse amount of memory translates into better cache behaviour, which in turn usually improves performance drastically.
And in embedded stuff being good with memory management can make the difference between 'works' and 'fail'.
This actually generalizes in a rather clean way: compared to the 1980s, you now want to cheaply compress data in memory and use succinct representations as much as practicable, since the extra compute involved in translating a more succinct representation into real data is practically free compared to even one extra cacheline fetch from RAM (which is now hundreds of cycles latency, and in parallel code often has surprisingly low throughput).
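As a concrete (if simplified) example of what a succinct representation buys you, here's a hedged sketch of 4-bit packing; the layout and names are illustrative, not from any particular library.

    #include <stdint.h>
    #include <stddef.h>

    /* Values known to fit in 4 bits are stored eight per uint32_t instead of
       one per int: far fewer bytes touched, and the shift/mask to unpack costs
       a couple of cycles versus hundreds for an extra cache-line miss. */
    static inline uint8_t get4(const uint32_t *packed, size_t i)
    {
        return (packed[i / 8] >> (4 * (i % 8))) & 0xF;
    }

    static inline void set4(uint32_t *packed, size_t i, uint8_t v)
    {
        uint32_t shift = (uint32_t)(4 * (i % 8));
        packed[i / 8] = (packed[i / 8] & ~(0xFu << shift))
                      | ((uint32_t)(v & 0xF) << shift);
    }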
Your login isn’t slow because the developer couldn’t do leetcode
Not for fun but for convenience (laziness occasionally?). Someone needed to "pay" for the app being available on all platforms. Either the programmer by coding and optimizing multiple times, or the user by using a bloated unoptimized piece of software. The choice was made to have the user pay. It's been so long I doubt recent generations of coders could even do it differently.
Maybe a bit of engineering and planning could help here. Always shipping half-finished products is usually not a recipe for success.
They actively prefer keeping comfortable margins to competing with each other. They have already been found guilty of active collusion in the past.
New actors from China could shake things up a bit but the geopolitical situation makes that complicated. The market can stay broken for a long time.
Rapid increase in capacity leads to oversupply which leads to negative margins. They've been there before, and they don't want to go there again.
RAM manufacturers do routinely set up new fabs and decommission old ones. Maybe they're trying to hurry up new fab construction in times like these, and they would likely defer shutting down old fabs or restart them where possible. But they're less likely to build new fabs that weren't already part of their long-term plans.
But well, I think there is no right answer; there will always be a trade-off, case by case, depending on the context.
As "just" a user in the 1990s on MS-DOS, fiddling with QEMM was a bit of a craft to get what you wanted to run in the memory you had.
* https://en.wikipedia.org/wiki/QEMM
(Also, DESQview was awesome.)
I do embedded Linux and RAM usage is a major concern; same for other embedded applications.
I'm partying like it's the 90s, on a 32-bit processor and a couple hundred MB of RAM.
it's just a cartel cycle: rake in the profits now, then wipe out all investment in competitors when a flood of cheap RAM "suddenly" appears
But it doesn't really need a nefarious plot for the price spikes. There is a serious lack of VRAM deployed out there. Filling that gap will take quite some time. Add to that the nefarious plot and the situation will most likely get even worse....
Something something, 2000 dot-com bubble, something
Although their stated reason for hoarding is that they "really need it", I think it was a strategic move to make their competitors' lives more difficult with little regard for the collateral consequences to non-competitors, such as regular people or companies needing new computers.
Key DRAM Factory Construction Projects:
Micron Technology (USA): Building a $100 billion, 4-fab complex in Clay, New York (first production expected around 2030) and a new $15 billion, 2-fab project in Boise, Idaho.
Micron (Global): Investing in expanding capacity in Singapore and Taiwan.
Nanya Technology (Taiwan): Previously initiated a $10.69 billion DRAM facility in New Taipei, Taiwan.
SK Hynix
The current HBM market leader is fast-tracking multiple "megafabs" and packaging centers. Cheongju, South Korea (P&T7): A new $13 billion advanced packaging and testing plant dedicated to stacking and testing HBM chips. Construction is set to begin in April 2026, with completion by late 2027.
Cheongju, South Korea (M15X): This fab is being fast-tracked for HBM4 mass production, with the first cleanroom now expected to open in February 2027.
Yongin, South Korea: SK Hynix is investing roughly $22 billion in the first fab of a massive new semiconductor cluster. Operations are planned to start in February 2027.
West Lafayette, Indiana, USA: A $3.87 billion advanced packaging site that will integrate HBM directly onto GPUs. Construction fencing was installed in February 2026, with production targeted for late 2028.
Samsung Electronics
Samsung is accelerating its "Shell First" strategy to secure production space ahead of competitors.
Pyeongtaek, South Korea (P4 & P5): Samsung has advanced the construction of the P5 cleanroom by several months, with a new operational target of late 2027. The P4 line is expected to come online even earlier, likely during 2026.
Taylor, Texas, USA: This $17 billion "megafab" is designed for advanced logic and HBM packaging. While hit by delays, it is now targeting a late 2026 opening.
Micron Technology
Micron is diversifying its HBM production across the U.S. and Asia to grow its market share.
Boise, Idaho, USA (ID1 & ID2): The ID1 fab reached a key milestone in June 2025 and is expected to start wafer output in the second half of 2027. ID2 is planned to follow shortly after.
Onondaga County, New York, USA: Micron officially broke ground in January 2026 on a $100 billion "megafab" complex, though significant supply is not expected until near 2030.
Hiroshima, Japan: A planned $9.6 billion HBM-focused fab is expected to come online between 2027 and 2028.
Singapore & Taiwan: Micron began construction on a $24 billion wafer facility in Singapore in January 2026 and acquired a fab in Taiwan for $1.8 billion to rapidly expand DRAM capacity by late 2027.
For lower end GPUs, like what goes into Apple machines.
New LPDDR Production Facilities
Samsung (Pyeongtaek P4 & P5): Samsung is converting several NAND flash lines to DRAM and accelerating the P4 and P5 fabs in South Korea. While these fabs support HBM, they are also designed for mass-producing 6th-generation 1c DRAM, which will form the basis of the next-gen LPDDR6 modules expected to debut in 2026.
SK Hynix (Icheon & M15X): SK Hynix is planning an 8-fold increase in 1c DRAM production by the end of 2026. This capacity will be split between HBM and "general-purpose" DRAM, which includes the LPDDR variants used in mobile and laptop chips.
Micron (Boise, Idaho - ID1): Micron's new ID1 fab in Boise is currently under construction, with structural steel completion reached in late 2025. It is scheduled to begin wafer output in the second half of 2027, focusing on leading-edge DRAM that includes LPDDR for the U.S. market.
The "Memory Wall" for Apple
The primary challenge is that HBM production requires significantly more wafer area than standard LPDDR. Consequently, even as these new factories open, the shortage of commodity DRAM (LPDDR5X/LPDDR6) is expected to persist through 2028 because manufacturers find HBM far more profitable.
RAM will always be in some demand, but that doesn't mean it's viable for everyone to start building production.
1) Prices aren't returning to "normal".
The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it
2) By building up capacity you influence the outcome.
If someone else enters the DRAM space, the duopoly has to actually start thinking about competing on price. Maybe they become price-competitive before the launch of your new fab in order to kill it, but it will have an effect, and probably before the fab even opens.
3) A western supply chain has benefits by itself.
There's a reason some industries are not allowed to die, most notably farming: security concerns and external pressure.
---
Realistically there's no reason not to do this. It will be long, painful and expensive. The best time was a decade ago. The next best time is now.
I disagree.
Modern RAM is made in fabs, which are ridiculously expensive to build. Modern EUV lithography machines cost around $500M each. They're assembled by hand. Only one company in the world knows how to build them right now. So we can't exactly increase global manufacturing capacity overnight.
The way I see it, there are two ways this goes:
1. AI is a fad. RAM and storage demand falls. Prices drop back to normal.
2. AI is not a fad. Over time, more and more fabs come online to meet the supply needs of the AI industry. The price comes down as manufacturing supply increases.
Or some combination of the two.
The high prices right now are because there's a demand shock. There's way more demand for RAM than anyone expected, so the RAM that is produced sells at a premium. High prices aren't because RAM costs more to manufacture than it did a couple years ago. There's just not enough to go around. In 5-10 years, manufacturing capacity will match demand and prices will drop. Just give it time.
And that company is in Europe, isn't it? The EU has a great opportunity to enter the market: it's a high-tech manufacturing job, not something that requires lots of cheap labor.
You can't just get into RAM manufacturing overnight whenever you feel like it, like you're building washing machines. You need a lot more than just ASML machines, you need the supply chain, the IP, the experienced professionals with know-how, the education system, the energy, the right regulations, etc.
The EU exited the RAM manufacturing business a long time ago when RAM prices sank (see Qimonda), meaning it would be a long, expensive uphill battle to get back in, and currently the EU has no major semiconductor manufacturing ambitions, or ambitions in commodity hardware manufacturing of any kind, so that's not gonna happen.
Of course, RAM is no longer a commodity right now, but nobody can guarantee it won't be again when the AI bubble bursts and RAM prices crash, so spinning up the know-how, manufacturing facilities, and supply chains from the ground up just for RAM is insanely expensive and risky and might leave you holding the bag.
> it's a high-tech manufacturing job, not something that requires lots of cheap labor.
Except semiconductor manufacturing DOES require cheap labor relative to the high degree of skill and specialization needed at the cutting edge. Unlike in Taiwan, skilled STEM grads in the EU (and even more so in the US) who invest that time and effort in education and specialization will go into better-paying careers with better WLB, like software or pharma, rather than hardware and semiconductor manufacturing, which pays peanuts by comparison and works you to death on deadlines.
Also, profitable semi manufacturing requires cheap energy and lax environmental regulations, which EU lacks. So even more compounding reasons why you won't see too many new semi fabs opening here.
I hope we (Europe) can try some things even when they are not guaranteed to succeed and generate huge profits. Otherwise we are toast, though it might take some time to realise it.
The concept of trying not-guaranteed things should not be so alien here on news.ycombinator.com I would think.
If EU hopes were cookies, I would have died of obesity 100 times over. The EU is bad at learning from its own mistakes and being proactive about rapid changes on the world stage; that's why its share of global GDP has dropped by half in 20 years. The EU is always reactive, and then only when it's far too late, and its actions are always limp-dicked ("we are monitoring the situation"). See the rise of US tech, Russia's 2014 invasion of Ukraine, the rise of Chinese EVs, etc.
>Otherwise we are toast, though it might take some time to realise it.
We already are toast for the long run, we just ignore it via printing more money and going into more debt, while kicking the can down the road for future generations to deal with the fallout. EU's biggest economies are working around the clock on how to fund the ever growing pension and welfare deficits, how to beat Russia, and how to stop people from voting right wing, not on how to claw back and on-shore cutting edge semiconductor manufacturing.
>The concept of trying not-guaranteed things should not be so alien here on news.ycombinator.com I would think.
Yeah, but someone still needs to pay for that and take a risk. And EU investors don't like risking billions of their money to try out new things that compete with Asia on manufacturing, because we cannot compete there. Labor costs too high, regulation too heavy, energy costs too high, environmental requirements too strict, and we lack critical know-how. That's why nobody is investing in EU fabs and instead in other things that guarantee higher returns, like services, pharma and weapons.
But then people shouldn't moan that the EU is absent from the RAM manufacturing industry or pretend like it's something easy they could do on a whim if the EU suddenly wanted to.
>Are pharma and weapons really guaranteed?
There will always be sick people and people killing each other.
Any given drug or weapon can still fail or not make a profit. Equally, it could be said that computers will still need memory for the foreseeable future. You're not keeping a coherent position in this discussion, just replying with cool soundbites.
And waiting 5-10 years for a lower price is a long wait for consumers.
If food prices were high, would you tell a starving person to wait 5-10 years for food?
If all consumer devices only shipped with 1 GB of RAM maximum, we'd get over it remarkably quickly. Just about the only times a large amount of RAM is an actual requirement are AI, some data science / simulation, and editing video in 8K. And maybe 3D modelling. Lots of programs we run today are memory hogs for no good reason - like the Rust compiler, Cyberpunk 2077, and Google Chrome. But we could make those programs much more memory efficient if we really had to. Cyberpunk wouldn't look as pretty. But nobody would really care.
The economy won't die due to expensive RAM. Programmers will just adapt, like we've always done.
no, you should say that you personally wouldn't care, but that does not generalize.
People do care, just like people prefer eating better food than just bread and milk. And after having had a taste of the good stuff, people do not want to revert - loss aversion is real.
So if consumer devices regressed back to only having 1 GB of RAM, they will feel the loss, and they will complain if nothing else. The world of lean, efficient software that requires little RAM will not return. Programmers (read: companies selling products) will not adapt; instead, the requirements for computing will become more exclusionary, reserved for those with the means.
Your assertion that a world of lean software won't return is backwards looking; that was all driven by hardware being cheaper than developer effort.
If we now enter a world of AI-enhanced developer effort being cheaper than hardware, perhaps we can have lean efficient software again.
You're wrong here. You don't need the most cutting edge ASML EUV machines to make RAM. Most RAM fabs still use standard DUV.
Ah. Please check that. Which types of DRAM can be made in a DUV fab? Obviously the older ones, but are those obsolete for new computers? This really matters.
Keep in mind that the high bandwidths of modern RAM modules aren’t really a property of the RAM cells so much as a property of the read and write circuitry and the DDR or HBM transceivers, and those are a large part of the IP but a small part of the die. There is no such thing as “double data rate” or “high bandwidth” DRAM cells. Even DRAM cells from the 1990s could be read in microseconds. Reading and streaming your fancy AI model weights is an embarrassingly parallel problem and even 1 TB/sec does not even come close to stressing the ability of the raw cells to be read. This in contrast to, say, modern tensor processors where the actual ALUs set a hard cap on throughput and everyone works hard to come closer to the cap.
Take a look at what makes a modern computer with good RAM performance work: it’s the interconnect between the RAM and processor.
Maybe some RAM chips don't need EUV lithography, but I suspect I'm still right about the economics.
A $100 million DUV machine is not your limiting factor when a whole fab costs $2-3 billion and requires specialized know-how that few people in the world have in order to get good yields and be profitable. Otherwise everyone would be making chips, if all you needed was to go out and buy a $100 million DUV machine then hit the "print" button to churn out chips like it's a Bambu 3D printer.
>I suspect if they were, we'd see a lot of cheap RAM hit the market.
Nobody spends $2-3 billion to open a new fab just to make commodity low-margin chips. New fabs are almost always built for the cutting edge; then, once they pay off their investment costs, they slowly transition to making low-margin chips as they age out of the cutting edge. But nobody builds fabs for legacy nodes that have a lot of competition and low profitability, except maybe if national security (i.e. the taxpayer) subsidizes those losses somehow.
>but I suspect I'm still right about the economics.
You are not.
Really the only way it could work is if the government declares it a national security issue and promises to subsidize it. Because in a pure free market, it's most likely to flop.
This does not really help EU and US businesses to be competitive though, neither does it stop consumers going for the cheapest option...
Except EU and US tech giants also get massive government subsidies making such accusations hypocritical. Silicon Valley has its roots in cold war defense funding.
What the US and EU don't like it that China has beaten them at their own game using their own rules, so now they need to move the goalposts on why we shouldn't buy Chinese RAM and protect western DRAM monopolies making amazing margins.
Subsidize what? Copilot prompt in the Run dialog or Notepad? Is this what you think might be considered for subsidizing?
> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it
RAM isn’t some commodity that gets mined at a fixed rate and therefore costs more when people want large amounts of it. It’s a manufactured good, made from raw materials that are available in huge quantity, that was produced and sold at a profit at 2024 prices, even accounting for the capex needed to produce it.
Two things have changed. First, demand increased quickly. Second, big buyers sort of demonstrated that they’re willing to pay current prices, at least temporarily, so maybe the demand price elasticity has changed, or at least people’s perception of it has changed.
None of this prevents the price from going back down. The high prices have made it economical for new manufacturers to invest more to compete - look at CXMT. And CXMT doesn't have EUV machines, which doesn't appear to be a showstopper for them.
2024 prices were at a historical low, so we can't be sure that this is correct. Regardless, when production capacity is short-term constant, new RAM does get "mined" at a constant rate, a bit like bitcoin with its mining ASICs.
The biggest problem is that the industry wants HBM, whereas consumers want DRAM. Until the need for HBM has been sufficiently satisfied, fabs will prefer being tooled for HBM because businesses can be squeezed much harder than consumers.
Then again, as a consumer you don't really need DDR5 or even DDR4, so long as you aren't using an iGPU. It's all about being around CL15 timings.
you're missing the picture that it's not companies, plural - the crisis was primarily caused by a single company, OpenAI, buying out wafers
but even more than that - that wafer buyout is *an excuse* used by the cartel - there are several mechanisms that could have eased most of the problem (e.g. Samsung selling old equipment) that were not used, so as to ride the money wave
(also, said "hyperscalers and AI companies" existed in spring 2025 too, yet the price was low)
the winners will not be the ones who build new fabs - but the ones who'll have enough money and government subsidies/import taxes to protect such investments after the cartel decides to oversupply again, flushing the price down
This isn't right.
RAM prices (and most components) are very finely balanced between supply and demand. A small shortfall in supply leads to a large increase in price, and a large shortfall in supply leads to very large price increases.
It takes 2 years for an existing RAM supplier to build a new clean room factory to make RAM.
All the RAM manufacturers saw this shortage coming 6 months ago.
If you follow the news, the existing manufacturers are investing heavily. Here are Hynix's announcements:
Nov 25: Hynix plans 8-fold boost to cutting-edge DRAM production in 2026, https://overclock3d.net/news/memory/sk-hynix-plans-8-fold-bo...
Dec 25: Hynix investing $500B (I guess this is a mistranslation somewhere!!???) in new RAM factories, https://www.pcgamer.com/hardware/memory/hot-on-the-heels-of-...
Jan 26: Hynix to spend $13 billion on the world's largest HBM memory assembly plant, https://www.tomshardware.com/pc-components/dram/sk-hynix-to-...
The supply is being built to match the demand. Prices will stabilize, and the manufacturers know there is lots of latent demand.
In 2 years time RAM prices for consumers will be normal again (not sure about GPU RAM though!)
Which is a good idea for when we don't have a dementia patient in charge of our country.
EU should get on that though.
You can't reshore domestic manufacturing without creating legions of desperate workers with no other choice but to accept minimum wage factory jobs.
this is the theory of those who are expecting the ram shortage to be short, yes
Which is exactly how you know it will always be nerfed. The last thing these guys want is to take their claws out of our data.
Another thread suggested that OpenAI's primary play is to get big enough that it's too big to fail. Funny to think that it's not a funding runway or an algorithmic moat, just a hardware vault, and the longer you can stop boats crossing it, the more chance you have of getting your fingers in all the pies.
I'm not sure about that. When was the last time you have used Copilot prompt in Run dialog or Notepad?
Now try to really sincerely use copilot prompt in the Run dialog.
Having your own chips is a national security issue. Spreading out fabs across the world is a global resilience issue.
People forget quickly why we only have a handful of DRAM manufacturers today.
That being said, AI is not going to go away at this point. And AI is about as memory-hungry as you can get.
The situation I'm worried about is that these PC manufacturers could use this opportunity to push for a more locked-down design, such as soldered RAM or even soldered SSDs. My current ThinkPad already has soldered LPDDR5 RAM chips with no user-end RAM upgrade possible, so there's reason to suspect they'll take more pages from Apple's book if they can get away with it, just like they did when they pushed out those internally mounted, unswappable batteries.
My personal guess is that the RAM price will fall after this period of AI expansion is over and the major players start to consolidate. But it will not fall as much as we're hoping for, because the manufacturers can just reduce production to control the price.
This isn't some conspiracy, it's electrical reality.
BUT... a smart consumer would also recognize the other side of the story: do we really need HBM on consumer devices? We don't serve 1000 users at the same time, a slower, cheaper device is good enough for most use cases (including the professional ones), better if it's also somewhat future-proof. After all, smart people usually have better foresight.
Where can CXMT and other Chinese players export when Japan, South Korea, much of ASEAN, India, much of North America, the EU, the UK, Australia, NZ, and parts of the Gulf have enacted or begun enacting trade barriers against Chinese exports?
[0] - https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...
Also, I don't think you've seen true consumer rage until the opposition in the EU would start pointing out the current parties are making the smartphones, laptops, TVs and whatnot consumers wanna buy much more expensive (or more crappy). Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and tech / gadgets being cheap was one of the sole rays of light left.
Plus, even taking the low unemployment numbers at face value, job quality has fallen a lot, with a lot of people still technically employed but not in great jobs, in shitty jobs they do for survival, like fast-food delivery.
The reality is that mass layoffs and SME bankruptcies are a current occurrence in many EU countries.
Those base stations are only security critical because mobile networks are deliberately insecure to enable government surveillance.
And I can imagine backdooring RAM. At least the controller part.
Huh?
Or their consumers will enjoy cheap PC part prices, with a possible gray-zone re-export market.
Of course we could see retreat from global markets to mercantilism, but that has yet to fully happen.
[0] - https://www.reuters.com/world/asia-pacific/xi-putin-hail-tie...
[1] - https://www.reuters.com/world/china/chinas-president-xi-meet...
[2] - https://www.reuters.com/world/china/china-calls-closer-defen...
[3] - https://www.reuters.com/world/china/eu-steps-up-efforts-cut-...
[4] - https://www.scmp.com/news/china/diplomacy/article/3316875/ch...
Who antagonized who first, again?
> and undermining EU institutions
If the Europeans had any common sense they'd be undermining EU institutions as well, those institutions have been disasters. They aren't doing a good job of keeping the peace, they aren't doing a good job of promoting prosperity and they've had successes like forcing Apple to switch from Lightening to USB ports. The CCP on the other hand have been so successful in the last few decades that they're making authoritarianism look good. If the EU focused on figuring out what good policy looked like then they that wouldn't be the case. Although I assume sooner or later the ideological issues will catch up with China.
Australia for example is a large and growing market for Chinese electric cars. China is the biggest export market for Australian raw materials so it doesn't just put random trade barriers up.
There's actually a free trade agreement between Australia and China.
People appreciated cheap YMTC 232-layers when that happened where I live.
It should. And it should enact the political reforms that would make large capital projects like fabs possible. The current confederacy is proving just as much a stepping stone for Europe as it was for America. I'm not saying a fully united Europe should emerge. But a system of vetoes is barely a system at all.
You don't see their products in stores too often as they're focused on B2B - particularly the automotive sector.
That being said I have a 128GB memory stick from this manufacturer and I hope they make the most out of this windfall.
SK Hynix, Samsung, or Micron don't treat good people well enough to give them taxpayer money.
> but this is only if you do not take into account the bribes from Intel to specific officials and their relatives who make decisions about subsidizing Intel.
Bribes? Sheesh, HN has gone insane. Brandolini's Law is out of control here. You are making a bold fucking claim. From the tone of your post, it seems pointless to ask if you have any evidence. From an outsider's view, I would say the German political system is much less corrupted by lobbyists compared to the United States. Do you say the same about the CHIPS Act in the United States?
I highly doubt it. I'm certainly no expert on Germany, but has Germany's bureaucratic machine spent decades destroying its own energy sector to buy energy from Russia, funding the war machine of Putin's totalitarian dictatorship?
And not just by buying these resources, but by OVERPAYING for them many times over. I just opened a chart of the prices at which Germany bought natural gas from Russia before the war with Ukraine, and it's wild, it is several times more expensive than Germany is now paying for gas delivered from the other side of the globe on tiny ships. It was a direct subsidizing of this war.
And then you look at these high-ranking (and not so high-ranking) bureaucrats who made all these decisions... And literally all of their families got richer during the time these decisions were made, by tens (and sometimes hundreds) of millions. There's zero accountability, zero media coverage, and it's all being hushed up to such an extent that I can't think of any other explanation other than EVERYONE was taking the money. We are literally talking about the level of existence of a centralized totalitarian machine for the forceful silencing of anyone who tries to talk about this topic.
So do I say the same about the CHIPS Act in the US? Probably. But the level of corruption seen in Germany – pervasive, bloody, destructive – is simply unimaginable in the US.
For context, the German manufacturing sector is losing something like 15k jobs PER MONTH.
What are you talking about?
It's easy to build factories, much more difficult to train the engineers required to run them... and let's not even talk about all the crazy regulations & environmental rules at the EU level that make that task even more difficult, because yes, chip factories do pollute... a lot.
Countries like South Korea or Taiwan have adapted all their legislation, tax rules, and environmental regulations to allow such factories to operate easily. The EU and EU countries will never do that... better to outsource the pollution and claim they care about the planet...
The reason is as you have described. We are getting close to the point where the people with practical experience working in, managing, and designing things like work processes and factory layouts in industries that build physical products are disappearing. We're losing a lot of capable practical engineers with hands-on experience. We can keep the universities going, teaching the physical subjects, but those lecturers wouldn't even know where to begin on designing and building efficient factories, unfortunately.
We'd probably end up having to get Chinese and Taiwanese businesses to outsource their 'experts' back to us in order to actually do this and pay them a fortune - basically the reverse of what was happening in the manufacturing sector in the 80s and 90s!
So, we're looking at a decade-long project at least, even if everything goes as planned, and crazy fast, in the technical and administrative departments.
Excellent universities, overall. But results from primary and secondary schools are nose diving at a more than alarming rate in several EU countries. Literacy rates are falling, math grades are falling. There's IMO only so much time before universities begin to be affected as well.
Well, the EU has not manufactured a whole lot of chips in the last 30 years; where do you get the people with the professional experience to teach new engineers... Oh, you mean you have to import the teachers from South Asia too? /s And it takes what, 5 years at minimum to train an engineer? France and the UK used to produce entire home computers... in the 80s...
This is not comparable to Taiwan or the Shenzen area, but it's definitely not nothing. Some local expertise exists, even though it may be not the most cutting-edge.
The same applies to your comment.
How?
Most foundries across Asia and the US are being given subsidies that outstrip those the EU is providing, and the only mega-foundry project in Europe was canceled by Intel last year [0].
Additionally, much of the backend work like OSAT and packaging is done in ASEAN (especially Malaysia), Taiwan, China, and India. As much of the work for memory chips is largely backend work (OSAT and packaging), this is a field the EU simply cannot compete in given that it has FTAs with the US, Japan, South Korea, India, and Vietnam so any EU attempt would be crushed well before imitating the process.
Furthermore, much of the IP in the memory space is owned by Korean, Japanese, Taiwanese, Chinese, and American champions who are largely investing either domestically or in Asia, as was seen with MUFG's announcement earlier today to create a dedicated end-to-end semiconductor fund specifically to unify Japan, Taiwan, and India into a single fab-to-fabless ecosystem [1]. SoftBank announced something similar to unify the US, Japan, Malaysia, and India into a similar end-to-end ecosystem as well a couple weeks ago [2]. Meanwhile, South Korea is trying to further shore up their domestic capacity [3] via subsidies and industrial policy.
When Japanese, Korean, and Taiwanese technology and capital partners are uninterested in investing in building European capacity, American technology and capital partners have pulled out of similar initiatives in Europe, and the EU working to ban Chinese players [4] what can the EU even do?
----
Edit: can't reply
> Why are you overlooking European semiconductor champions
Because they don't have the IP for the flash memory supply chain. And whatever capacity and IP they have in chip design, front-end fab, or back-end fab is domiciled in the US, ASEAN, and India.
> STMicroelectronics
Power electronics and legacy nodes (28nm and above) for IoT and embedded applications.
> Infineon
Power electronics and legacy nodes (28nm and above) for automotive applications.
> NXP
Power electronics and legacy nodes (28nm and above) for embedded applications.
> All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.
They don't have the IP. Much of the IP for the memory space is owned by Japanese, American, Korean, Taiwanese and Chinese companies.
Additionally, most Asian funds own both the IP and capital (often with government backing), making European attempts futile.
Essentially, the EU would have to start from scratch and decades behind countries with whom the EU already has FTAs with that have expanded capacity well before the EU and thus would be able to crush any incipient European competitor.
[0] - https://www.it-daily.net/shortnews-en/intel-officially-cance...
[1] - https://www.digitimes.com/news/a20260224VL219/taiwan-talent-...
[2] - https://asia.nikkei.com/economy/trade-war/trump-tariffs/soft...
[3] - https://www.digitimes.com/news/a20251230PD220/semiconductor-...
[4] - https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...
Champions at what? They pale in comparison to the likes of Samsung and TSM at IP and manufacturing.
> A bunch of EU dev banks can lend the monies to get it built.
Why would EU banks risk their money on a DRAM fab meant to compete with Asia that has lower wages, lower regulations, less environmentalism, etc?
Note that it won't help you if your workload makes use of all your RAM at once.
If you have a bunch of stuff running in the background it will help a lot.
I get a 2-3x compression factor at all times with zstd. I calculated the utility to be as if I had 20 GB of extra RAM for what I do.
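Back-of-envelope, under assumed numbers (say 10 GB handed to the compressed swap device at a 3:1 ratio, not measurements from the comment above), the gain works out like this:

    #include <stdio.h>

    /* Back-of-envelope only; backing size and ratio are assumptions. */
    int main(void)
    {
        double backing_gb = 10.0;  /* RAM handed to the compressed swap device */
        double ratio      = 3.0;   /* observed zstd compression ratio          */
        double held_gb    = backing_gb * ratio;    /* page data it can hold    */
        double gain_gb    = held_gb - backing_gb;  /* net effective gain       */
        printf("holds ~%.0f GB of pages, net gain ~%.0f GB\n", held_gb, gain_gb);
        return 0;
    }

which lines up with the "roughly 20 GB extra" figure.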
https://www.gnu.org/software/grub/manual/grub/html_node/badr...
Several times I set about trying to turn it on and found out a whole chip was fried, which means the 7th bit of every read was stuck, so there's not much you can do there.
Of course compression being now computationally cheap also helps.
It's since become the default in several distributions, including Fedora.
Resource usage has been on a hedonic treadmill at least since I came online in the 90s. Good things have come from that, of course, but there's also plenty of abstraction/waste that's permitted because "new computers can handle it."
With so many gaming devices based on the AMD Z1 Extreme platform (and its custom Valve corollaries) over the past few years, it'll be great to see that be the target/baseline for a while. It brings access to more players and staves off e-waste for longer.
I work in gamedev, so perhaps I'm a bit sensitive, and I understand that general purpose engines aren't as light on resources as the handcrafted ones that nobody can afford to make anymore... but we're not anywhere close to the layers of waste and abstraction that presents itself when using webtech for desktop apps by default.
So the causal link is more: why would software makers need to optimize when it benefits them to pretend the user _needs_ more hardware? Especially in the games realm. Surely going from a 60 Hz to a 240 Hz refresh rate became a practical loss in benefit per Hz somewhere halfway through. But it ate up hardware resources along the way.
Today's RTX 5060 has 8 GB for basically the same price that the 1070 did.
For $650 you can go up to 12 GB in the 5070, if you want 16 GB it's $1000 for the 5070 Ti, or hundreds more than that for the 5080.
I know there's inflation and $380 in 2016 was more money than it is today, but if you'd asked me 10 years ago I would've bet on VRAM capacity doing better than "the same money is worth less but still gets you exactly same amount of memory 10 years from now."
With prices going up, I half expect Nvidia to launch the RTX 6070 and tell everyone "It has 4 GB of memory and we think you're going to love it. $900." Or they'll just stop bothering with consumer GPUs entirely.
I sometimes have to disable graphical options but it's more the exception than the rule. On a lot of games, I can even play in 4K.
Of course as you can imagine, I don't game at 245 fps :D
Arguably the connotation has changed slightly, but AI slop caught on because it fit so well.
It's uncommon, and associated with old timey prisons and orphanages.
The word itself has existed for hundreds of years.
A) Programmers will get their shit together and start shipping lean software.
OR
B) New laptops will become neutered thin clients, and all the heavy lifting will be done by cloud service providers.
Which one seems more likely?
2. People will buy laptops with low RAM because that's all they can afford (hopefully upgradable).
3. They'll crib about Chrome being slow, and will be told to use lightweight apps.
Garbage collectors also use similar strategies. Collecting garbage is expensive, so just don't until you need to. The extra memory usage in this case isn't a downside, it's an upside. Your code runs faster.
That's how Java and dotnet are able to achieve insane performance times in some benchmarks, like within 50% of native. They're not collecting garbage, and their allocators are actually faster than malloc.
If you've ever run a Java program at a consistent 90% heap usage, you'll notice it absolutely grinds to a halt. I'm talking orders of magnitude slower. Naturally, this isn't highlighted in benchmarks, but it illustrates the power of allocating more memory.
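A rough C analogue of that trade-off is bump/arena allocation: hold on to more memory, never free individual objects, and allocation becomes almost free. This is only a sketch of the general idea, not how any particular JVM or .NET collector is implemented.

    #include <stdlib.h>
    #include <stddef.h>

    /* Allocation is a pointer bump into one big region; nothing is freed until
       the whole arena is reset. More memory held, but allocating is nearly free,
       which is the same trade a GC nursery makes. */
    typedef struct { char *base, *cur, *end; } arena_t;

    static arena_t arena_create(size_t size)
    {
        char *mem = malloc(size);
        if (!mem) size = 0;
        return (arena_t){ mem, mem, mem ? mem + size : mem };
    }

    static void *arena_alloc(arena_t *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;            /* keep 16-byte alignment        */
        if (a->end - a->cur < (ptrdiff_t)n)    /* "heap full": time to collect  */
            return NULL;
        void *p = a->cur;
        a->cur += n;                           /* the entire allocation path    */
        return p;
    }

    static void arena_reset(arena_t *a) { a->cur = a->base; }  /* the "collection" */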
Apple could lead here. They sell feels, not specs, so they could drive down OS and browser RAM requirements and sell lower-RAM entry models.
Of course, it seems like local AI is more or less a flop, in the consumer market at least?
But still, IMHO, even for general use macOS with 8 GB is almost unusable unless you use it like an iPad.
On the flip side if you're buying a new computer in 2026 - it's going to be even harder to justify not getting a MacBook, the chips are already 2 years ahead of PC, the price of base models was super competitive, now that the ram is super expensive even the upgraded versions are competitive with the PC market. Oh and Windows is turning to an even larger pile of shit on a daily basis.
I'd buy a mac in a sec otherwise.
If Apple fully supported the Asahi Linux project, I'd switch in a heartbeat.
Probably not quite, but I was pricing a Lenovo laptop last week and this is the first time the lenovo price for RAM upgrades was lower than 3rd party RAM.
Now, almost everything on the server side is a VM or a container. We have lots of neighbors who want to share the CPU and the RAM, and the RAM is the bigger constraint because the CPUs have 192 cores and each of those cores does a dozen times as much work as a decade ago. Heck, we used to have the memory controller on the motherboard and the last level of cache was a chip or module of SRAM outside the CPU.
We also have a situation now in which the multiple in speed of the CPU over RAM has skyrocketed, but the caches have gotten far larger and much smarter. Smaller things arranged differently in RAM make things run faster because they make better use of the cache.
Now that RAM is expensive, shared, and program and data size and arrangements are bound to cache behavior, optimization can lean heavily into optimizing for RAM again.
Some of these arguments hold true for desktop systems as well.
I have wondered for years when the time will come that instead of such huge and smart caches, someone will just put basically register-speed RAM on the chip and swap to motherboard RAM the way we swap to disk. HBM is somewhere close, being a substrate stacked in the package but not in the CPU die itself.
having been in the market for one, i did make some compromises for the build (a single stick of 16 GB for now; a non-"future-proof" GPU within the budget). however for a decent spec (last-gen x3D CPU, mid-range RTX) build, the GPU price reduction made up for the premium on RAM.
the sad reality since the turn of the decade is that the $1000 mark for a well-rounded (gaming) system has now bumped up to $1500-$2000.
crazy time to be alive, where on laptop side, macs are now a "decent" value. especially if you were going to get the higher spec to unlock the specific memory tier. thanks work for setting me up well with one!
We can't get any new chips. At all. We can't launch our new product because nobody could afford the memory even if we could get some.
Incredible.
It reminds me of the heady days of Thai floods when hard drives were inaccessible.
One thing that might support this is the fact AI companies are purchasing uncut wafers of DRAM. One use might be to hoard and stockpile them somewhere in a cave, so that no one else gets to them.
Another thing that might support this is that precisely the same strategy had been in use by software companies during the COVID hiring fever. Companies used to hire people for ridiculous pay with little actual work to perform so that among other things, competitors wouldn’t whisk those people away and be at an advantage.
This, of course, ended with massive layoffs once the reckoning came about, and I’m wondering about what is going to happen when (there’s no “if”) the reckoning comes for big AI, too.
Raise your hand if you have been there too! :-))
I think it ran at 433MHz, and I could overclock it to almost 700.
Those were the days!
1. RAM was relatively cheap between 1985 and 1987, hovering around $100-150 for 1 MB using 256 Kbit chips. Then 1987 anti-dumping laws lined up with fabs upgrading to lower-yielding new 1 Mbit chips, and things got crazy. In 1988, 256 Kbit chips went from $3.5 to $7 in less than a month. Some companies coped better than others. Atari was the first to offer a computer shipping with 1 MB below $1000, thanks to Tramiel's little secret of smuggling RAM from Japan and skirting anti-dumping restrictions :) Even Sun Microsystems was caught buying that smuggled RAM from Tramiel.
2. 4 MB was $150 in January 1992; the lowest it went was $100 in December 1992, and it was back to $130 in December 1994.
3. September 1999 Jiji earthquake.
https://en.wikipedia.org/wiki/1999_Jiji_earthquake#Economic_...
https://www.edn.com/panic-buying-sets-dram-prices-on-wild-ri...
https://www.eetimes.com/dram-prices-rise-sharply-following-t...
128MB DIMM prices: May 1997 $300. July 1998 $150. July 1999 $99. September-December 1999 $300. May 2000 $89.
Then overproduction combined with dot-com boom liquidations started flooding the market and Feb 2001 $59, by Aug 2001 _256MB_ module was $49. Feb 2002 256MB $34. Finally April 2003 hit the absolute bottom with $39 _512MB_ DIMMs
Sadly, now is not like any of those times. It's as if the Jiji earthquake lasted a couple of years straight.
Guess what's inside these chips and what equipment they're made on.
Unless there is a true breakthrough, beyond AGI into superintelligence, on existing or near-term hardware, I just don't see how "trust me bro" can keep its spending party going. Competition is incredibly stiff, and it's pretty likely we're at the point of diminishing returns without an absolute breakthrough.
The end result is going to be RAM prices tanking in 18-24 months. The only upside will be for consumers who will likely gain the ability to run much larger open source models locally.
2010s: so much memory, programmers used electron and chrome wrapping everything in js.
2026: so little memory, programmers have to optimize AI code to run properly.
Additionally, depending on which country you live in, telecom vendors reduce the upfront cost of the phone purchase and make up the difference via contracts.
> Most base level smartphones are loss leaders
Is this really true? Or rather, how can we know it is true? I tried to Google for some info, but I cannot find any reliable sources.
https://www.theverge.com/tech/880812/ramageddon-ram-shortage...
They discussed it on the decoder podcast as well.
Behold, the RAM cost is being optimized with AI.
They also marketed the first webcam, and made emulators mainstream. Their PlayStation emulator is the basis for the case law that says emulators are fair use, decided as a result of a suit from Sony.
So why you’re saying is that it could be worse, but not by much?
Idk if the owner changed or what, but the website used to be more comical.
People are missing the point. Mega-corporations distort the market. This is not capitalism; this is old aristocratic rule by power. If all these monopolies were divided into smaller chunks and regulated so they could not abuse that power, we would not be here.
This situation is not normal, big tech is currently above the law and above the market economy and if they fail their plan is to make us pay *AGAIN* for their bad decisions. All businesses and individuals are already paying higher prices for big tech folly, we will be left with the bill when the AI boom fails, too.
It denied this, saying that the figures quoted were estimates only, that such massive RAM contracts would be easily obtainable public knowledge, and that the recent price increases were mostly cyclical in nature.
Any truth to this?
Edit to add: I am actually curious; I was under the impression that this 40% story going around was true and confirmed, rather than just hyperbole or speculation.