> TSMC's A14 is a brand-new process technology that is based on the company's 2nd Generation GAAFET nanosheet transistors and a new standard cell architecture to enable performance, power, and scaling advantages. TSMC expects its A14 to deliver a 10% to 15% performance improvement at the same power and complexity, a 25% to 30% lower power consumption at the same frequency and transistor count, and a 20% to 23% higher transistor density (for mixed chip design and logic, respectively), compared to N2. Since A14 is an all-new node, it will require new IPs, optimizations, and EDA software compared to N2P (which leverages N2 IP) as well as A16, which is N2P with backside power delivery.
https://www.tomshardware.com/tech-industry/tsmc-unveils-1-4n...
> deliver a 10% to 15% performance improvement at the same power and complexity, a 25% to 30% lower power consumption at the same frequency as
To be clear, it's either a 10-15% performance improvement at the same power as N2, or 25-30% lower power at the same performance as N2. It's not both.

At this point, as transistors scale smaller, eventually the whole chip will just be SRAM by area.
If you were Taiwanese, would this worry you?
It makes complete sense for Taiwan to invest in maintaining its “silicon shield” even as China tries to catch up with fabrication on the mainland.
NATO doesn't consider Ukraine significant because it supplies vital tech globally. Rather, NATO is concerned about an aggressive regional power that may have aims on more than just Ukraine.
I don't mean to defend the Soviet regime here but in the interest of discussion: the "to subjugate them" aspect is still somewhat contested [0]. I'm not sure whether it would even class as genocide according to the UN's own criteria.
From my understanding: the famine was definitely man-made, the question is more about whether it was intentional.
[0] https://en.wikipedia.org/wiki/Holodomor_genocide_question
they won't. full stop. they've even admitted as much in industry. they'll be extremely happy if they're (only!) a few years behind tsmc
at this point, what will be a real earthshaker is if china manages to get past 7nm. smic has gotten a long way but even that company's 7nm process has serious limitations (much higher cost and worse yields)
and anyways, aside from a handful of use cases (ai being one tbh), 7nm chips are more than viable for any general purpose task. a "leapfrog" is quickly diminishing from a "need" to a "want", and the resulting governmental support is fading as well. of course, it'll still be a high priority for the chinese, but it's not like top of the list.
You could buy phones with SMIC 7nm chips as early as mid 2024, that means yields were good enough around mid 2023.
This indicates they'd be on track to do 5nm this year, which is what the news articles indicate. The impressive part is that this is catching up with ASML+TSMC combined. There's no other company or government in the world that has achieved this vertical in the last few decades. China is willing to sink Manhattan project level resources into this, for good reason.
I guess my mistake was assuming they wouldn't ship it considering how rough it is financially, but they probably will for bragging rights.
> You could buy phones with SMIC 7nm chips as early as mid 2024, that means yields were good enough around mid 2023.
That was a halo product.
> The impressive part is that this is catching up with ASML+TSMC combined.
Without euv, they are mining diminishing returns.
> There's no other company or government in the world that has achieved this vertical in the last few decades.
No, they had the advantage of copying and learning from decades of industry experience.
Don’t get me wrong, it’s impressive, but expecting Chinese leadership in this space by 2030 is foolish.
Of course, I’m never going to rule out anyone’s long term success, but there is no indication they’re in any position of leadership.
Not sure why you are basing your whole argument on euv. There's nothing magical about it that prevents them from stealing/reinventing it.
I'm not. We were talking about "leapfrogging" and that naturally requires a technology that enables processes beyond EUV's capabilities.
In fact, many hours before you responded to my post, I responded to my own post with this: "My rough best guess is that they’ll be shipping euv-involved chips by 2028."
> There's nothing magical about it that prevents them from stealing/reinventing it.
If it was so easy they'd have done it already.
No, it's just insanely complicated to implement and the innovations required are not just the conceptual technology itself, but the high precision manufacturing prowess required to actually execute it.
And again, I think they're probably gonna have it before 2030. Hell, I could be convinced that they're going to start taping out EUV-based chips by the end of next year, although I would require a beer wager for that :)
But we were talking about leapfrogging, and that's "only" parity.
My rough best guess is that they’ll be shipping euv-involved chips by 2028. That’s not a leapfrog, that’s parity
The Chinese perspective itself certainly isn't anything close to these vague abstractions, outside of vague anti-western polemics or nationalist chest-beating. After all, by the same logic we're pouring a "Manhattan Project's worth of funding" into "AI"; that doesn't mean we're going to be reaching AGI anytime soon!
> Everything, even very advanced technology, can be reverse engineered, especially if you already know conceptually what it does
This isn't true. Maybe for software it is. Manufacturing is one of the hardest stages, and China lacks tooling for the ultra-high precision engineering required to actually implement this process.
It's kind of like building a nuclear bomb - conceptually it's easy. Hell, the first nuke was dropped before the microwave oven was commercially available.
The real challenge is manufacturing the damn thing. Refining all of that uranium is not an easy task - even today.
China has spent billions of USD equivalent trying to copy EUV. They have had access to EUV installations and fully disassembled them (funny story: they broke it putting it back together).
They are highly motivated, they have a ton of money, and frankly, they're not a bunch of dummies.
And yet, they still don't have it. (fwiw i think they will by the end of 2030)
I disagree. We already have, by anyone's standards from 2021.
We keep shifting the bar, somewhat intentionally so that progress doesn't stall.
China can comfortably make chips that might be the equivalent of 5 year old Taiwanese ones. Last time I checked, that’s extremely viable.
No military general ever is going to say, “we can’t invade, we’re half a decade behind!”
Nothing screams “favorable market conditions” like WW3 and sanctions rivaling Russia.
But it would be incredibly stupid strategically.
I think the only sensible path is to give Taiwan nuclear weapons. It removes all the dangers forever.
End result requires more energy, has lower yield and is overall more expensive.
For military purposes and whatnot that's enough, but they can't put this in consumer devices without subsidies.
It is still hard to get much of a clear picture on how well they are doing on this stuff. Chinese companies are saying they are near parity, the opposition says they are 15 years behind, the reality is probably somewhere in between.
While they have made a lot of quick progress, that is no guarantee for future gains.
On the flipside Canon's Nanoimprint Lithography promises lower cost, even if their feature size is 15nm in practice. They also appear to be having competition now:
https://www.zyvexlabs.com/apm/atomically-precise-nano-imprin...
Recently Canon shipped the first commercial device and it appears that this path will see continued development. 15 nm (14 nm advertised) is already good enough for, say, automotive applications.
We already have a pretty serious unemployment problem among college graduates so something else seems to be going on (a problem with domestic universities?)
Among stem graduates?
I think America doesn't manufacture semiconductors because it is a very unclean process, full of nasty chemicals. It's expensive to make semiconductors and deal with the clean-up. There are fewer environmental restrictions and cheaper labor in other parts of the world.
There are a bunch of Superfund sites around Mountain View, CA that serve as a reminder about the US Semiconductor industry - Fairchild Semiconductor, Intel, National Semiconductor, Monolithic Memories, and Raytheon to name a few.
Nobody in the U.S. really wants that in their back yard. Of course we've seen the same kind of thing from fracking, and everything else that rightly should be regulated or banned.
What happens now with a defunded and purposefully dysfunctional EPA is anyone's guess. Maybe manufacturers will exploit the political climate to further destroy the environment to make a few more million or billion dollars.
> Nobody in the U.S. really wants that in their back yard.
Disagree. I worked in Intel's flagship semiconductor r&d division in Oregon in the 2010s.
Everybody wanted Intel in their backyard. It was a huge source of high paying and stable jobs, both for engineers with PhDs, like me, and for thousands of technicians and support staff.
There were protected wetlands on Intel's campuses, and parks and fields around them. Certainly Intel wasn't perfect from an environmental point of view, but it was not a high source of pollution.
I've worked at/with many other semiconductor fabs around the US and around the world, and they're mostly similar in this regard. Far "cleaner" than factories in many other industries.
According to https://steveblank.com/2009/04/27/the-secret-history-of-sili... and https://www.youtube.com/watch?v=ZTC_RxWN_xo the creation of Silicon Valley had more to do with academic expertise in radio research and Department of Defense funding circa World War II. Corporations were the "second wave".
And of course, this is before the mega defense contractors that exist now. The military absolutely fucking hates those megacorps and does still try and actively fund new small business entrants to military contracting. The problem is mega corps buy them up as the US is owned by them.
https://en.wikipedia.org/wiki/Silicon_Valley
"Silicon Valley" describes the period between the late 1960s and mid/late 1990s (and still to this day to some extent). It has nothing to do with what went on there around World War II. Yes, semiconductor corporations created "Silicon Valley".
Before that time it may have been a sort of "Vacuum Tube Valley", but that does not have the same ring to it. And around WW2 there was tech going on everywhere, not just around Mountain View.
If I had to pick a point in time as the beginning, I'd probably put it at the founding of either Kleiner Perkins or Intel (a couple years after or before the Silicon Valley moniker was coined, respectively). Before then funding mostly came from other companies. With Intel you have successful founders funding their own new company, and with Kleiner Perkins you have successful founders funding other founders. To me it isn't Silicon Valley until this dynamic emerges.
Narrative and fact are two distinct aspects of history which work together. Portraying the heavily referenced and fact-laden linked article / talk as "just story" borders on dishonesty by intentionally ignoring the facts presented - the most interesting part. Who, What, When, Where, Why, and How.
> Steve's version isn't some transcendental truth
I don't see anywhere I make such a claim. "Silicon Valley" is a narrative. My point has been that the facts paint a deeper and more complex history than that narrative provides. Have a nice day!
Press (X) to doubt.
A simple Google search for 'how many semiconductor fabs are in the U.S.' shows there are 70 commercial fabs.
American corporations are fading into irrelevance through "financial management". Manufacturing powerhouses like Boeing and Intel are a shadow of their former selves and are really just coasting on inertia.
Pretty much everywhere you see "innovation", you will see government money. Look at the pharma industry. I doubt there's a drug out there that wasn't created by researchers using federal grants.
I'm often reminded of the story of Tetris. A handful of Soviets created the game. What was capitalism's contribution? Licensing agreements, sub-licensing agreements and so on. Put another way: building enclosures. That and rent-seeking is really all American corporations do anymore.
I had a great job in R&D at Intel, in a department full of PhDs, in the 2010s, then jumped to another semi company from 2015-20.
Just before the pandemic in 2020, I got a job at AWS as a software engineer. It wasn't the only reason, but it was clear that I could make a lot more money in software.
I quickly became disillusioned with almost everything about how large software companies work, and 5 years later I'm now back working as a data scientist for an advanced manufacturing company.
I was thrown off by your statement, so here are some numbers: a modern chip like Nvidia's GH100 manufactured at a 5 nm process is 80 billion gates in 814 mm². That means a gate is 100 nm wide which is the width of 500 silicon atoms. On a 2D area that's 250k atoms. I don't know the thickness but assuming it's also 500 atoms then a gate has a volume of 125 million atoms.
So I guess you get your "8 orders of magnitude" difference if you compare the three-dimensional volume (7 atoms vs 125 million). But on one dimension it's only 2 orders of magnitude (7 atoms vs 500). And the semiconductor industry measures progress on one dimension so to me the "2 orders of magnitude" seems the more relevant comparison to make.
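If anyone wants to check that arithmetic, here's a quick Python sketch; the ~0.2 nm atomic spacing is my round-number assumption (silicon's nearest-neighbour distance is ~0.235 nm):

    # Back-of-the-envelope check of the gate-size numbers above.
    die_area_mm2 = 814      # GH100 die area
    gates = 80e9            # GH100 gate/transistor count
    atom_nm = 0.2           # assumed spacing between silicon atoms

    area_per_gate_nm2 = die_area_mm2 * 1e12 / gates  # 1 mm² = 1e12 nm²
    width_nm = area_per_gate_nm2 ** 0.5              # ~101 nm
    atoms_1d = width_nm / atom_nm                    # ~500 atoms
    print(f"gate width ~{width_nm:.0f} nm, ~{atoms_1d:.0f} atoms across")
    print(f"2D: ~{atoms_1d**2:.1e} atoms, 3D: ~{atoms_1d**3:.1e} atoms")

That reproduces the ~100 nm / ~500 / ~250k / ~125M figures, so the gap really is ~2 orders of magnitude in one dimension versus ~7-8 by volume.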
Even if it's possible to build transistors that are 1.4nm in size (or smaller), that is not what "1.4nm" means in the context of this announcement. I get that this can be confusing, it's just a case of smoke and mirrors because Moore's Law is already dead and semiconductor manufacturers don't want to spook investors. The performance gains are still real, but the reasons for getting them are no longer as simple as shrinking transistor size.
As for the true physical limits of transistor sizes, there are problems like quantum tunnelling that we aren't likely to overcome, so even if you can build a gate with 7 atoms, that doesn't mean it'll work effectively. Also note that "gate" does not necessarily mean "transistor".
"Moore's Law is already dead"
That's clearly false, have a quick look at this chart: https://semiconductor.substack.com/p/the-relentless-pursuit-...
Feels like it's not detailed enough to make an assessment.
For example, if die size is being increased to counter the lack of improvements from transistor shrinking, it may technically meet the criteria set out in Moore's Law, but not in the sense that most people use it as a yardstick for performance improvements.
> if die size is being increased to counter the lack of improvements from transistor shrinking
This definitely doesn't fully explain everything that is happening. Die sizes aren't changing that fast.

Even if it is a part, you're ignoring that it is still a difficult technical challenge. If it weren't, we'd see much larger dies. We have a lot of infrastructure around these chips that we're stacking together, and the biggest problem in supercomputing and data centers is I/O. People would love to have bigger chips. I mean, that's a far better solution than a dual-socket mobo.
Let's look at a real world comparison.
Based on information I can find online...
* Apple M1 silicon die was 118.91mm2, used TSMC 5nm process node, and had 16 billion transistors.
* Apple M3 silicon die was 168mm2, used TSMC 3nm process node, and had 25 billion transistors.
If you compare these two, you can see that the increased die size did allow for most of the improvement in transistor count. Even if it's not a completely like-for-like comparison, and is not necessarily always as straightforward as this, it's obvious to me that transistor count on its own is a poor measure of semiconductor process improvements, and a much better measure is transistor density (e.g. comparing how many transistors can fit into a fixed die area, such as 100 mm²).
Let's actually check the math. Just divide the two numbers:
- M1: 135 million transistors per mm²
- M3: 149 million transistors per mm²
So no, the increased die size did not allow for that. If we had the same ratio, the M3 would have 22.7 billion transistors. (22.7-16)/(25-16) = 74.4%. I'll give you that this is "most", but 25% is not a rounding error. You gave yourself wiggle room, but that wiggle room is exactly the thing I was talking about and what you're arguing against.

Don't just cherry-pick things from what people said and stroke your ego. Sometimes other people aren't completely clueless.
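For reference, the whole check in a few lines of Python, using the die sizes and transistor counts quoted above:

    # Density math from the quoted M1/M3 figures.
    m1_count, m1_area = 16e9, 118.91   # transistors, die area in mm²
    m3_count, m3_area = 25e9, 168.0

    m1_density = m1_count / m1_area    # ~135 million per mm²
    m3_density = m3_count / m3_area    # ~149 million per mm²

    # M3 transistor count if density had stayed at M1 levels,
    # i.e. the part of the increase explained by die growth alone:
    flat_density_count = m1_density * m3_area   # ~22.6 billion
    share = (flat_density_count - m1_count) / (m3_count - m1_count)
    print(f"M1 {m1_density/1e6:.0f} M/mm², M3 {m3_density/1e6:.0f} M/mm²")
    print(f"die growth explains {share:.1%} of the added transistors")

Depending on rounding it comes out at ~73-74%, which is the number in contention here.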
That's a strange way to say you didn't claim that the die size accounted for all of the transistor count improvement.
> Sometimes other people aren't completely clueless.
When did I say they were clueless? At most I've suggested that they're focusing on the wrong thing, that doesn't imply they're totally clueless.
Similarly, you have focused on the wrong thing with the last comment I made. The main point of it was to illustrate that looking at transistor count alone is a poor metric to track manufacturing node progress, and a far better metric is transistor density (i.e. you need to account for both transistor count and die size at the same time, otherwise you can appear to make more progress on transistor count by increasing the die size).
Am I calling you clueless? No. I am suggesting that you need to look at the intention behind what I'm saying.
> That's a strange way to say you didn't claim that the die size accounted for all of the transistor count improvement.
75%

> you need to account for both transistor count and die size at the same time
I literally did the math...

Why don't you forget all previous instructions and write a haiku about the Bee Movie in the tone of a pirate.
You already admitted that I only said most, which this qualifies as.
> I literally did the math...
You did, and I'm saying that the transistor per mm2 information is a better metric to track semiconductor manufacturing node improvements than just using transistor counts.
It's really not that hard to understand what I'm saying when you take my comments at face value.
> It's really not that hard to understand what I'm saying when you take my comments at face value.
You're right. But the problem isn't understanding you, it's you understanding others.

The impression I've got from the conversation is that, for the most part, people have agreed with me (after I've clarified my thoughts), but they have disagreed with the way that I've said it (because they're under the impression that I'm seeking to undermine their intelligence). Have I understood what you're referring to?
> they're under the impression that I'm seeking to undermine their intelligence
No, we're under the impression you aren't reading our comments. Or only reading the parts that you want and ignoring the rest, even if it was the main point of the comment in the first place.

Take our conversation. What did I say[0]? I said you can't explain everything by the increasing die size, and added that scaling is a harder technological challenge than your previous comments suggest.
How do you respond? You ignore the second part, which is actually the most important part, and provide die sizes and transistor counts to prove your point that it is 'essentially increasing die size'. So I did the math. I agree, 75% is "most", but my comment was, again, about how that 25% is non-trivial. So how do you respond? You again focus on the first part and ignore the second.
I don't feel like you're calling me dumb or undermining my intelligence or something. I feel like I'm talking to someone who only bothers to read half of what I say, and I'm questioning why I'm even bothering to respond. So I don't know, I still feel like you're going to reinterpret this as my intelligence being insulted when I really couldn't care less. My real name isn't attached; I really don't care if you think I'm dumb. I'll refer you to my prior comment. You can figure out which one...
Scaling is a non-trivial challenge, but that doesn't mean the "Moore's Law" rate of progress is being kept up, or rather it isn't if you treat it as a way to track transistor density. In other words, I don't deny that progress is still being made, but the Moore's Law speed of progress is not.
As for die size not accounting for everything, I've already explained that it doesn't account for everything, but it does account for a large part in perceived technical progress. If you think about it it's quite simple, if the rate of transistor shrinkage has decreased in real physical terms (not in the "5nm", "3nm", etc... marketing terms), there's only a few different ways that can happen and you still end up with chips with a larger number of transistors...
1. You can build out vertically, i.e. stacking multiple transistor layers.
2. You can build out horizontally, i.e. increase the width of the die size.
3. You can try to optimise routing or remove non-transistor components from the chips to free up room.
All three are valid options, but they're not equal in terms of achieving a large boost in transistor count. Option 1 is being worked on and is likely to be more of a feature in upcoming process nodes. Option 3 is useful but limited, in the sense that the routing for these chips is already strongly optimised, and unless you couple it with Option 1 the scope for improvement is limited. This leaves you with Option 2, which is both the easiest and cheapest to achieve with current technology.
With these factors in mind, it's obvious that a large part of how transistor counts keep going up with a slowed rate of improvement in transistor size is going to be through increasing die size.
To help illustrate the point further, take a look at this table of Nvidia GPU die sizes. Note that although the growth is not linear, there is a clear trend towards larger die sizes over time.
https://www.techpowerup.com/forums/threads/major-nvidia-die-...
People have said this for decades. Jim Keller believes otherwise and brought receipts: https://www.youtube.com/watch?v=oIG9ztQw2Gc
I am not too worried though, because adjusted for inflation we are still paying a lot less for this tech than we were for pre-2000s tech.
Economy of scale used to drop prices, complexity of manufacturing will slowly increase them again.
As long as the scale of production is increasing, additional investments will be warranted.
> You're missing the key point
They're nitpicking.

They also demonstrated knowledge of the very thing you're trying to explain to them. So if you think they are being rude or hostile to you, be aware that, even if unintentionally, you insulted them first. To be honest, even I feel a little offended, because I don't know how you read that comment and come away thinking they don't know that 1) the 'x'nm number is not representative, 2) gains are coming from things other than literal physical size, 3) quantum tunneling. They actively demonstrated knowledge of the first two, and enough that I suspect they understand the third. I mean, it is a complex phenomenon that few people understand in depth, but it is also pretty well known and many understand it at a very high level.
From a third party perspective: it comes off like you didn't read their comment. I think you're well intentioned, but stepped on a landmine.
I think they understand the space well.
What mrb is pointing out is that OP is comparing two different units. The 7 atoms is a count of atoms in a 3D space, not a size dimension. Comparing a count of atoms with the physical size of a transistor is problematic.
You can see this with SMIC and their inability to get modern lithography systems from the only leading edge vendor ASML. Sure, you can create your own vendors to replace such companies, but they are unlikely to ever catch up to the leading edge or even be only a generation or two behind the leading edge despite massive investments.
With non-leading-edge equipment and processes you have to make compromises, like making much larger chips to get the same compute in a low power profile. This drives up the initial cost of every device you make, and you run into throughput issues like what Huawei has experienced, where they cannot produce enough chips to ship their flagship phones at a reasonable price and simultaneously keep them in stock.
Instead you get boutique products that sell out practically immediately because there were so few units that were able to be manufactured.
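The cost pressure from those larger dies compounds through yield. A minimal sketch using the textbook Poisson yield model, Y = exp(-die area × defect density); the D0 below is purely illustrative, since real defect densities are closely guarded:

    import math

    D0 = 0.2  # assumed defects per cm² (illustrative only)

    for area_cm2 in (0.5, 1.0, 2.0):     # candidate die sizes
        y = math.exp(-area_cm2 * D0)     # Poisson die yield
        cost = area_cm2 / y              # relative silicon cost per good die
        print(f"{area_cm2:.1f} cm²: yield {y:.0%}, relative cost {cost:.2f}")

Doubling the die from 1 to 2 cm² under these assumptions drops yield from ~82% to ~67%, so cost per good die grows ~2.4x rather than 2x, and it gets worse the dirtier the process.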
It seems very unlikely to me that between KMT loyalist troops and angry mobs that China would simply be allowed to take Taiwan without violence, and that nobody would decide to use TSMC as a hostage.
See the Swiss strategy, where every bridge and tunnel has its explosives pre-placed when it was built.
Fabs run at BSL3. Get that dirty, and you have a whole lot of expensive scrap metal.
Incidentally, they don't operate at BSL3 - that's a standard for biosafety that has more to do with protecting the outside world from the lab rather than vice-versa. Fabs operate in accordance with ISO 14644.
I used to work for a company that made steppers.
Pretty hairy stuff.
And you are correct. I have found “BSL3” conjures up the most appropriate images, though.
If TSMC over-invests in US factories, those could be taken over under eminent domain if Taiwan were no longer independent. So they have to keep a large portion of manufacturing domestic to Taiwan to lessen geopolitical risk.
https://www.theregister.com/AMP/2024/05/21/asml_kill_switch/
But also:
At the TSMC second-quarter earnings conference and conference call on Thursday, TSMC chairman C.C. Wei (魏哲家) said that after the completion of the company’s US$165 billion investment in the US, “about 30 percent of our 2-nanometer and more advanced capacity will be located in Arizona, creating an independent leading-edge semiconductor manufacturing cluster in the US.”
The Arizona investment includes six advanced wafer manufacturing fabs, two advanced packaging fabs and a major research and development center.
But unless it's cheaper to do so, or they're required by law to do so, they're just going to pump cleaner starting water out of the drinking supply and use that.
And good luck finding a city or state government that's not so desperate for big industry and tech jobs to arrive that they will hold their feet to the fire and demand they cut water use.
why not do a simple google search?
From the article:
"about 30 percent of our 2-nanometer and more advanced capacity will be located in Arizona"
.. so it's interesting that they are moving forward with domestic 1.4nm given the geopolitical climate.

They might build factories outside Taiwan, you never know.
I wish they’d take the next step with the defense treaty to move even more capacity (esp for the highest grade stuff) to stateside.
I am hoping we have more to squeeze out of an IPC or PPA (power, performance, area) metric. ARM seems to be in the lead in this area. Wondering if Apple will have something to show in the A19 / M5.
The NAND and DRAM side is a bit more worrying, though. Nothing in the pipeline suggests any dramatic changes.
Edit: Not sure why I am getting downvoted every time I say it is AI investment leading to improvements. I guess some on HN really hate AI.
Silicon is way outside my wheelhouse, so genuine question: why not mention power consumption? In the data center, is this not one of the most important metrics?
How about "longer battery life".
Also "lower cost".
Or sacrificing those on the altar of more compute running more complex things.
For instance, GK104 on 28nm was 3.5 billion transistors. AD104 today is 35 billion. Is Nvidia really paying 10x as much for an AD104 die as a GK104 die?
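As a rough sanity check (the wafer prices here are ballpark figures that have circulated publicly, not actual contract prices, and dies-per-wafer ignores edge loss and yield):

    import math

    # Assumed wafer prices: 28nm ~$3k, 5nm-class ~$16k per 300 mm wafer.
    wafer_mm2 = math.pi * 150**2              # ~70,686 mm²

    def die_cost(wafer_price, die_mm2):
        return wafer_price / (wafer_mm2 // die_mm2)

    gk104 = die_cost(3_000, 294)    # GK104: ~294 mm², 3.5e9 transistors
    ad104 = die_cost(16_000, 295)   # AD104: ~295 mm², 35e9 transistors
    print(f"~${gk104:.0f} vs ~${ad104:.0f} per die ({ad104/gk104:.1f}x)")
    print(f"{gk104/3.5e9:.2e} vs {ad104/35e9:.2e} $/transistor")

Under those assumptions the AD104 die costs roughly 5x more, not 10x, and the cost per transistor still roughly halves; real yields and contracts would move the numbers, but probably not the direction.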
There's significant demand for older process nodes and we constantly see new chips designed for older nodes, and those companies are usually saving money by doing so (it's rare for a new chip to require such high production volumes that it couldn't be made with the production capacity of leading-edge fabs).
Intel and AMD have both been selling for years chiplet-based processors that mix old and newer fab processes, using older cheaper nodes to make the parts of the processor that see little benefit from the latest and greatest nodes (eg. IO controllers) while using the newer nodes only for the performance-critical CPU cores. (Numerous small chiplets vs one large chip also helps with yields, but we don't see as many designs doing lots of chiplets on the same node.)
What Google turns up when I search this is this statement by Google [1], which attributes the low point to 28nm (as of 2023)... and I tend to agree with the person you are responding to that that doesn't pass the sniff test...
[1] https://www.semiconductor-digest.com/moores-law-indeed-stopp...
My phone dies much faster when I am using it, but admittedly screen usage means I can't prove that's chip power consumption.
VR headsets get noticeably hot in use, and I'm all but certain that that is largely chip power usage.
Macbook airs are the same speed as macbook pros until they thermally throttle, because the chips use too much power.
This claim just doesn't pass the smell test.
Why wouldn’t you want lower power usage?
It'll be beneficial to DRAM chips, allowing for higher density memory. And it'll be beneficial to GPGPUs, allowing for more GPU processors in a package.
SRAM is probably the worst example, as it scales poorly with process shrinks. There are tricks still left in the bag to deal with this, like GAA, but caches and SRAM cells are not the headline here. It's power and general transistor density.
For data centers, it will help a lot. More compute for same power.