Intel either needs to focus or they need to be bold (and I’d actually prefer they be bold - they’ve started down some cool paths over time), but what they really need is to make up their goddamn minds and stop panicking every other quarter that their “ten-year bets” from last quarter haven’t paid off yet.
Intel really is good at certain kinds of software, like compilers or MKL, but my belief is that organizations like that have a belief in their "number oneness" that gets in the way of doing anything that is outside what they're good at. Maybe it is the people, processes, organization, values, etc. that get in the way. Or maybe it's not having the flexibility to recognize that what is good at task A is not good at task B.
I had a different non-Intel WiFi card before where the driver literally permanently fried all occupied PCIe slots -- they never worked again, and the problem happened right after installing the driver. I don't know how a driver can cause that, but it looks like this one did.
However, they somehow managed to bork the e1000e driver in a way where certain older cards sometimes fail to initialize and require a reboot. I have been bitten by that bug, and the problem was later fixed by reverting the problematic patch in Debian.
I don't know the current state of the driver since I passed the system on. Besides a couple of bad patches in their VGA drivers, their cards are reliable and work well.
From my experience, their open source driver quality does not depend on the process, but on specific people and their knowledge and love for what they do.
I don't like the aggressive Intel which undercuts everyone with shady tactics, but I don't want them to wither and die, either. It seems like their process, frequency, and performance "tricks" are biting them now.
I have found bluez by far the hardest stack to use for Bluetooth Low Energy Peripherals. I have used iOS’s stack, suffered the evolution of the Android stack, used the ACI (ST’s layer), and finally done just straight python to the HCI on pi. Bluez is hands down my least favorite.
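For anyone curious, "straight python to the HCI" on a Pi looks roughly like this. A minimal sketch, assuming a Linux host, root (or CAP_NET_RAW), and a Python built with Bluetooth socket support; bluetoothd may fight you for the adapter, and the Reset command here is just a placeholder for real peripheral setup:

    import socket
    import struct

    dev_id = 0  # hci0
    sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)
    sock.bind((dev_id,))

    # Open up the kernel's HCI filter so events (e.g. Command Complete) reach us.
    sock.setsockopt(socket.SOL_HCI, socket.HCI_FILTER,
                    struct.pack("<IIIH", 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0))

    # HCI command packet: indicator 0x01, opcode (OGF 0x03, OCF 0x0003 = Reset), 0 params.
    opcode = (0x03 << 10) | 0x0003
    sock.send(struct.pack("<BHB", 0x01, opcode, 0x00))

    print(sock.recv(260).hex())  # expect an event packet (first byte 0x04)

Everything past this point (advertising, GATT) is a lot more byte-packing, which is exactly why people reach for bluez in the first place.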
so the driver has little to screw up. but they still manage to! for example, the pci cards are all broken, when it's literally the same hardware as the USB ones.
Frequent releases, GitHub repo with good enough user interaction, examples, bug fixing and feedback.
> such as a graph processing toolkit
This is oddly specific. Can you share the exact Intel software toolkit?
> "number oneness"
Why does this not affect NVidia, Amazon, Apple, or TSMC? It could possibly come to haunt NVidia or TSMC in decades to come.
Also, their latest consumer card launches are less than stellar, and the tricks they use to pump up performance numbers are borderline fraud.
As Gamers Nexus puts it "Fake prices for fake frames".
To me, the situation is similar with monitors. After we got the pixel density of 4K at 27 inches with a 60Hz refresh rate (enough pixels, enough inches, enough refresh rate), how can it get any better for normies? Ok, maybe we can add HDR, but monitors are mostly finished, similar to mobile phones. Ah, one last one: I guess we can upgrade to OLED when the prices are not so scandalous. Still, the corporate normies, who account for the lion's share of people sitting in front of 1990s-style desktop PCs with a monitor, are fine with 4K at 27 inches at 60Hz forever.
However, I can talk about monitors. Yes, a 27" 4K@60 monitor is really, really good, but panel quality (lighting, uniformity, and color correctness) goes a long way. After using Dell and HP "business" monitors for so long, most "normal monitors for normies" look bad to me: uncomfortable, with harsh light and bad uniformity.
So, monitor quality is not "finished" yet. I don't like OLEDs on big screens, because I tend to use what I buy for a very long time, and I don't want my screen to age non-uniformly, especially when I'm looking at it every day and for long periods of time.
I'm old, i.e. "never buy ATI" is something that I've stuck to since the very early Nvidia days. I.e. I switched from Matrox and Voodoo to Nvidia while commiserating with friends and colleagues and witnessing their ATI woes for years.
The high-end gaming days are long gone; there was even a stretch of laptops where 3D graphics was of no concern whatsoever. I happened to have Intel chips and integrated graphics. I could even start up some gaming I had missed out on over the years, or replay old favourites just fine, as even a business laptop's Intel integrated graphics chip was fine for it.
And then I bought an AMD-based laptop with integrated Radeon graphics, because of all the negative stuff you hear about Intel, and AMD itself is fine, sometimes even better, so I thought it was fair to give it a try.
Oh my was that a mistake. AMD Radeon graphics is still the old ATI in full blown problem glory. I guess it's going to be another 25 years until I might make that mistake again.
What kind of problems did you see on your laptop?
On the other hand, I still think of the Intel integrated GPU as "that thing that screws up your web browser's chrome if you have a laptop with dedicated graphics".
AMD basically stopped supporting (including updating drivers for) GPUs from before RDNA (in particular GCN), while such GPUs were still part of AMD's Zen 3 APU offerings.
I did think that given ATI was bought out by AMD and AMD itself is fine it should be OK. AMD always was. I've had systems with AMD CPUs and Nvidia GPUs back when it was an actual desktop tower gaming system I was building/upgrading myself. Heck my basement server is still an AMD CPU system with zero issues whatsoever. Of course it's got zero graphics duties.
On the laptop side, for a time I'd buy something with discrete Nvidia cards when I was still gaming more actively. But then life happened, so graphics was no longer important and I do keep my systems for a long time / buy non-latest gen. So by chance I've been with Intel for a long time and gaming came up again, casually. The Intel HD graphics were of course totally inadequate for any "real" current gaming. But I found that replaying some old favs and even "newer" games I had missed out on (new as in, playing a 2013 game for the very first time in 2023 type thing) was totally fine on an Intel iGPU.
So when I was getting to newer titles and the Intel HD graphics no longer cut it (though I'm still not a "gamer" again), I looked at a more recent system and thought I'd be totally fine trying an AMD system. Exactly like another poster said, "post 2015 should be fine, right?! And then there's all this recent bad news about Intel, this is the time to switch!".
Still iGPU. I'm not going to shell out thousands of dollars here.
And then I get the system and I get into Windows and ... everything just looks way too bright, washed out, hard to look at. I doctored around, installed the latest AMD Adrenalin driver, played around with brightness, contrast, HDR, color balance, tried to disable the Vari-Brightness I read was supposed to be the culprit, etc. It gets worse once you get into a game. Like, you're in Windows and it's bearable. Then you start a game and you might Alt-Tab back to do something, and everything is just awfully, weirdly bright, and it doesn't go away when you shut down the game either.
I stuck with it and kept doctoring around for over 6 months.
I've had enough. I bought a new laptop, two generations behind, with an Intel Iris Xe, for the same amount of money as the ATI system. I open Windows and ... everything is entirely, totally, 150% fine, no need to adjust anything. It's comfortable, colors are fine, brightness and contrast are fine. And the performance is entirely adequate, about the same as the AMD system. Again, still an iGPU, and that's fine and expected. It's the quality I'm concerned with, not the performance I'm paying for. I expect to be able to get proper quality software and hardware even if I pay for less performance than gamer-kid me back then was willing to.
I've seen OEMs do that to an Intel+NVIDIA laptop, too. Whatever you imagine AMD's software incompetence to be, PC OEMs are worse.
Everything just reports it as "with Radeon graphics", including benchmarking software, so it's almost impossible to find anything about it online.
The only thing I found helped was GPU-Z. Maybe it's just one of the known bad ones and everything else is fine and "I bought the one lemon from a prime steak company" but that doesn't change that my first experience with the lemon company turned prime steak company is ... another lemon ;)
It's a Lucienne C2 apparently. And again, performance wise, absolute exactly as I expected. Graphics quality and AMD software? Unfortunately exactly what I expected from ATI :(
And I'm not alone; what I find online is not just all Lenovo either, so I do doubt it's that. All, and I mean all, of the laptops I'm talking about here were Lenovos. Including when they were called IBM ThinkPads and just built by Lenovo ;)
I think this is a consequence of the laptop having HDR colour, and the vendor wanting to make it obvious. It's the blinding blue LED of the current day.
What I settled on for quite some time was manually adjusted color balance and contrast and turning the brightness down. That made it bearable but especially right next to another system, it's just "off" and still washed out.
If this was HDR and one can't get rid of it, then yeah agreed, it's just bad. I'm actually surprised you'd turn the brightness up. That was one of the worst things to do, to have the brightness too high. Felt like it was burning my eyes.
You don't like current AMD systems because one of them had an HDR screen? Nothing to do with CPU/GPU/APU?
My work Macbook on the other hand has zero issues with HDR and its display.
To be fair, you can still blame the OEM of course but as a user I have no way to distinguish that, especially in my specific situation.
Before I used that tool, I tried a few of the built-in colour profiles under the display settings, and that didn't help.
I had to turn the brightness up because when the display is in sRGB it gets dimmer. Everything is much more dim and muted, like a conventional laptop screen. But if I change it back to say, one of the DICOM profiles, then yeah, torch mode. (And if I turn the brightness down in that mode, bright colours are fine but dim colours are too dim and everything is still too saturated).
Also, CUDA doesn't matter that much; Nvidia was powered by intense AGI FOMO, but I think that frenzy is more or less done.
Nvidia is valuable precisely because of the software, which is also why AMD is not as valuable. CUDA matters a lot (though that might become less true soon). And Nvidia's CUDA/software forward thinking most certainly predated AGI FOMO, and it is the CAUSE of them doing so well in this "AI boom".
It's also not wildly overvalued, purely on a forward PE basis.*
I do wonder about the LLM focus, specifically whether we're designing hardware too much for LLM at the cost of other ML/scientific computing workflows, especially the focus on low precision ops.
But.. 1) I don't know how a company like Nvidia could feasibly not focus on designing for LLM in the midst of this craziness and not be sued by shareholders for negligence or something 2) they're able to roll out new architectures with great improvements, especially in memory, on a 2 year cycle! I obviously don't know the counterfactual, but I think without the LLM craze, the hypothetical generation of GPU/compute chips would be behind where they are now.
I think it's possible AMD is undervalued. I've been hoping forever that they'd somehow catch up on software. They do very well in the server business, and if Intel continues fucking up as much as they have been, AMD will own CPUs/servers. I also think what DeepSeek has done may convince people it's worth programming closer to the hardware, somewhat weakening Nvidia's software moat.
*Of course, it's possible I'm not discounting enough for the geopolitical risk.
Once you start approaching a critical mass of sales, it's very difficult to keep growing it. Nvidia is being valued as though they'll reach a trillion dollars worth of sales per year. So nearly 10x growth.
You need to make a lot of assumptions to explain how they'll reach that, versus a ton of risk.
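To put rough numbers on that (every figure below is an assumed placeholder, not an actual number; the point is the shape of the arithmetic):

    # Hypothetical back-of-the-envelope: what annual revenue would justify the
    # market cap once the growth premium is gone? All inputs are assumptions.
    market_cap    = 3.0e12   # assumed market cap, USD
    mature_pe     = 15       # assumed P/E for a mature, slower-growth chip company
    mature_margin = 0.20     # assumed net margin once competition compresses pricing

    required_net_income = market_cap / mature_pe              # ~$200B/yr
    required_revenue    = required_net_income / mature_margin # ~$1T/yr
    print(f"~${required_revenue/1e12:.1f}T in annual sales needed")

Change the assumed margin or P/E and the number moves a lot, which is exactly the "lot of assumptions" problem.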
Risk #1: the arbitrage principle, aka wherever there's profit to be made, other players will move in. AMD has AI chips that are doing quite well, Amazon and Google both have their own AI chips, Apple has their own AI chips... IMO it's far more likely that we'll see commodification of AI chips than that the whole industry will do nothing and pay Nvidia's markup. Especially since TSMC is the one making the chips, not Nvidia.
Risk #2: AI is hitting a wall. VCs claim it isn't so, but it's pretty obvious that it is. We went from "AGI in 2025" to AI companies essentially adding traditional AI elements to LLMs to make them useful. LLMs will never reach AGI; we need another technological breakthrough. Companies won't be willing to keep buying every generation of Nvidia chips forever for ever-diminishing returns.
Risk #3: Geopolitical, as you mentioned. Tariffs, China, etc...
Risk #4: CUDA isn't a moat. It was when no one else had the incentive to create an alternative, and it gave everyone on Nvidia a head start. But everything runs on AMD now too. Google and Amazon have obviously figured out something for their own accelerators.
The only way Nvidia reaches enough revenue to justify their market cap is if Jensen Huang's wild futuristic predictions become reality AND the Googles, Amazons, Apples, AMDs, Qualcomms, Mediateks and every other chip company all fail to catch up.
What I see right now is AI hitting a wall and the commodification of chip production.
I've used Linux exclusively for 15 years so probably why my experience is so positive. Both Intel and AMD are pretty much flawless on Linux, drivers for both are in the kernel nowadays, AMD just wins slightly with their iGPUs.
I had a Ryzen 2700u that was fully supported, latest OpenGL and Vulkan from day 1, hardware decoding, etc... but on Linux.
Back in the day, w/ AMD CPU and Nvidia GPU, I was gaming on Linux a lot. ATI was basically unusable on Linux while Nvidia (not with the nouveau driver of course), if you looked past the whole kernel driver controversy with GPL hardliners, was excellent quality and performance. It just worked and it performed.
I was playing World of Warcraft back in the mid 2000s via Wine on Linux and the experience was actually better than in Windows. And other titles like say Counter Strike 1.5, 1.6 and Q3 of course.
I have not tried that in a long time. I did hear exactly what you're saying here. Then again I heard the same about AMD buying ATI and things being OK now. My other reply(ies) elaborate on what exactly the experience has been if you're interested.
And I'm not even talking about the hassle of the nvidia drivers on Linux (which admittedly has become quite a bit better).
All that just for some negligible graphics power that I'm never using on the laptop.
For example, Uber hired a VP from Amazon. And the first thing he did was to hire most of his immediate reports at Amazon to Director/Senior Director positions at Uber.
At that level of management work gets done mostly through connections, favors and networking.
Of course they are obsessed with shrinking labor costs and resisting all downsizing until it reaches comical levels.
Take an industry like health insurance, where a company can't show a large dividend because it would be a public relations disaster. Filled to the gills with vice presidents to suck up extra earnings. Or medical devices.
Software is also very difficult for these hierarchies of overpaid management, because you need to pay labor well to get good software, and the only raison d'etre of these guys is wage suppression.
Leadership is hard for these managers because the primary thing rewarded is middle management machiavellianism, turf wars, and domain building, and any visionary leadership or inspiration is quashed.
It almost fascinates me that large company organizations are basically like Soviet-style communism, even though there are opportunities for internal competition. Like data centers, hosting, and IT groups: they always need to be centralized for "efficiency".
Meanwhile, there are like 20 data centers, and if you had each of them compete for the company's internal business, they'd all run more efficiently.
> It almost fascinates me that large company organizations are basically like Soviet-style communism, even though there are opportunities for internal competition.
Probably because continuous competition is inefficient within an organization and can cause division/animosity between teams?
Are you aware of what goes on in middle management? This is the normal state of affairs between managers.
If what you are saying is true, then .......
Why is there competition in the open marketplace? You have just validated my suggestion that internally companies operate like communists.
> Why is there competition in the open marketplace? You have just validated my suggestion that internally companies operate like communists.
i am not an expert, but i think the theory of competition leading to better outcomes in a marketplace is the availability of alternatives if one company went bad (in addition to price competition etc). inside a company you are working for the same goal "against" the outside, so it's probably more an artifact of how our economy is oriented
i'd guess if our economy was oriented around cooperation instead of "competition" (while keeping alternatives around) that dichotomy might go away...
just some random thoughts from an internet person
You got there in the end. You get the same outcome with the same corporate incentive.
Both Intel and Google prioritize {starting something new} over {growing an existing thing}, in terms of corporate promotions and rewards, and therefore employees and leaders self-optimize to produce the repeated behavior you see.
The way to fix this would be to decrease the rewards for starting a new thing and increase the rewards for evolving and growing an existing line of business.
> Both Intel and Google prioritize {starting something new} over {growing an existing thing}, in terms of corporate promotions and rewards, and therefore employees and leaders self-optimize to produce the repeated behavior you see.
I cannot speak for Intel, but Google has done very well by "growing an existing thing" in AdWords and YouTube. Both account for the lion's share of profits. They are absolute revenue giants. Many have tried, and failed, to chip away at that lead, but Google has managed to adapt over and over again.
It's really hard to fuck these things up. Which they have been trying hard to do, given the state of YouTube and the search engine.
I can see why you have to be "special" to work at these places.
New features attract new users and allow for fancy press releases. Nobody cares about press releases about an existing product getting a bug fix or becoming more stable.
Our society is nothing but "ooh look, shiny!" type of short attention span
If they could not figure out how to make it profitable, maybe somebody else should try. (Of course I don't think that the PE company is going to do just that.)
I wouldn't call that a roaring success. Funnily enough, Intel played a major role in running McAfee into the ground.
With proper leadership, McAfee could've ended up in the position CrowdStrike is now.
Trying not to piss off the Chinese government, and in particular its intelligence services (in order to sell chips) is unfortunately not a good model for an antimalware business.
Bonuses by juicing revenue numbers
A bigger next job by doing M&A and having a really good-looking resume and interview story.
https://corporatefinanceinstitute.com/resources/valuation/mo...
Yeah, that's where my mind went. Executive and upper management salaries seem to be a function of revenue, not profit.
It is, however, a conflict of interest for you to be involved in company B's acquisition of company A (e.g. influencing company B to buy company A), and might even rise to the level of a breach of your fiduciary duty to company B.
I am beginning to think M&A are just some sort of ego thing for bored megacorp execs, rather than serious attempts to add efficiency and value to the marketplace. (Prove me wrong, bored megacorp execs. I'll wait.)
It was just senseless. Intel doesn't have real or imagined competition from a drone company; it wasn't even close to being in the same market. They just believed the hype about drones being the next big thing, and when they found out they were too early, they decided they didn't have the patience to wait for drones to become a thing and killed it. There was no long-term vision behind it.
I don't know about "real estate inspection", but another use case was for them to be used in oil rigs in the North Sea to inspect the structure of the rig itself. They had to be self-stabilizing under high winds and adverse weather conditions, and they had to carry a good enough camera to take detailed photos.
Unfortunately, while the technology was there, the market wasn't. Not many customers wanted to buy a $35K drone, which is what it would have taken to sustain this business.
In real estate inspection, we had the same sort of concerns: we couldn't fly too close to the object for safety reasons, and we needed high-resolution photos to determine the quality of the masonry, paintwork, roofing, etc.
EDIT I just noticed the “inspection” part. Maybe they wanted good zoom to spy on the tenants? (Or maybe that’s a really uncharitable take).
That's the face of it. Labor is a market as well. The impact of these arrangements on our labor pool is extraordinary. It's a massive displaced cost of allowing these types of mergers to occur, borne by the people who stand to gain the least from the merging of business assets.
It seems like a low risk effort to put a promising inexperienced exec in charge of a recent acquisition.
If they're a screw up and run it into the ground, imagine how much damage they could have done in a megacorp position of power.
Megacorp saved (at the cost of a smaller company)
Stock down again? Sell the company you bought 2 years ago.
From the top to the bottom the problem with late stage capitalism is misaligned incentives.
Edit: I wrote "the problem" and I should have written "among the many, many problems"
tick, tock
I don't think PE is responsible for that one.
https://datasheet.octopart.com/CUBIC-CYCLONIUM-Altera-datash...
I'm concerned about the future of FPGAs and wonder who will lead the way to fix these abhorrent toolchains these FPGA companies force upon developers.
Some FPGA vendors are contributing to and relying, partially or completely, on the open source stack (mainly yosys+nextpnr).
It is still perceived as not being "as good" as the universally hated proprietary tools, but it's getting there.
0. https://ir.quicklogic.com/press-releases/detail/657/quicklog...
1. https://www.designnews.com/semiconductors-chips/is-platypus-...
I think Xilinx did a fine job with their AI Engines, and AMD decided to integrate a machine-learning-focused variant into their laptops as a result. The design of the Intel NPU is nowhere near as good as AMD's. I have to say that AMD is not a software company, though, and while the hardware is interesting, their software support is nonexistent.
Also, if you're worried about FPGAs, that doesn't really make much sense, since Efinix is killing it.
> FPGA is not a growing segment
What is replacing it? Single board computers? Or are APUs from ARM "good enough" and "cheap enough" now to replace FPGAs?
FPGAs are getting cheaper with each gen, expanding into low-cost, high-volume markets that were unthinkable for an FPGA 10 years ago. Lattice has an FPGA family specifically targeted at smartphones, and I've been consulting for a high-end audio company that wanted to do some DSP, and a cheap FPGA was the best option on the market for the particular implementation they wanted to do.
It's not sexy growth, but it's growth. Otherwise, we wouldn't have had the explosion of low-end FPGA companies in recent years.
Look up SigmaStudio DSPs; DSP is insanely cheap to do and there is absolutely no need for an FPGA. What that guy was doing was either nonsense or it was in 1995, which are both irrelevant points. Or rather, you provided examples that show FPGAs are an irrelevant, no-growth market.
(How many audio devices were using TMS320 DSPs even before and after the iPod was a thing...)
If FPGAs are not a growing market, how come we have gone from 2 companies (we'll ignore niche space stuff) to ~10 in the last 20 years? Not many IC fields where there is a growth in manufacturers instead of consolidation...
https://www.achronix.com/blog/accelerating-llm-inferencing-f...
A Versal AI Edge FPGA has a theoretical performance of 0.7TFLOPs just from the DSPs alone, while consuming less power than a Raspberry Pi 5 and this is ignoring the AI Engines, which are exactly the ASICs that you are talking about. They are more power efficient than GPUs, because they don't need to pretend to run multiple threads each with their own register files or hide memory latency by swapping warps. Their 2D NOC plus cascaded connections allow them to have a really high internal memory bandwidth in-between the tiles at low power.
What they are missing is processing in memory, specifically LPDDR-PIM for GEMV acceleration. The memory controllers simply can't deliver a memory bandwidth that is competitive with what Nvidia has and I'm talking about boards like Jetson Orin here.
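A quick illustration of why the bandwidth point dominates GEMV-style LLM decoding (all numbers below are assumptions for illustration, roofline-style upper bounds only):

    # Decoding one token of a dense LLM streams essentially all weights once,
    # so the memory system sets the ceiling regardless of how many TFLOPs you have.
    params          = 7e9   # assumed 7B-parameter model
    bytes_per_param = 1     # assumed int8 weights

    boards = {
        "LPDDR edge board (assumed ~100 GB/s)": 100e9,
        "Jetson-Orin-class (assumed ~200 GB/s)": 200e9,
        "HBM datacenter GPU (assumed ~3 TB/s)": 3000e9,
    }

    for name, bw in boards.items():
        ceiling = bw / (params * bytes_per_param)  # tokens/s upper bound
        print(f"{name}: ~{ceiling:.0f} tokens/s ceiling")

The compute side (AI Engines, DSP TFLOPs) barely enters into it, which is why PIM or much fatter memory controllers would matter more than more math tiles.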
Honestly I've asked different hardware researchers this question and they all seem to give different answers.
There's been neural processing chips since before LLM craze [1].
[1]: https://en.wikipedia.org/wiki/Neural_processing_unit#History
[1] The FFT Strikes Back: An Efficient Alternative to Self-Attention (168 comments):
ETA they also paid out almost $10 in dividends.
Ouch - your work is so good we will pay 10x what it is worth, because we are not good enough to do it.
But you are not good enough for us. Maybe they couldn't invert a binary tree.
Intel soon discovered the obvious, which is that customers with applications well-suited to FPGAs already use FPGAs.
https://www.doc.ic.ac.uk/~wl/teachlocal/arch/papers/cacm19go...
And maybe this is/was a pipe dream - maybe there aren't enough people with the skills to have a "golden age of architecture". But MSFT was deploying FPGAs in the data center and there were certainly hopes and dreams this would become a big thing.
That's in no small part because the industry & tools seem to be stuck decades in the past. They never had their "GCC moment". But there's also inherent complexity in working at a very low level, having to pay attention to all sorts of details all the time that can't easily be abstracted away.
There's the added constraint that FPGA code is also not portable without a lot of extra effort. You have to pick some specific FPGA you want to target, and it can be highly non-trivial to port it to a different one.
And if you do go through all that trouble, you find out that running your code on a cloud FPGAs turns out to be pretty damn expensive.
So in terms of perf per dollar invested, adding SIMD to your hot loop, or using a GPU as an accelerator may have a lower ceiling, but it's much much more bang for the buck and involves a whole lot less pain along the way.
You can certainly pencil out FPGA or ASIC systems which would attain high levels of efficient parallelism if there weren't memory bandwidth or latency limits, but there are. If you want to do math that GPUs are good at, you use GPUs. Historically some FPGAs have let you allocate bits in smaller slices, so if you only need 6-bit math you can have 6-bit math, but GPUs are muscling in on that for AI applications.
FPGAs really are good at bitwise operations used in cryptography. They beat CPUs at code cracking and bitcoin mining but in turn they get beat by ASICs. However there is some number of units (say N=10,000) where the economics of the ASIC plus the higher performance will drive you to ASIC -- for Bitcoin mining or for the NSA's codebreaking cluster. You might prototype this system on an FPGA before you get masks made for an ASIC though.
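That "some number of units" crossover is just NRE amortization; here's a sketch with made-up costs (only the shape of the math is the point, every figure is a placeholder):

    # Hypothetical cost model: FPGA has no NRE but a high unit price,
    # ASIC has a big one-time NRE but cheap units.
    asic_nre       = 2_000_000  # masks, tooling, verification
    asic_unit_cost = 20
    fpga_unit_cost = 250

    # ASIC wins once N * fpga_unit_cost > asic_nre + N * asic_unit_cost
    breakeven_n = asic_nre / (fpga_unit_cost - asic_unit_cost)
    print(f"ASIC pays off above ~{breakeven_n:,.0f} units")  # ~8,700 with these numbers

At mining-scale or NSA-scale N you blow straight past that threshold; at F-35-scale N you never get close, which is why the next case stays on FPGAs.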
For something like the F-35, where you have N=1000 or so, couldn't care less about costs, and might need to reconfigure it for tomorrow's threats, the FPGA looks good.
One strange low N case is that of display controllers for retrocomputers. Like it or not a display controller has one heck of a parts count to make out of discrete parts and ASIC display controllers were key to the third generation of home computers which were made with N=100,000 or so. Things like
and are already expensive compared to the Raspberry Pi, so they tend to use either a microcontroller or an FPGA. The microcontroller tends to win, because an ESP32 that costs a few bucks is, amazingly, fast enough to drive a D/A converter at VGA rates or push enough bits for HDMI!
Rapid product development. Got a project that needs to ship in 6-9 months and will be on the market for less than two years in small volume? That's where FPGAs go. Medical, test and measurement, military, video effects, telepresence, etc.
The problem (for Intel) is that you don't sell billions of dollars of FPGAs into a mass market this way.
First off, clock rates on an FPGA run at about a tenth that of CPUs, which means you need a 10× parallelism speedup just to break even, which can be a pretty tall order, even for a lot of embarrassingly parallel problems.
(This one is probably a little bit garbled.) My understanding is that the design of FPGAs is such that they're intrinsically worse at delivering FLOPs per unit of memory bandwidth than other designs, which also caps the expected perf boosts.
The programming model is also famously bad. FPGAs are notorious for taking forever to compile--and the end result of waiting half an hour or more might simply be "oops, your kernel is too large." Also, to a degree, a lot of the benefits of FPGA are in being able to, say, do a 4-bit computation instead of having to waste the logic on a full 8-bits, which means your code also needs to be tailored quite heavily for an FPGA, which makes it less accessible for most programmers.
Intel tried to get around this problem by having a common framework. So one compiler (based on clang) with multiple backends for their CPUs, FPGAs, and GPUs. But in practice it doesn't work. The architectures are too different.
We run such oscillators as dummy payloads for thermal tests while we are waiting for the real firmware to be written.
When a CMOS switches, it essentially creates a very brief short circuit between VCC and GND. That's part of normal dynamic power consumption, it's expected and entirely accounted for.
But I don't know how these cloud FPGAs could enforce that you don't violate setup and hold times all over the place. When you screw up your crossings and accidentally have a little bit of metastability, that CMOS will switch back and forth a little bit, burn some power, and settle one way or the other.
Now if you intentionally go out of your way to keep one cell metastable as long as possible while the neighbors are cold, that's going to be one hell of a localized hotspot. I wouldn't be surprised if thermal protection can't kick in fast enough.
It's just kibitzing though, I'm not particularly inclined to try with my own hardware
Yes, but pairing an FPGA somewhat tightly integrated with an actually powerful x86 CPU would have made an interesting alternative to the usual FPGA+some low end ARM combo that's common these days.
And maybe it would have led somewhere. Perhaps. But they didn't.
Intel: Welcome, Altera. We'd like you to integrate your FPGA fabric onto our CPUs.
Altera: Sure thing, boss! Loads of our FPGAs get plugged into PCIe slots, or have hard or soft CPU cores, so we know what we're doing.
Intel: Great! Oh, by the way, we'll need the ability to run multiple FPGA 'programs' independently, at the same time.
Altera: Ummmm
Intel: The programs might belong to different users, they'll need an impenetrable security barrier between them. It needs full OS integration, so multi-user systems can let different users FPGA at the same time. Windows and Linux, naturally. And virtual machine support too, otherwise how will cloud vendors be able to use it?
Altera: Uh
Intel: We'll need run-time scaling, so large chips get fully utilised, but smaller chips still work. And it'll need to be dynamic, so a user can go from using the whole chip for one program to sharing it between two.
Intel: And of course indefinite backwards compatibility, that's the x86 promise. Don't do anything you can't support for at least 20 years.
Intel: Your toolchain must support protecting licensed IP blocks, but also be 100% acceptable to the open source community.
Intel: Also your current toolchain kinda sucks. It needs to be much easier to use. And stop charging for it.
Intel: You'll need a college outreach program. And a Coursera course. Of course students might not have our hardware, so we'll need a cloud offering of some sort, so they can actually do the exercises in the course.
Altera: I guess to start with we
Intel: Are you profitable yet? Why aren't you contributing to our bottom line?
As to why it didn't work, well, I'm not plugged into this space to have a high degree of certainty, but my best guess is "FPGAs just aren't that useful for that many things."
But they didn't actually sell it. At least not in any form anybody could buy. So, yeah, we get the OP claiming it was an obvious technological dead-end.
And if they had included it on lower-end chips (the ones they sold just a few years after they bought Altera), we could have had basically what the Raspberry Pi RP2040 is nowadays, just a decade earlier and controlled by them... On second thought, maybe this was for the best.
So selling FPGAs was a bad move? Or was the purchase price just wildly out of line with the--checking...$9.8B annual market that's expected to rise to $23.3B by 2030?
That was before I learnt about the many and varied ways in which Intel sabotages itself, and realised that Intel's underperformance has little to do with a lack of good technical ideas or talent.
I.e. I was young and naive. I am now considerably less young, and at least a little less naive.
This was the heyday at Intel. I left within a year because I noticed that the talent that was respected, compensated and influential at Intel was the sales engineers. I can't pretend to have known that would lead to the decline of the company, but I knew that as an engineer uninterested in sales, that it wasn't the place for me.
FPGAs are not ideal for raw parallel number crunching like in AI/LLMs. They are more appropriate for predictable, real-time/ultra-low-latency parallel things like the modulation and demodulation of signals in 5G base stations.
Intel was an early player to so many massive industries (e.g. XScale, GPGPU, hybrid FPGA SoCs). Intel abandoned all of them prematurely and has been left playing catch-up every time. We might be having a very different discussion if literally any of them had succeeded.
Did AMD massively overpay, or has the FPGA market fundamentally shifted? Curious to see how this new benchmark ripples into AMD’s stock valuation.
My anecdotal example would be high-end broadcast audio processors. These do quite a bit beyond the actual processing of audio, in particular baseband or even RF signal generation.
In any case these devices used to be fully analog, then when they first went digital were a combination of DSPs for processing and FPGAs for signal output. Later generations dropped the DSP and did everything in larger FPGAs as the larger FPGAs became available. Later generations dropped the whole stack and just run on an 8 core Intel processor using real time linux and some specialized real time signal processing software with custom designed signal generators.
The high-core-count and high-frequency CPUs became good enough, and getting custom-made chips became exceptionally cheap as well. FPGAs became rather pointless in this pipeline.
The US military, for a time, had a next generation radio specification that specifically called for the use of FPGAs, as that would allow them to make manufacturer agnostic radios and custom software for them. That never panned out but it shows the peak use of FPGAs to manage the constraints of this time period.
I’ve said it before, Intel is where technology companies go to die. Fortunately while Altera is probably a mess of useless Intel drone MBAs, there’s a decent core that can be salvaged. Best of luck to them.
Selling now also makes sense. There was only one serious competitor in 2015. Now you've got tariffs both ways to the main place where everything is built, and said place has its own homegrown vendors like GOWIN, Sipeed, and Efinix. But the biggest reason is that the amount of stuff designed in the West/Taiwan is falling, with China taking over actual product design.
https://itif.org/publications/2024/08/19/how-innovative-is-c...
>In 2015, China released its “Made in China 2025” (MIC 2025) strategy, which refined some of these targets, setting a goal of achieving 40 percent self-sufficiency in semiconductors by 2020 and 70 percent by 2025.
https://en.wikipedia.org/wiki/Made_in_China_2025
>In 2024, the majority of MIC 2025's goals were considered to be achieved, despite U.S. efforts to curb the program.
Products coming out of China no longer use STM microcontrollers, Vishay/Analog MOSFETs/diodes, and Altera/Xilinx FPGAs. It's all Chinese semiconductor brands you've never heard of. A good example is this teardown of a Deye SUN-5K-SG04LP1 5kW hybrid solar inverter: https://www.youtube.com/watch?v=n0_cTg36A2Q
Will we see an AMD-esque fab spin-off?
When AMD spun off their fabs into what became Global Foundries, it was difficult for many to see the upside. However, today, it seems not being tied to any particular fab/tech is one of AMD's biggest advantages.
Intel paid $16.7 billion in 2015 and sold it for $8.75 billion?! What about all the money dumped into Altera from 2015 to 2025? How much was that? Is Intel just handing over the FPGA market to AMD?
https://download.intel.com/newsroom/2021/archive/2015-12-28-...
> Is Intel just handing over the FPGA market to AMD?
Maybe? But who cares. From all of the comments above, I learned that the FPGA market is stalled or shrinking. Even AMD likely overpaid for Xilinx.
Selling out to PE is a signal this company is about to get gutted and loaded to the tits with debt and management fees from PE.
>Intel flogs off majority stake in Altera to private equity for $4B
>Buy high, sell low: FPGA biz cost x86 giant $16B decade ago
for those not up on this stuff
Should have stuffed it under the bed instead…
I mean it's a pipe dream, but why not.
The SEC should investigate them, to see whether there was any insider trading to benefit from this horrible value loss.
This criminal lack of performance needs to be brought up at the upcoming shareholders meeting. Those responsible must pay the price.
Would you hire again the Intel CEOs, head of Intel Capital, any members of Intel’s board of directors after such abysmal performance?