Now the initial gcc implemented this saving to memory with a kind of Duff's device: a computed jump into a block of register-saving instructions, so that only the needed registers were saved. There was no bounds check, so if the register that indicates how many vector registers hold arguments (AL, the low byte of RAX) wasn't initialized correctly, it would jump somewhere random based on the junk and cause very confusing bug reports.
This bit quite a bit of software that didn't use correct prototypes, calling stdarg functions without declaring them as variadic. On 32-bit code, which didn't pass arguments in registers, this wasn't a problem.
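A minimal sketch of that mismatch, with hypothetical files and a hypothetical sum() function (the ABI detail relied on here is that the caller of a variadic function is supposed to pass the number of vector registers used in AL):

/* impl.c: the real definition is variadic.  gcc's prologue for this
   function spills the incoming argument registers to a save area,
   historically via a computed jump keyed on AL. */
#include <stdarg.h>

int sum(int count, ...)
{
    va_list ap;
    int total = 0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

/* caller.c: old code declares sum() with a fixed, non-variadic signature,
   so the compiler has no reason to load AL before the call, and the
   callee's prologue sees whatever junk happens to be in it. */
int sum(int count, int a, int b);

int main(void)
{
    return sum(2, 1, 2);  /* undefined behaviour: declaration doesn't match definition */
}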
Later compiler versions switched to saving all registers unconditionally.
In an alternative universe without AMD64, Intel would have kept pushing Itanium while sorting out its issues; HP-UX ran on it, and there was a Windows XP edition for it as well.
People don't read comments before replying?
Or maybe, if push came to shove, desktops would have switched to something entirely different like Alpha. Or ARM! Such an event would likely have forced ARM to come up with AArch64 several years sooner than it actually happened.
Eventually Intel gives up after motherboard/desktop/laptop makers can't build a proper market for it. Maybe Intel then decides to go back and do something similar to what AMD did with x86_64. Maybe Intel just gives up on 64-bit and tries to convince people it's not necessary, but then starts losing market share to other companies with viable 64-bit ISAs, like IBM's POWER64 or Sun's SPARC64 or whatever.
Obviously we can't know, but I think my scenario is at least as likely as yours.
Considering that PAE was a gimmick anyway.
With how long it took Intel to ship expensive, incompatible, so-so-performance IA-64 chips, your theory needs an alternate universe where Intel has no competitors, ever, to take advantage of the obvious market opportunity.
4 GB of RAM existed, but many, many systems weren't even close to it yet.
Without AMD, there was no alternative in the PC world. The Itanium edition was already the first 64-bit version of Windows XP.
Since we're providing suggestions in computing history, I assume you can follow the dates:
https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP...
Perhaps? I don't know enough to judge whether one of the other companies working on IA-32 compatible processors could plausibly have stepped in -
https://en.wikipedia.org/wiki/List_of_former_IA-32_compatibl...
It's true that most of those would have lacked the resources to replicate AMD's feat with AMD64. OTOH, AMD itself had to buy out NexGen to produce its K6. Without AMD and/or AMD64, there would have been plenty of larger players who might have decided to fill the void.
This doesn't apply to fake tech companies like AirBnB, Dropbox, and Stripe, and if you've spent your career at fake tech companies, your intuition is going to be "off" on this point.
But between conception and delivery, the web took over the world. Branchy integer code was now the dominant server workload, and workstations were getting crowded out of their niche by the commodity economics of x86.
Now I can finally explain why some "tech" jobs feel like they're just not moving the needle.
Problems in operations research (like logistics) or fraud detection can be just as technical.
Operations research is technology, but Uber isn't Gurobi, which is a real tech company like Intel, however questionable their ethics may be.
This feels like a distinction without a difference based on whether kragen thinks something is hardcore enough to count?
Fraud detection can be (and is) extremely hardcore, but it isn't progressive in that way. It's largely tradecraft. Consequently its relationship to novelty and technical risk is fundamentally different.
Intel isn't ASML, either. They merely use their products. So what?
Presumably Gurobi doesn't write their own compilers or fab their own chips. It's turtles all the way down.
> Fraud detection is a Red Queen's race. If the amount of resources that goes into fraud detection and fraud commission grows by 10×, 100×, 1000×, the resulting increase in human capacities and improvement in human welfare will be nil. It may be technically challenging but it isn't technology.
By that logic no military anywhere uses any technology? Nor is there any technology in Formula 1 cars?
AirBnB isn't doing that; they're just booking hotel rooms. Their competitive moat consists of owning a two-sided marketplace and political maneuvering to legalize it. That's very valuable, but it's not the same kind of business as Intel or Gurobi.
Nuclear weapons are certainly a case that tests the category of "technology" and which, indeed, sparked widespread despair and abandonment of progressivism: they increase human capabilities, but probably don't improve human welfare. But I don't think that categories become meaningless simply because they have fuzzy edges.
First off, Itanium was definitely meant to be the 64-bit successor to x86 (that's why it's called IA-64 after all), and moving from 32-bit to 64-bit would absolutely have been a killer feature. It's basically only after the underwhelming launch of Itanium that AMD comes out with AMD64, which becomes the actual 64-bit version of x86; once that comes out, the 64-bitness of Itanium is no longer a differentiator.
Second... given that Itanium basically implements every weird architecture feature you've ever heard of, my guess is that they decided they had the resources to make all of this stuff work. And they got into a bubble where they just simply ignored any countervailing viewpoints anytime someone brought up a problem. (This does seem to be a particular specialty of Intel.)
Third, there's definitely a baseline assumption of a sufficiently-smart compiler. And my understanding is that the Intel compiler was actually halfway decent at Itanium, whereas gcc was absolute shit at it. So while some aspects of the design are necessarily inferior (a sufficiently-smart compiler will never be as good as hardware at scavenging ILP, hardware architects, so please stop trying to foist that job on us compiler writers), it actually did do reasonably well on performance in the HPC sector.
I don't know, most people don't care about the ISA being weird as long as the compiler produces reasonably fast code?
They did persuade SGI, DEC and HP to switch from their RISCs to it though. Which turned out to be rather good for business.
Which is nearly true: 64-bit Intel chips did (mostly) kill RISC. But not their (and HP's) fun science project IA64; they had to copy AMD's "what if x86, but 64 bit?" idea instead.
Not that HP was the only one to lose their minds over Itanic (SGI in particular), but I thought they were the ones who gave up the most for it.
1) Apple's financial firepower allowing them to book out SOTA process nodes
2) Apple being less cost-sensitive in their designs vs. Qualcomm or Intel. Since Apple sells devices, they can justify 'expensive' decisions like massive caches that require significantly more die area.
That’s much better than a decade of development with no product yet.
It did make a tiny bit of sense at the time. Java was ascendant and I think Intel assumed that JIT compiled languages were going to dominate the new century and that a really good compiler could unlock performance. It was not to be.
EPIC development at HP started in 01989, and the Intel collaboration was publicly announced in 01994. The planned ship date for Merced, the first Itanic, was 01998, and it was first floorplanned in 01996, the year Java was announced. Merced finally taped out in July 01999, three months after the first JIT option for the JVM shipped. Nobody was assuming that JIT compiled languages were going to dominate the new century at that time, although there were some promising signs from Self and Strongtalk that maybe they could be half as fast as C.
Or do you not count Merced as "shipping"?
JIT compilation was available before, but it became the default in Java 1.3, released a year earlier to incredible hype.
Source: I was there, man.
Plus, DEC managed to move all of its VAX users to Alpha through the simple expedient of no longer making VAXen, so I wonder if HP (which by that point had swallowed what used to be DEC) thought it could repeat that trick and sunset x86, which Intel has wanted to do for very nearly as long as the x86 has existed. See also: Intel i860
The 80286 was a stop-gap solution until iAPX432 was ready.
The 80386 also started as a stop-gap solution until iAPX432 was ready, before someone higher up finally decided to kill that project.
I'd never heard of it myself, and reading that Wikipedia page it seems to have been a collection of every possible technology that didn't pan out in IC-language-OS codesign.
Meanwhile, in Britain a few years later in 1985, a small company and a dedicated engineer, Sophie Wilson, decided that what they needed was a RISC processor that was as plain and straightforward as possible ...
https://devblogs.microsoft.com/oldnewthing/20040120-00/?p=40... "ia64 – misdeclaring near and far data"
Not that this matters to anyone anymore. IA64 utterly failed long ago.
... oh wait, on more than x86(64).
https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...
https://en.wikipedia.org/wiki/Itanium
> In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021.[1] This took place on schedule.[9]
VLIW architectures still live on in GPUs and special purpose (parallel) processors, where these sorts of constraints are more reasonable.
https://portal.cs.umbc.edu/help/architecture/aig.pdf
to discover at least two magical registers to hold up to 127 spilled registers' worth of NaT bits. So they tried.
The NaT bits are truly bizarre and I'm really not convinced they worked well. I'm not sure what happens to bits that don't fit in those magic registers. And it's definitely a mistake to have registers whose value cannot be reliably represented in the common in-memory form of the register. The x87 FPU's 80-bit registers, which are usually stored as 64-bit words in memory, are another example.
EDIT: to be fair to it, they carry it through to main memory too
So who am I to complain if CHERI pointers are even wider and have strange rules? At least you can write a pointer to memory and read it back again.
[0] I could be wrong. I’ve hacked on Linux’s v8086 support, but that’s virtual and I never really cared what its effect was in user mode so long as it worked.
[1] You can read and write them via SMM entry or using virtualization extensions.
Compilers used to use the x87's 80-bit floating-point registers for 64-bit double computations, but might also spill them to memory as 64-bit doubles.
https://hal.science/hal-00128124v3/file/floating-point.pdf section 3 has some examples, including one where the assert can fail in:
#include <assert.h>

void do_nothing(double *x);  /* defined in another compilation unit */

int main(void) {
    double x = 0x1p-1022, y = 0x1p100, z;
    do_nothing(&y);      /* opaque call: keeps the compiler from folding the division */
    z = x / y;           /* 2^-1122: zero as a 64-bit double, nonzero in an 80-bit x87 register */
    if (z != 0) {        /* may test the 80-bit register value, which is nonzero */
        do_nothing(&z);  /* forces z out to memory as a 64-bit double, where it becomes 0 */
        assert(z != 0);  /* so this assert can fail */
    }
}
with void do_nothing(double *x) { } in a different compilation unit.