From that point on, I always over-specified the microcontrollers a little and kept a copy of the original dev environment. Luckily, all my projects are now EOL, as I am retired.
I have done something similar, albeit in a different context, to fix the behaviour of a poorly performing SQL query embedded in a binary for which the source code was not easily available (as in: it turned out that the version in source control wasn't the version running in production and it would have been quite a lot of work to reverse engineer the production version and retrofit its changes back to the source - and, yes, this is as bad as you think it is).
When I initially suggested monkey-patching the binary, there was all manner of screaming and objections from my colleagues, but they were eventually forced to concede that it was the pragmatic and sensible thing to do.
When I started at my current job, a previous developer, whose practices were more like a mechanic's than a software dev's, didn't use tags, so all binaries deployed to production were always the default version 1.0.0.0 from the C# project templates in Visual Studio. To make matters worse, variants of the software were just copy-pasted in CVS, with their core code checked in as binaries rather than as their original C# projects. Fun times finding out what actually ran on production, and patching anything in it!
I doubt that everything you ever worked on is end-of-life. Some of it is still out there...
If it's still running out there, it's running in a zombie state.
Casey Muratori would point out that the debugger ran faster on hardware from that era than modern versions run on today's hardware, though I don't have a link to the side-by-side video comparison.
Edit: Casey Muratori showing off the speed of Visual Studio 6 on a Pentium-something after ranting about it: jump to 36:08 in https://youtu.be/GC-0tCy4P1U (the earlier part of the video shows how it is today, or was when the video was made).
Software today is a horrible bloated mess on top of horrible bloated messes.
Funny thing is that at the time, I was lamenting how much slower VC6 was than VC4. Macro playback, for instance, got much slower in VC6. It's all relative.
Is that accurate? I thought DJGPP only ran on, and targeted, PC-compatible x86. id had the Alpha for things like running qbsp, light, and vis (these took forever to run, so the SMP Alpha was really useful), but for building the actual DOS binaries, surely that was DJGPP on an x86 PC?
Was DJGPP able to run on Alpha for cross compilation? I'm skeptical, but I could be wrong.
Edit: Actually it looks like you could. But did they? https://www.delorie.com/djgpp/v2faq/faq22_9.html
There is also an interview with Dave Taylor explicitly mentioning compiling Quake on the Alpha in 20 seconds (source: https://www.gamers.org/dhs/usavisit/dallas.html#:~:text=comp...). I don't think he meant running qbsp, vis, or light.
Wasn't this when they (or at least Carmack) were doing development on NeXT? So were those the DOS builds?
An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.
The q1source.zip this article links to is only 198k lines spread across 384 files; the largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build (quoting the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying").
It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.
I mean, correct me if you have actual data to prove me wrong, but my memory of the time is that build times were really not a problem. C is just really fast to build. Even back in, was it 1997, when the source code was found lying around on an FTP server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...
And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.
This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC) and the network overhead adds up. Especially back then.
> just run it from a network drive.
It still needs to be transferred to run.
> I know which system I would choose for compiles.
All else equal, perhaps. But were you actually a developer in the 90s?
It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.
To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.
Networking was a solved problem by the mid 90s, and moving the game executable and assets across the wire would have taken ~45 seconds on 10BaseT, and ~4 seconds on 100BaseT. Between Samba, NFS, and Netware, supporting DOS clients was trivial.
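As a back-of-envelope check (the ~50 MB payload is my assumption for executable plus pak files, and the wire rates are raw line speed with no protocol overhead), the arithmetic lands in the same ballpark:

```shell
# Rough transfer-time estimate; payload size is a guess (~50 MB of
# executable + pak files), rates ignore Ethernet/protocol overhead.
mbytes=50
echo "10BaseT:  $(( mbytes * 8 / 10 )) s"    # 10 Mbit/s line rate
echo "100BaseT: $(( mbytes * 8 / 100 )) s"   # 100 Mbit/s line rate
```

Real-world throughput would add a bit on top, which is roughly where the ~45-second figure comes from.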
Large, multi-CPU systems — with PCI, gigabytes of RAM, and fast SCSI disks (often in striped RAID-0 configurations) — were not marginally faster than a desktop PC. The difference was night and day.
Did you actively work with big iron servers and ethernet deployments in the 90s? I ask because your recollection just does not remotely match my experience of that decade. My first job was deploying a campus-wide 10Base-T network and dual ISDN uplink in ~1993; by 1995 I was working as a software engineer at companies shipping for Solaris/IRIX/HP-UX/OpenServer/UnixWare/Digital UNIX/Windows NT/et al (and by the late 90s, Linux and FreeBSD).
> This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC) and the network overhead adds up. Especially back then.
The network overhead was negligible. The gains were enormous.
> You keep claiming it somehow incurred substantial overhead
This is going nowhere. You keep putting words in my mouth. Final message.
Were you even a developer in the 90s? Are you trying to annoy people?
I never had cause to build Quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual-socket Pentium I had at work, but it was still painfully slow.
I specifically remember setting up gcc cross toolchains to build Linux binaries on our big iron ultrasparc machines because the performance difference was so huge — more CPUs, much faster disks, and lots more RAM.
That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.
`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that the ability to bypass all those temp-file steps was a godsend. So you'd always opt for the pipeline if you had the memory for it.
`make -j` was the easiest parallel-processing hack ever. As long as you had multiple CPUs or cores, `make -j` would keep them all as busy as possible. You could place artificial limits such as `-j4` or `-j8` if you wanted to hold back some resources or keep the machine interactive, but the parallelism was another godsend when you had a big compile job.
It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or a distro of XFree86.
From cold, or from modified config.h, sure. But also keep in mind that the Pentium came out in 1993.
I used it in the mid-90s and yes, it was eye-opening. On the other hand, I was an Emacs user at uni, and by studying the history of Emacs a bit (especially Lucid Emacs) I came to understand that the concepts in Visual Studio were nothing new.
On the third hand, I hated customizing Emacs, which did not have "batteries included" for things like "jump to definition", not to mention a package manager. So the only times in the late-90s I got all the power of modern IDEs was when I was doing something that needed Windows and Visual Studio.
And I mean it doesn't seem super impressive, but it's something. Lol
Descent on the other hand...
It definitely was an amazing codebase for the time. You didn’t need to get hung up on architecture because it is very singular… it’s just a level, you, and the entities that were created when the level loaded.
There’s no pre-caching, no virtual textures, no shaders (materials came later, in Quake 3); it’s just pure load -> set -> loop. The “client” renders, the “server” has the state.
Link: https://github.com/jnz/studio98
Can't fix the Electron sluggishness compared to VS6, but at least the syntax highlighting feels a bit like home.
There was another article where someone bootstrapped the very first version of GCC that had the i386 backend added to it, and it turned out there was a bug in the codegen. I'll try to find it...
EDIT: Found it. In fact, there was an HN discussion about an article referencing the original article:
There's also an easy fix: https://github.com/krystalgamer/spidey-decomp/blob/ad49c0f5f...
Hence why I eventually found refuge in XEmacs and DDD, until IDEs like KDevelop and Sun Forte came along.
I started with C on the Amiga, then went to UNIX, and only later started doing Windows coding on Windows 3.1.
Gonna warm that up when the kids get a bit older and we start doing LAN parties.
That and Quake World Team Fortress.
I bet there are still servers out there, at that.
That doesn't mean it's any good - which I admit is subjective. I'm sure they've had good devs put a ton of work into making an IDE they believe in, but having used it for a couple of years, I don't enjoy the experience.
Nothing feels obvious or simple, and working in Python or embedded C++ feels like I'm missing so much compared to using JetBrains tools. I've gone back to PyCharm Community Edition because IMO it's light-years ahead of VS Code in usability.
I guess people say the same about emacs.
I maintain that VC++ was a better experience than VS Code, whoever is working on it.