In other words, if you need your software to live in the dirty world we live in, and not just in a pristine bubble, things are gonna rot.
Picking tools, libraries, and languages that will rot less quickly, however, seems like a good idea. Which to me means not chaining myself to anything that hasn't been around for at least a decade.
I got royally screwed because 50-60% of my lifetime code output before 2018, and pretty much all the large libraries I had written, were in AS3. In a way, having so much code I would have maintained become forced abandonware was sort of liberating. But now, no more closed source and no more reliance on any libs I don't roll or branch and heavily modify myself.
Well, that's why I called BubbleOS BubbleOS: https://gitlab.com/kragen/bubbleos/
Not there yet, though...
I share your painful experience of losing my work to proprietary platforms.
That said, why doesn't the network layer run as a SOCKS or HTTP proxy or something? That would greatly simplify the network logic. All network libraries provide old interfaces like streams and requests.
Out of curiosity, what kind of work did you do? Regarding our old AS3, did you have any luck with Haxe? I assume it would be a straightforward port.
Maybe I was just kinda dejected and could have solved it, but instead I moved over to TS and PixiJS, ported some necessary logic and started my whole stack over.
Still friends with the company owner of that code. So I've had a bit more insight into follow-up on code over 2 decades old that isn't so typical for anything else I've done.
Lasting centuries may or may not be preferable.
There are places where you want to cheaply rebuild from scratch. Your castle will be irreparably bad after a tornado and flooding. Most castles suck badly by not taking advantage of new materials, and I myself would not like to live in a 100-year-old building.
Same for software: there are pieces that should be built to last, but there are applications that should be replaceable in short order.
As much as I am not a fan of vibe coding, I don't believe all software should be built to last for decades.
https://www.sqlite.org/lts.html
People talk about SQLite's reliability but they should also mention its stability and longevity. It's first-class in both. This is what serious engineering looks like.
Not trying to chide, but it seems like with such a young industry we need better social tools to make sure this effort is preserved for future devs.
Churn has been endemic in our industry and it has probably held us back a good 20 years.
What they can do is renew the promise going forward; if in 2030 they again commit to 25 years of support, that would mean something to me. Claiming they can promise to be supporting it in 2075 or something right now is just not a sensible thing to do.
I'm curious how these plans would look and work in the context of software development. That was more what my question was about (sqlite also being the only one I'm familiar with that takes this seriously).
We've seen what lawyers can accomplish with their bar associations, and those were created over 200 years ago in the US! Lawyers also work with one of the clunkiest DSLs ever (legalese).
Imagine what they could accomplish if they used an actual language. :D
Do you think such a thing would have helped or hurt our industry?
I honestly think help.
I do.
Medieval guilds are another equivalent, but they could not deal with the industrial revolution or colonialism, so they don't seem like something worth studying (outside of their failures) if they couldn't deal with societal change.
Always thought neovim should do something like this. How to recreate the base of neovim, or how to recreate popular plugins with minimal lua code.
Got to wonder how much more sustainable that would be versus relying on donations.
How do we plan to make sure the lessons we've learned during development now will still be taught 300 years from now?
I'm not putting the onus on sqlite to solve this but they are also the only organization I know of that is taking the idea seriously.
Just more thinking in the open and seeing how other people are trying to solve similar problems (ensuring teachings continue past their authors' lives) outside the context of universities.
If any tooling fails in 25 years, you can at least write a new program to get your data back out. Then you can import it into the next hot thing.
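To make that concrete, here's a minimal sketch (assuming a hypothetical legacy.db; only Python's bundled sqlite3 and csv modules) of the kind of one-off exporter you could write if the original tooling were long gone:

    # Minimal sketch: dump every table of an old SQLite file to CSV so the
    # data can be imported into whatever comes next. "legacy.db" is a
    # hypothetical filename.
    import csv
    import sqlite3

    con = sqlite3.connect("legacy.db")
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]

    for table in tables:
        cur = con.execute(f'SELECT * FROM "{table}"')
        with open(f"{table}.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(col[0] for col in cur.description)  # header row
            writer.writerows(cur)                                # data rows

    con.close()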
Some people on the React team deciding in 2027 to change how everyone uses React again is NOT exciting, it's an exercise in tolerating senior amateurs and I hate it because it affects all of us down to the experience of speaking to under-qualified people in interview processes "um-ackchyually"-ing you when you forget to wrap some stupid function in some other stupid function.
Could you imagine how absurd it would be if SQLite's C API changed every 2 years? But it doesn't. Because it was apparently designed by real professionals.
I think "boring software" is a useful term.
Exciting things are unpredictable. Predictable things aren't exciting. They're boring.
Stock car racing is interesting because it's unpredictable, and, as I understand it, it's okay for a race car to behave unpredictably, as long as it isn't the welds in the roll bars that are behaving unpredictably. But if your excitement is coming from some other source—a beautiful person has invited you to go on a ski trip with them, or your wife needs to get to the hospital within the next half hour—it's better to use a boring car that you know won't overheat halfway there.
Similarly, if a piece of software is a means to some other end, and that end is what's exciting you, it's better to use boring software to reach that end instead of exciting software.
This effect gets accelerated when teams or individuals make their code more magical or even just more different than other code at the company, which makes it harder for new maintainers to step in. Add to this that not all code has all the test coverage and monitoring it should... It shouldn't be too surprising there's always some incentives to kill, change, or otherwise stop supporting what we shipped 5 years ago.
For roads, bridges, wastewater, sewer, electricity - it's because these things are public utilities and ultimately there is accountability - from local government at least - that some of these things need to happen, and money is set aside specifically for it. An engineer can inspect a bridge and find cracks. They can inspect a water pipe and find rust or leaks.
It's much harder to see software showing lines of wear and tear, because most software problems are hard to observe and buried in the realities of Turing completeness making it hard to actually guarantee there aren't bugs; software is often easy to dismiss as good enough until it starts completely failing us.
A bridge is done when all the parts are in place and connected together. Much software is never really done because:
- it's rare to pay until we have nothing more to refactor
- the software can only be as done as the requirements are detailed; if new details come to light or, worse, change entirely, the software that previously looked done may not be. That would be insane for a bridge or road.
Because the software ecosystem is not static.
People want your software to have more features, be more secure, and be more performant. So you and every one of your competitors are on an update treadmill. If you just stand still (aka being stable) on the treadmill, you'll fall off.
If you are on the treadmill you are accumulating code, features, and bug fixes, until you either get too big to maintain or a faster competitor emerges, and people flock to it.
Solving this is just as easy as proving all your code is exactly as people wanted AND making sure people don't want anything more ever.
Runners on treadmills don't actually move forward.
If I assume your point is true, wouldn't everyone then just switch to Paint for all 2D picture editing? I mean, it's the fastest - opens instantly on my machine vs 3-4 sec for Krita/Gimp/Photoshop. But it's also bare bones. So why isn't Paint universally used by everyone?
My assumption: what people want is to not waste their time. If a program is 3 seconds slower to start/edit, but saves you 45 minutes of fucking around in a less featureful editor, it's obvious which one is more expedient in the long run.
That's kind of my point. Even you eschew Paint's simplicity when you need a more complex transformation. Nothing you do in Paint.Net is impossible in Paint, given enough calculation and preparation. So performance isn't the deciding factor. It's the speed of achieving thing X (of which startup/lag is a tiny cost).
Similarly, in Paint.Net you could emulate many Photoshop features (e.g., non-destructive editing), but doing so would be tedious (duplicate the layer, hide the original, adjust the copy, then edit until you get it where you want it).
There are two things in play: just because it's a deciding factor for you doesn't mean it's a deciding factor for everyone else. Second, even for you, Paint isn't enough; you also got Paint.Net. You can reproduce almost any effect from PS or Gimp or Krita in Paint/ImageMagick. Why not just use those two for everything?
It's the same thing as using an IDE vs notepad(++). Anything done in the IDE behemoth can be done in notepad. Albeit at a significant time penalty, and with way more CLI jousting.
> I use Ctrl+Z for non-destructive editing in paint
That's not really non-destructive editing - that's Undo. Non-destructive editing means you can edit, change things, save, and close the program, then reopen the file after X days and change the effect or thing you applied.
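A toy sketch of the difference, not how any real editor stores things: the original data is never touched, and the edits live as a replayable list of operations, so you can reopen the project later and change a step you applied earlier.

    # Toy model of non-destructive editing: keep the original untouched and
    # store edits as named operations that are replayed at render time.
    original = [10, 20, 30]          # stands in for the untouched image data

    edits = [
        ("brightness", 5),           # add 5 to every value
        ("invert", None),            # flip each value around 255
    ]

    def render(data, ops):
        out = list(data)
        for name, arg in ops:
            if name == "brightness":
                out = [v + arg for v in out]
            elif name == "invert":
                out = [255 - v for v in out]
        return out

    print(render(original, edits))   # render with the current edit stack
    edits[0] = ("brightness", 20)    # days later: tweak an earlier step
    print(render(original, edits))   # re-render; the original is untouched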
I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
The core issue, in my humble opinion, is that it's not doing the same thing. But from a thousand miles away it looks like that, because everyone uses 20% of functionality, but everyone in aggregate uses 100% of functionality.
> I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
I'd like to see some actual proof of (/thoughts on) this. Like, if you didn't patch any security issues/bugs, or add any features, or fix any dependencies, how is the code getting slower?
Like I understand some people care about performance, but I've seen many non-performant solutions (Unity, Photoshop, Rider being preferred over a custom C# engine, Paint, Notepad++) being used nearly universally, which leads me to believe there is more than one value in play.
And yes convenience, social adoption and flashy modern appearance are major factors in the decision making. Whether it contributes to obsolescence or is fast isn't a factor at all.
It's literally Electron based. Trackers are a rounding error in the ocean that is Chrome + Node.
Got one idling in memory on my Intel Mac rn. Let's see: 341 MB for the Slack renderer, 198 MB for Slack Helper GPU, 72 MB for Slack itself + 50 MB for Slack Helper stuff. Literally eating 661 MB of memory doing practically nothing. Which means a huge web tracker (circa 10 MB) is 1.5% of that.
Electron itself is the culprit, more than any tracker. And the reason Electron is used is that HTML/CSS/JS is the only cross-platform GUI that looks similar on all platforms, has enough docs/tutorials, and has enough frontend developers available.
source code is ascii text, and ascii text is not alive. it doesn't need to breathe (modulo dependencies, yes). but this attitude of "not active, must be dead, therefore: avoid" leads people to believing the opposite: that unproven and buggy new stuff is always better.
silly counter-example: vim from 10 years ago is just as usable for the 90% case as the latest one
Using the Lindy Effect for guidance, I've built a stack/framework that works across 20 years of different versions of these languages, which increases the chances of it continuing to work without breaking changes for another 20 years.
Python's standard library is just fine for most tasks, I think. It's got loads of battle-tested parsers for common formats. I use it for asset conversion pipelines in my game engines, and it has so far remained portable between Windows, Linux and Mac systems with no maintenance on my part. The only unusual package I depend on is Pillow, which is also decently well maintained.
It becomes significantly less ideal the more pip packages you add to your requirements.txt, but I think that applies to almost anything really. Dependencies suffer their own software rot and thus vastly increase the "attack surface" for this sort of thing.
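For what it's worth, a sketch of the kind of conversion step I mean (paths and formats here are made up; it only leans on the standard library plus Pillow):

    # Sketch of an asset-conversion step: convert source TGA files to PNG
    # and write a manifest. "assets/src" and "assets/out" are hypothetical.
    import json
    from pathlib import Path

    from PIL import Image

    SRC = Path("assets/src")
    OUT = Path("assets/out")
    OUT.mkdir(parents=True, exist_ok=True)

    manifest = {}
    for tga in SRC.glob("*.tga"):
        png = OUT / (tga.stem + ".png")
        with Image.open(tga) as img:
            img.save(png, "PNG")
            size = img.size
        manifest[tga.name] = {"output": png.name, "size": size}

    (OUT / "manifest.json").write_text(json.dumps(manifest, indent=2))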
I like Python as a language, but I would not use it for something that I want to be around 20+ years from now, unless I am ok doing the necessary maintenance work.
If it's me running it, that's fine. But if it's someone else that's trying to use installed software, that's not OK.
However, a decade ago, a coworker and I were tasked with creating some scripts to process data in the background, on a server that customers had access to. We were free to pick any tech we wanted, so long as it added zero attack surface and zero maintenance burden (aside from routine server OS updates). Which meant decidedly not the tech we work with all day every day which needs constant maintenance. We picked python because it was already on the server (even though my coworker hates it).
A decade later and those python scripts (some of which we had all but forgotten about) are still chugging along just fine. Now in a completely different environment, different server on a completely different hosting setup. To my knowledge we had to make one update about 8 years ago to add handling for a new field, and that was that.
Everything else we work with had to be substantially modified just to move to the new hosting. Never mind the routine maintenance every single sprint just to keep all the dependencies and junk up to date and deal with all the security updates. But those python scripts? Still plugging away exactly as they did in 2015. Just doing their job.
Mostly I recoiled in horror at bash specifically, which in addition to bash version, also ends up invisibly depending on a whole bunch of external environment stuff that is also updating constantly. That's sortof bash's job, so it's still arguably the right tool to write that sort of interface, but it ends up incredibly fragile as a result. Porting a complex bash script to a different distro is a giant pain.
Even for somebody who did not aim to keep Python programs running for 20 years, Python is definitely not a good example of a "PDF for programs".
I've lately been pretty deep into 3d printing, and basically all the software has Python...and breaks quite easily. Whether because of a new version of Pip with some new packaging rule, forced venvs...I really don't like dealing with Python software.
A container has that already done, including all supporting libraries.
Edit: then ship the bash script in a container with a bash binary ;)
Entropy sucks.
These are different problems from the distribution/bundling piece, they won't be solved the same way.
Skill issue, plus what's the alternative? Python was close until the 3.x fiasco
Both tools in question are installed everywhere and get the job done. There isn't much to evaluate, and nothing to compare against
These are exactly the skill issues I meant! Git gud in evaluating and you'll be able to come up with many more sophisticated evaluation criteria than the primitive "it's installed everywhere"
Would I advocate writing my core business software in bash or perl? No, I'd hire and train for what was chosen. For small scripts I might need to share with coworkers? 100%
I end up spending often a couple weeks of my life on and off fixing things after every major release.
From my side, I think it's more useful to focus on surfacing issues early. We want to know about bugs, slowdowns, and regressions before they hit users, so everything we write is written using TDD. But because unit tests are coupled with the environment, they "rot" together. So we usually set up monitoring, integration, and black-box tests super early on and keep them running as long as the project is online.
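To illustrate the black-box part (this isn't our actual harness, and the endpoint is a placeholder): the test only talks to the running service over HTTP, so internal refactors don't touch it, and it uses only the standard library.

    # Sketch of a black-box check that exercises the deployed service from
    # the outside. BASE_URL is a placeholder, not a real endpoint.
    import json
    import unittest
    from urllib.request import urlopen

    BASE_URL = "https://example.invalid/api"

    class HealthCheck(unittest.TestCase):
        def test_health_endpoint_reports_ok(self):
            with urlopen(BASE_URL + "/health", timeout=10) as resp:
                self.assertEqual(resp.status, 200)
                body = json.loads(resp.read())
            self.assertEqual(body.get("status"), "ok")

    if __name__ == "__main__":
        unittest.main()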
We all know it now as dependency hell, but what it is in fact is just a lazy shortcut for current development that will bite you down the path. Corporate software is not a problem, because corporate users don't care as long as it works now; in the future they will still rely on paid solutions that will continue working for them.

For me, I run a local mirror of Arch Linux, because I don't want to connect to the internet all the time to download a library I might need or some software I may require. I like it all here, but since I haven't updated in a while, I might see some destructive update if I were to choose to update now. This should never happen. Another thing that should never be a problem is compiling an old version of some software. Time and time again, I will find a useful piece of software on GitHub and naturally try compiling it; it's never easy. I have to hunt down the dependencies it requires, then try compiling old versions of various libraries. It's just stupid, and I wish it were easier and built smarter. Yes, sometimes I want to run old software that has no reason not to work.

When you look at Windows, it all works magically - well, it's not magic, it's just done smart. On GNU+Linux, smart thinking like this is not welcome; it never has been. Instead they rely on the huge number of people who develop this software to perpetually update their programs for no reason other than to satisfy a meaningless version number of a dependency.
What you want (download software from the net and run it) is what most distros have been trying to avoid. Instead, they vet your code, build it, and add it to a reputable repo. Because no one wants to download postgres from some random site.
Enough of the BS of "we're just volunteers" - it's fundamentally broken and the powers that be don't care. If multiple multibillion dollar entities who already contribute don't see the Linux desktop as having a future, if Linus Torvalds himself doesn't care enough to push the Foundation on it; honestly, you probably shouldn't care either. From their perspective, it's a toy, that's only maintained, to make it easier to develop the good stuff.
Desktop Linux is OK. And I think it’s all volunteer work.
My work computer with Windows on the other hand requires restarts every day or two after WSL inevitably gets into a state where vscode can't connect to it for some reason (when it gets into such a state it also usually pegs the CPU at 100% so even unlocking takes a minute or more, and usually I just do a hard power off).
I'm not a fan of bundling everything under the sun personally. But it could work if people had more discipline of adding a minimal number of dependencies that would be themselves lightweight. OR be big, common and maintain backwards compatibility so they can be deduplicated. So sort of the opposite of the culture of putting everything through HTTP APIs, deprecating stuff left and right every month, Electron (which puts the browser complexity into anything), and pulling whole trees of dependencies in dynamic languages.
This is probably one of the biggest pitfalls of Linux, saying this as someone to whom it's the sanest available OS despite this. But the root of the problem is wider, it's just the fact that we tend to dump the reduction of development costs onto all users in more resources usage. Unless some big corp cares to make stuff more economical, or the project is right for some mad hobbyist. As someone else said, corps don't really care about Linux desktop.
Too young to remember Windows 3.1 and “DLL hell?” That was worse.
Is software rot real? I'm sure, but it's not in the runtime. It's likely in the availability and compatibility of dependencies, and mainly the Node ecosystem.
People will laugh, but they should really look.
There are tons of old programs from the Windows 95-XP era that I haven't been able to get running. Just last week, I was trying to install and run point-and-click games from 2002, and the general advice online is to just install XP. There was a way (with some effort) to get them working on Windows 7. But there's no way to get them to work that I've seen on 10/11.
The reason that Blender grew from being an inside joke to a real contender is the painful re-factoring it underwent between 2009 and 2011.
In contrast, I can feel the fact that the code in After Effects is now over 30 years old. Its native tracker is slow and ancient and not viable for anything but the most simple of tasks. Tracking was 'improved' by sub-contracting the task to a sub-licensed version of Mocha via a truly inelegant integration hack.
There is so much to be said for throwing everything away and starting again, like Apple successfully did with OS X (and Steve Jobs did to his own career when he left Apple to start NeXT). However, I also remember how BlackBerry tried something similar and in the process lost most of their voodoo.
C# did just as big of a change by going from type-erased to reified generics. It broke the ecosystem in two (pre- and post- reified generics). No one talks about it, because the ecosystem was so, so tiny, no one encountered it.
"It" was fixed long before the 2.7 official sunset date. Even before the original planned date before it got extended, frankly.
Occasionally one would still encounter non-generic classes like this when working with older frameworks/libraries, which cause a fair bit of impedance mismatch with modern styles of coding. (Also one of the causes of some people's complaints that C# has too many ways of doing things; old ways have to be retained for backwards compatibility, of course.)
† https://learn.microsoft.com/en-us/dotnet/api/system.collecti...
The paper that the other commentator was referring to might be this: https://www.microsoft.com/en-us/research/wp-content/uploads/...
Python, Ruby, etc. constantly become obsolete over time; packages get removed from the central repositories and cease to work.
Obviously shell scripts contain _many_ external dependencies, but overall the semantics are well-defined even if the actual definitions are loose against the shell script itself.
P.S.: I have written a couple of Bash-script projects (mostly deployment automation scripts) that are still running; meanwhile, some others, where I was being "smart" and wrote them in Python 2.7, have unfortunately ceased to function and require upgrades...
Maybe that sounds like a dumb question because I am a beginner at coding, so I apologize, but it's something I've recently gotten into via an interest in microcontrollers. A low(er)-level language like C seems to be pretty universally sound, running on just about anything, and seems to have not changed much in a long, long time. But when I was dabbling in Python, Ruby (on Rails), and the .NET framework, I noticed what you mean, as I scooped up some old projects on GitHub thinking I'd add to them and realized it would be a chore to get them updated.
The same site has this article about "bedrock platforms", which resonates deeply with me: https://permacomputing.net/bedrock_platform/
Software does not rot; the environment around the software - the very foundation that owes its existence to the singular task of enabling the software - is what rots.
Let's look at any environment snapshot in time: the software keeps working like it always did. Start updating the environment, and the software stops working - or rather, the software works fine, but the environment no longer works.
I'm not saying never to update software, but, only do it if it increases speed, decreases memory usage, and broadens compatibility.
I like things better the way they were.
I like things better now than how they will be tomorrow.
I can't remember the last time I saw a software update that didn't make it worse.
But if you build on a bog, then you've unnecessarily introduced a whole lot of new variables to contend with.
Have we already passed the era of DON'T BREAK USERSPACE when Linus would famously loudly berate anyone who did?
I suspect Win32 is still a good target for stability; I have various tiny utilities written decades ago that still work on Win11. With the continued degradation of Microsoft, at least there is WINE.
It's not direct breakage per se (APIs were generated from definition files and there was encouragement to build new API versions when breaking APIs); the issue is that many third-party things had to be manually installed from more or less obscure sources.
Your Office install probably introduced a bunch of COM objects. Third party software that depended on those objects might not handle them being missing.
I think I took some DOS-like shortcuts with some of my early DirectDraw (DirectX 3 level?) code, afaik it doesn't work in fullscreen past Windows Vista but _luckily_ I provided a "slow" windowed GDI fallback so the software still kinda runs at least.
What I wanted to point out is that Go also supports BSDs and other kernels out of the box that implement the POSIX standard, though with slightly different syscall tables and binary formats on those target platforms and architectures.
I was referring to POSIX as a standard specification, because it also includes not only Linux's syscall table, but also various other things that you can typically find in the binutils or coreutils package, which Go's stdlib relies on. See [2.1.3] and following pages.
I guess what I wanted to say: If I would bet on long term maintenance, I would bet on POSIX compatibility, and not on one specific implementation of it.
[2.1.3] https://pubs.opengroup.org/onlinepubs/9699919799.2018edition...
Basically, compiling to Windows seems like a good enough tradeoff to run anywhere, right?
But I prefer AppImage or Flatpak because of the overhead that Wine might introduce, I suppose.
Nah. I think they share some effort and the ReactOS team adds patches to the WINE codebase, but it is a separate thing.
"It is possible to consult this wiki on port 80, that is to say using http:// instead of https://."
https://permacomputing.net/about/
"If you do not have access to git on your operating system, you can download a zip file that contains both the markdown source files and the generated HTML files, with the paths fixed. The zip file is generated once a week."
https://permacomputing.net/cloning/
http://permacomputing.net/permacomputing.net.zip
Would it be appropriate to include a digital signature, as is commonly found on mirrors?
Thought experiment: if it were standard practice to offer a compressed archive, would websites still be hammered by unwanted crawlers?
If the answer is yes, then what if we removed/denied access to the online pages and only allowed access to the compressed archive?
Unpopular targets, platforms, languages, etc don't get changed and provide a much needed refuge. There are some interpreted languages like perl where a program written today could run on a perl from 2001 and a program from 2001 would run on perl today. And I'm not talking about in a container or with some special version. I'm talking about the system perl.
Some popular languages these days can lose forwards compatibility (gain features, etc.) within just a few months, features that every dev will be using within a few more months. In these cultures software rot is really fast.
Ah yes, Windows, some niche OS for hipsters and basement-dwellers.
Cough cough vibing cough cough
I replaced a PLC a couple years ago. The software to program it wouldn't run on my laptop because it used the win16 API. It used LL-984 ladder logic, and most people who were experts in that have retired. It's got new shiny IEC-compliant code now, and next they're looking at replacing the Windows 2000 machines they control it with. Once that's done, it'll run with little to no change until probably 2050.
You seem to assume browsers have stopped changing and will be more or less the same 75 years from now.
I think you are right that that code might run. But probably in some kind of emulator. In the same way we deal with IBM mainframes right now. Hardware and OS have long since gone the way of the dodo. But you can get stuff running on generic linux machines via emulation.
I think we'll start seeing a lot of AI driven code rot management pretty soon. As all the original software developers die off (they've long been retired); that might be the only way to keep these code bases alive. And it's also a potential path to migrating and modernizing code bases.
Maybe that will salvage a few still relevant but rotten to the core Javascript code bases.
There are some _very_ rare exceptions, but they're things like "support for subclassing TypedArrays", and even then this is only considered after careful analysis to ensure it's not breaking anyone.
(S3 better example than Dropbox. That will mostly be around forever.)
Rot is directly proportional to the number of dependencies. Software made responsibly, with long-term thinking in mind, has dramatically fewer issues over time.
On the one hand, software is like a living thing. Once you bring it into this world, you need to nurture it and care for it, because its needs, and the environment around it, and the people who use it, are constantly changing and evolving. This is a beautiful sentiment.
On the other hand, it's really nice to just be done with something. To have it completed, finished, move on to something else. And still be able to use the thing you built two or three decades later and have it work just fine.
The sheer drudgery of maintenance and porting and constant updates and incompatibilities sucks my will to live. I could be creating something new, building something else, improving something, instead, I'm stuck here doing CPR on everything that I have to keep alive.
I'm leaning more and more toward things that will stand on their own in the long-term. Stable. Done. Boring. Lasting. You can always come back and add or fix something if you want. But you don't have to lose sleep just keeping it alive. You can relax and go do other things.
I feel like we've put ourselves in a weird predicament with that.
I can't help but think of Super Star Trek, originally written in the 1970s on a mainframe, based on a late 1960s program (the original mainframe Star Trek), I think. It was ported to DOS in the 1990s and still runs fine today. There's not a new release every two weeks. Doesn't need to be. Just a typo or bugfix every few years. And they're not that big a deal. -- https://almy.us/sst.html
I think that's more what we should be striving for. If someone reports a rare bug after 50 years, sure, fix it and make a new release. The rest of your time, you can be doing other stuff.
In 20-30 years there’s a good chance that what you’ve written will be obsolete regardless - even if programs from 1995 ran perfectly on modern systems they’d have very few users because of changing tastes. A word processor wouldn’t have networked collaborative editing (fine for GRRM though), an image editor wouldn’t have PNG support, and they wouldn’t be optimised for modern hardware (who would foresee 4K screens and GPUs back then - who knows how we’ll use computers in 2055).
There are also always containers if the system needs those old versions.
1980s NES software is "easy" as in emulating a CPU and the associated hardware (naturally there are corner cases in emulation timing that makes it a lot harder, but it's still a limited system).
I used to make demos as mentioned in the article, the ones I did for DOS probably all work under DosBox. My early Windows demos on the other hand relied on a "bad" way of doing things with early DirectDraw versions that mimicked how we did things under DOS (ie, write to the framebuffer ourselves). For whatever reason the changes in Vista to the display driver model has made all of them impossible to run in fullscreen (luckily I wrote a GDI variant for windowed mode that still makes it possible to run).
Even worse is some stuff we handle for an enterprise customer. Crystal Reports was once endorsed by Microsoft and AFAIK included in Visual Studio installs. Nowadays it's abandoned by MS and almost by its owner (SAP); we've tried to maintain a customized printing application for a customer, relying on obscure DLLs (and even worse, the SAP installer builder for some early-2000s install technology that hardly works with modern Visual Studio).
Both these examples depend on libraries being installed in a full system. Sure, one could containerize the needed ones, but looking at the problem with an archivist's eyes, building custom Windows containers for thousands of pieces of software isn't going to be pretty (or even feasible in a legal sense, both with copyright and activation systems).
Now you could complain about closed source software, but much of the slightly more obscure *nix software has a tendency to exhibit a strong "works on my machine" mentality; configure scripts and Docker weren't invented in a vacuum.
o_O
It's a great feeling knowing any tool I write in Elisp will likely work for the rest of my life as is.
LLMs have also made reading and writing shell code much easier.
There's no reason such a "bedrock platform"♢ needs to be a shitty pain in the ass like the IBM PC or NES (the examples on https://permacomputing.net/bedrock_platform/). Those platforms were pragmatic tradeoffs for the existing hardware and fabrication technology in the market at the time, based on then-current knowledge. We know how to do much better tradeoffs now. The 8088 in the IBM PC was 29000 transistors, but the ARM 2 was only 27000 transistors†. Both could run at 8 MHz (the 8088 in its later 8088-2 and 80C88 incarnations), but the ARM 2 was a 32-bit processor that delivered about 4 VAX MIPS at that speed (assuming about ½DMIPS/MHz like the ARM3‡) while the 8088 would only deliver about 0.3 VAX MIPS (it was 0.04DMIPS/MHz). And programming for the 8088's segmented memory model was a huge pain in the ass, and it was crippled by only having 20 address lines. 8088 assembly is full of special-purpose registers that certain instructions have to use; ARM assembly is orthogonal and almost as high-level as C.
Same transistor count, same clock speed, dramatically better performance, dramatically better programming experience.
Similarly, Unix and Smalltalk came out about the same time as Nova RDOS and RT-11, and for literally the same machines, but the power Unix and Smalltalk put in the hands of users far outstripped that of those worse-designed systems.
So, let's put together a bedrock platform that we could actually use for our daily computing in practice. Unlike the NES.
______
♢ Stanislav Datskovskiy's term: http://www.loper-os.org/?p=55
† https://en.wikipedia.org/wiki/Transistor_count
‡ https://netlib.org/performance/html/dhrystone.data.col0.html but note that the ARM3 had a cache, so this depends on having RAM that can keep up with 8MHz. Both the ARM2 and ARM3 were mostly-1-instruction-per-clock pipelined RISCs with almost exactly the same instruction set. https://en.wikipedia.org/wiki/Acorn_Archimedes, https://www.onirom.fr/wiki/blog/21-04-2022_Acorn-Archimedes/, and its twin https://wardrome.com/acorn-archimedes-the-worlds-first-risc-... confirm the 8MHz and 4MIPS.
The AArch64 is wacky in its own, different, way. For example, loading a constant into a register, dealing with an offset to an index, etc. It also has special purpose registers, like the zero register.
The PDP-11 architecture remains the best gem of an orthogonal instruction set ever invented.
The PDP-11 seems pleasant and orthogonal, but I've never written a program for it, just helped to disassemble the original Tetris, written for a Soviet PDP-11 clone. The instruction set doesn't feel nearly as pleasant as the ARM: no conditional execution, no bit-shifted index registers, no bit-shifted addends, only 8 registers instead of 16, and you need multiple instructions for procedure prologues and epilogues if you have to save multiple registers. They share the pleasant attribute of keeping the stack pointer and program counter in general-purpose registers, and having postincrement and predecrement addressing modes, and even the same condition-code flags. (ARM has postdecrement and preincrement, too, including by variable distances determined by a third register.)
The PDP-11 also wasn't a speed demon the way the ARM was. I believe that speed trades off against everything, and I think you're on board with that from your language designs. According to the page I linked above, a PDP-11/34 was about the same speed as an IBM PC/XT.
Loading a constant into a register is still a problem on the ARM2, but it's a problem that the assembler mostly solves for you with constant pools. And ARM doesn't have indirect addressing (via a pointer in memory), but most of the time you don't need it because of the much larger register set.
The ARM2 and ARM3 kept the condition code in the high bits of the program counter, which meant that subroutine calls automatically preserved it. I thought that was a cool feature, but later ARMs removed it in order to support being able to execute code out of more than just the low 16 mebibytes of memory.
Here's an operating system I wrote in 32-bit ARM assembler. r10 is reserved for the current task pointer, which doesn't conform to the ARM procedure call standard. (I probably should have used r9.) It's five instructions:
.syntax unified
.thumb
.fpu fpv4-sp-d16
.cpu cortex-m4
.thumb_func
yield: push {r4-r9, r11, lr} @ save all callee-saved regs except r10
str sp, [r10], #4 @ save stack pointer in current task
ldr r10, [r10] @ load pointer to next task
ldr sp, [r10] @ switch to next task's stack
pop {r4-r9, r11, pc} @ return into yielded context there
http://canonical.org/~kragen/sw/dev3/monokokko.S
The -11 could do things like:
mov (PC)+,R0
where the PC+ addressing mode picked the constant out of the next 16 bits in the instruction scheme. It's just brilliant.

Yeah, with (PC)+ (27), you didn't need a separate immediate addressing mode where you tried to stuff an operand such as 2 into the leftover bits in the instruction word; you could just put your full-word-sized immediate operands directly in the instruction stream, the way you did with subroutine parameters on the PDP-8. And there was a similar trick for @(PC)+ (37) where you could include the 16-bit address of the data you wanted to access instead of the literal data itself. But that kind of thing, plus the similarly powerful indexed addressing modes (6x and 7x), also meant that even the instruction decoder in a fast pipelined implementation of the PDP-11 instruction set would have been a lot more difficult, because it has to decode all the addressing modes—so, AFAIK, nobody ever tried to build one.
And different kinds of PC-relative addressing is basically the only benefit of making the PC a general-purpose register; it's really rare to want to XOR the PC, multiply it, or compare it to another register. And it cost you one of the only eight registers.
And you still can't do ARM things like
@ if (≥) r2 := mem[r0 + 4*r1]
ldrge r2, [r0, r1, lsl 2]
@ if (≤) { r2 := mem[r0]; r0 += 4*r1; }
ldrle r2, [r0], r1, lsl 2
@ store four words at r3 and increment it by 16
stmia r3!, {r0, r1, r7, r9}
@ load the first and third fields of the three-word
@ object at r3, incrementing r3 to point to the next object
ldr r1, [r3]
ldr r2, [r3, #8]!
A lot of the hairier combinations have been removed from Thumb and ARM64, including most of conditional execution and, in ARM64, ldm and stm. Those probably made sense as instructions when you didn't have an instruction cache to execute instructions out of, because a single stm can store theoretically 16 registers in 17 cycles, so you can get almost your full memory bandwidth for copying and in particular for procedure prologues and epilogues, instead of wasting half of it on instruction fetch. And they're very convenient, as you saw above. But nowadays you could call a millicode subroutine if you want the convenience.

All these shenanigans (both PDP-11 and ARM) also make it tricky to restart instructions after a page fault, so AFAIK the only paged PDP-11 anyone ever built was the VAX. A single instruction can perform up to four memory accesses or modify up to two registers, which may be PC (with autoincrement and decrement), as well as modifying a memory location, which could have been one of the values you read from memory—or one of the pointers that told you where to read from memory, or where to write. Backing out all those state changes successfully to handle a fault seems like a dramatic amount of complexity and therefore slowness.
I'm aware that I'm talking about things I don't know very much about, though, because I've:
- never programmed a PDP-11;
- never programmed a PDP-8;
- never programmed in VAX assembly;
- never designed a pipelined CPU;
- never designed a CPU that could handle page faults.
So I could be wrong about even the objective factors—and of course no argument could ever take away your pleasure of programming in PDP-11 assembly.
https://github.com/DigitalMars/Empire-for-PDP-11
but I have little knowledge of how the CPU works internally. One could learn the -11 instruction set in a half hour, but learning the AArch64 is a never-ending quest. 2000 instructions!
If I see a library which is solving a simple problem but it uses a lot of dependencies, I usually don't use that library. Every dependency and sub-dependency is a major risk... If a library author doesn't understand this, I simply cannot trust them. I want the authors of my dependencies to demonstrate some kind of wisdom and care in the way they wrote and packaged their library.
I have several open source projects which have been going for over a decade and I rarely need to update them. I was careful about dependencies and also I was careful about what language features I used. Also, every time some dependency gave me too much trouble I replaced it... Now all my dependencies are highly stable and reliable.
My open source projects became a kind of Darwinian selection environment for the best libraries. I think that's why I started recognizing the names of good library authors. They're not always super popular, but good devs tend to produce consistent quality and usually get better with time. So if I see a new library and I recognize the author's name, it's a strong positive signal.
It feels nice seeing familiar niche names come up when I'm searching for new libraries to use. It's a small secret club and we're in it.
Maybe software written in the age of DOS was relatively trivial compared to modern tools. Maybe there's a benefit to writing code in Rust rather than C89.
> while those written for e.g. Linux will likely cease working in a decade or two
there's nothing to support this claim in practice. linux is incredibly stable
Any time I tried to run it afterwards - I had to recompile it, and a few times I had to basically port it (because libparagui got abandoned and some stuff in libc changed so I couldn't just compile the old libparagui version with the new libc).
It's surprisingly hard to make linux binaries that will work even 5 years from now, never mind 10-20 years from now.
For comparison I still have games I've written for DOS and windows in 90s and the binaries still work (OK - I had to apply a patch for Turbo Pascal 7 200 MHz bug).
The assumptions around linux software is that it will be maintained by infinite number of free programmers so you can change anything and people will sort it out.
Of course with Apple ecosystem you need relatively recent hardware and have to pay the annual developer program fee, but the expectation is still the same: if you release something, you should keep it up-to-date and not just "fire and forget" like in Windows and expect it to work. Maybe Windows is the anomaly here.
i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself. gui/games in particular on linux is not an area that shares this same stability yet
Yeah, if you compile all the dependencies and dependencies of dependencies and so on statically. Which is a fun experience I've tried several times (have fun trying to statically compile gfx drivers or crypto libraries for example), wasted a few evenings, and then had to repeat anyway because the as-static-as-possible binaries do stop working sometimes.
There's a reason linux distributions invented package managers.
> i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself.
We're talking software rot, not uptime? It's exactly about software stopping working on newer systems.
The rot is real but we have a way to run Linux (and any really) software in 40 years. For example:
FROM alpine:3.14
...
Just as I can run thousands of games from the 80s on my vintage CRT arcade cab using a Pi and MAME (with a Pi2JAMMA adapter), I'll be able to run any OCI container in 30 years.

The issue of running old software is solved: we've got emulators, VMs, containerization, etc.
Sure they may lack security upgrades but we'll always be able to run them in isolation.
The rot is not so much that the platform won't exist in the future or that old libs would mysteriously not be available anymore: the problem is that many software aren't islands. It's all the stuff many need to connect to that's often going to be the issue.
For games that'd be, say, a game server not available anymore. Or some server validating a program to make sure its license has been paid. Or some centralized protocol that's been "upgraded" or fell into irrelevancy. Or some new file format that's now (legitimately or not) all the shit.
Take VLC: there's not a world in which in 40 years I cannot run a VLC version from today. There's always going to be a VM or, heck, even just an old Linux version I can run on bare metal. But VLC ain't an island either: by then we'll have coolmovie-48bits.z277 where z277 is the latest shiny video encoding format.
At least that's where I see the problem when running old software.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.