My hypothesis is that today's "modern" OS user interfaces are objectively worse from a usability perspective, obfuscating key functionality behind layers of confusing menus.
It reminds me of these "OS popularity since the 70s" time lapse views:
https://youtube.com/watch?v=cTKhqtll5cQ
The dominance of Windows is crazy; even today, Mac desktops and laptops are comparatively niche.
There was no one who could help me when I got stuck, beyond maybe an instruction manual. I just had to figure it out, mostly by trial and error. I learned so much, eventually being able to replace hardware, install and upgrade drivers, re-install the entire OS and partition the hard drive, figure out networking and filesystems. It built confidence.
Now my kid sits in front of an OS (Windows, Mac, it doesn't really matter) and there's so much noise. Things popping up, demanding attention. Scary-looking warnings. So much choice. There are so many ways to do simple things. Actions buried deep within menus. They have no hope of building up a mental model or understanding how the OS connects them to the foundations of computing.
Even I'm mostly lost now if there's a problem. I need to search the internet, find a useful source, filter out the things that are similar to my problem but not the same. It isn't rewarding any more, it's frustrating. How is a young child meant to navigate that by themselves?
This looks like a step in the right direction. I look forward to testing it out.
This is one of my biggest frustrations with modern GUI computing. It's especially bad with Windows and Office, but it happens on iOS and macOS too to an extent. Even though I've had Office installed for weeks I still get a "look over here at this new button!" pop-up while I'm in the middle of some Excel task. Pop-up here, pop-up there. It's insane the number of little bubbles and pop-ups and noise we experience in modern computing.
Definitely. Bitwarden does this, ironically. It will pop up "UPDATE AVAILABLE!" halfway through typing a passphrase. Why not suppress the pop-up if the user is typing, or make it non-modal? Every few days I am interrupted just trying to unlock a vault.
For example, on Discord, I sometimes see people with unreads for every server, and dozens to hundreds of completely unread DMs, just because they don't know that you can turn notifications off for the stuff you don't care about. Instead of doing that they just learned to ignore everything, leading to a disorganized mess.
I'm somewhat familiar with what typically leads to this (usually something like ADHD), but when you let it go for so many years it's such a big task to fix it that the fixing never happens and you're just kind of screwed for eternity.
In Discord, I have no unread servers and no unread DMs, despite being at the server limit. This is because all my servers are completely silenced and all my DMs are read immediately. My only unread email is one I marked as such because I still plan to reply to it soon. I have the attention for every single notification because I aggressively optimize the notifications I receive to the point where they all are typically things I care about. Back in the day I would instantly report every email to SpamCop, typically in under one minute, but I eventually stopped doing that because there's no point.
I simultaneously do and don't understand people who just submit to a flood of irrelevant garbage. Control it!!
Leading to notifications being ignored, and not mattering at all.
A well curated set of notifications that only gives you the things you actually need is superb, but incredibly difficult to get right.
Learning how to ignore all of the noise is probably a more valuable skill for a future where control is slowly wrestled away from the user. A certain Black Mirror episode (Fifteen Million Merits) comes to mind.
You found my point! Some notifications may be important, but if one has learned to ignore all notifications, then they won't catch the important ones anymore.
When I get important notifications, I can act on them immediately, because I have not learned to ignore all my notifications, because I don't need to. I have taken care to block any notifications I don't care about, leaving only the ones that I do.
> A well curated set of notifications that only gives you the things you actually need is superb, but incredibly difficult to get right.
Ehh... maybe it's incredibly difficult to fix once you are already drowning in them, but since I immediately nuke anything I don't like from orbit, and have done so my whole life, there are a lot fewer things I don't like than there would be if I hadn't.
> Learning how to ignore all of the noise is probably a more valuable skill for a future where control is slowly wrestled away from the user. A certain Black Mirror episode (Fifteen Million Merits) comes to mind.
I have a really hard time ignoring noise. I think that is because of my autism. Ignoring noise would be a nice skill, I guess, but I feel significantly better when the noise is simply not there in the first place. I'd imagine most would, but for some reason I seem to break a lot more easily when uncomfortable.
> I have a really hard time ignoring noise. I think that is because of my autism. Ignoring noise would be a nice skill, I guess, but I feel significantly better when the noise is simply not there in the first place. I'd imagine most would, but for some reason I seem to break a lot more easily when uncomfortable.
I share this discomfort, but only in physical space. I struggle with visual and audible noise in the real world, but my screen I’ve become very adept at ignoring. Giving up on inbox-zero in my personal life and ending up with 2000+ unread emails in my personal mailbox is where that started.
I unfortunately don't share this ease. As an example, I have to use custom CSS to completely remove blocked messages from Discord, because once I know a blocked message exists I can not seem to ignore it. The only way to properly protect myself from blocked users is to ensure that I can never even know they're there.
I do a similar thing for muted DMs. Once I know that someone has sent a message, I cannot seem to avoid checking the message. The only way for me to properly take a break is to ensure that there's no way for me to accidentally discover the existence of new messages. So I have some JavaScript that completely removes muted DMs from the list so that I won't even see them jumping back up to the top every new message.
I don't know what causes this to happen or how to fix it, but I do know that ignoring things has always been nearly impossible for me. Whenever there is anything I need to protect myself from, I need to also protect myself from ever noticing any related activity.
Maybe this is some sort of OCD, I'm not sure. Pretty sure it would meet the criteria, I guess.
Again, it is a lot of work if you have to do it at once. If you've done it naturally over the course of your life, you would never have had a big pile of things to take care of at once.
> Alternatively, I can just ignore them all. I don’t care if there’s a red circle with a number in it. I don’t notice it and it doesn’t bother me.
Suit yourself. Personally, I care about responding promptly to certain things, like instant messages or emails, but I don't appreciate unwelcome distractions.
See: https://vxtwitter.com/AutisticCallum_/status/190533113360614... & https://vxtwitter.com/AutisticCallum_/status/190533113707064...
Sure, if I’m being paid to respond right away then I will. The “instant” in “instant message” describes the delivery time, not the required response time. Email and IM are things I treat as asynchronous communications. Anything urgent should be a phone call, ideally. If there’s a particular email address I consider important I’ll make a separate inbox for it.
Otherwise, I find it sort of fun to see how big the number can get.
On my work email I just mark everything as read at the end of the day so the number starts over each day.
Unnecessary notifications hurt us all. I wonder how many people have turned off the emergency/amber alerts because of non-critical use of the system?
Since the 90s we've found "better" ways of "curing" our boredom. Put this UI on a modern OS in front of a kid today and they would just download Steam, Chrome, and Discord. And be assured, they're very proficient in the ins and outs of those platforms.
Just some random thoughts I had, not sure any of it tracks...
Take that away so you have a standalone computer that you "run programs on" and it becomes simpler.
As a young teenager in the early-mid 2000s, I learned the hard way what the little standoffs are for by killing a motherboard by screwing it directly into the steel case :')
Never made that mistake again, that's for sure. And I share all the same experiences as yourself
The first mobo I ever purchased with my own money was insta-fried exactly like that; it still hurts a little to think about.
Maybe they are taught in CS school that every layer of abstraction is better. I don't see any other explanation for this level of stupidity.
These kinds of things almost always give me an uncanny-valley feeling. Here I'm looking at the screenshot and can’t help noticing that the taskbar buttons are too close to the taskbar’s edge, the window titles are too narrow, the folders are too yellow, and so on and so forth. (To its credit, Wine is the one exception that is not susceptible to this, even when configured to use a higher DPI value so the proportions aren’t actually the ones I’m used to.) I’m not so much criticizing the theme’s authors as wondering why this is so universal across the many replicas.
The problem is that the interfaces these bootleg skins draw "inspiration" from were designed on the back of millions of pre-inflationary dollars' R&D from only the best at Golden-Age IBM, Microsoft, Apple, etc.. BeOS, OS/2, Windows 95-2000 do not look the way they do because it looks good, they look the way they do because it works good, countless man hours went into ensuring that. Simply designing an interface that looks similar is not going to bring back the engineering prowess of those Old Masters.
As a (weak) counterpoint to supplicating ourselves to the old UI masters, I submit Raymond Chen’s observations from 2004[1] that the flat/3D/flat cycle is largely fashion, e.g. how the toolbars in Office 97 (and subsequent “coolbars”) had buttons that did not look like buttons until you hovered over them, in defiance of the Windows 95 UI standard. (Despite Chen’s characteristic confident tone, he doesn’t at all acknowledge the influence of the limited palettes of baseline graphics adapters on the pre-Win95 “flat” origins of that cycle.)
Also worth noting are the scathing critiques of some Windows 95 designs[2,3] in the Interface Hall of Shame (2000). I don’t necessarily agree with all of them (having spent the earlier part of my childhood with Norton Commander, the separate folder/file selectors in Windows 3.x felt contrived to me even at the time) but it helps clear up some of the fog of “it has always been this way” and remember some things that fit badly at first and never felt quite right (e.g. the faux clipboard in file management). And yes, it didn’t fail to mention the Office 97 UI, either[4,5]. (Did you realize Access, VB, Word, and IE used something like three or four different forks of the same UI toolkit, “Forms3”, among them—a toolkit that looked mostly native but was in fact unavailable outside of Microsoft?..)
None of that is meant to disagree with the point that submitting to the idea of UI as branding is where it all went wrong. (I’ll never get tired of mentioning that the futuristic UI of the in-game computers of the original Deus Ex, from 2000, supported not only Tab to go between controls and Enter and Esc to submit and dismiss, but also Alt accelerators, complete with underlined letters in the labels.)
[1] https://devblogs.microsoft.com/oldnewthing/20040728-00/?p=38...
[2] http://hallofshame.gp.co.at/file95.htm
[3] http://hallofshame.gp.co.at/explore.htm
It's right in the second sentence: "...Windows 1.0, which looked very flat because... color depth was practically non-existent."
I think that's because they used the stock UI toolkit of the original Unreal Engine, which also had all these things. If you recall, UT'99 actually had a UI more like a desktop app at the time, complete with a menu bar and tabbed dialogs:
http://hw-museum.cz/data/article/VGA-Benchmarks/Benchmark-VG...
In modern times telemetry can show how well new designs work. The industry never forgot how to measure and do user research for UI changes. We've only gotten better at it.
- With a full set of metrics, we're now designing toward the bottom half of the bell curve, i.e., toward the users who struggle the most. Rather than building UIs which are very good but must be learned, we're now building UIs which must suit the weakest users. This might seem like a good thing, but it's really not. It's a race to the bottom, and it robs those novice users of ever having the chance to become experts.
- Worse, because UIs must always serve the interests of the bottom of the bell curve, this actually is why we have constant UI churn. What's worse than a bad UI? 1,000 bad UIs which each change every 1-6 months. No one can really learn the UIs if they're always churning, and the metrics and the novice users falsely encourage teams to constantly churn their UIs.
I strongly believe that you'd see better UIs either with far fewer metrics, or with products that have smaller, expert-level user bases.
1. Which design is most effective at steering the most users to the most lucrative actions
2. What looks good in screenshots, presentations, and marketing
The rest is tertiary or an afterthought at best. Lots of modern UI is actually pretty awful for those mentioned bottom of the bell curve users and not much better for anybody else in terms of being easy to use or serving the user’s needs.
Proper use of analytics might be of assistance here, but those are also primarily used to figure out the most profitable usage patterns, not what makes a program more pleasant or easy to use. They’re also often twisted or misused to justify whatever course of action the PM in question wants to take, which is often to degrade the user experience in some way.
But product managers inside the large corporations can't get promoted for merely maintaining the status quo. So they push for "reimagining" projects, like Google's "Material Screw You" UI.
And we get a constant treadmill of UI updates that don't really make anything better.
The goal in 1995 might be "The user can launch the text editor, add three lines to a file, and save it from a fresh booted desktop within 2 minutes".
The goal in 2015 might be "we can get them from a bare desktop to signing up for a value-add service within 2 minutes"
I'd actually be interested if there's a lot of "regression testing" for usability-- if they re-run old tests on new user cohorts or if they assume "we solved XYZ UI problem in 1999" and don't revisit it in spite of changes around the problem.
The answer to your question is that these replicas are of low quality. This one looks like the whole thing was made by someone (or a committee of people) lacking attention to detail.
Still it is hopefully a nice introduction for some.
The whole UI as branding thing has utterly killed usability.
This is caused by a change in who is hired as UI/UX developers. In days past it was HCI experts and engineers; now it's graphic designers. "Pretty" is the order of the day, not "useful". "There are too many menu items" is now answered with "So let's hide them" when it used to be "How can we organize them in the UI in a simple, discoverable manner?" But then that "overflow" menu (really? Needed menu commands are now OVERFLOW?) gets crowded, so they start just removing features so the UI stays nice.
„UX/UI developers“ is a strange name for it.
In the 2000s the web enabled more sophisticated presentation designs and there was a push from client-server to web-based applications using incredibly strange technologies for building UIs — HTML, CSS and JavaScript — which gave rise to UX design as an interdisciplinary job (HCI + digital graphic design). By 2010 the internet of applications kicked off, and in the mid-2010s it moved to mobile, dramatically increasing the demand for UX designers. By then it actually mattered more who was hiring designers, not who was hired. Since only a relatively small fraction of hiring managers understands the scope of this job even now, they even started calling it „UX/UI designers“ or „Product designers“ as if that name change could help, still judging design work by often-fake screenshots on Behance rather than by case studies in the portfolio. Even HCI professionals are often reduced to mere graphic designers by those managers, who skip research and apply „taste“ to a science-based discipline. At the same time, since UX design is one of the best-paid and least stressful creative jobs, a lot of people switched to it without proper education or experience, having no idea what statistical significance is or how to design a survey. And voila, we are here.
I agree. It's a UI Engineer. User Experience is just the fluffification of the title to something that sounds expansive and nebulous when it's actually pretty focused and critical.
Old style UI was developed with the findings of countless man-hours of UX research performed by field experts, while branded UI is typically whipped together purely based on trends and vibes in an evening by a visual designer who’s probably never performed an ounce of serious research or user trials. It’s natural that the latter is only going to be good at the most superficial of purposes. UI as branding is the McMansion of UX.
One of my favorite examples is tree-style lists (“outline views” in AppKit nomenclature). On macOS these have a very convenient functionality where holding down option while expanding/collapsing a section performs that action on all children as well, and it’s practically never implemented in custom-built tree widgets even in cases where the primary audience skews Mac-heavy.
Not so much anymore. The abandonment of any coherent organizing principle to UI layout in favor of pure aesthetics has been a massive regression. Reasonably complex software often doesn't even include menu bars anymore, for example.
https://www.goodreads.com/quotes/21810-it-is-difficult-to-ge...
Imagine if Active Desktop had taken over.
I eventually came up with a not-awful use for AD but that was a few years after it went away.
Eventually, I stopped using it because: 1- it was always annoying to send a screenshot to someone and have to explain that no, I wasn't using Windows 95, and why; 2- the grey-ish look of everything started to bother me over time; 3- I wanted a more integrated desktop experience and moved to KDE Plasma. Still, I configured my Plasma to work like old Windows: window titles on the taskbar, zero to no animations, etc.
Same as you say, people have asked me a lot about it and even asked me if I could set it up for them. The theme is evangelizing Linux a little bit, and that is interesting. In the right hands, these UI principles could convert many people to some product.
P.S. You can now change the grey-ish look with Win95-style theming support. I've not used it, but here's more info: https://github.com/grassmunk/Chicago95/blob/master/Plus/READ...
I think this started with Vista. I remember watching a video criticizing the new love of glass effects on UI chrome, as it got rid of or minimized the color/shading difference between focused and unfocused windows. The example the video used was six Notepad windows, asking you to pick which one was focused, and the main cue you'd need to look for was that the window with focus had its close button colored red.
Thin borders and minimalist/hiding scrollbars is another one that annoys me, give me something graphical for my gaze to grasp.
> it was always annoying to send a screenshot to someone and have to explain that no, I wasn't using Windows 95
That's not a negative, that's a fringe benefit as an endless source of entertainment.
In my opinion, nothing beats the 35-year-old NeXTSTEP interface (which W95 is a weak imitation of):
I did like some Windows things, though, like the ribbon, and reconfigurable UIs. Today's UIs are more immutable, for the worse.
The replacement to the registry seems to half be "magic CLI incantations for settings which can't be found in the GUI for some reason" and half "here's a $4.99 app to 3 finger tap to close tabs".
Here are some KB articles to check out for context:
- https://helparchive.huntertur.net/document/105563
It’s also reasonable to back up plists and/or sync them between machines like some users do with their dotfiles, because they’re just files.
Which has a direct equivalent in "reg" files, to be quite honest.
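A rough sketch of the round trip on both sides, for anyone curious (com.apple.finder is a real domain; the Windows key name below is just a made-up example):

    # macOS: dump a preference domain to a plist you can version or sync
    defaults export com.apple.finder ~/dotfiles/finder.plist
    # ...and pull it back in on another machine
    defaults import com.apple.finder ~/dotfiles/finder.plist

    :: Windows: the .reg equivalent of the same round trip
    reg export "HKCU\Software\SomeApp" someapp.reg
    reg import someapp.reg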
[1] Other than restoring time machine backup to another system or similar cloning setups
The problem is that way too many people approach macOS with the Windows way of doing things firmly planted in their minds as “correct”, which interferes with this process. For example, over the years I’ve encountered numerous posters complaining about how macOS can’t do X thing, after which I point out that X thing is right there as an easy to find top level menu item, but the poster in question never bothered to take a look around and just assumed the functionality didn’t exist since it wasn’t surfaced the same way as under Windows or KDE or whatever they were coming from.
Of course there are things macOS just doesn’t do, but there’s plenty that it does if users are willing to set their preconceptions aside for a moment.
I'd kill to do dev on a linux machine, but alas it's not company policy :(
Ironically, in the long run, it has proven to be an asset for the simple reason that any macOS app has to have a main menu with commands in it, if it doesn't want to look silly. So this whole modern trend of replacing everything with "hamburger" menus that don't have the functionality isn't killing UX quite so bad there.
Although some apps - Electron ones, especially - stick a few token options there, and then the rest still has to be pixel-hunted in the window. Some don't even put "Settings" where it's supposed to be (under the application menu). Ugh.
MacOS seems to _require_ some unergonomic combination of both from the get go. Some basic things are easy with the keyboard but hard/impossible with the mouse and vice versa. The Finder app doesn't even have a button to go 'up' a directory for god's sake.
I'd kill for this on Windows or any mainstream Linux DE.
For the case of the up button, for example: prior to OS X, the Finder was a spatial file manager where each folder had a single corresponding window that remembered its position on screen, allowing users to rely on spatial memory to quickly navigate filesystems. Its windows didn’t even have toolbars, because they weren’t navigator windows — every time you opened a folder you got a new window (unless that folder’s window was already open, in which case it was foregrounded).
So when OS X rolls around in ~2000 and switches the Finder to navigator windows, they’re looking at what existing users will find familiar. Back/forward is easy, since most had used a web browser by that point and it maps cleanly to most people’s mental models (“I want to go back”), but up? That’s a lot more rare. A handful of folks who’d used an FTP client might’ve been familiar with the concept, but few outside that group would’ve been, and how “up” relates to a filesystem is not in any way obvious. And so, the Finder never got an up button, just a key shortcut, because anybody advanced enough to be hunting down shortcuts is going to understand the notion of “up” in a filesystem.
If I recall correctly, when I got my first MacBook, I had to edit plist files or something similar in order to do basic things like permanently showing hidden files, showing the full path in Finder, showing file extensions for all file types, increasing the animation speed so the computer didn't feel slow as molasses, etc., etc.
Maybe these things are now easier to configure via GUI on macOS?
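For the record, the incantations I remember were along these lines — the exact keys are from memory, so double-check them before relying on this:

    # show hidden files in Finder
    defaults write com.apple.finder AppleShowAllFiles -bool true
    # show the full POSIX path in the Finder title bar
    defaults write com.apple.finder _FXShowPosixPathInTitle -bool true
    # show extensions for every file type
    defaults write NSGlobalDomain AppleShowAllExtensions -bool true
    # relaunch Finder so the changes take effect
    killall Finder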
It still has "Services" as a hold over from Next that is completely broken and unused (but still present in every app for some reason). Now you also have the joy of diving deep into the Settings every time an app needs some sort of permission.
I'd say something about .DS_Store files, but that's not really UI.
I don't believe you. You have never, EVER, NOT ONCE run a terminal command to change an option on MacOS? I just refuse to believe anyone on HN hasn't altered preferences in the terminal on MacOS.
This is the Apple secret of success, IMO. No 3rd-party drivers and hardware means it will just work, and no one will blame you for stuff 3rd parties messed up.
But it's also like: there are only red and blue t-shirts. Choose. No gray, no white, no yellow, no prints.
I think if you went back and actually tried to use these old UIs you would realize that one of the reasons stuff isn't hidden behind layers of menus is that in a lot of cases the 'hidden' features just didn't exist back then.
I have a harder and harder time navigating both iOS and Android as time goes on; it should be the opposite.
Same for Windows or MacOs.
I was actually surprised that macOS/Linux/ChromeOS together are >20% of all desktop/laptops. I would have expected Microsoft machines to be closer to 90% than 80%.
---
For desktop computers and laptops, Microsoft Windows has 71%, followed by Apple's macOS at 16%, unknown operating systems at 8%, desktop Linux at 4%, then Google's ChromeOS at 2%.[3][4]
---
[3] "Now more than ever, ChromeOS is Linux with Google's desktop environment". About Chromebooks. 1 August 2023. Retrieved 25 September 2024.
[4]"Desktop Operating System Market Share Worldwide". StatCounter Global Stats. Retrieved 9 March 2025.
It's a lot of time, effort, and expense to run user research, but the potential benefits to the users are big.
Windows 95 was not stable enough back then. I believe Windows 2000 was the first OS that was easy to use and relatively stable. XP and 7 are both solid options too.
I have used both MacOS X and modern MacOS (15) and the UI of X is definitely way better than the UI of 15. It is more clean cut.
In any case though, this particular attempt at giving a complete Windows 95 experience is quite cool.
But for whom and to do what? People around me, like my wife and mother in law, are happy with an Android phone. If they use a computer, the only thing they need is a browser.
I wouldn't be surprised if 90%+ of all Windows users would use Windows for one thing: click on a browser icon.
That's why after one malware too many I confiscated my mother-in-law's Windows laptop and got her a Chromebook.
I'd say that's why the shit UI/UX doesn't even matter anymore for most users: they only use a browser anyway.
I think it's more about change management and expectations. For example, in Win XP you had the option to use the NT theme. As a user: "I can decide when to move on to the new design."
Usually around 50% of your users are conservative about change. You have to keep this in mind when you change the design. On the other hand, if you sell a product with a subscription, you have to introduce new features, or users will move to another product. But when you introduce new features, the UI gets more complicated and users will blame you for that.
Like making buttons auto-hide unless you mouse-over them. I don't remember when this came in, but the default PDF viewer in something did this, and I spent _weeks_ being baffled that some jerk made a PDF viewer that couldn't zoom in on the page, until I randomly waggled the mouse for some reason and the missing buttons magically appeared. I have no words for how upsetting this was.
Like having icons-only for many functions, with no text-and-icons or text-only option to replace them. I'm sure some people are fine with that, but other people can scan a screen for a desired word MUCH faster than they can scan for a desired icon, and removal of text labels is just an insult to that segment of the userbase.
Like no longer highlighting, or even having, hotkeys for many menus. I can alt-space or alt-menukey my way through a late-90s menu tree _way_ faster than I can mouse through it, even with today's better mice, but that simply doesn't work anymore in a great many programs.
It's one thing for people who've never known a different UI to just be slow in this one and that's all they've known, and that's fine for them I guess, if it's pretty and they prefer that, or if keyboards frighten them.
But for people who have DECADES of reflexes invested in these shortcuts to suddenly find that they don't work anymore, and we're forced to SLOW DOWN and be less productive than in the past, that's a high insult.
Microsoft spilled tankers of ink in the 90s talking about how their new GUI patterns would make people more productive by unifying these things across programs (which was true; in the DOS era every program made up its own shortcuts and ways to access them), and folks who learned them are now being punished for trusting MS with our loyalty.
"Basically nothing has changed" my ass.
Check out AltDrag if you're on Windows (it's discontinued now, but I think I remember seeing newer forks).
It lets you hold down a key and then drag the cursor in one of the 4 quarters of a window to resize it.
A lot of the Ubuntu-based distros I've tried have had this feature built in for a while now, and it's far superior.
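On GNOME-based setups it's usually just a window-manager setting rather than an add-on — a sketch assuming GNOME/Mutter, with the key names from memory:

    # move windows by holding Super and dragging anywhere inside them
    gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Super>'
    # resize with Super + right-button drag
    gsettings set org.gnome.desktop.wm.preferences resize-with-right-button true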
But the problem with add-ons is that every machine you use will have a different combination of them installed. Maybe you're at work and don't have privileges to install them. Maybe you forgot to install one on your desktop even if it's on your laptop, etc.
And building it into "a lot" of Ubuntu-based distros lacks discoverability. I might have that now on the machine I'm typing this on, but it does me no good if I don't know it's there. (Everything in Linux has worse-than-terrible discoverability, but that's another rant entirely.)
MS's dominant position meant their defaults Just Worked everywhere, and when those defaults were good, they were really, really good, by virtue of their ubiquity. Then they fucked us by using their dominant position to just... I don't know... completely lose the plot? Aside from HiDPI fractional scaling and support for large monitor "maximize to a quadrant" and stuff, I can't point to a single MS UI improvement since the XP days. Everything else has just gotten worse, fragmented, and for no good reason.
1. Touch the mouse (if not already)
2. Move the mouse to the button
3. Wait until the tooltip appears
I had forgotten that Chicago95 exists, but this might be exactly the right thing. They'd immediately find it familiar, and while the theme isn't the whole story, this would go a long way in easing the transition I think.
I miss this era of computing.
My parents use some tools and hardware that require a full OS so the tablet route isn't an option, but I'm starting to really like the idea of deploying a couple of these micro PCs.
ChromeOS seems to work really well though, and is dead simple and intuitive. It used to be incredibly awesome with Crouton, but that's mostly dead. Crostini is acceptable though. I would absolutely recommend that aging people get a Chrome device for security and simplicity.
Plus running any android apps on desktop gives even more software options than any other desktop for most consumers.
Apparently, ubiquitous surveillance is acceptable to you. Not an uncommon stance, but I'd never recommend it to others. It's a disservice when freedom and privacy respecting choices, that are just as reliable, already exist.
Seems to work, and the maintenance is now super easy: ssh in, update. Something's wrong and she needs support? I ssh in, open up a tunnel, and connect via Remmina to her desktop to explain.
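Roughly this, with the hostname and port being whatever her VNC server actually uses:

    # forward her VNC port over SSH, then point Remmina at localhost:5901
    ssh -N -L 5901:localhost:5901 mum@her-desktop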
I had a situation once where Ubuntu literally would not go into the desktop environment anymore, but all I did was update and upgrade packages and it started working again.
Universal Blue[^2] has some spins that got a glow up, but their dev team gives a bit of the "everything old is bad" vibe.
OpenSUSE's MicroOS[^3] desktops aren't ready for nontechnical people, but their atomic upgrade strategy is much faster and simpler (btrfs snapshots). I'm keeping an eye on it.
^1: https://fedoraproject.org/atomic-desktops/
My daily driver is NixOS, and part of me really wants that level of predictability and rollback for them. For a brief period, I had started thinking through what it might look like to remotely manage this for them. But my ultimate goal is to help them achieve autonomy, and only step in when necessary.
- Document scanning
- Label printing (my mom buys/sells stuff on eBay)
- My dad still works and writes proposals/manages invoices/does complex taxes
At a minimum, they need a full desktop environment. Most of these things have decent 1:1 Linux alternatives, but one or two might necessitate a single-purpose Windows VM when all else fails.
Two pretty decent used micro PCs will also cost less than a single iPad.
I think any type of pixel font authentic to a couple decades ago won't look good on a 4K monitor, unfortunately. It got to the point where I ordered a 1024x768 monitor just to play old games with a period system.
You could probably create a CRT-filter-based font for high resolution screens (though you'd probably still need to optimise for subpixel layout for accuracy, even on 4k monitors).
Yes, very early on, when people used TVs or cheap composite monitors as the display devices for their computers, there were blurry pixel edges, bloom effects, dot crawl, color artifacting, and all the rest.
But by the '90s, we had high-quality monitors designed for high-resolution graphics with fast refresh rates, with crisp pixel boundaries and minimal artifacting. CRT filters overcompensate for this a lot, and end up making SVGA-era graphics anachronistically look like they're being displayed on composite monitors.
People were typically using 640x480 or 800x600 in GUI environments, and most DOS games were at 320x200. 1600x1200 was incredibly uncommon, even where the video hardware and monitors supported it -- people were usually using 14" or 15" 4:3 displays, and that resolution was way too high to be usable on displays that size, while the necessarily lower refresh rates made flicker unbearable.
At the common resolutions and with purpose-built CRT monitors, pixel boundaries were quite clear and distinguishable.
Being able to clearly resolve individual pixels (which I agree was a thing at resolutions like 640x480 or 800x600. 1024x768 is pushing it already though) is not the same as seeing "crisp" boundaries between them. The latter is what I was objecting to. 320x200 (sometimes also 320x240 or the like) is a special case since it was pixel-doubled on more modern VGA/SVGA display hardware, so that's the one case where a single pixel was genuinely seen as a small square with rather crisp boundaries, as opposed to a blurry dot.
If we're talking about the subjective experience of recreating "a child's bedroom computer" from the mid 90s-early 00s, a widescreen aspect ratio alone would be jarring, since my conception of a monitor for such a system is a 4:3 CRT. So for me, little else would reach that level except a system with the same aspect ratio and a similar DPI.
Not only that, but UI design itself has undergone many shifts since that era to account for the types of monitors those UIs are being designed for. There's not as much of a need for pixel-perfect design when vector-based web UIs dominate the desktop application space nowadays, relegating those that go back to older UI paradigms to enthusiasts who still remember earlier times. Or maybe people who develop for fantasy consoles.
I should mention while I'm at it that those sort of faux-pixel art shaders used in some games come off as quite jarring to me since I expect UIs to be meticulously laid out for the original screen size, not just for blowing up arbitrary content 2x or 4x on a huge widescreen monitor. I sometimes feel those are meant to represent a nostalgic feeling of some kind, being pixelated and all, but really it just makes me wish there were some alternate reality in which people still designed games and desktop applications for 800x600 or 1024x768 monitors again.
It's interesting at present how there's stuff like 4K and then there's the playdate with a relatively tiny handheld resolution, but relatively little interest for new content for those types of resolutions in-between.
Is that what this project is going for? I understood it to be attempting to apply design elements from that era to create a superior UI for a modern "child's bedroom computer".
- Windows 10/11. Especially in 11, it's easiest just to type the start of an app's name into the search box. As opposed to the two clicks it takes to get to the "traditional" menu where you still have to scroll to find it.
- Gnome (only on fresh Linux installs, usually replaced with Mate pretty soon). Has a smartphone-style app grid, but here, too, it's quickest just to type the start of the app's name.
- Mate: Modern, but still has the Windows 95 paradigm (easy enough to collapse the two toolbars into just one bottom one). Still my favourite desktop environment.
Not all fancy graphic stuff is good. And don't even get me started on how hard it is to drag an app window to another screen these days - on Windows. You really have to find the 2% or so of the top bar that's still draggable and not cluttered up by other stuff.
You can do the same tweak by editing the registry [2] if you don't want to download an app for it (though the app includes a lot of additional useful tweaks).
[1] https://winaerotweaker.com/
[2] https://www.tomshardware.com/how-to/disable-windows-web-sear...
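If you'd rather do it by hand, the registry values usually cited for this are along these lines — from memory, and they've shifted between Windows 10 and 11, so treat the linked article as the source of truth:

    :: Windows 10: turn off Bing/web results in the search box
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Search" /v BingSearchEnabled /t REG_DWORD /d 0 /f
    :: Windows 11: the policy value that disables web suggestions in search
    reg add "HKCU\Software\Policies\Microsoft\Windows\Explorer" /v DisableSearchBoxSuggestions /t REG_DWORD /d 1 /f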
Whenever I see screenshots of the old menu I get pangs of nostalgia.
I read all the Windows 8 development blogs, and everything they wrote about seemed absolutely justified. Then you actually use the thing and it was a nightmare.
Same with their approach to hardware: the Duke Xbox controller tested really well, but then someone with daintier hands went to use it and, uh, actually it's great for like 15% of the user base.
https://open-shell.github.io/Open-Shell-Menu/
https://github.com/valinet/ExplorerPatcher
Without these I would probably give up on computers and go live under a bridge.
It would be fun to pair this with Gambas[0], a free VB6 clone that works with GTK.
Windows 95, NT, System 7 and System 8, BeOS, and NextSTEP all had really clear UX. You always knew where to drag a window, what was clickable, where to find settings, etc.
For those versions, a good bulk of the “system” isn’t part of the system proper but instead implemented by way of extensions and control panels loaded at startup. The OS itself is extremely minimal, basically just enough to provide a barebones desktop and act as a substrate for apps to run on. Everything else, including “core” functionality like audio and networking, was implemented in an extension.
This meant that you could pare back the OS to be extremely lean and only have the exact functionality you personally needed at that precise moment and nothing else, and doing so didn’t require you to recompile a kernel or anything like that — just create an extension set that only loaded what you needed. This was excellent for use cases like games and emulators where you wanted every last ounce of resources to go to those, and nice for single purpose machines too (no point in loading game controller support on a box that only ever runs Photoshop and Illustrator).
Of course the way it was implemented is awful by modern standards, but the concept is golden and I think there should be OS projects trying to replicate it.
There's a good reason the third-party extension manager was called "Conflict Catcher," but the power and flexibility such a system grants users is unmatched.
It has (to my surprise, initially) been my experience that "kids these days" are more novice at (desktop) computer usage than the people of the 90s were.
It's ok for people to waste time building stuff they think is cool. Did it need to be a distro? No but it also didn't need to exist. I'm glad it exists though, I think it really whips the llamas ass!
That 'base' is one issue I've been thinking about with linux, I have similar concerns about the cost of everyone being able to make their own distro for their own slight variation on something else. It's not that I think it's a bad thing to pathfind in new areas, but the replication in building/supporting it all, getting users to pick between 4 similar variants of the same thing, and accounting for "you're using KustomLinux, which is 2 steps removed from CommonLinux" and all the little differences between them. It's an interesting contrast against standardization, but I can't help wondering how it would change the approachability of linux if the starting point was limited to one of the big distros and then variants are layered on top of that.
> You don't see Windows creating a new OS for a single theme. You don't see macOS do it either.
I'd also consider the behavior of both Windows and OSX to be a warning to avoid, and not an example to emulate.
But the line doesn't always have to go up with every breath everyone takes. It's ok to do stuff just because it's fun. Not every single action needs to increase market share.
perhaps a shell where root is mapped to C:\
Would be interesting to see what a modern version of Windows 95 would look like, or what general design lessons can be learned from its niceties.
There are some pretty good desktop environments for Linux which emulate the Windows desktop, so that old Windows users would feel at home immediately.
But I've never seen them emulate the filesystem, which is what took most old Windows users the biggest effort to understand. And the Linux filesystem raises it to a new level of complexity, which makes every old Windows users want to go back to Windows immediately.
With "old" users I don't mean experienced users.
Is there some kind of overlay which does all this `C:\User\afidel\Desktop` mapping for those users?
Or maybe it's not actually a real problem for users if these paths change.
The repo literally adds nothing, just a name change.
Do you mean reuse/copy/redistribute? Which is what the GPL-3.0+/MIT license that the theme uses is meant for?
> Based on Fedora Atomic Xfce with the Chicago95 theme.
The readme has a direct link out to it in the 2nd sentence, that's pretty clear credit
Turns out they do have Windows XP: https://github.com/winblues/bluexp
I believe this is the reason you cannot find a proper windows 2000 theme for xfce.
https://www.youtube.com/watch?v=xFKx8nCl1Vw
It is hard for me to distinguish the functional simplicity of desktop computing in that era from the overall excitement that the explosion of connectivity brought to the world. The internet was a lot of fun and had so many surprising corners. Practically all of it was personal, niche, or experimental content for a while.
I wonder if Windows 9X was really all that exceptional, or if it was just what people remember driving with as they navigated the new world.
The best modern equivalent to that desktop paradigm I've found is LXQt, although when I use it I find I kind of miss some of the accouterments of the modern desktops.
I'm assuming the PC will be mostly used for "educational software" (games), which you would want to run on XP. What benefit is there to running Fedora?
It's just Linux with a Win95-lookalike theme.
Linux Benefits: Security patches (safety), software updates (convenience), hardware support, freedom, and privacy. Not to mention a modern browser, terminal, TLS, and filesystems like exfat.
Fedora runs Wine as well. Maybe not every 95/XP program will run, but I'd guess a lot of them do. You could also run the others in Virtual Box when needed.
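If anyone does go the Wine route, the usual pattern is a throwaway prefix set to an older Windows version — a minimal sketch (the prefix path is just an example, and no guarantee any particular title runs):

    # create a dedicated 32-bit prefix and pick an older Windows version in the winecfg dialog
    WINEPREFIX=~/.wine-retro WINEARCH=win32 winecfg
    # install/run the old program inside that prefix
    WINEPREFIX=~/.wine-retro wine ~/Downloads/SETUP.EXE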
Why would you want to mess around trying to get programs to half work on Wine when you can just have the real thing?
If you want your kid to have a web browser for educational purposes, I feel like you probably might as well just hand them the iPad.
Connect the dots by reading https://www.marginalia.nu/log/99_context/ and seeing the UI change in old vs new screencasts: https://www.youtube.com/@ViktorLofgren/videos
I tried this, but documentation is often a huge problem. Increasing amounts of it are primarily online and not particularly straightforward to mirror locally.
I learned heaps from treating a REPL as an (offline) escape room, in terms of how to get inline help, variable introspection and debugging tricks. Not every language offers a convenient one though.
I feel so old now…
I kind of get the appeal, but it's also unnecessarily skeuomorphic/fake-3D, and there were some UX decisions that made little sense, especially lumping all the window controls together (including the destructive "close" X), where MacOS smartly separated them.
It’s like there’s always just a little extra brain power and attention being used by modern flat UIs, and you get to shut that off when you look at a depth-enhanced UI.
Windows, for instance, has dozens of ways to do that, and you can find parts of Windows that use an archeological version of the controls. Nobody, it seems, bothered to reimplement the older APIs on top of the new ones.
Interesting that modern macOS now have them next to each other, like Windows.
You'd be hard pressed to call the Windows 95 UI pretty, but it is really functional. I'm still a firm believer that the majority of the work we do with computers today could be done within the Windows 95 shell. We need 64-bit, more memory, faster CPUs, GPUs, all that, but the modern UIs aren't really "better"; if anything, many of them are more confusing. I think a lot of office workers would be happy to have just kept the Windows 95-era UI for Windows and Office.
Wasn't that one of the ideas behind SerenityOS?
Ironically, the Windows 95 look seems a lot like a copy of the NeXT look, which is the OS all modern Macs are kind of running.
EDIT: Sun's OpenLook is the other one from that era that was fantastic
Windows 3 and Motif hid this stuff under a menu, so it wasn't a huge concern.
But then Windows 95, and then (oddly) MacOS, threw this away in favour of lumping them all together.
Awareness of spatial patterns / frequency of use seems to have been higher among early UX/UI designers than after. I guess maybe because mice became more accurate?
Someone wrote chess for it.
[aside] It was a Sinclair ZX-81, and I was 11 at the time. My parents bought the kit and a second-hand black & white TV with a dial-tuner (no pushbuttons to change the channel) as an Xmas present ...
I loved the TV, it was my TV when we only had one other in the house. I watched everything on that TV (even snooker and swore I could tell which ball was which)... After a couple of months, my dad started to get annoyed I'd not bothered to build the computer, so I was dispatched to the shed to build it.
A few days later (hey, I was in school), the thing worked and I was working my way through the (rather excellent) manual that came with it, getting to know it. One of the logic chapters had an example:
[P]RINT 1+1=2
(It was tokenised input, so you just pressed P and PRINT would come up in the built-in BASIC). Anyone here can see that the answer would be logical-true because 1+1 does equal 2, and indeed the computer printed "1" on the next line.
Anyway, flush with this futuristic knowledge, I set it all up using the family TV in the lounge, and we went through the same thing, just to prove to everyone that it worked...
[P]RINT 1+1=2
1
"I knew it. You've buggered it", said my dad in disgust as he got up and walked out the room. I tried to explain the (new to me) concept of logical truth to him and how the computer represented it, but I don't think he ever really believed me...
[/aside]
Anyway, that Sinclair ZX81 fundamentally changed my life. Computers and computing opened up a whole new world. Some 45 years later I'm about to retire from Apple as one of their most senior engineers, having been here for the last 20 years. Anyone with any Apple device is running some of the software I've written over the years which is kind of cool, but it's time to bow out.
Occasionally I boot up Windows 3.1 in a VM and play a game of Solitaire, for old time’s sake - I can run Windows 95 in a VM too, but it just doesn’t have the same pull.
I for one would welcome a set of deeply integrated ui improvements in a Mac that included a better file manager, better window management, better desktop search, a contact manager just that worked, a messaging client that just worked, audio and camera controls that just worked, a calculator that didn’t suck, etc.
I’d pay at least $100 a year for that tool set.
This is already my stack, though I need to get over to the immutable variety. I'm on the Fedora 41 i3 spin, using Chicago95. I've been using it for years, with the Plymouth bit that gives you a proper Win95 startup background during boot.
Every time I try desktop linux it feels like it's 30 years behind in usability.
Several times they've been ahead of the curve and I've been surprised to learn that Windows or Mac were only just getting a feature that I'd had access to for years or more.
Of course at present I only use i3-alikes. I'd characterize those as a return to the sanity of the past.
I’ve been thinking lots about the YouTube algorithm, how it’s based on engagement, not educational value, and how it’s such a huge missed opportunity for our kids.
So I’ve been building my own YouTube exploration app, for my own kids.
Email me if you’re interested in testing.
jim.jones1@gmail.com
Also:
- Pan
- Sylpheed
- Audacious with the WinAMP theme
- Hexchat kinda has a MIRC vibe
- Parole looks like WMP from < v9 releases
- You can't simulate a dialer, but with trickle you can mimic a 56k/ISDN connection pretty well (rough numbers in the sketch after this list)
- SyncTERM for BBS's
- ScummVM, with just a bilinear filter, because I played tons of adventures
- There's an SDL2 reimplementation of Space Cadet Pinball on GitHub.
- Trigger Rally would look like a great shareware game
- Pidgin, hands down. Either you were an AOL user in America or an MSN user in Europe. It has emoticons, not emojis. Add that annoying notification theme with a sound and that would be the very late 90's/early 00's (my early teen years)
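Rough trickle numbers for the dial-up feel, since a 56k modem tops out around 7 KB/s and single-channel ISDN around 8 KB/s (the target program is just an example):

    # throttle one program to roughly 56k speeds (rates are in KB/s)
    trickle -s -d 7 -u 4 firefox
    # "ISDN" is about -d 8 -u 8, or 16/16 for dual channel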
Maybe _you_ can help.