But yes, Windows 95 to Windows 2000 was a huge jump in usability. From Windows 8 and the “Metro” interface onward, they threw it all away.
I get the same feeling from Google's Android and Pixels. Lots of neat features keep getting added, but the SW and HW issues that end up in the final product make it seem like an incredibly amateurish effort for such a wealthy company hiring top talent.
I remember being pulled into user surveys and usability studies while wandering the mall back in the day, being given a series of tasks to accomplish on various iterations of a Windows GUI (in the Windows 9x era) while they observed, and then being paid $100 for my time for each one I participated in.
The problem, in my experience, is that when some manager really wants to do something, they'll find a way to justify that with a study. Pretty much every UX decision that ended up being universally panned later had some study or another backing it as the best thing since sliced bread.
In my tests it was 6-7x slower than on Linux (in VirtualBox on Windows). I assume by better you mean more features?
On a related note, I used one of those system event monitor programs (I forget the name) and ran a one-line Hello World C program. The result was Windows doing hundreds of registry reads and writes before and after running my one line of code.
Granted, it doesn't take much time, but there's this recurring thing of "my computer being forced to do things I do not want it to do."
I also — and this is my favorite, or perhaps least favorite one — ran Windows XP inside VirtualBox (on Windows 10). When you press Win+E in XP, an Explorer window is shown to you. It is shown instantly, fully rendered, in the next video frame. There is no delay. Meanwhile on the host OS (10), there is about half a second of delay, at which point a window is drawn, but then you can enjoy a little old-school PowerPoint animation as you watch each UI control being painted one by one.
(Don't get me started on the start menu!)
Twenty years of progress!
You were probably using Sysinternals Process Monitor. https://learn.microsoft.com/en-us/sysinternals/downloads/pro...
Windows does a tonne of things in the background, yes. If I run that and let it monitor everything, things will happen even if I do nothing. It is an OS, and a complex one.
>It is shown instantly, fully rendered, in the next video frame. There is no delay
THIS is true and also crazy to me. I forgot how fast XP was. Especially on modern hardware. I TeamViewer into a laptop with an i5 CPU and Windows XP (medical clients...) and it felt faster than my more powerful local machine!!
I have set my sysdm.cpl performance settings to this, and it does help a bit to get rid of the animations and crap.
Yea... I like the 10 start menu but they destroyed it in 11...
I would guess better on current Windows than on Windows 95. I don't know about faster, but NTFS is most probably more reliable than FAT32. And also more features of course, and fewer limitations: at least the file size limit (4 GB) and ownership/rights metadata (ACLs).
If you are handling a stupendous number of small files (say doing an npm build) then metadata operations are terribly expensive on Windows because it is expensive to look up security credentials.
Maven is not too different from npm in how it works, except instead of installing 70,000 small files it installs 70 moderate-sized JAR files that are really ZIP files that encase the little files. It works better on Windows than npm does. Npm got popular, and they had to face down the problem that people would try building the Linux kernel under WSL and get awful times.
Microsoft knows it has to win the hearts and minds of developers, and they believe in JS, TS and Visual Studio Code, so they’ve tried all sorts of half-baked things to speed up file I/O for developers.
There's a reason https://www.sqlite.org/fasterthanfs.html, SquashFS, etc. are a thing, or why even the admins of Europe's fastest supercomputer admonish against lots of small files. https://docs.lumi-supercomputer.eu/storage/#about-the-number...
Which shows that even if you don't want to call any filesystem great here, they differ vastly in just how badly they handle small files. Windows' filesystems (and more importantly its virtual filesystem layer, including filters) are on the extremely slow end of this spectrum.
For instance, the 70,000 files in your node_modules do not need separate owner and group fields (they all belong to the same user under normal conditions) and are all +rw for the files and +rwx for the directories. If you have access to one you have access to all and if your OS is access checking each file you are paying in terms of battery life, time, etc.
On Windows it is the same story except the access control system is much more complex and is slower. (I hope that Linux is not wasting a lot of time processing the POSIX ACLs you aren’t using!)
Well, it does have more ports normally open and starts connecting to MS as soon as possible, so yes, it is progress. /s
The original description of the file uploaded to Wikipedia read [2]:
Microsoft Excel 2.1 included a run-time version of Windows 2.1
This was a stripped-down version of Windows that had no shell and could run just the four applications shown here in the "Run..." dialog.
The spreadsheets shown are the sample data included with Excel.
[1]: https://web.archive.org/web/20090831110358/http://en.wikiped...
[2]: https://web.archive.org/web/20081013141728/http://en.wikiped...
> Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a runtime version of Windows 1.0 with Excel 2.0.
"Until May 1987, the initial Windows release was bundled with a full version of Windows 1.0.3; after that date, a "Windows-runtime" without task-switching capabilities was included"
I actually thought it was cut-down, but it only had task-switching disabled.
And of course the subject of so many BSOD photos…
Those are probably CE and not PE?
https://en.wikipedia.org/wiki/Windows_Embedded_Compact
or Windows Embedded based on CE
WinPE is the Windows Preinstallation Environment, used as the basis for Windows installation and recovery, and available for custom builds as an add-on to the Windows ADK[1], but AFAIK not intended or licensed for embedded use.
[1] https://learn.microsoft.com/en-us/windows-hardware/manufactu...
It was probably Windows Embedded Standard, based on NT/XP/7.
One could probably build a really nice UI atop it if so inclined. To prevent people from doing this as a way to bypass Windows licensing, there is a timer that will cause WinPE to periodically reboot itself if you leave it running.
Yes, it needed DOS because pre-3.11 Windows versions actually used the DOS kernel for all file access. When 32-bit file access was introduced in WfW 3.11, that was no longer true, but it was an optional feature you could turn off. In all pre-NT Windows versions, Windows is deeply integrated with DOS; even though in 9x/Me that integration is largely for backward compatibility and mostly unused when running 32-bit apps, it is still so deeply ingrained into the system that it can’t work without it.
IIRC, Microsoft tried to sell the same stripped-down single-app-only Windows version to other vendors, but found few takers. The cut-down Windows 3.x version used by Windows 95 Setup is essentially the 3.x version of the same thing. Digital Research likewise offered a single-app version of their GEM GUI to ISVs, and that saw somewhat greater uptake.
Way to go Mr. Raymond!
Microsoft underestimated the inertia of the applications market. NT 3.51 was fine if you used it as a pure 32-bit operating system. You could even configure it without DOS compatibility. Few did.
Microsoft had the resources and expertise to make excellent DOS compat on NT. They just didn't. The reasons are many: they just didn't want the expense, "binning" (Windows 9x for consumers, Windows NT for professionals and enterprises), plus Windows NT was a memory hog at the time and just wouldn't run on grandma's PC.
I mean, I don't think there is anything "right" involved from the users' perspective when all they get is the programs they want to use their computer with becoming broken :-P.
In general people do not use computers for the sake of their noise, nor OSes for the sake of clicking around (subjectively) pretty bitmaps; they use computers and OSes to run the programs they want. Anything beneath the programs is a means, not an end.
(and often the programs themselves aren't an end either, though exceptions like entertainment software/games do exist, but a means too; after all, people don't use, say, Word to click on the (subjectively, again) pretty icons, they use it to write documents)
That did not happen. 16-bit applications hung on for a decade.
This.
Absolute backwards compatibility is why Windows (particularly Win32) and x86 continue to dominate the desktop market. Users want to run their software and get stuff done, and they aren't taking "your software is too old" for an answer.
Of course that's mainly possible because of how modular the Linux desktop stack is.
I finally abandoned Corel PHOTO-PAINT 3.0 only when I moved to x64 Vista in 2008.
I honestly tried to use GIMP multiple times, but it's always felt... unnatural.
NB: IrfanView is still my go-to picture viewer.
Linux is another matter entirely: if your binaries run at all from one distribution release to the next, you're doing well.
I would imagine most desktop Linux users rely on maintainers to compile and distribute binaries for their particular flavor.
Which is less ideal than just having general binaries that work.
Systems like Solaris are a lot more restricted in what sets of libraries they provide (not "package up everything in the world" like some Linux distros), but what they provide they keep working. (I haven't touched a Solaris system in a long time, but I assume they didn't start massive "innovation" since then.)
Considering that desktop apps nowadays rely on web counterparts to be functional, most commercial apps will stop running after some time, regardless of whether operating systems keep compatibility or not.
Personally I want to keep GuitarPro 6 alive (There's no newer version for Linux because binary software distribution on Linux wasn't worth the trouble) and Quartus 13.1 (because I still write cores for a CycloneIII-based device and 13.1 is the last version to support that chip.)
Seems like the way that this is "fixed" is by using containers. But it feels so...bloated.
Linux binary compatibility is actually pretty good, as is glibc's and that of many other low-level system libraries. The problem is only programs that depend on other random installed libraries that don't make any compatibility guarantees, instead of shipping their own versions. That approach is also not going to lead to great future compatibility on Windows either. The only difference is that on Windows the base system library set is bigger, while on Linux it is more limited to what you can't provide yourself.
Adding drivers is also painful, since it doesn't really support all of the regular Windows drivers. To sideload drivers you have to do it twice: once for the install image and once for the WinPE image.
*My knowledge of this stuff is about 7 years outdated, so it's possible they've improved it since then... Unlikely but possible.
https://en.wikipedia.org/wiki/Windows_Preinstallation_Enviro...
https://learn.microsoft.com/en-us/windows-hardware/manufactu...
That is presumably why Microsoft doesn't put much engineering effort into the install-from-empty-disk case.
Windows Vista and newer launch a more substantial version of the OS with the graphics system and Win32 services running, but they never intermix versions. Windows 10's DVD loads Windows 10 to run the installer. That they haven't updated the pre-baked Aero graphics since Vista is a laziness problem, not indicative of being "actually Vista/7" :)
And yet, you can use the Windows 11 installer to install Windows 7 and have it be significantly faster because of that.
I'm not surprised that you can mix up versions by modifying your installation image, since the installation method hasn't changed since Vista. As Microsoft ships them, however, you boot Windows 11 to install Windows 11. :)
https://learn.microsoft.com/en-us/windows-hardware/manufactu...
https://en.wikipedia.org/wiki/Windows_Preinstallation_Enviro...
It’s not uncommon to do something that lands me on a dialog box I still remember from Windows NT 3.1. The upside is that they take backwards compatibility very seriously, probably only second to IBM.
Interesting to learn the real MCP was nearly as hostile as its fictional namesake.
macOS Sequoia is version 15, whoever reaches 20 first wins right!?
20 years ago, we thought eventually either GNOME or KDE would win; instead it became even more fragmented, across all layers.
https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_vers...
It would be pretty easy to set up automation to have a bunch of disk images of old copies of Windows (both clean copies, and customers' disk images full of installed applications and upgraded many times), and then automatically upgrade them to the newest release and run all the integration tests to check everything still works.
I feel strange about hating on MS after the 2000s
And a fresh SSD that you can set up with a traditional MBR layout instead of a GPT layout. GPT doesn't work with BIOS; it requires UEFI to boot, which is prohibitive for DOS.
You install DOS on your first partition after formatting it FAT32.
Just the same as getting ready to install W9x next.
The DOS from W98 is ideal for partitions up to 32GB.
After you install W9x next, it's easy to drop back to a DOS console within Windows, or reboot to clean DOS on bare metal, which is what people would often do to run older DOS games. DOS is still fully intact on the FAT32 volume, which it shares with W9x as designed. It's basically a factory-multiboot arrangement from the beginning.
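For reference, a minimal sketch of that first step using a Windows 98 startup floppy (the exact prompts vary; partition layout and drive letters are illustrative):

    fdisk          (enable large disk / FAT32 support, create a primary partition, mark it active, reboot)
    format c: /s   (format the partition as FAT32 and transfer the DOS system files)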
The only problem is, W98 won't install easily if there is more than 2GB of memory.
DOS can handle huge memory though (by just ignoring it), so you might as well skip W98 unless you have the proper hardware.
Very few people would put any form of Windows NT on a FAT32 volume, and IIRC W2k doesn't handle it, but XP performs much better on FAT32 than NTFS, if you can live without the permissions and stuff.
Do it anyway: when you install XP intentionally to that spacious (pre-formatted, fully operational) FAT32 volume which already contains DOS, the NT5 Setup routine makes a copy of your DOS/W9x bootsector and autosaves it in the root of your C: volume, in a file known, not surprisingly, as bootsect.dos. It then replaces the DOS boot routine with a working NT bootsector and uses NTLDR to boot going forward.
Most people who migrated from DOS/W9x to WXP started out with XP on NTFS, so users lost FAT32 overnight, and the idea was for DOS, W9x and FAT32 to be taken away from consumers forever. None of the NT stuff would directly boot from DOS bootsectors any more; you ended up with an NT bootsector and NTLDR instead. DOS and W9x don't recognize NTFS anyway; they were supposed to be toast.
So if you install both DOS & WXP side-by-side on the same FAT32 partition like this, you can still dual-boot back to DOS any time you want using the built-in NT multiboot menu (which never appears unless there is more than one option to display), where the DOS/W9x boot entry simply points to bootsect.dos. DOS then carries on like normal after that, since it's the same FAT32 volume it was before, apart from its new NT bootsector.
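For reference, the NTLDR side of that menu lives in boot.ini at the root of the FAT32 volume. A rough sketch of what it can end up looking like; the ARC path and descriptions are illustrative rather than exact, since they depend on your disk layout:

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
    C:\BOOTSECT.DOS="MS-DOS / Windows 9x"

The last line is the entry that hands off to the saved bootsect.dos.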
Too bad there aren't any graphics drivers that XP can use on recent motherboards, so say hello to generic low resolution.
I realize nobody ever wanted to skip Windows Vista, sorry to disappoint.
Next comes W7. But you would be expected to install from DVD, and there aren't really drivers for recent PCs.
What you would do is boot to the W7 Setup DVD, NOT to upgrade your XP, but to create a second 32GB partition, format that second partition as NTFS, and install W7 there. The NT6 Setup routine will replace the NT5 bootsector on the FAT32 volume with an NT6 bootsector, and add a new BOOT folder right there beside the NTLDR files on the FAT32 volume. The built-in NT6 multiboot menu may not appear unless you manually add a boot entry for NTLDR (which is easy to do; see the sketch below), and then you would be able to multiboot the factory way to any of the previous OSes, since they are still there.
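If memory serves, adding that NTLDR entry from the running W7 is just a few bcdedit commands in an elevated prompt. A rough sketch, assuming the FAT32 volume shows up as C: (drive letter and description are illustrative; if the {ntldr} entry already exists in the BCD store, skip the /create line):

    bcdedit /create {ntldr} /d "Earlier version of Windows (NTLDR)"
    bcdedit /set {ntldr} device partition=C:
    bcdedit /set {ntldr} path \ntldr
    bcdedit /displayorder {ntldr} /addlast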
W10 is the better choice for a recent PC; do it by booting to the W10 Setup USB.
In this case the very first NTFS experience for this lucky SSD is going to be W10, quick while it's not yet obsolete ;)
I know people didn't want to skip W8 either :)
But if you had W7 or something on your second partition already, you would make a third partition for W10 and it would install like it does for W7, except there would already be an NT6 BOOT folder on your FAT32 volume, with its associated bootmgr files. You would choose the third partition, format it as NTFS, and direct the W10 install to No. 3. The NT6 Setup routine will end up automatically adding a boot entry for the new W10 into the same NT6 BOOT folder that was there from W7. So you can choose either of the NT6 versions, each in its own separate partition, from the factory multiboot menu if you want anything other than the OS you have chosen as the default at the time, as well as the OSes still existing on the FAT32 volume.
Oh yeah, once the NT6 BOOT folder is present, you can manually add a boot entry for the DOS residing on your FAT32 volume; then you can boot directly to DOS from the NT6 boot menu without having to drop back to the (previous, if present) NTLDR way to boot DOS, using the same old bootsect.dos file which NT5 had in mind the entire time (see the sketch below).
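A rough sketch of that manual entry with bcdedit, run from an elevated prompt on the NT6 side. The {guid} below stands for whatever GUID the first command prints, and the drive letter and description are illustrative:

    bcdedit /create /d "MS-DOS" /application bootsector
    bcdedit /set {guid} device partition=C:
    bcdedit /set {guid} path \bootsect.dos
    bcdedit /displayorder {guid} /addlast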
Now this is vaguely reminiscent of how UEFI boots a GPT-layout SSD from its required FAT32 ESP volume.
However UEFI uses only an EFI folder and (unlike Linux) doesn't pay any attention to a BOOT folder if there is one present. But if there is an EFI folder present on an accessible FAT32 volume, UEFI is supposed to go forward using it even if the SSD is MBR-layout and not GPT.
You would have to carefully craft an EFI folder for this, but then the same SSD would be capable of booting to NT6 whether you had a CSM enabled or were booting through UEFI, using whichever boot folder is appropriate, either BOOT or EFI, depending on motherboard settings (one way to craft it is sketched below). DOS would only be bootable when you have a working CSM option, not when depending solely on crummy bare-bones UEFI.
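One way to craft it, as far as I know, is to let bcdboot generate both sets of boot files from the running NT6 install. A sketch, assuming the Windows you want to boot lives at C:\Windows and the FAT32 boot volume is mounted as S: (both letters are illustrative; /f ALL writes both the BIOS BOOT folder and the UEFI EFI folder):

    bcdboot C:\Windows /s S: /f ALL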
You just can't boot to an MBR SSD with SecureBoot enabled.
You may even have an extra 64GB of space left over for W11, unfortunately the W11 Setup routine is the one that finally chokes on an MBR SSD.
You would have to do W11 some other way, and add it to your NT6 boot menu manually.
For extra credit.
DOS until v7 (Win95) didn't support FAT32. It also doesn't support weird FAT(12/16/16B) formats such as only one FAT copy.
Yet I wonder: how can we relive those enjoyments? It was a bit similar when mobile phones started, but it settled much more quickly, leaving no more than 2 survivors for the last decade. What makes it so hard today to release new hardware+software for the mainstream market?
Microsoft absolutely played their cards expertly well in the nascent days of the microcomputer, all the way through to the new millennium. They also adopted bully tactics to prevent upstarts (like Be for example) from upsetting the apple cart and had ironclad contracts with OEMs when distributing Microsoft Windows.
It also helps that the IBM PC became such a pervasive standard. The competition, such as Commodore, Atari, Acorn, you name it, couldn’t help but find ways to blow their own feet off any time they had an opportunity to make an impact. Heck… even Apple came very close to self destruction in the late 90s, with Steve Jobs and NeXT being their Hail Mary pass.
In short, now that we’ve had personal computers for decades, it’s very difficult to break into this market with a unique offering due to this inertia. You have to go outside normal form factors and such to find any interesting players anymore, such as the Raspberry Pi and GNU/Linux.
As an aside, I kinda wish Atari Corporation had been run by folks less unscrupulous than the Tramiels. Had they realized their vision for the Atari platform better and not taken their “we rule with an iron fist and treat our partners like garbage” mentality with them from Commodore, they might have stuck around as a viable third player.
I thought it was Raymond Chen's blog but I haven't been able to find it. I thought someone here might recall and have a pointer.
This is a recurring theme featured in multiple of Raymond Chen's blog posts, though. This is just the earliest one I can remember.
Here are a few more: https://www.google.com/search?q=%22two+programs%22+site%253A...
The "what if two programs did this" mantra I learned from these blog posts helped me greatly in judging whether certain feature requests from users make any sense at all. As soon as it involves touching system configuration, even if there is an API for it, it's probably a very bad idea.
The Macintosh screen never dropped you into a text-mode console, no matter what. Everything on the screen was graphics-mode, always -- and there weren't glaring design changes between system versions like in Windows (except at the Mac OS X introduction, which was entirely new).
Installing Macintosh system software onto a HDD was literally as easy as copying the System Folder. System installer programs did exist, but in principle all that was happening was optionally formatting the target drive and then copying System Folder contents. So simple. Of course there were problems and shortcomings, but the uncompromising design esthetic is noteworthy and admirable.
This hasn't really been the case for more than 10 years now. EFI-based systems will boot without changing display modes. Some hobbyist custom PCs might have compatibility modes enabled, but any laptop or prebuilt system is going to go from logo to login without flickering.
However Microsoft values compatibility, which probably is in conflict with requiring more.
I was always astonished going to friends' houses and watching them have to use DOS or Windows 3.1 and weird 5" disks. Just looked like it was from the past. Even Windows 95 looked terrible on boot with all the wonky graphics and walls of console text. I was convinced Windows would never catch on and Amiga or Acorn was the future as they were so much better.
Windows itself would generally assume the lowest supported hardware, so e.g. for Win95 the boot screen used VGA graphics mode (since the minimum requirement for Win95 UI itself was the VGA 640x480 16-color mode). BIOS had to assume less since it might have to find itself dealing with something much more ancient.
Though Windows 95 arguably ran similarly, atop “DOS 7”, it actually imposed its own 32-bit environment with its own “protected mode” drivers once booted. Dropping to DOS reverted to “real mode”.
What that did was use DOS as a first-stage boot loader, then switch into protected mode and create a v86 task which took over the state of DOS. The protected mode code then finished booting.
v86 mode was a mode for creating a virtual machine that looked like it was running on a real mode 8086 but was actually running in a virtual address space using the 80386 paging VM system.
When you ran DOS programs they ran in v86 mode. If a DOS program tried to make a BIOS call it was trapped and handled by the 32-bit protected mode code.
v86 mode tasks could be given the ability to directly issue I/O instructions to designated devices, so if you had a device that didn't have a 32-bit protected mode driver a DOS driver in a v86 task could be used.
For devices that did have a 32-bit protected mode driver, Windows would not give v86 mode direct access. Instead it would trap on direct access attempts and handle those in 32-bit protected mode code.
I wish Linux had adopted a similar use of v86 mode. I spent a while on some Linux development mailing list trying to convince them to add a simplified version of that. Just virtualize the BIOS in a v86 task, and if you've got a disk that you don't have a native driver use that v86 virtualized BIOS to access the disk.
Eventually someone (I think it may have been Linus himself) told me to shut up and write the code if I wanted it.
My answer was I couldn't do that because none of my PCs could run Linux because there were no Linux drivers for my SCSI host adaptors. I wanted the feature so that I could run Linux in order to write drivers for my host adaptors.
OS/2 did the v86 virtualized BIOS thing, and that was how I was able to write OS/2 drivers for my SCSI host adaptors.
EDIT: it’s coming back to me. Windows 3.1 did have a subsystem for running 32-bit apps, called Win32s I think; that’s what you mean. This was very much in the application space though.
It still used cooperative multitasking, and Win95 introduced preemptive multitasking.
https://lunduke.locals.com/post/4037306/myth-windows-3-1-was...
It’s backed up by another Old New Thing article at https://devblogs.microsoft.com/oldnewthing/20100517-00/?p=14...
The TL;DR is that Windows 3.1 effectively replaced DOS and acted as a hypervisor for it, while drivers could be written for Windows (and many were) or DOS (and presumably many more of those were actually distributed). The latter category was run under the virtualized DOS and the results bridged to Windows callers.
(Edited after submission for accuracy and to add the Old New Thing link.)
And games needed to talk directly to the video card and sound card if you wanted anything more than PC speaker beeps and non-scrolling screens on one of the default BIOS graphics modes.
One of the major selling points of Windows 1.0 was a unified 2D graphics API, for both screen and printing. The graphics vendor would supply a driver, and any Windows application could use its full resolution and color capabilities without needing to be explicitly coded for that graphics card. This rendering API also supported 2D accelerators, so expensive graphics cards could accelerate rendering. 2D accelerators were often known as Windows accelerators.
Windows 3.1 still relied on DOS for disk and file IO, but everything else could be done by VxD drivers and should never need to call back to DOS or the BIOS (which was slow, especially on a 286).
With Windows 95, disk/file IO was moved into VxD drivers, and it was finally possible to do everything without ever leaving protected mode (though DOS drivers were still supported).
Read more about the history of Device drivers here: http://www.summitsoftconsulting.com/WinDDHistory.htm
And I really enjoyed this documentary about the development of Windows 1.0: https://www.youtube.com/watch?v=vqt94b8bNVc
It had just enough parts of the API implemented to be able to run Quake 2 in DOS.
”Win32s lacked a number of Windows NT functions, including multi-threading, asynchronous I/O, newer serial port functions and many GDI extensions. This generally limited it to "Win32s applications" which were specifically designed for the Win32s platform,[4] although some standard Win32 programs would work correctly”
Looking back, Microsoft were clearly in an incredibly complicated transitioning phase, with very little margin for error (no patching over the Internet!)
So I guess there would have been a time in 1994 when many people were forced to retire their 286es. Though Mosaic was quickly replaced by Netscape Navigator in late 1994, which worked on Win16.
And then Windows 95 came along, and it really needed a 486 with 4MB of RAM, ideally 8MB.
To start Windows you’d type “win”, and if you wanted to “boot to Windows” you would call “win” from your autoexec.bat.
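A minimal sketch of what such an autoexec.bat might have looked like; the paths are illustrative, and the last line is what dropped you straight into Windows at boot:

    @ECHO OFF
    PATH C:\DOS;C:\WINDOWS
    SET TEMP=C:\TEMP
    WIN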
As I recall, I had to in order to play Commander Keen.
Try running that under 32-bit Windows 10; I never tried it myself but I have a feeling it should work.
I don't think anyone disputes that Windows Vista and newer look better, even if their design languages are mixed (old and new configuration panels side by side, for example, depending on which ones they could be bothered to recreate) and more complex. I've never heard of anyone using the simpler classic theme for reasons other than nostalgia or performance.