Microsoft's code quality might not be at its peak right now, but blaming them for what's most likely a hardware fault isn't very productive IMO.
From the article:
> It won’t get past the Snapdragon boot logo before rebooting or powering off… again, seemingly at random.
Random freezing at different points of the boot process suggests a hardware failure, not something broken in the software boot chain.
Power issues all day long. It'll be fine until the SoC enables enough peripherals for one of the rails to sag down.
That being said, it's a hell of a coincidence that it failed exactly when a software update failed.
Remember the very early Raspberry Pis that had the polyfuses that dropped a little too much voltage from the "5V" supply, so a combination of shitty phone charger, shitty charging cable, and everything just being a little too warm/cold/wrong kind of moonlight would just make them not boot at all?
They would lose USB once in a while if run 24/7. We had to make them self-reboot every couple of hours. Fortunately it didn't matter for what we were doing with them.
Exactly. Did you notice the one comment on his blog? It's a Linux zealot saying "Linux".
It would be entirely unsurprising to me if this trashed the UEFI on this particular ARM device through firmware corruption.
My guess would've been SSD failure, which would fit a problem that only appears after lots of writes. In the olden days I used to cross my fingers when rebooting spinning-disk servers with very long uptimes, because it was known there was a chance they wouldn't come back up even though they had been running fine.
Normally he would leave his work machine turned on but locked when leaving the office.
Office was having electrical work done and asked that all employees unplug their machines over the weekend just in case of a surge or something.
On the Monday my brother plugged in his machine and it wouldn't turn on. Initially the IT guy remarked that my brother hadn't followed the instructions to unplug it.
He later retracted the comment after it was determined the power supply capacitors had gone bad a while back, but the issue with them was not apparent until they had a chance to cool down.
HA! Not just me then!
I still have an uneasy feeling in my gut when doing reboots, especially on AM5 where the initial memory training can take 30s or so.
I think most of my "huh, it's broken now?" experiences as a youth were probably the actual install getting wonky, rather than the rare hardware failure that only showed up after a reboot, though that definitely happened too.
I'd like to add my reasoning for a similar failure of an HP Proliant server I encountered.
Sometimes hardware can fail during a long uptime and not become a problem until the next reboot. Consider a piece of hardware with 100 features. During typical use, the system may only exercise 50 of those features. Imagine one of the unused features has failed. This would not cause a catastrophic failure during typical use, but on startup (which rarely occurs) that feature is needed and the system will not boot without it. If it could get past boot, it could still perform its task, because the damaged feature is not needed afterwards. But it can't get past the boot phase, where the feature is required.
Tl;dr the system actually failed months ago and the user didn't notice because the missing feature was not needed again until the next reboot.
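A toy sketch of that failure mode in code (entirely hypothetical feature numbering, not any real device):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device with 100 "features"; only the first 50 are
 * exercised during day-to-day operation, but the power-on self-test
 * depends on one extra feature that nothing else ever touches. */
#define NUM_FEATURES      100
#define FEATURES_IN_USE    50
#define BOOT_ONLY_FEATURE  73   /* made-up index for illustration */

static bool feature_ok[NUM_FEATURES];

static void steady_state_work(void)
{
    /* Normal operation only touches the first 50 features, so a dead
     * feature outside that range goes completely unnoticed. */
    for (int i = 0; i < FEATURES_IN_USE; i++)
        if (!feature_ok[i])
            printf("feature %d failed during use\n", i);
}

static bool power_on_self_test(void)
{
    /* Boot refuses to proceed unless this rarely-used feature works. */
    return feature_ok[BOOT_ONLY_FEATURE];
}

int main(void)
{
    for (int i = 0; i < NUM_FEATURES; i++)
        feature_ok[i] = true;

    feature_ok[BOOT_ONLY_FEATURE] = false;  /* latent failure months ago */

    steady_state_work();                    /* nothing visibly wrong */

    if (!power_on_self_test())
        printf("next reboot: system refuses to come up\n");
    return 0;
}
```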
They involve heavy CPU use and stress the whole system completely unnecessarily; the device easily sees the highest temperatures it has ever seen during these stress tests. If something fails or gets corrupted during that strain, it's a system-level corruption...
Incidentally, Linux kernel upgrades are no better. During DKMS updates the CPU load skyrockets, and then a reboot is always sketchy. There's no guarantee that nothing will go wrong; a Secure Boot issue after a kernel upgrade in particular can be a nightmare.
In the case of Linux DKMS updates: DKMS is re-compiling your installed kernel modules to match the new kernel. Sometimes a kernel update will also update the system compiler. In that instance it can be beneficial for performance or stability to have all your existing modules recompiled with the new version of the compiler. The new kernel comes with a new build environment, which DKMS uses to recompile existing kernel modules to ensure stability and consistency with that new kernel and build system.
Also, kernel modules and drivers may have many code paths that should only be compiled in on specific kernel versions. This is called 'conditional compilation', and it is a technique programmers use to develop cross-platform software. Think of it as one set of source files that generates wildly different binaries depending on the environment that compiled it. By recompiling the source code after the new kernel is installed, the resulting binary may be drastically different from the one built against the previous kernel. Source code compiled against a 10-year-old kernel might contain different code paths and routines than the same source code compiled against the latest kernel.
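For example, a minimal out-of-tree module (a sketch, not any particular driver) might branch on the kernel version DKMS builds it against:

```c
/* Minimal sketch of conditional compilation in a kernel module.
 * The same source produces different binaries depending on the
 * kernel headers it is compiled against. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/version.h>

static int __init example_init(void)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 0, 0)
	/* Path compiled in only when built against kernel 6.0 or newer */
	pr_info("example: built against a 6.x kernel\n");
#else
	/* Fallback path for older kernels */
	pr_info("example: built against a pre-6.0 kernel\n");
#endif
	return 0;
}

static void __exit example_exit(void)
{
	pr_info("example: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Conditional compilation illustration");
```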
Compiling source code is incredibly taxing on the CPU and takes significantly longer when CPU usage is throttled. Compiling large modules on extremely slow systems can take hours. Managing hardware health and temperatures is mostly a hardware-level decision controlled by firmware on the hardware itself. That is usually abstracted away from software developers, who need to be certain that the machine running their code is functional and stable enough to run it. This is why we have "minimum hardware requirements."
Imagine if every piece of software contained code to monitor and manage CPU cooling. You would have software fighting each other over hardware priorities. You would have different systems for control, with some more effective and secure than others. Instead the hardware is designed to do this job intrinsically, and developers are free to focus on the output of their code on a healthy, stable system. If a particular system is not stable, that falls on the administrator of that system. By separating the responsibility between software, hardware, and implementation we have clear boundaries between who cares about what, and a cohesive operating environment.
Imagine you are driving a car and from time to time, without any warning, it suddenly starts accelerating and decelerating aggressively. Your powertrain, engine, and brakes accumulate wear and tear, and every so often the car also spins out and rolls, killing everyone inside (data loss).
This is roughly how current unattended upgrades work.
Kind of big doubt. This was probably not slamming the hardware.
Of course, it’s possible that the windows update was a factor, when combined with other conditions.
Say you have a system that has been online for 5 years continuously until a power outage knocks it out. When power is restored, the system doesn't boot to a working state. How far back do you have to go in your backups to find a known-good system? And this isn't just about hardware failure; it's an issue of configuration changes, too.
When it then completely falls apart on reboot, they spend several hours trying to fix it and completely forget the "early warning signs" that motivated them to reboot in the first place.
I think the same applies to updates. I know the time I'm most likely to think about installing updates is when my computer is playing up.
If I reboot it and it starts working again, then I haven't fixed it at all.
Whatever the initial problem was is likely still present after the reboot, and it will tend to pop up again later even if things temporarily seem to be working OK.
You only know this after the reboot. Reboot to fix the issue and if it comes back then you know you have to dig deeper. Why sink hours of effort into fixing a random bit flip? I'll take the opposite position and say that especially for consumer devices most issues are caused by some random event resulting in a soft error. They're very common and if they happen you don't "troubleshoot" that.
https://www.pcgamer.com/amazon-new-world-killing-rtx-3090-gp...
Come to think of it, maybe it was me. I might have trashed the MBR? I remember the error, though, "Non system disk or disk error".
If I recall correctly, he ended up scrapping the drive.
Hardware is more likely to fail under load than at idle.
Blaming the last thing that was happening before hardware failed isn't a good conclusion, especially when the failure mode manifests as random startup failures instead of a predictable stop at some software stage.
Happens quite often
Sometimes it boots fine, sometimes the spinning dial disappears and it gets hung on the black screen, sometimes it hangs during the spinning dial and freezes, and very occasionally blue screens with a DPC watchdog violation. Oddly, it can happen during Safe Mode boots as well.
I would think hardware, but RAM has been replaced and all is well once it boots up. I can redline the CPU and GPU at the same time with no issues.
I recently had a water cooler pump die during a Windows update. The pump was going out, but the unthrottled update getting stuck on a monster CPU finished it off.
I swear I was doing just fine with it booting reliably until I decided to try flashing it over the SWD interface. But wouldn't you know it, soldering a resistor fixed it. Mostly.
Similarly, I'm constantly hearing about Qualcomm's renewed interest in Linux and this and that and how the X2 Elite will be fully supported, but I have never known them to be like this. A decade or so ago we were trying to use one of their dev kits for a school project and the documentation was so sparse.
Then I see that the Snapdragon X Elite comes in this Ideacentre stuff but looking online no one has gotten Linux anywhere close to as good as Linux is on a Mac M2. That, for me, is the marker. If a Mac can run Linux better than whatever chipset you've released, it's just not hardware worth buying. If you're not Apple, you have to support Linux. Otherwise, to borrow Internet lingo, you're "deeply unserious".
Almost certainly a soft hardware failure, likely the SSD.
I've run into a similar situation, except the culprit was Linux, not Windows. Tossed the machine in a closet for a few months, after which it miraculously started working again. Until it broke again a day and a half later. It's disk or RAM corruption.
Give it up dude, it's the hardware, but let not an opportunity to smash Microsoft go unfulfilled.
> I opened the system and reseated everything, including the SSD. No change. I even tested the SSD in another machine to rule it out, and it’s fine too.
But that doesn't mean it's not bad RAM, a bad SSD controller, who knows what... there are only a few of these boxes in the wild regardless, so it's unlikely it can be debugged :(
Laptops seem particularly susceptible to whatever (anti) magic Microsoft utilise for their update rollback process, but it happens to every device class seemingly at random. Besides the run of the mill "corrupt files at random in System32", which is common and simple enough to fix with a clean install, I've had a few cases where it appears an attempt at rolling back a BIOS update has been interrupted by the rollback manager and left those machines hard bricked. They could only be recovered by flashing a clean BIOS image with an external programmer and clip (or hand soldering leads), after which they ran without issue.
As much as it's valid to question the unconditional anti-Microsoft mentality, they are still far from infallible and from my experience they are getting notably more unreliable in recent years.
If you actually read the article, you'd know it wasn't. Besides, Windows updates can and do deliver firmware/bios updates.
https://canonical.com/blog/ubuntu-now-officially-supports-nv...
So there is at least one ARM devkit with long term Linux support.
I would just completely disable Windows Update, act as if the computer is already compromised, and only do work where security is not an issue. That's the most "reliable" way to keep it working.
Of course, hindsight something something...
I haven't run windows update in like 20 years
I would replace your ram sticks. I had a similar mysterious issue on an old Intel nuc. Got some new sticks off Amazon and never had the problem again
The car would run fine once started, but it just wouldn't start sometimes (quite modified, so I knew the systems well). The starter would turn, as that was a simple relay, but all the ECU-controlled devices wouldn't trigger. Plugging into the ECU, there were no error codes and everything looked normal.
Eventually we tracked the issue to some corruption in the ROM that was only being read in certain circumstances; since the ECU stores maps for engine parameters keyed on things like pressure and temperature, you might only hit the corrupted bits of a table under very specific conditions.
Reflashed the ROM and all was good afterwards. The suspected cause of corruption was intermittent power supply that had been fixed a while earlier.
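Roughly the structure involved, as an illustration (made-up numbers, not the real calibration data): the ECU stores calibration as small lookup tables, and a corrupted cell is only ever read at one specific operating point.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative fuel map indexed by pressure and temperature bands,
 * similar in spirit to how an ECU stores calibration tables. */
#define P_BINS 4
#define T_BINS 4

static const uint8_t fuel_map[P_BINS][T_BINS] = {
    { 10, 11, 12, 13 },
    { 12, 13, 14, 15 },
    { 14, 15,  0, 17 },   /* imagine cell [2][2] got corrupted in ROM */
    { 16, 17, 18, 19 },
};

int main(void)
{
    /* Most operating conditions read healthy cells... */
    printf("warm idle:            %u\n", (unsigned)fuel_map[1][1]);

    /* ...and only one specific pressure/temperature combination,
     * e.g. a cold start in a certain pressure band, ever reads
     * the corrupted value. */
    printf("cold start, high bin: %u\n", (unsigned)fuel_map[2][2]);
    return 0;
}
```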
Security is not a fluid. It doesn't naturally evaporate, so don't try to top it up like it's washer fluid.
That low-level software and its associated hardware don't take overwrites very well, even today. The flash might have a cumulative maximum number of overwrites, or the manufacturer-supplied update code can still be dubious. It's (not) okay if you mean it as a tool in your planned-obsolescence strategy; otherwise, just don't touch it for the sake of touching it.
You can also try a live boot of Ubuntu 25.04 arm64, since that ISO has experimental Snapdragon Elite support and some built-in drivers for storage and networking. You can extract firmware from the Windows drivers with qcom-firmware-extract; they recommend doing this from a Windows partition, which you should still have (albeit possibly corrupted).
If that still fails, you have a RAM issue, as others have pointed out. I've had the exact same symptoms (hardware instability after a Windows update), and in my two cases it was an NVMe SSD (an early Samsung one) and RAM, respectively.
Not saying the Windows update didn't also come with some junk firmware that got loaded onto some of your devices, but that would be a distant second to SSD/RAM as a diagnosis (and many others would have seen the exact same thing during their updates if it were that).
But, that said, it saddens me that we've normalised "oh well" when it comes to kit, even dev kit. If MS can't manage release engineering well enough to keep dev/test hardware alive, it doesn't help the belief that they can do it for production hardware either.
I inherited an IBM PC/RT back in the 90s. It was well outside what most people would consider its support lifetime. IBM could not have been more helpful working out how to keep it alive. I suspect this influences why when I later had some financial authority I was happier to buy thinkpad, than any other hardware we had available: I knew from experience they stood behind their maintenance guarantees. The device was configured to run BSD, not the IBM supported OS of the day, made no difference. It was end of life product line, made no difference.
This was before Lenovo, of course. But the point stands: people with positive support stories keep that vendor in their top set.
I trust Microsoft 0% to keep developing Windows for it.
Given the symptoms (random crashes, not failing right away at boot), and given that Qualcomm is anal about secure boot, my guess is that it's unlikely a firmware corruption (in SPI-NOR or wherever) initially caused this. The firmware is verified on every boot.
Might be as simple as a degraded capacitor, or something similar.
And I can imagine that it's not hard to physically destroy this kind of hardware with a software update; PMICs can often produce voltages way higher than the Vmax of the connected components. But if a bug like that happened, it's unlikely it would affect only one devkit out there and not a whole range of devices.
My ROG Ally ran fine on Windows 11 at the beginning, but a year later always randomly crashed, even when idle, on a fresh OS install. After switching to SteamOS it runs stable again.
Either way, may the memory of your Snapdragon Dev Kit be a blessing.
Ref:
- https://www.youtube.com/watch?v=XrA2Xe9f7e8
- https://www.jeffgeerling.com/blog/2024/qualcomm-snapdragon-d...
There are ARM laptops out there from multiple manufacturers, and there is a Snapdragon X2 on the horizon.
Then I knew Windows ARM probably wasn't going to make it. Why any technical person would want a PC (not including Macs) that explicitly can't run Linux, I'll never know.
Some of us like the experience of Visual Studio, being able to do graphics development with modern graphics APIs that don't require a bazillion lines of code, with debuggers, not having to spend weekends trying to understand why yet again YouTube videos are not being hardware accelerated, or scouting for hardware that is supposed to work and then fails because the new firmware update is no longer compatible....
PCs aren't vertically integrated from a single vendor, and thus it isn't as if Microsoft alone can drag a whole ecosystem into ARM, even if the emulation would work out great.
Windows NT was also multi-architecture, and eventually all the non-x86 variants died because x86 was good enough, and when Itanium came to be, AMD came up with a workaround to keep x86 going forward.
Even gaming doesn't work that great on Windows ARM.
They have the Surface line and own tons of game studios.
Where are the Game Pass games for Arm?
Microsoft if they wanted to fund it right could get popular 3rd party software ported.
In retrospect it was hopelessly naive, but I even emailed Qualcomm asking if I could have a dev kit in exchange for porting one of my hobbyist games. They basically said thank you for asking but we don't have a program for this.
Now, hypothetically, let's say there was a Qualcomm Snapdragon Linux laptop. I could just port the code myself for most of the applications I actually need.
https://www.theverge.com/news/758828/microsoft-windows-on-ar...
> Microsoft if they wanted to fund it right could get popular 3rd party software ported.
https://www.windowscentral.com/microsoft/windows-11/your-win...
These devkits are old and have already been released to consumer laptops over a year ago. So if you want to you can pick up pretty much any CoPilot+ PC. I'm not sure what your problem here is though.
But with an x86 device you can run Windows and Linux. With a Windows Arm device it's probably only going to work with Windows.
It's not clear what real advantages Arm gives you here.
Huh? https://www.phoronix.com/review/snapdragon-x1e-september
TL;DR: It runs, but not well, and performance has regressed since the last published benchmark.
But it's been the same story with ARM on windows now for at least a decade. The manufacturers just... do not give a single fuck. ARM is not comparable to x86 and will never be if ARM manufacturers continue to sabotage their own platform. It's not just Linux, either, these things are barely supported on Windows, run a fraction of the software, and don't run for very long. Ask anyone burned by ARM on windows attempts 1-100.
Then why would I pay money for a Qualcomm device just for more suffering? Unless I personally like tinkering or I am contributing to an open source project specifically for this, there is no way I would purchase a Qualcomm PC.
Which is what the original comment is about.
If someone wants to provide a link to a Linux ISO that works with the Snapdragon Plus laptops (these are cheaper, but the experimental Ubuntu ISO is only for the Elites), I'll go buy a Snapdragon Plus laptop next month. This would be awesome if the support was there.