I was going to buy this on day 1, but then I found out Linux support wasn't ready, so I went with a new AMD 365 laptop instead. That was its own adventure: Ubuntu didn't work out of the box, so I had to go with a rolling-release distro.
It's been months, and I think Ubuntu might boot on the ThinkPad 14s now. It's definitely not supported as a daily driver, nor on any other hardware.
Based on my experience, more recent kernel versions have more fixes and better support for recent hardware (or sometimes not-so-recent hardware). LTS doesn't backport everything from later releases, for obvious reasons.
It's the curse of Android: Qualcomm lived for so long with a permanently forked kernel that there is no pressure on code quality. The only thing they know is how to sling minimally viable platform code at their unfortunate customers. No one cares about its maintainability, because even before the code has a chance to stabilise, there is already a new model to be released and new hardware to be supported. At that point no one cares about the old hardware.
Does Linux still require specific images with patches to be built for every SBC/device?
https://www.newegg.com/asrock-rack-altrad8ud-1l2t-q64-22-amp...
If your SBC requires things not in mainline Linux (which is common for new ARM SoCs), then you will need a custom kernel.
Otherwise it can use a stock kernel, but it might need a custom version of U-Boot, and it will definitely need a board-specific device tree.
However, device trees are far better than ACPI. The only reason anyone likes ACPI is that a lot of generous people dedicate their time to patching over broken ACPI tables so that users don't have to see how utterly terrible it is. Device trees turn that pain into very clear and obvious errors instead of giving you a subtly broken system.
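For anyone who hasn't seen one: a device tree is a declarative description of the board's hardware that the kernel parses at boot. A minimal, purely illustrative board file might look something like this (every name here is made up, and it wouldn't compile without the real SoC's .dtsi include):

```dts
/dts-v1/;

/* Illustrative only: "vendor,some-soc" and the uart0 label
   would come from the real SoC's .dtsi include file. */
/ {
    model = "Example Vendor Example Board";
    compatible = "vendor,example-board", "vendor,some-soc";

    aliases {
        serial0 = &uart0;
    };

    chosen {
        /* Which UART the kernel should use for the console */
        stdout-path = "serial0:115200n8";
    };

    memory@80000000 {
        device_type = "memory";
        reg = <0x80000000 0x40000000>; /* 1 GiB at 0x80000000 */
    };
};

/* Enable a peripheral that the SoC .dtsi defines as disabled */
&uart0 {
    status = "okay";
};
```

The point is that everything (memory size, enabled peripherals, console) is spelled out per board rather than discovered at runtime, which is exactly why every board needs its own DTB.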
(generally I think this is an area which just hasn't become mainstream enough for this standardisation to be useful: when most of your users are hacking around with the SBC and are happy to just download the vendor's SD card image, there's very little impetus to make something plug-and-play)
I think it's a more subtle problem: embedded software engineering culture is a decade or more behind the rest of software development, especially at any IC company that isn't NVIDIA/Intel/AMD. IME most SBC users are commercial, not hobbyists hacking around (I don't know about this specific dev kit, though), and every professional EE I know hates dealing with poorly supported non-mainline Linux SBCs. If your company/clients don't commit to a single platform like i.MX for years at a time, managing embedded Linux is a pain in the ass. That's why the RPi was so successful with its compute modules: the community handled a lot of that pain instead of depending on the vendor.
(I do know the pain of non-mainline Linux SBCs, and I make some effort to avoid it now: even if it means using old chips, I'll very strongly prefer one with mainline support. Note that Raspberry Pi is also pretty behind on getting their support upstream; even the Pi 4 is still lagging, and the Pi 5 isn't even usable upstream. They just have some vaguely decent support for their fork.)
I'm not sure, because putting the bootloader in EPROM or NOR has been done on embedded boards for years; it's just rare on mass-manufactured general-use SBCs. I don't know if that's because of BOM, convenience for commercial SBC customers, or just to make it harder to brick the board.
No one would accept an x86 PC that doesn't know itself well enough to hand you a list of its parts, but for some reason having to find the right mix of kernel, U-Boot, and DTB is considered "fine" for ARM boards, which I don't really understand.
It is a kludge for sure, but a necessary one in the current ecosystem. =3
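As a concrete illustration of that mix: with U-Boot's generic distro-boot convention, an extlinux.conf on the boot partition names the kernel and the board's DTB explicitly (the paths and filenames below are made up):

```
# /boot/extlinux/extlinux.conf (illustrative)
DEFAULT linux

LABEL linux
  KERNEL /boot/Image
  FDT /boot/vendor-example-board.dtb
  APPEND console=ttyS0,115200 root=/dev/mmcblk0p2 rw
```

Point the FDT line at the wrong .dtb and the kernel gets the wrong hardware description, which is exactly the kind of fragile kernel/U-Boot/DTB matching being complained about.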
What I'm really asking is if a sufficiently determined person couldn't have avoided all these headaches by building and/or standardizing equivalent technologies. (And if it's possible why hasn't it happened yet?)
ARM doesn’t make chips you can buy and plug into devices (they don’t make chips at all). You get the IP for the core and you then typically integrate it into your SoC. There is/was so much custom development that there was no benefit in adding an abstraction for its own sake when the only ARM SoC a device would ever see is the one that shipped from the factory with it. The tight coupling between firmware and hardware developers for phones and tablets meant you would just hardcode the values, behaviors, and peripherals you expected to find.
There is a level of abstraction at the code level with SVD files, which let you more easily write new firmware against new chips while reusing existing logic, but it’s like the equivalent of some mediocre API compatibility without ABI compatibility - you still need new blobs for the end user.
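For context, CMSIS-SVD files are XML descriptions of a chip's peripheral registers that tooling can turn into headers or register-access code. A stripped-down, entirely hypothetical fragment:

```xml
<peripheral>
  <name>UART0</name>
  <description>Hypothetical UART peripheral</description>
  <baseAddress>0x40001000</baseAddress>
  <registers>
    <register>
      <name>DATA</name>
      <description>Transmit/receive data register</description>
      <addressOffset>0x0</addressOffset>
      <size>32</size>
      <access>read-write</access>
    </register>
  </registers>
</peripheral>
```

That gets you consistent register names and offsets across firmware projects, but says nothing about boot flow or binary compatibility - hence "API without ABI".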
The era of PCs and laptops using ARM running generic operating systems came ages into the existence and growth of the ARM ecosystem. Compromises such as device trees were found, but it’s nothing like BIOS and ACPI.
(Now we’ll get the typical replies stating “yes but no one does BIOS and ACPI correctly so I’d rather have to wait for actually correct device trees blobs from the manufacturer than a buggy ACPI implementation” to offer the alternative viewpoint.)
The reason that this worked out in x86-land is that the entire industry spawned from cloning and making extensions for a single product, the IBM PC.
It’s not the top-line SoC, but for the same price as the DevKit it comes with a free battery and AMOLED screen:
https://www.bestbuy.com/site/samsung-galaxy-book4-edge-copil...
It has OpenBSD support in 7.6, and there is a devicetree patch floating around for Linux.
But have you encountered any serious issues so far? (I expect to put Linux on it, and I build my own kernels.)
On Linux, I successfully built a kernel but I have not gotten the firmware working. My WIP is here and I am very much a novice in this area: https://github.com/conradev/x1e-nixos-config
The documentation and well maintained kernel are kinder to new devs. =)
In general, most reasonable ARM64 machines are not yet value-competitive with same-priced Intel + NVIDIA GPU laptops.
Application support (macOS or Win11 on ARM) is usually what hits hard: edge cases people take for granted as working most of the time.
Best of luck, =)
The porting issues are not as spotty as a few years back, but one should check that their use case won't be hit with 3D/codec issues or missing services. Recall that QEMU does ARM64 just fine... =3
I know they have tons of proprietary blobs, but I suspect they could still just deliver a Linux ISO with all the binaries included.
It just smells like someone at Qualcomm did not see the value and cut the budget to the bone. I already had my fears about the long-term software support of those chips. This news makes me even less confident that I can trust them to support these chips long term.
That's been the theme of the last two years. Qualcomm laid off 1200 last year and a few hundred last month (and I'm sure there's been more if I look deeper), so it's not doing anything special compared to the rest of the industry.
It should be pretty clear by now that most Windows developers do not care about ARM. It's not that they're dismissive of the platform; they're waiting for Microsoft, Dell, and Lenovo to deliver a finished product. They aren't going to spend time on yet another failed Microsoft hardware experiment.
Windows developers aren't going to switch to ARM in large numbers until Microsoft and partners can deliver a platform with a decade or two of longevity. They are completely accustomed to Microsoft preserving backwards compatibility, and they'll expect the same to hold true for an architecture switch.
2019-2020 - Microsoft partners with Qualcomm to launch Windows ARM... again
2024-???? - Microsoft partners with Qualcomm... again, to launch Windows ARM... again
Fourth time's the charm?
And the one before that.
Again.
Like every single challenger that has come before.
Perhaps it is prudent to not challenge Win32.
(This comment was submitted on an ARM-based Linux Desktop)
If you want a review: it's fine if your goal is to just experiment with an ARM machine or a Linux tablet, but GUI apps are pretty slow. If you switch to a console VT it's fine. I haven't tried any benchmarks but I don't imagine they'll get a very good score at all.
And unlike with Apple, on Windows the x86 won't go away. So there are two architectures that need to be supported, and I'm not so sure everybody is interested in putting in the effort for Arm.
I should know, I owned both a Windows ARM PC and a Tesla with “FSD”.
Now Windows 11 feels as exciting as sponge cake, and nobody cares much about PCs anymore.
But each of the copy-Apple things isn't that good. When Apple copies, the product is approached from the top; when these guys copy, the product is approached from the bottom. I suppose that's because strong brand value allows for premium pricing in the former case.
There's also the Steam Deck, which introduced a completely new PC form factor.
Windows on ARM works on the RPi4. I would have to think other ARM licensees could match whatever x86 emulation optimizations Qualcomm had come up with without too much trouble, and MS may even get a license to continue using them.
I'm looking forward to learning what MS's future plans are.
Arm, the architecture that forever remains inaccessible. It's been well over a decade since AMD announced Project Seattle, an ARM chip, which took many years to eventually become the A1100 Opteron, which was still basically unpurchasable and slow. ARM is just endless quagmires and failures. I don't know what it is with this architecture being so ubiquitous yet so consistently failing to come to market again and again.
Not-x86 chips keep grabbing headlines because it's exciting to think of what competition could look like. But then none of them do the hard work to build the entire platform/ecosystem required for a compelling product with an upgrade path, as x86 was destined to have from the beginning. Well, the RPi hasn't done badly, though.
Unless these other technologies standardize, x86 can't expect much competition in existing markets. I guess Android was an exception in that phones were already a market of throwaway garbage devices so there was no expectation that good support would occur.
Would be happy to be proven wrong. Got some good examples?
The main difference between x86 and ARM is that x86 is slightly harder to decode because instructions can be variable-width. But I have never heard of instruction decoding complexity being a particularly important bottleneck.
The instruction set has only moderate impact on the chip’s frontend, and no impact on the backend. Most design decisions are unconstrained by the choice of instruction set.
Because of that, if one is willing and able to go for a well-supported Linux distro, they can in my experience get good battery life regardless of ISA. Essentially, whether on a MacBook with Asahi or a modern x86 notebook with either Linux or an aggressively managed/fixed Windows install, you'll do well. Personally I lack the skill for the latter, though; my Surface remains scalding hot while sleeping even after a fresh install, but c'est la vie...
[0] https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...
Great perf-per-watt, but absolutely locked down, so you can't do anything meaningful in the embedded space. I'm not over here mounting a max-specced Mac Mini just to get that perf-per-watt.
We're using low TDP x86-64 when we need that.
16-core Ryzen Threadripper 2950X: 9 min
M3 MacBook Pro: 4 min
Pretty meaningful when your life revolves around building Ardour.
A from-scratch, optimized build of Ardour on a Ryzen 9 9950X takes 1m51s.
Anyway, the point wasn't about the obsolescence or otherwise of the ISA. It was about the claim that various versions of Apple silicon are no good for performance, only for perf-per-watt or some related metric.
But it's certainly true that a lot of their prestige just comes down to brilliant marketing. What else could get businesses to buy laptops with non-replaceable batteries and storage and non-upgradeable RAM that need to be trashed every 3-6 years?
Most PC laptop vendors? The part you’re missing is that businesses usually plan to refresh devices every few years anyway, and since the service life on a Mac is 5-10 years this just isn’t a limitation most people hit whereas weight, lower performance, and worse battery life are something you notice every day.
The Docker experience on macOS is also marred by the fact that Docker Inc. appears to have limited interest in their Docker Desktop offering, whereas a third-party alternative like Orbstack provides a much better experience.
Most people don't care about that (if you look at it the other way around running macOS on PCs these days isn't that easy either and it will only get worse). Just don't get a Mac if you want to use Linux?
Docker is certainly pretty painful, though. Windows seems like a much better option if you need to develop for Linux.
> I wonder if anyone at Microsoft [...]
Qualcomm's efforts, as far as I can tell imperfect [0], are independent of this discussion.
[0] https://www.phoronix.com/news/Linux-Disabling-X-Elite-GPU
They were asking what MSFT was doing in response to the previous commenter talking about how Apple is doing nothing to help with Linux support. Apple is both the OS maker and the device manufacturer in that case. In the case of the Snapdragon chips, the manufacturers are upstreaming support into the mainline kernel.
Recently, Docker has been hit by many bugs on Arm64.
Dang Intel CPUs, terrible battery performance when paired with an OS that just randomly cranks the CPU up every couple seconds for no user-serving reason.
For me, I like a convertible laptop which I cannot buy from Apple for any amount of money as far as I know. If I could have gotten a fully unlocked iPad I probably would have gone for that, but they don’t seem to exist. Yet, at least! Here’s hoping.
It actually would be nice if Microsoft would figure out their part of the mess, because that would give people the ability to trade money for battery life.