At least in the first generation of UML, the guest processes are in fact host processes. The guest kernel (a userland process) essentially runs them under ptrace() and catches all of the system calls made by the guest process and rewires them so they do operations inside of the guest kernel. They otherwise run like host processes on host CPU, though.
Completing the illusion, the guest kernel also skillfully rewires the guest's own ptrace() calls, so you can still use strace or gdb inside the guest!
It's good enough that you can go deeper and run UML inside of UML.
> What’s the real-world utility here? Is UML suitable for running isolated workloads? My educated guess is: probably not for most production scenarios.
Back in the day there were hosts offering UML VMs for rent. This is actually how Linode got its start!
The host kernel patch for skas (separate kernel address space) was never merged, probably for good reason, but that, plus Xen and hardware virtualisation support, meant UML stopped making sense.
They didn't entirely. It is still maintained, developed even.
> It would seem to be a potentially useful middle ground between docker containers and KVM VMs
Back in the day I actually used it that way for running "VM"s, and some firms even sold VPS accounts based on UML. Back then other virtualisation options were not nearly as mature as they soon became, or cost proper money (IIRC VMware was good by that point, but there were no free or reliable OSS options yet). UML offered better isolation (a full environment including its own root) than simply chrooting a process tree; fuller containers were not a thing back then either, so all users fully existed on the host and you couldn't give out root access.
These days things like KVM and more advanced containerisation solve the problems most people want UML for and do so much more efficiently (UML performs badly, compared to other options, where there is a lot of kernel interaction, including any filesystem or network access).
UML is still very useful for its original intent though: testing and debugging certain kernel-level items like filesystems (FUSE is competition here in many, but not all, cases), network drivers & filters, and so forth. When things go wrong you can trace into it in ways you can not (as easily) with VMs and containers.
I worked for a hosting company that sold UML-based virtual machines; we trialed Xen as the successor before moving to KVM instead.
KVM also supported things like live migration and virtio drivers, which made custom interfaces and portability easier to deal with.
It's testing. Using time-travel mode you can skip sleeps and speed up your unit tests massively.
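For reference, time-travel mode is enabled on the UML kernel command line. A sketch, assuming a UML binary built as `./linux` and a hostfs root (the `time-travel=inf-cpu` option name is from the UML howto; the other parameters here are illustrative, not a tested invocation):

```shell
# Boot a UML guest with virtual time decoupled from the host clock:
# in inf-cpu mode the guest's clock jumps forward past idle periods,
# so sleep()s in tests return (virtually) immediately.
./linux mem=256M rootfstype=hostfs init=/bin/sh time-travel=inf-cpu
```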
FreeBSD Jails: https://docs.freebsd.org/en/books/handbook/jails/
And then panicked, because it had no root. But hey, I've got a root filesystem right here!
So the second time I typed "linux root=/dev/hda1" (because we had parallel ATA drives back then).
It booted, mounted root, and of course that was the root filesystem the host was booted off.
Anyway it recovered after a power cycle and I didn't need to reinstall, and most importantly I learned not to do THAT again, which is often the important thing to learn.
I wonder if it's hard to make it SMP; if too many places use something like #ifdef CONFIG_ARCH_IS_UM to tell whether it is single-CPU, it might be hard.
That’s giving very Firecracker vibes.
That was addressed in the first few sentences.
Hell, I wish someone made something that could build Dockerfiles and immediately start them as VMs in emulation, using just the normal socket API to emulate the network.