I'm not sure I really 'get' what's particularly new or useful about 'immutable Linux' distros

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
I've done some reading about immutable Linux distros and, well, I guess I don't get what all the fuss is about. The idea seems to be to 'harden' a system by making it hard to modify, but. . .

Isn't that already supposed to be the case for non-root Linux users on any distro? The filesystem is already functionally immutable for everyone but root, because of user/group/ACL permission restrictions on files and directories outside their own home directory (unless they're given write access for some reason, via group membership, an ACL, or being made the owner of a directory). An immutable home directory would be pretty useless, so I presume that even on immutable distros, you can still write to your home directory to your heart's content.

So what is actually new? I guess the idea is that even if, say, some malware manages to get elevated to root through some exploit, it still couldn't change the system. But I don't think that will work long term, because at the end of the day, root needs to be able to install new apps and make other changes to the system (e.g. changing config files under /etc). So there must still be a mechanism for root to make such changes - and whatever that mechanism is, what's to stop malware, after elevating itself to root, from using it?
 

VividVerism

Ars Praefectus
6,728
Subscriptor
I don't know how they normally work for consumer distributions, but for embedded systems the answer can be that even root can't make changes. Changes come from outside the system, whether they get flashed, or loaded as a complete system image that is then verified by the bootloader, or stored on swappable physical media, or some other method: the system cannot be modified from within the system.
 

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
I don't know how they normally work for consumer distributions, but for embedded systems the answer can be that even root can't make changes. Changes come from outside the system, whether they get flashed, or loaded as a complete system image that is then verified by the bootloader, or stored on swappable physical media, or some other method: the system cannot be modified from within the system.
I guess that could make sense for server images.
 

koala

Ars Tribunus Angusticlavius
7,579
I don't think the main factor is protection, really. It's a system where rollbacks are trivial (normally they provide A/B booting). Plus many things are simplified- for instance, the Universal Blue project apparently uses container images for the OS, so it's fairly easy to create and update presets of installed software.

Also the accompanying tools are interesting (and they can be used in traditional distros). Toolbx/distrobox are quite useful.
 
  • Like
Reactions: VividVerism

koala

Ars Tribunus Angusticlavius
7,579
I actually think either something like Silverblue or NixOS is going to be the future of Linux distros (or OSS operating systems).

NixOS is too futuristic and bleeding edge right now- every time I sit down to experiment with it, I run away scared. Two colleagues are using it as their desktop, and right now it's looking like early Linux distros where you had to fiddle a lot.

But truly declarative configuration of your operating system and user environment are amazing.
 

Burn24

Smack-Fu Master, in training
53
It's meant to reduce the maintenance burden from major upgrades, system updates, and the like by keeping the boot environment as static as possible, from a hopefully known-good reference point, and also to provide quick recovery/rollback like koala said. It also looks great for 'fleet management' of a large number of end-user devices, significantly reducing variability across them. This view was popularised by the 'containerisation paradigm' for individual services/programs in the server sphere, and I assume from that, people wanted those same benefits for their end-user devices.

You ran into the common issue where, in a sense, energy or entropy can't be destroyed or whatnot - it can only be moved. The maintenance burden for these distros, compared to traditional OS installs, has been shifted from regular maintenance to the initial configuration, deployment, and update deployments of the environment, and there are also changes in use/thinking required to effectively use immutable distros as an end user.

Moving as much of the maintenance burden out of 'the field' and more smoothly into internal IT service management is a big win in time saved and reduced incidents.

As you can probably tell by now from my language, there are classes of IT problems these distros appear to be trying to solve, and they are very different problems from those of a single user installing an OS on their computer. I like the idea in theory, but in practice, the changes and learning currently required to use these distros effectively are a large hurdle to overcome (cue naysaying). I just want to use my end-user device with as little friction as possible, like most people.

I think these immutable distros are really cool and probably part of our IT future, but in the meantime I only have so many hours to tinker. Instead, I have been more interested in things like Fedora CoreOS for my personal small-scale services on EC2. Instead of having to hand-roll updates in the cloud like the old-timer I am, you can redirect that maintenance burden to CI/CD workflows and redeploy your environment as needed with updates or changes. Note the really useful resume-driven development synergy.

Until immutable distro desktops are easier, or they're forced on us, I assume most other *nix heads like me just try to be good at backing up $HOME and accept that we have to get our hands dirty sometimes fixing or upgrading things. Although it is not for me, and I think the Nix guys are kinda weird, I like their work and think it's valuable for our future, and I think before too long they'll solve whatever problems there are for us.
 
  • Like
Reactions: continuum

koala

Ars Tribunus Angusticlavius
7,579
I think there are a few very interesting factors at play here.

I think the traditional distribution model is going through a crisis. Debian tries to be the universal operating system. In theory, any software in the world can be packaged for Debian (although maybe in nonfree), so in theory, you could apt install anything.

This actually worked quite well with C software. Dependencies moved slowly, and the fact that the easiest way to get those dependencies was through your package manager made things work well.

However, Python, Go, Rust, JavaScript... are very different from C. In these languages, developers use the language's own tools to fetch dependencies. With traditional Linux packaging, every dependency needs to be packaged individually, so there's a significantly greater overhead to keep stuff packaged, to maintain parallel dependency versions, etc.

Some distributions, like Arch, have adopted looser packaging- even if it's not the mainstream option. In the AUR, you can find compiled Go packages whose dependencies are not separately packaged. This has many drawbacks, but it allows the AUR to contain more software, kept more up to date, than Debian.

For some software, you can now fetch binaries directly from the software maintainers (with all dependencies embedded), and it just works. And this does not require root.

Flatpak, Snap, but more interestingly, https://justine.lol/cosmo3/ , are similar to distro package managers... but distro-independent. In the case of Cosmo, even operating system independent (a single Cosmo binary runs in Windows, Linux, macOS, and others; across different archs). And interestingly, they don't require root.

Universal Blue distros have adopted brew as an alternative (which I don't like, because it's quite root-y).

But it's easier to deliver a static binary, a Flatpak, a Cosmo binary, or a brew thing, than having multiple distro packages.

(With Cosmo, the value proposition is even stronger. I wish it would be easy to make Flatpaks that work on macOS and Windows.)

So I see that distros can provide a stable core of software- even a big one! But you can use alternate package management on top to get an increased variety of packages, or more up-to-date packages.

And this plays very well with immutable distros. You should really not install anything in Universal Blue distros to the root container. In fact, Universal Blue tries to deliver as many images as necessary so that you don't have to do that. And they push you to alternate methods to install everything else.

I could see this replacing traditional distributions as the mainstream OSS operating systems. Shipping software for Linux is quite difficult, and one of the major obstacles to its adoption.
 
  • Like
Reactions: m0nckywrench

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
I actually think either something like Silverblue or NixOS is going to be the future of Linux distros (or OSS operating systems).

NixOS is too futuristic and bleeding edge right now- every time I sit down to experiment with it, I run away scared. Two colleagues are using it as their desktop, and right now it's looking like early Linux distros where you had to fiddle a lot.

But truly declarative configuration of your operating system and user environment are amazing.
I'm sure declarative configuration could pair quite well with immutable Linux, but I think those are really two different things, yes?
 

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
I don't think the main factor is protection, really. It's a system where rollbacks are trivial (normally they provide A/B booting). Plus many things are simplified- for instance, the Universal Blue project apparently uses container images for the OS, so it's fairly easy to create and update presets of installed software.

Also the accompanying tools are interesting (and they can be used in traditional distros). Toolbx/distrobox are quite useful.
I remember playing around with this, with SuSE on Btrfs. I remember it was definitely a cool idea, but also, Btrfs was kind of slow (it is, apparently, especially terrible for apps like RDBMS servers that do very heavy block I/O, because of the CoW overhead on workloads that constantly rewrite data).

So, is immutable Linux something similar to copy-on-write snapshots, but more performant?

Although, I would also say that one could have the root filesystem on ZFS, Btrfs, bcachefs, etc (that is, CoW filesystems), and then use a different, non-CoW filesystem for RDBMS, MongoDb, etc?
 

koala

Ars Tribunus Angusticlavius
7,579
I'm sure declarative configuration could pair quite well with immutable Linux, but I think those are really two different things, yes?
Yes, two different things.

However, there's another interesting angle. If you adopt the Universal Blue approach, then you build the immutable image with a procedure, like writing a Containerfile. That makes your configuration automatically declarative.

If you work with regular Silverblue and use rpm-ostree install to add packages, you lose that; your system is still stateful.

(NixOS is similar to the Universal Blue approach, but even more hardcore.)
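For instance, a minimal Containerfile for that workflow might look something like this (hypothetical image name and package picks; the real Universal Blue bases and build details differ):

```dockerfile
# Build a custom immutable OS image by layering on top of a base image.
# Everything added here is baked into the image at build time,
# rather than installed on a live, stateful system.
FROM ghcr.io/ublue-os/base-main:latest

# Every machine that rebases onto this image gets exactly this package set.
RUN rpm-ostree install htop tmux && \
    ostree container commit
```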
I remember playing around with this, with SuSE on Btrfs. I remember it was definitely a cool idea, but also, Btrfs was kind of slow (it is, apparently, especially terrible for apps like RDBMS servers that do very heavy block I/O, because of the CoW overhead on workloads that constantly rewrite data).

So, is immutable Linux something similar to copy-on-write snapshots, but more performant?

Although, I would also say that one could have the root filesystem on ZFS, Btrfs, bcachefs, etc (that is, CoW filesystems), and then use a different, non-CoW filesystem for RDBMS, MongoDb, etc?
It's easier. With immutable Linux you can do A/B booting. You have two system partitions, and boot off one. Then updates just deploy a new image to the other partition, and switch the bootloader. No snapshots required. All your data lives in a separate partition.

IMHO, systems must be designed with rollbacks in mind or else it's quite hard to support rollbacks properly.
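A toy model of that A/B flip, with directories standing in for the two partitions and a symlink standing in for the bootloader entry (just to show the mechanism, not real partition handling):

```shell
# Two "slots"; the system boots from whichever one "current" points at.
mkdir -p slot_a slot_b
echo "os-image v1" > slot_a/image
ln -sfn slot_a current            # booted from slot A

# An update writes the new image to the *inactive* slot...
echo "os-image v2" > slot_b/image
# ...then flips the pointer atomically. A bad update? Flip it back.
ln -sfn slot_b current
cat current/image
```

Rollback is just repointing to the other slot; your data partition is never touched.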
 
  • Like
Reactions: VividVerism

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
Yes, two different things.

However, there's another interesting angle. If you adopt the Universal Blue approach, then you build the immutable image with a procedure, like writing a Containerfile. That makes your configuration automatically declarative.

If you work with regular Silverblue and use rpm-ostree install to add packages, you lose that; your system is still stateful.

(NixOS is similar to the Universal Blue approach, but even more hardcore.)

It's easier. With immutable Linux you can do A/B booting. You have two system partitions, and boot off one. Then updates just deploy a new image to the other partition, and switch the bootloader. No snapshots required. All your data lives in a separate partition.

IMHO, systems must be designed with rollbacks in mind or else it's quite hard to support rollbacks properly.
Copy-on-write is sort of like an immutable system, since the snapshots are immutable once created. Snapshots also have the advantage that everything that is the same takes up almost no additional disk space (multiple snaps will point to the same blocks on disk), and you aren't limited to A/B - you can have a whole history of snapshots to roll back to. CoW has the further advantage that you can use regular system tools to modify things, instead of some 'build system' - although build systems have advantages too, especially for making things declarative as previously mentioned, which I agree has a lot of value.

Just different approaches to the same end point, it seems like?

For some reason, CoW filesystems never really took off, and I kind of don't understand why. People often complain in discussions about Btrfs having issues in RAID 5/6 configurations, but how common are those, anyhow? I do wish the Btrfs devs could get that fixed, but Btrfs is still useful on a single-disk install, or in RAID 0 or RAID 0+1, where from what I understand it has no problems. In fact, I recall an Ars article, probably a Jim Salter piece, that strongly argued that for most non-enterprise users, RAID 0+1 is far better than RAID 5/6 anyhow.

I would think you could deploy something like NixOS or Guix on top of Btrfs, still have declarative system configuration, but also get snapshots. I might try that at some point.
 
Last edited:

koala

Ars Tribunus Angusticlavius
7,579
Well, I think ZFS is pretty important (see TrueNAS, Proxmox), and Fedora 33 adopted Btrfs as the default on desktop...

However, the NixOS filesystem layout is designed in a way that really goes farther than CoW. Every version of an installed package is self-contained in a directory named after a hash of the package. Old versions of packages are kept on the system, but subject to garbage collection. So you can roll back the entire version of your Nix configuration, or even just individual changes, just by changing which directories are used.

Really, the design of NixOS is worth knowing about. I think if at some point some enterprise Linux vendor decides to copy it and provide a solid LTS with commercial support, but still OSS (so there can be OSS clones)...
 

Jeff S

Ars Tribunus Angusticlavius
8,765
Subscriptor++
Well, I think ZFS is pretty important (see TrueNAS, Proxmox), and Fedora 33 adopted Btrfs as the default on desktop...

However, the NixOS filesystem layout is designed in a way that really goes farther than CoW. Every version of an installed package is self-contained in a directory named after a hash of the package. Old versions of packages are kept on the system, but subject to garbage collection. So you can roll back the entire version of your Nix configuration, or even just individual changes, just by changing which directories are used.

Really, the design of NixOS is worth knowing about. I think if at some point some enterprise Linux vendor decides to copy it and provide a solid LTS with commercial support, but still OSS (so there can be OSS clones)...
I really need to look more into NixOS, but I wonder - my understanding is the point of NixOS (and also Guix?) is to have declarative setup - does that declarative setup also include the sorts of config files under /etc?

One nice thing about a CoW filesystem is that if you break something in the configs, which isn't really part of the packages themselves, then when you roll back the root filesystem, you not only roll back the packages but the configuration too (or you could make /etc a separate subvolume that you can snapshot and restore independently).

Seems like with something like a NixOS or Guix, you could do something similar by having declarative config that essentially rolls back the files under /etc for you too?

But, CoW snapshots are also almost instantaneous to snapshot or rollback - I'm not sure that's so true of NixOS/Guix?

Although, I think they use a lot of symlinks/hardlinks? So it would be pretty rapid to roll back too, I guess, because it only takes a second or two to rewrite thousands of symlinks/hardlinks, I would think?
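If I had to sketch the mechanism I imagine, it would be something like this toy model (directories standing in for immutable generations, one symlink as the active profile - just my guess at the shape of it, not actual Nix internals):

```shell
# Each "generation" is an immutable directory; the active profile is one symlink.
mkdir -p gen-1/etc gen-2/etc
echo "PermitRootLogin no" > gen-1/etc/sshd_config
echo "PermitRootLogin yes" > gen-2/etc/sshd_config

ln -sfn gen-2 profile     # currently on generation 2
ln -sfn gen-1 profile     # "rollback": one atomic symlink swap
cat profile/etc/sshd_config
```

If rollback is basically one symlink swap at the top, it would be near-instant no matter how many files differ.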
 

koala

Ars Tribunus Angusticlavius
7,579
Necro'ing, but yes... Basically NixOS declarative config extends to /etc.

It's actually pretty cool; you create a configuration for a system, and Nix packages can hook into this configuration. So for instance, the system configuration can include configuration for sshd, and the sshd package will "render" the settings you declare in the NixOS system configuration into the appropriate configuration files.

So if you switch back to an earlier NixOS config, yes, /etc/ will be rolled back- basically because it's generated from the NixOS config.
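For flavor, a fragment of configuration.nix declaring sshd looks roughly like this (from memory - the exact option names are worth checking against the NixOS manual):

```nix
# configuration.nix (fragment): declare sshd here, and NixOS generates
# the actual files under /etc from it on nixos-rebuild switch.
{
  services.openssh = {
    enable = true;
    settings.PermitRootLogin = "no";
  };
}
```

Switch to a previous generation and the files generated from this go with it.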