I think that installation was originally 18.04, installed around when it was released, so a while ago anyway. I’ve been upgrading it as new versions roll out, and with the latest upgrades and snapd it has become more and more annoying to keep the operating system happy and out of my way so I can do whatever I need to do on the computer.
Snap updates have been annoying and they have randomly (and temporarily) broken stuff while some update process was running in the background, but as a whole reinstallation is a pain in the rear, I have just swallowed the annoyance and kept the thing running.
But today, when I had planned to spend the day on paperwork and other “administrative” things I’ve been pushing off because life has been busy, I booted the computer and the primary monitor was dead, the secondary was running at something like 1024x768, the nvidia drivers were absent and usability in general just wasn’t there.
After a couple of swear words I thought, OK, I’ll fix this: I’ll install all the updates and make the system happy again. But no. That’s not going to happen, at least not very easily.
I’m running LUKS encryption and thus have a separate /boot partition, 700MB of it. I don’t remember if the installer recommended that or if I just threw in some reasonable-sounding amount. No matter where it originally came from, it should be enough (the other ubuntu I’m writing this on has 157MB stored in /boot). I removed older kernels, but the installer still claims I need at least 480MB (or something like that) free on /boot, while a single kernel image, initrd and whatever else it includes consumes about 280MB. So apt just fails on upgrade because it can’t generate the new initrd, or whatever it tries to do.
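For reference, the kind of /boot cleanup I tried first looks roughly like this; a sketch assuming a stock Ubuntu kernel setup (exact package names vary):

```shell
# How full is the boot partition actually?
df -h /boot

# Which kernel packages are still installed? ('ii' = installed)
dpkg --list 'linux-image-*' 'linux-headers-*' | grep '^ii'

# Let apt drop the kernels it no longer considers needed
# (it keeps the one currently running)
sudo apt-get autoremove --purge
```

In my case this freed some space, just not the ~480MB the upgrade insisted on.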
So I grabbed my ventoy drive, downloaded the latest mint ISO onto it, and instead of doing the productive things I had planned I’ll spend a couple of hours reinstalling the whole system. It’ll be quite a while before I install ubuntu on anything again.
And it’s not just this one broken update; like I mentioned, I’ve had a lot of issues with this setup, and at least the majority of them were caused by ubuntu and its package management. This was just the tipping point to finally leave that abusive relationship with my tool and set things up so that I can actually use the computer instead of figuring out what’s broken now and what will break next.
In general, consider setting up some kind of rollback functionality; it will let you get right back into action without downtime when you’re pressed for time. This can be achieved by configuring your system with (GRUB-)Btrfs + Timeshift/Snapper. Please bear in mind that you’ll likely have to come back and solve the underlying problem eventually, though*. (Perhaps it’s worth thinking about what can be done to ensure that you don’t end up with a broken system in the first place. *cough* ‘immutable’ distro *cough*)
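For what it’s worth, the Timeshift side of that, once Btrfs is in place, is only a couple of commands; a minimal sketch (the snapshot name below is just an example, take the real one from the list output):

```shell
# Take a manual snapshot before doing anything risky
# (near-instant in Btrfs mode)
sudo timeshift --create --comments "pre-upgrade" --tags O

# List available snapshots...
sudo timeshift --list

# ...and restore one if the upgrade went sideways
# (the snapshot name here is illustrative only)
sudo timeshift --restore --snapshot '2024-01-01_12-00-00'
```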
If this seems too troublesome to set up, then consider distros that have this set up properly by default from the get-go; like (in alphabetical order) Garuda Linux, Manjaro, Nobara, openSUSE Aeon/Kalpa/Leap/Slowroll/Tumbleweed, siduction and SpiralLinux. Furthermore, so-called ‘immutable’ distros also offer rollback functionality without relying on the aforementioned (GRUB-)Btrfs + Timeshift/Snapper; this applies to e.g. blendOS, Fedora Kinoite/Sericea/Silverblue, Guix, NixOS and Vanilla OS.
If you feel absolutely overwhelmed by the amount of choice, then you should probably consider the bold ones; not because I think they’re necessarily better but:
While any of the aforementioned distros do a decent job of ‘supporting’ Nvidia, you might be best off with uBlue’s Nvidia images. As these images rely on the same technology as Fedora’s immutable distros, rollback functionality and all the other good stuff we’ve come to love (like automatic upgrades in the background) are present as well. In case you’re interested in how these actually provide improved Nvidia support:
"We’ve slipstreamed the Nvidia drivers right onto the operating system image. Steps that once took place on your local laptop are now done in a continuous integration system in GitHub. Once they are complete, the system stamps out an image which then makes its way to your PC.
No more building drivers on your laptop, dealing with signing, akmods, third party repo conflicts, or any of that. We’ve fully automated it so that if there’s an issue, we fix it in GitHub, for everyone.
But it’s not just installation and configuration: We provide Nvidia driver versions 525, 520, and 470 for each of these. You can atomically switch between any of these, so if your driver worked perfectly on a certain day and you find a regression you just rebase to that image.
Or switch to another desktop entirely.
No other desktop Linux does this, and we’re just getting started."
Source
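To make the “rebase to that image” part concrete: on rpm-ostree based systems the switch is a single command. A sketch; the image reference below is illustrative only, not a guaranteed-current uBlue image name:

```shell
# Show the currently booted deployment (and any pending ones)
rpm-ostree status

# Rebase to a different image; the reference is illustrative only
sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:latest

# Found a regression? Flip back to the previous deployment
sudo rpm-ostree rollback
```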
Great piece of information. I personally don’t see the benefits of an immutable distribution, or at least (without any actual experience) it feels like I’d spend more time setting it up and tinkering with it than actually recovering from the rare cases where things just break. Or at least that’s how it used to be for a very long time: even when something did break, fixing it used to be pretty much as fast as reverting a snapshot. Sure, you need to be able to work on a bare console and browse through log files, but I’m old enough that that was the only option back in the day if you wanted to get X running.
However, today’s case was something I just couldn’t easily fix: the boot partition simply didn’t have enough space (since when is 700MB not enough…), so even a rollback wouldn’t have fixed the installation. I might have had the option to move the LVM partition on the disk to grow the boot partition, but that would have required shrinking the filesystem first (which isn’t trivial on an LVM PV), and given the experience ubuntu has provided lately I just took the longer route and installed mint with zfs. It should be pretty stable, as there are no snap packages updating at random intervals, and it’s a familiar environment for me (dpkg > rpm).
Even if immutable distros might not be for my use case, your comment has spawned a good thread of discussion and that’s absolutely a good thing.
Ah, I had misunderstood your /boot situation previously. There’s an easy way to fix it: back up the current contents of /boot, unmount it, create a directory somewhere with space (/tempboot was my choice last time), bind mount that to /boot and go through the apt process. Then unmount the bind mount, mount the real boot partition, delete everything except the currently booted kernel’s files, copy everything over from /tempboot and update the initrd and grub. Et voilà!

Why didn’t I think of that? It would have fixed the immediate problem pretty fast. I would still have the too-small boot partition, but it would have been quicker to fix the issue at hand. In either case, I’m pretty happy I got a new distro installed, and hopefully it’ll fulfil my needs better for years to come.
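For anyone landing here later, the bind-mount trick described above as a rough shell sketch. Run as root; the fstab-based remount and the pruning step are my assumptions, and the whole thing is destructive, so mind the comments:

```shell
mkdir /tempboot
cp -a /boot/. /tempboot/            # back up the current contents
umount /boot                        # the real, too-small partition
mount --bind /tempboot /boot        # apt now writes to the roomy root fs

apt full-upgrade                    # initrd generation has space again

umount /boot                        # drop the bind mount
mount /boot                         # remount the real partition via fstab
# Now delete everything in /boot except the currently booted kernel's
# files, prune old kernels from /tempboot if it still wouldn't fit,
# copy the freshly built files back, and regenerate the boot bits:
cp -a /tempboot/. /boot/
update-initramfs -u
update-grub
```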
Thinking straight is rare in stressful situations.
Broken computers aren’t really stressful to me anymore, but it surely plays a part that I had kinda-sorta been waiting for a reason to wipe the whole thing anyway, and as I could still access all the files on the system, in the end this was a somewhat convenient excuse to take the time to switch distributions. Apparently I didn’t have a backup of ~/.ssh/config even though I thought I did, but those dozen lines of configuration aren’t a big deal.
Thanks anyway, a good reminder that with linux there are always options to work around the problem.
Thank you for your kind words 😊!
That might be the case, depending on your proficiency and the degree to which the ‘immutable’ distro allows you to configure your system declaratively. On e.g. NixOS you can define (most of) your system declaratively. As such, reinstalling your entire setup is done through some config files. You can even push this further with the (in)famous Impermanence module, popularized by the Erase your darlings blog post, in which your system is wiped every time you shut the machine off and rebuilt (basically from scratch) every time you boot into it.
I haven’t worked with LVM yet. Defaulting to Btrfs (as Fedora, amongst others, does) has so far provided me with a reliable experience, even though I’m aware that I’m missing out on performance. Hopefully, Bcachefs will prove to be a vast improvement over Btrfs in a relatively short time-span. You’ve pointed out that you installed Linux Mint with ZFS. Would I be correct to assume that you were hurt by Btrfs in its infancy and have chosen not to rely on it since? Or is it related to its lacking proper support for RAID 5/6? Or perhaps something else? Please feel free to inform me, as I don’t feel confident on this topic!
Understandable. Though, I can’t stop myself from being very interested in their upcoming Ubuntu Core Desktop. But I imagine you couldn’t care less 😜.
I have absolutely zero experience with btrfs. Mint doesn’t offer it by default and I’m just starting to learn bits’n’bobs of zfs (and I like it so far), so I chose it with the idea that I can learn it in a real-world situation. I already have a zfs pool on my proxmox host, but in hindsight I wish I’d gone with something else there, as it’s pretty hungry for memory and my server doesn’t have a ton to spare. Reinstalling that with something else is a whole other can of worms, though, as I’d need to dump a couple of terabytes of data somewhere else in order to make a clean install. I suppose it might be an option to move data around on the disks and convert the whole stack to LVM one drive at a time, but that’s something for the future.
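For the memory hunger specifically, capping the ZFS ARC is the usual workaround; a sketch, with 4 GiB as a purely example figure:

```shell
# 4 GiB in bytes; pick whatever your workload can spare
ARC_MAX=$((4 * 1024 * 1024 * 1024))
echo "$ARC_MAX"    # prints 4294967296

# Apply immediately (as root; resets on reboot):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max

# Make it stick across reboots:
# echo "options zfs zfs_arc_max=$ARC_MAX" > /etc/modprobe.d/zfs.conf
# update-initramfs -u
```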
I was a debian-only user for a long time, but when woody/sarge (back in 2005-2006) had pretty old binaries compared to upstream and ubuntu started to gain popularity, I switched over. Especially the PPA support was really nice back then (and stayed pretty good for several years), so ubuntu made a pretty good desktop, and if I’m not mistaken you could even switch from debian to ubuntu just by editing the sources list and running dist-upgrade, with some manual fixes.
So, coming from a mindset where everything just works and switching from one release to another is just a slightly longer and more complex update, the current trend rubs me very much the wrong way.
So, basically the tl;dr is that life is much more complex today than back in the day when I could just tinker with things for hours without any responsibilities (and there’s a ton more to tinker with; my home automation setup really needs some TLC to optimize electricity consumption), so I just want an OS which gets out of my way and lets me do whatever I need whenever I need it. An immutable distro might be the answer, but currently I don’t have spare hours to actually learn how they work. I just want my SysVinit back, with distributions that can go on for a decade without any major hiccups.
Are there even immutable distros old enough to have compatibility issues between a 5 year old installation and the latest version?
NixOS has been around since 2003, making it older than Ubuntu (2004). Even Silverblue has been out for more than 5 years (since October 2018). Finally, we can’t forget about Guix, which had its first release over 10 years ago (January 2013).
What is an immutable distro? I’m just now learning this is a thing
It’s often used to describe a distro in which (at least some) parts of the system are read-only at runtime. Furthermore, features like atomicity (i.e. an upgrade either happens or doesn’t; there’s no in-between state), reproducibility[1] and improved security against certain types of attacks are its associated benefits, which can (mostly) only exist due to said ‘immutability’. This allows a higher degree of stability and (finally) rollback functionality; these are often associated with ‘immutability’ but aren’t inherently/necessarily tied to it, as other means to gain them do exist.
The reason I’ve been careful with the term “immutable” (which is literally a fancy word for “unchanging”) is that the term doesn’t quite apply to what these distros offer (most of them aren’t actually unchanging in an absolute sense), and that people tend to import associations from other ecosystems that have their own rules regarding immutability (like Android, SteamOS etc.). A more fitting term would be atomic (which has been used to some degree by distros in the past). That name actually applies to all distros currently referred to as ‘immutable’, it’s descriptive, and it’s the actual differentiator between these and the so-called ‘mutable’ distros. Further differentiation can be had with descriptions like declarative, image-based, reproducible etc.
Great post. However, I will add my opinion about Debian Sid and its lineage: just don’t use them in production. Sid is an unstable distribution that looks like a rolling release, and most of the time it’s fine, but it is fundamentally different, since it’s considered acceptable for it to break.
I’m guessing the idea behind Siduction is to use this rollback functionality to counter its innate instability, but with solid alternatives like openSUSE or the already installed Linux Mint + Timeshift, I wouldn’t recommend Siduction. Also, Manjaro is unstable by design, wouldn’t recommend that one either.
I personally agree with your assessments regarding Debian Sid and Manjaro. However, I didn’t want to force my (potential) ‘bias’ in a comment that tries to be otherwise neutral. Thank you for bringing up the ‘asterisks’ associated with both of these!