Just another Swedish programming sysadmin person.
Coffee is always the answer.
And beware my spaghet.
Well, things like the fact that snap is supposed to be a distro-agnostic packaging method despite only being truly supported on Ubuntu is annoying. The fact that it’s locked to the Canonical store is annoying. The fact that it requires a system daemon to function is annoying.
My main gripes with it stem from my job though, since at the university where I work snap has been an absolute travesty:
- It overflows the mount table on multi-user systems.
- It slows down startup a ridiculous amount, even if barely any snaps are installed.
- It can’t run user applications if your home drive is mounted over NFS with safe mount options.
- It has no way to disable automatic updates during change-critical times - like exams.
There are plenty more issues we’ve had with it, but those are the main recurring ones.
Notably, Flatpak has none of the listed issues, and it also supports both shared installations and internal repos, where we can put licensed or bulky software for courses - something snap can’t support due to its centralized store design.
I’m currently sitting with an Aura 15 Gen 2, and I’m definitely happy with it.
I do wish they’d get their firmware onto LVFS, but that’s about my main complaint.
I personally use ~/.bin for my own symlinks, though I also use the user-specific installation instead of the system-wide one.
I wouldn’t guarantee that any automation handles ~/.local/bin or ~/.bin either; that would depend entirely on the distribution. In my case I’ve added both to PATH manually.
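For reference, that manual PATH addition is just a one-liner in the shell profile (the directory names are my own convention, not anything standard):

```shell
# In ~/.bashrc or ~/.profile - prepend the user-local bin directories.
# ~/.local/bin is a common convention; ~/.bin is just my own habit.
export PATH="$HOME/.local/bin:$HOME/.bin:$PATH"
```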
Flatpak already creates executable wrappers for all applications as part of a regular install, though by default they’re named after the full application ID.
If Inkscape has been installed into the system-wide Flatpak installation, you could simply symlink it like: ln -s /var/lib/flatpak/exports/bin/org.inkscape.Inkscape /usr/local/bin/inkscape
For the user-local installation, the exported runnable is in ~/.local/share/flatpak/exports/bin instead.
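As a concrete sketch for the user-local case (the short name and the ~/.bin location are just my own choices; the symlink will simply dangle until the app is actually installed):

```shell
# Hypothetical example: expose the per-user Inkscape Flatpak wrapper
# under a short name in ~/.bin. The target path assumes a default
# per-user Flatpak installation.
mkdir -p ~/.bin
ln -sf ~/.local/share/flatpak/exports/bin/org.inkscape.Inkscape ~/.bin/inkscape
```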
The official binhost project has been an experimental thing until now; I’ve personally been using it for the past year on multiple machines, but it’s not been something you could just enable. And it’s definitely not been something that comes pre-prepared in the stage3.
Flatpak uses OSTree - a git-like system for storing and transferring binary data (commonly referred to as ‘blobs’) - which addresses those blobs by hashes of their content, using Linux hardlinks (multiple directory entries all referring to the same inode, and thus the same disk blocks) to refer to the same data everywhere it’s used.
So basically, whenever Flatpak tells OSTree to download something, it will only ever store one copy of any given object (.so file, binary, font, etc.), regardless of how many applications across the install use it.
Note that this only happens internally in the OSTree repo - i.e. /var/lib/flatpak or ~/.local/share/flatpak - so if you have multiple separate Flatpak installations on your system then they can’t automagically de-duplicate data between each other.
A lot of that data doesn’t actually take up extra space; OSTree hardlinks data blobs internally, so the actual size on disk is much smaller than most disk usage tools will show.
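You can see the mechanism in miniature with plain hardlinks in a temp directory - nothing Flatpak-specific, just the same trick OSTree relies on:

```shell
# Three directory entries, one inode: the 8 MiB of blocks exist once.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/blob" bs=1M count=8 status=none
ln "$tmp/blob" "$tmp/copy1"
ln "$tmp/blob" "$tmp/copy2"
stat -c %h "$tmp/blob"   # link count: 3
du -sm "$tmp" | cut -f1  # ~8 MiB actually on disk, not 24
rm -rf "$tmp"
```

A naive per-file size sum would report 24 MiB here, which is exactly the mismatch you see between most disk usage tools and the real footprint of a Flatpak install.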
Definitely the third / middle left, but the bottom right gets a clear second place for me.
Not a major fan of overly abstract art, and those two are just so serene.
It makes sense to use the words people are most used to, and bluescreen/BSOD has been the go-to term for describing a crash/error screen - even when it isn’t blue - for a while now.
I’ve had to grab PPDs for the printing system at work, but nowadays printers generally do tend to work with the default setup.
When I worked through some AutoYaST setups for Leap 15.5, the default disk setup did BTRFS across the board, though that could definitely differ from doing the install interactively.
RHEL is going hard on XFS, they’ve even completely removed BTRFS support from their kernel - they don’t have any in-house development competency in it after all. It’s somewhat understandable in that regard, since otherwise they wouldn’t necessarily be able to offer filesystem-level support to their paying customers.
Though it is a little bit amusing, seeing as Fedora - the RHEL upstream - uses BTRFS as their default filesystem.
The main benefits of BTRFS over something like ext4 tend to be: subvolume support - which is what’s used for snapshotting - granular quotas, reflinks, transparent compression, and the fact that basically all filesystem operations can be performed online.
I’m personally running BTRFS in a couple of places; NAS, laptop, and desktops. Mainly for the support to do things like snapshots and subvolumes, but I also make heavy use of both reflinks and compression, and I’ve also made use of online filesystem actions quite a few times.
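If you’re curious whether reflinks actually work on a given filesystem, a quick throwaway probe does the trick (cp --reflink=always refuses to fall back to a plain copy, so a failure means no extent sharing):

```shell
# Probe reflink support on whatever filesystem backs $TMPDIR.
# btrfs and XFS support extent sharing; ext4 and tmpfs don't.
tmp=$(mktemp -d)
echo data > "$tmp/a"
if cp --reflink=always "$tmp/a" "$tmp/b" 2>/dev/null; then
    echo "reflinks supported"
else
    echo "no reflink support"
fi
rm -rf "$tmp"
```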
Well, both SUSE and Fedora use BTRFS as the default file system, RHEL uses XFS, etc.
Nope, I’m personally staying away from ZFS on Linux on any critical system (and I don’t really have enough personal hardware to have properly non-critical systems) until it has more proper support; I don’t feel like being at the whim of out-of-kernel modules for such things.
Supposedly you can just install and use it though, it’s available in the filesystems repo at least.
Ah, two entirely separate BTRFS volumes on the disks?
I’ve not actually done that myself, but the disk configuration XML definitely supports it.
One thing I’ve found I dislike is how limited the installer is in partitioning disks. I like having multiple disks in my servers, and I can’t set them up in btrfs at install time like I want to.
Interesting, my only experience with installing openSUSE so far has been doing AutoYaST installs, and that seems to handle multi-disk BTRFS gracefully.
FreeIPA (the server part) has also been a bit of a friction point for me as well, but they have a containerized version which has been working rather well in my own usage so far, so having it as a direct system package is less important.
I work as a Linux sysadmin for a university, we’re paying for a full RedHat site license with all the goodies, and we certainly feel royally screwed over by this.
Not every single piece of software we run is a RedHat-developed/sanctioned thing, and the removal of a guaranteed bug-compatible development platform - one that doesn’t require jumping through hoops or limiting development efficiency - means we can no longer guarantee that core pieces of our infrastructure software will remain available for our RHEL installs. Not to mention course IT, where things are even worse in that regard. A lot of such software is already developed/tested/packaged on Alma/Rocky, and if those start diverging from RHEL bug-compatibility - which is very likely with this change - then we’re either going to have to switch away from RHEL (and the paid support), or lose support for those pieces of software.
We’re probably going to have to move a bunch more of our ~1.4k systems off of RHEL and onto things like SUSE, Debian, etc. in the near future, just so that we’re ready for when things really hit the fan.