There is a similar question on the site which must not be named.

My question has a slightly different spin:

It seems to me that one of the biggest selling points of Nix is basically infrastructure as code. (Of course being immutable etc. is nice by itself.)

I wonder now how big the delta is for people like me: all my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible. It seems to me that a lot of the vocal Nix users (fans) switched from a pet desktop, discovered IaC via Nix, and are in the end raving about IaC (which Nix may or may not be a good vehicle for).

When I gave Silverblue a try, I totally loved it, but then to configure it for my needs, I basically would have needed to configure the host system, some containers and overlays to replicate my Debian setup, so for me it seemed like too much effort to arrive nearly at where I started. (And of course I can use distrobox/podman and have containerized environments on Debian w/o trouble.)

Am I missing something?

  • hackeryarn@lemmy.world · 27 points · 5 months ago

    I would separate NixOS from other immutable distros. NixOS is really about giving you a blank slate and letting you fully configure it.

    You do that configuration using a declarative, static config language that can be far more idempotent than Ansible. It is also able to define packages that are well contained and don’t depend on dynamic linking set up by manually installing other packages.

    Immutable distros, on the other hand, really have no advantage for your setup and will probably feel more restrictive. The main use I see for them is for someone new or lazy who wants to get a working system up and running quickly.

    • hackeryarn@lemmy.world · 12 points · 5 months ago

      My favorite example of how idempotent NixOS is involves the DE. If you’ve ever looked at switching from GNOME to KDE, or the other way around, most distros suggest just re-installing, because each DE leaves so much cruft around and it’s so hard to remove everything safely.

      With NixOS, you just change one line in your config, and the DE is cleanly swapped.
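      To illustrate (a sketch from memory; option names can shift between NixOS releases), the swap is roughly a one-line toggle in `configuration.nix`:

      ```nix
      # /etc/nixos/configuration.nix
      services.xserver.enable = true;

      # Before: GNOME
      # services.xserver.desktopManager.gnome.enable = true;

      # After: comment out GNOME, enable Plasma instead, then rebuild
      services.xserver.desktopManager.plasma5.enable = true;
      ```

      After `sudo nixos-rebuild switch`, the old DE is simply no longer referenced by the new system generation, so there is no cruft to hunt down by hand.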

      • Laser@feddit.de · 6 points · 5 months ago

        Or you can add specialisations, which to be fair might require a reboot (system accounts might change during a specialisation switch, which will confuse the script trying to reload services for the now non-existent user). But that is how I have multiple DEs installed without their applications flooding the other ones, each with its own login manager (SDDM for Plasma, GDM for GNOME, greetd for Sway).
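        For reference, a specialisation declaration looks roughly like this (a sketch; the exact option paths depend on the NixOS release, and `lib` must be in the module arguments):

        ```nix
        # configuration.nix: GNOME as the base system, Plasma as a
        # specialisation. Each one shows up as its own boot menu entry.
        services.xserver.desktopManager.gnome.enable = true;
        services.xserver.displayManager.gdm.enable = true;

        specialisation.plasma.configuration = {
          services.xserver.desktopManager.gnome.enable = lib.mkForce false;
          services.xserver.displayManager.gdm.enable = lib.mkForce false;
          services.xserver.desktopManager.plasma5.enable = true;
          services.xserver.displayManager.sddm.enable = true;
        };
        ```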

        • hackeryarn@lemmy.world · 1 point · 5 months ago

          Definitely. That’s a great way to run different options together.

          I was just using the DE as an example to demonstrate how cleanly NixOS can add and remove packages. The clean removal of packages with lots of configs is something that most distros struggle with.

    • skilltheamps@feddit.de · 2 points · edited · 5 months ago

      Well, maybe you yourself are too new to recognize some of the appeals ;)

      One large advantage of Silverblue is that the whole composition of the OS does not take place on the target machine. That means that any issues that could arise will not occur on the target machine and can be dealt with beforehand. In the simple case this could mean just enjoying vanilla Silverblue without having to think about possibly borking the machine. In an advanced use case this could mean, for example, building the OS images in a GitLab CI/CD pipeline (with well-working tooling that already exists for Docker etc.), then having automatic tests in the pipeline ensure that everything important works as expected. Only if the tests pass will the image be added to the repository’s image registry, from which the target machines fetch it automatically and rebase to it.

      • hackeryarn@lemmy.world · 1 point · 5 months ago

        No, I fully understand it. But if you build the whole system in a way where every package is isolated, none of the packages interfere with each other, and every package is tested across a wide array of architectures, you can just as safely put together your ideal OS setup and don’t have to deal with being locked into a very simple and bare system.

        The right place for immutable OSes is if you’re using one as a server for container workloads, where you will never customize the base system. Or if you never want to customize your system at all. Yes, you can customize the system image, but that breaks all the guarantees the image gives you, because the packages themselves are not isolated, and by bumping a wrong dependency for a custom package you can still break the whole system.

        • skilltheamps@feddit.de · 1 point · 5 months ago

          Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to cater for the right configuration too, for example in a corporate setting with all kinds of networking woes (shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, then you already have the thing to ship to machines in your hands; there’s no need to compose the OS and configuration over and over again for every machine.

          Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of NVIDIA drivers would render CUDA dysfunctional until the next reboot, as the userspace and kernelspace parts no longer fit together. With any of the Fedora Atomic variants, transient phases with basically undefined behaviour do not exist, and the time the system is not guaranteed to be in working order gets reduced to just the reboot.

          Nix is cool and definitely better than any traditional package manager. But it is not an ultimate solution; to be honest, so far it seems to me like it lives in a niche of enthusiasts who are smart enough to put up with its unique declaration language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above that niche you have corporate doing their own images in CI/CD, CoreOS and all that jazz.

          • hackeryarn@lemmy.world · 1 point · 5 months ago

            Your summary of the language is spot on. I still hope that more distros take inspiration from the declarative config and move in that direction, or that Nix supports a better language in the future. I think that’s ultimately what the average Linux user would want: the ability to customize in a safe manner. Silverblue and the others are, and will remain, a great option for the new or indifferent user.

            On your point about the transient phase, Nix actually does that by default already. It installs everything at a separate path and then flips over in one go. You can even pick the mode: either try a live switch as you describe, or switch on the next boot. I don’t know if I see many benefits to images there.

            I am at my second workplace now that uses NixOS in a corporate setting, and it is much easier than maintaining CoreOS images or similar. I’ve had so many broken builds of CoreOS images, because something goes wrong between the custom packages and the base CoreOS image, that I would rather just run an Ansible script at this point. Also, you end up using the exact same test suite for NixOS images as for your other images, so the same guarantees end up being met.

    • wolf@lemmy.zip (OP) · 1 point · 5 months ago

      Thanks, that clarified things for me. The only thing I would really love to have is atomic updates, but for that I could probably just use BTRFS snapshots.

      • hackeryarn@lemmy.world · 2 points · 5 months ago

        NixOS already has that. And if you use flakes, you can fully lock down your package versions, so that the install is 100% identical on every machine no matter when you run it.
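        A minimal sketch of what that looks like (the machine name and nixpkgs branch are placeholders); the generated `flake.lock` pins the exact nixpkgs revision, so every machine builds from the same inputs:

        ```nix
        # flake.nix
        {
          inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";

          outputs = { self, nixpkgs }: {
            nixosConfigurations.mymachine = nixpkgs.lib.nixosSystem {
              system = "x86_64-linux";
              modules = [ ./configuration.nix ];
            };
          };
        }
        ```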

  • d3Xt3r@lemmy.nz (mod) · 20 points · edited · 5 months ago

    Everyone here has already explained their various stances very eloquently and convincingly, so I won’t argue against that; instead I’ll just put forth my own 2c on why I use Silverblue instead of Nix/Ansible.

    The main draw for me in using Silverblue (well, uBlue to be exact) is the no-cost, cloud-based, industry-standard, CI/CD and OCI workflow. Working with these standard technologies also helps me polish up my skills for work, as we’ve started to make use of containers and gitops workflows, so the skills that I’m gaining at personal level are easily translatable for work (and vice-versa).

    With Nix (the declarative way), I’d have to learn the Nix language first and maintain the non-standard Nix config files and, tbh, I don’t want to waste so much time on something that no one in the industry actually uses. Declarative Nix won’t really help me grow professionally, and whilst I agree it has some very unique advantages and use-cases, it’s completely overkill for my personal needs. That said, I’m happy using Nix the imperative way: I don’t need to learn the Nix language, and it’s great having access to a vast package repository and to my programs without having to go through the limitations of containers.

    As for Ansible, I’d have to run my own server (and pay for it, if it’s in the cloud) and spend time maintaining it too. And although we use Ansible at work as well, so the skills I gain here wouldn’t be a waste of time, it’s unfortunately too inflexible/rigid for my personal needs: my personal systems are constantly evolving, whether in the common packages I use or my choice of DE (my most recent fling is with Wayfire). With an Ansible workflow, I’d be constantly editing YAML files instead of actually making the change I want to see. It’s overkill for me, and a waste of time (IMO). You could argue that I’m already editing my configs on GitHub with uBlue, but it’s nowhere near as onerous as having to write playbooks for every single thing. And as I mentioned, I like to maintain some flexibility and manual control over my personal machines, and Ansible would just get in the way of that.

    With the uBlue workflow, I just maintain my own fork on Github with most of my customisations, + a separate repository for specific dotfiles and scripts that I don’t want to be part of my image. Pull bot keeps my main uBlue repo in sync with upstream, and I only need to jump in if there’s some merge conflicts that cannot be resolved automatically. At the end of it all, I get a working OCI image, with a global CDN and 90 days of image archives, allowing for flexible rollback options - all of this without incurring any costs or wasting too much time on my part. Plus I can easily switch between different DEs and OCI distros, with just a simple rebase - I could go from a Steam-Deck like gaming experience (Bazzite) to a productivity-oriented workstation (Bluefin), or play around with some fancy new opinionated environments like Hyprland and River (Wayblue) - all with just a simple rebase and a reboot, without needing to learn some niche language or waste time writing config files. How cool is that?

  • Tobias Hunger@programming.dev · 15 points · 5 months ago

    Ansible must examine the state of a system, detect that it is not in the desired state, and then modify the current state to reach the desired one. That is inherently more complex than building an immutable system that is in the desired state by construction and cannot get out of it.

    It’s fine as long as you use other people’s rules for Ansible and just configure those, but it gets tricky fast when you start writing your own. Reliably discovering the state of a running system is surprisingly tricky.

  • Laser@feddit.de · 9 points · edited · 5 months ago

    Another aspect I like about Nix compared to what I understand from Ansible (which I used a bit but not much) is that your configuration describes your system without any hidden state. Yes, you only get your dependencies through full evaluation, but what I mean is this: Let’s say you install something on a system, i.e. you add it to your list of packages, which you later remove. To my knowledge, Ansible won’t remove the package if not explicitly asked. However, if you explicitly tell Ansible to not have it installed, what happens if that package is later introduced as a dependency?
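    In Ansible terms, the hypothetical looks something like this (`foo` is a hypothetical package name; what happens when it is also a dependency of something else is left entirely to the package manager):

    ```yaml
    # Dropping a package from an install list does NOT remove it;
    # removal only happens when it is declared explicitly:
    - name: Ensure foo is absent
      ansible.builtin.apt:
        name: foo
        state: absent
      become: true
    ```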

    Ansible will always operate on a stateful system, which is kind of the combination of what others have already mentioned – it’s (EDIT: it being Nix) idempotent and there’s no hidden state that will break something down the way.

    • mumblerfish@lemmy.world · 3 points · 5 months ago

      Ansible works on tasks. In your hypothetical, if you have a task that calls the package manager to put a package in the state ‘absent’, but it is another package’s dependency, that has little to do with Ansible; it just follows the package manager’s behaviour. (Up to some details: for ‘apt’, Ansible runs the command with ‘-y’, which behaves a little differently from just removing the interaction and assuming yes.) If the package manager removes the depending package, and your playbook first has a task that installs it and then a task that removes the dependency, you will always get ‘changed’ on both tasks every time you run the playbook, even if your playbook puts the machine in the same state as before.

  • mogoh@lemmy.ml · 8 points · 5 months ago

    I use Fedora Silverblue and in my experience the updates are very stable. But with Debian and Ansible automation I think you are not missing much, maybe nothing at all.

    Would you mind sharing how you automated your setup with Ansible or generally how to use Ansible in that way? I use some bash scripts for my automation and it is a bit hacky, so if I could improve that, it would be nice.

    • XenGi@lemmy.chaos.berlin · 6 points · 5 months ago

      The thing to always remember about Ansible is that it really is just a bunch of Python scripts that get copied to your server and executed. Yes, it works quite well, but you have to be careful not to have it break on you.

      For me the difference with Nix is that my Nix expression will actually always produce the same output or tell me it can’t, instead of Ansible, which will fail after some updates have gone by.

    • wolf@lemmy.zip (OP) · 3 points · edited · 5 months ago

      Yes, I really love Silverblue’s updates: download in the background, reboot, and you are up to date. So much better than watching the package manager do its thing. :-)

      I don’t know your level of Ansible knowledge, and since you are already running Silverblue and are happy with it, it might be more worthwhile for you to explore how to automate Silverblue and the containers you are using… and write a blog post for people like me on how you did it, so I can learn. :-P

      Ansible… basically it allows you to install software with the package managers (apt, dnf, …), configure/restart services, clone git repositories, run arbitrary commands, configure stuff with dconf, etc.

      Example for my workflow:

      When Debian 12 got into the alpha stage, I simply set up a virtual machine, installed git, ansible and vim, and started from a known starting place (like the GNOME desktop for desktops, a minimal install for servers). First, I clone my git repository with my dotfiles and link all the relevant ones. After that I simply use Ansible to install all the packages I will use from that distribution, run dconf to configure GNOME for my needs, and configure/download software from third-party package repositories or just download tarballs and install them to /opt or ~/opt. Of course flatpaks can also be configured/downloaded via Ansible.
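      As a rough idea of what such tasks look like (the repo URL, package names and the dconf key are just examples, not my actual setup):

      ```yaml
      - name: Clone dotfiles
        ansible.builtin.git:
          repo: "https://example.org/me/dotfiles.git"
          dest: "{{ ansible_env.HOME }}/.dotfiles"

      - name: Install the packages I use everywhere
        ansible.builtin.apt:
          name: [vim, git, tmux]
          state: present
        become: true

      - name: Configure GNOME via dconf
        community.general.dconf:
          key: /org/gnome/desktop/interface/enable-animations
          value: "false"
      ```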

      Once everything works great in the virtual machine, I work in the VM for a few days or even weeks. If everything runs stably, I’ll make a clean install of the operating system, add some hardware-specific tweaks (change the grub config, tweak the WiFi driver’s power mode), and then I am up and running. Thanks to Debian, my Ansible configs are mostly stable with minor tweaks for around 2 years, and when the time comes for Debian 13, I’ll repeat the cycle.

      The way I do things with Ansible has grown over a long time and is tailored to my private/professional use cases. I simply like having the same setup on every desktop/server I deploy, because I never have to wonder whether my software is configured the way I like it, whether a hotkey works, or whether something I use is installed. (And if my hardware dies or I do an SSD upgrade, I am up and running within minutes; the same is true if I get new hardware.)

      Still, it is a tradeoff. I really like Fedora, but one year of updates is too short for me and my initial investment in setting up a new version of Debian. Further, I only use dconf-based desktops like MATE or GNOME, because I can configure them painlessly, 100% via Ansible. OTOH I have MY Debian desktop setup running on multiple AMD64 and AARCH64 physical and virtual machines. If I want to experiment with software, I just create a VM, start Ansible, get a cup of tea, and I have a disposable machine to play around with. Further, I have my setup 100% documented: if I wonder what strange power-settings tweak I needed in which file to make Debian 11 work on my netbook, I know where to find the 100% correct answer…

      Excuse the wall of text; hope that gave you an idea, and don’t hesitate to reach out if any questions are left. Obviously, you have to decide for yourself whether such a setup is worthwhile for you. If you use only one desktop, this would be total overkill. :-P

      • mogoh@lemmy.ml · 2 points · 5 months ago

        Thank you for this in depth answer. It makes me want to explore Ansible and setup automation. Sounds really great!

        and write a blog post for people like me, how you did it, so I can learn.

        I am thinking about that … 🤔

        • wolf@lemmy.zip (OP) · 1 point · 5 months ago

          Really, Ansible doesn’t matter; the IaC part is the killer feature.

          Just start to put your config into code and learn, over time it will grow!

          I cannot go back to setting desktops/servers up by hand, IaC just solves too many problems and gives me peace of mind.

      • theshatterstone54@feddit.uk · 2 points · 5 months ago

        Reading this, I find myself really, REALLY wanting to replicate that sort of setup, especially the docs part which is something I’ve been neglecting. I always say to myself, “The next Arch install, I’ll document the setup” and it never happens!

        • wolf@lemmy.zip (OP) · 1 point · 5 months ago

          “The next Arch install, I’ll document the setup” - Famous last words! :-)

          Seriously, I wonder how well my approach would work with a rolling distribution like Arch. I would be afraid that pacman updates would drift/change the system, and that over time the delta to my assumed setup would grow… OTOH if you keep your scripts in sync with Arch’s updates, you might simply spread out the maintenance of your Ansible script over time. If you go full Ansible with Arch, please give an experience report in 6 months!

          • theshatterstone54@feddit.uk · 2 points · 5 months ago

            I don’t think I will. I switch between Arch and NixOS constantly, and this time (I’m on NixOS right now) I intend to remember that distrobox is a thing if I need to compile from source.

  • excitingburp@lemmy.world · 8 points · 5 months ago

    Silverblue doesn’t solve the same problems as Nix, or Ansible for that matter. I built my own image in the past and it was non-trivial, although the CI process could pair quite nicely with Ansible. IMHO the primary advantage of Silverblue is that updates are just a download, with practically zero work to do after the download has completed (this is a very big deal for RPM-based systems, where an update at boot can take a long time).

    As for Ansible vs Nix: try switching from one program to another across all your machines. It’s doable but not fun. Now try switching back across all your machines. Nix makes your system equal a configuration; it does not merely add configuration.
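    A sketch of what “the system equals the configuration” means in NixOS terms: the declared package list is the system’s package set, so swapping an entry and rebuilding both installs the new program and drops the old one on every machine that uses the config:

    ```nix
    # configuration.nix: swap the browser by editing one entry and rebuilding
    environment.systemPackages = with pkgs; [
      firefox    # replace with e.g. chromium and run nixos-rebuild switch;
                 # firefox then disappears from the new system generation
      git
    ];
    ```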

  • b_m_f@discuss.tchncs.de · 3 points · edited · 5 months ago

    I switched from a big custom Ansible deployment to NixOS.

    The system includes 8 managed machines, multiple VPNs and a custom certificate authority.

    Downsides:

    • rethinking how to manage certificates and VPN configs outside of Nix
    • getting secrets to work took a bit until I found agenix
    • deployments can take a while with deploy-rs

    Still, I can only tell you how much more at ease I feel with the NixOS-based system. It’s just much easier to refactor, without having to take care of legacy cleanup or the machines getting polluted over time.

    Once you wrap your head around it all, more complex system architectures start to become manageable/maintainable.

    IaC

    You still need something like Terraform on the side for your actual infrastructure provisioning.

    Solutions to bridge this with the Nix ecosystem are evolving in the nix-community repos on Github, but I found it easier to manage that separately for the time being.

    All in all I would recommend NixOS based systems for the heavy lifters in your setup. If you want to deploy a fleet of machines you are entering new territory. Exciting, but maybe too much of a time commitment for some.

    • wolf@lemmy.zip (OP) · 3 points · 5 months ago

      Sorry, I only have Ansible files at work, which I cannot share, and my private Ansible setup is in a private git repository. I elaborated on my workflow in another comment further down.

      My suggestion is to forget about best practices (like roles) for a private desktop setup; simply start with a task file and a fresh installation of your favorite distro inside a virtual machine. From that starting point, everything you do to configure the VM, you do via Ansible. Want to set the hostname? Learn about ansible.builtin.hostname. Want to install a package? Use ansible.builtin.apt, ansible.builtin.dnf or similar. Want to harden your sshd config? Look at ansible.builtin.lineinfile, ansible.builtin.copy or ansible.builtin.template… Screwed up your VM? Replace it with a new one, run your Ansible tasks, and continue where you left off…
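      A minimal task file along those lines might look like this (hostname and package are examples; the sshd tweak assumes you restart sshd afterwards, e.g. via a handler):

      ```yaml
      # tasks/main.yml
      - name: Set the hostname
        ansible.builtin.hostname:
          name: testbox

      - name: Install a package
        ansible.builtin.apt:
          name: htop
          state: present
        become: true

      - name: Harden sshd (forbid root login)
        ansible.builtin.lineinfile:
          path: /etc/ssh/sshd_config
          regexp: '^#?PermitRootLogin'
          line: 'PermitRootLogin no'
        become: true
      ```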

      Hope that helps!

      • MigratingtoLemmy@lemmy.world · 3 points · 5 months ago

        Thanks. I use Ansible myself, but I was more interested in how I would run Ansible on my daily driver, from my daily driver.

        • wolf@lemmy.zip (OP) · 1 point · 5 months ago

          Sorry, perhaps I do not understand what you are asking for?

          On a *NIX box you install Ansible, start sshd on the target, and then run something like:

          ansible-playbook -i inventory -u username -e 'ansible_user=username' all.yml  -K --limit hostname.domain.net
          
  • Falcon@lemmy.world · 2 up, 1 down · 5 months ago

    I appreciate this is more a question about Nix, but I’ll offer some feedback on my experience with immutable distributions more generally.

    I took an adventure into Silverblue and MicroOS recently and was completely unimpressed. It’s a novel idea from a good place, but it was the most incoherent and buggy experience I’ve had on a Linux distribution in the past 10 years. Nothing worked reliably and everything broke; I also found that trying to use anything other than the default GNOME desktop was an exercise in futility.

    I need to clarify: I think it’s a great idea. In practice, though, both implementations, Silverblue and MicroOS, are really over-engineered.

    I have adapted the ideas into my current install and achieve the same thing with A/B snapshots and a script that takes me from a base snapshot to my daily driver. Everything else lives in containers, so bootstrapping only involves half a dozen packages (iwd, node, nvim etc.).

  • mcepl@lemmy.world · 1 point · 5 months ago

    I think you have the arguments about MicroOS (or Silverblue, which I know less about, and possibly Nix, which I know nothing about; it seems to me it is not in the same group) wrong. Take a look at this: https://youtu.be/lKYLF1tA4Ik

  • TCB13@lemmy.world · 6 up, 22 down · edited · 5 months ago

    I wonder now, how big the delta is for people like me: All my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible.

    Close to none. Immutable distros solve the same problem that was solved years ago with Ansible and BTRFS/ZFS snapshots. There’s an important long-term difference, however…

    Immutable distros are all about making things that were easy into complex, “locked down”, “inflexible” nonsense to justify jobs and paid tech stacks, with a soon-to-be-released proprietary solution. We had Ansible, containers, ZFS and BTRFS, which already provided all the immutability required, but someone decided it was time to transform proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository/platform, like Docker/Kubernetes does. Docker isn’t totally proprietary, and there’s Podman, but consider the following: it doesn’t really matter if there are truly open-source and open ecosystems of containerization technologies. In the end, people/companies will pick the proprietary/closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term.

    “Oh, but there are truly open-source immutable distros”… true, but again this hype is much like Docker, and it will invariably and inevitably lead people down a path that will then require some proprietary solution or dependency somewhere (DockerHub), which is only required because the “new” technology alone doesn’t deliver as others did in the past. The people now popularizing immutable distributions clearly haven’t had any experience with them before the current hype. Let me tell you something: immutable systems aren’t a new thing. We already had them on MIPS devices (mostly routers and IoT), and people have been moving to ARM and mutable solutions because they’re better, easier and more reliable.

    The RedHat/CentOS fiasco was another great example of these ecosystems, and once again all those people who got burned, instead of moving to a truly open-source distribution like Debian, decided to pick Ubuntu. It’s just a matter of time until Canonical decides to make a similar move.

    Nowadays, without the Internet and the ecosystems, people can’t even do anything anymore. Have a look at the current state of embedded development: in the past people were able to program AVR/PIC/Arduino boards offline, and today everyone has moved to ESP devices and depends on the PlatformIO + VSCode ecosystem to code and deploy to the devices. Speaking of VSCode, it is also open-source until you realize that 1) the language plugins you require can only be compiled and run in official builds of VSCode, and 2) Microsoft took over a lot of the popular third-party language plugins and repackaged them under a different license, making it so that if you create a fork of VSCode, you can’t have support for any programming language, because it won’t be an official VSCode build. MS be like :).

    All those things that make development very easy and lower the bar for newcomers have the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad, and above all it sets dangerous precedents and creates generations of engineers and developers who don’t have truly open tools like we did.

    This is all about commoditizing development; it’s a negative feedback loop that never ends. I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.

    • mogoh@lemmy.ml · 8 up, 1 down · 5 months ago

      Who hurt you?

      I mean, you got some points, but went way overboard with it and beyond the scope of the question.

      but someone decided it was time to transform proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository/platform, like Docker/Kubernetes does.

      So, you really think, that this must be the reason immutable desktops were invented?

      • TCB13@lemmy.world · 3 up, 4 down · 5 months ago

        So, you really think, that this must be the reason immutable desktops were invented?

        Most likely not, but the people pushing the narrative certainly are in it for that.

    • lily33@lemm.ee · 7 points · 5 months ago

      We had Ansible, containers, ZFS and BTRFS that provided all the required immutability already, but someone decided it was time to transform proven development techniques

      Just so you know, NixOS is older than all of these, actually. And for that matter, no less flexible.

    • wolf@lemmy.zip (OP) · 2 points · 5 months ago

      Sorry, hard disagree from me.

      Immutable distros solve a lot of problems and are IMHO a great idea. I love my Steam Deck with SteamOS, and I really like Silverblue and openSUSE’s MicroOS (Aeon, or whatever it is called right now). For my desktops/servers, Debian is the best choice right now, but my ThinkPad from 2012, which now runs happily as an entertainment machine, is a perfect example of where an immutable distro would be much better and more practical.

      Immutable distros are a solution to a real problem, and this problem is not solved by Ansible/BTRFS etc. Hell, I’ll happily jump ship sooner than later. Of course, YMMV and I don’t say immutable distros solve all problems for everyone, but having this option is great IMHO.

      • Falcon@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        5 months ago

        Well, I tried Aeon for a month and it has been the least reliable system I’ve used since, well, probably anything; maybe Vista, I guess.

        The thing is a mess and it brings nothing to the table over A/B snapshots.

        The scales must be different for enterprise use because I’d never go near another immutable OS again after this terrible experience.

        Maybe it’s just Flatpak that’s unreliable on Aeon; I found moving Electron apps into podman containers worked a lot better. On Void it was fine. Clearly there’s a lot more work to do to flesh it out, I guess.

        Tbf SB had far fewer issues than Aeon.

      • TCB13@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        5 months ago

        Immutable distros are a solution to a real problem, and this problem is not solved by Ansible/BTRFS etc.

        Just tell me what that problem is and how it isn’t already solved with Ansible/BTRFS.

        • wolf@lemmy.zipOP
          link
          fedilink
          English
          arrow-up
          3
          ·
          5 months ago

          Some examples are pointed out above; the big thing is the ‘immutable’ bit and bit-for-bit replication, to the best of my knowledge.

          Ansible is imperative and applies changes to a starting state. Immutable distros replicate a known state 100%, which is in every respect superior and prevents nasty surprises. Immutable distros are 100% reproducible from a config file, which is a big thing for cyber security, building software etc. Debian has too many packages given the number of contributors they have. The immutable distros are mostly moving to Flatpak, which hopefully means the distros can focus their energy on a great core experience, and communities like LibreOffice can focus on creating a great Flatpak experience.
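
          For example, on NixOS the whole system is declared in one file and the tool converges the machine to exactly that state (a minimal sketch; the user name and package list are just placeholders):

          ```nix
          # /etc/nixos/configuration.nix — minimal sketch; the module options are
          # standard NixOS options, the package/user choices are illustrative.
          { config, pkgs, ... }:

          {
            # Packages, services and users are all declared here;
            # `nixos-rebuild switch` rebuilds the system to match.
            environment.systemPackages = with pkgs; [ git vim firefox ];

            services.openssh.enable = true;

            users.users.wolf = {
              isNormalUser = true;
              extraGroups = [ "wheel" ];
            };

            # Old generations are kept, so rolling back is one command:
            #   nixos-rebuild switch --rollback
          }
          ```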

          Nobody says that containers and/or immutable distros are a good solution for your specific needs and use cases, and that’s fine. For me, after using Silverblue for some time (and, btw, containers on multiple occasions), I am looking forward to jumping ship, because I like the user experience, declarative configuration is the logical next step after Ansible, and atomic updates in the background without the problems of package managers are great IMHO.

          • TCB13@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            5 months ago

            Ansible is imperative and applies changes to a starting state. Immutable distros replicate a known state 100%, which is in every respect superior and prevents nasty surprises. Immutable distros are 100% reproducible from a config file, which is a big thing for cyber security, building software etc. Debian has too many packages given the number of contributors they have.

            So does Ansible. Pick something like Alpine and destroy and recreate instances whenever you need to change your setup. Done.
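
            Roughly like this (a rough sketch; the inventory host and package names are placeholders) — run it against a freshly recreated Alpine instance each time and you always converge to the same known state:

            ```yaml
            # site.yml — sketch only; host "box" and the package list are placeholders.
            # Recreate the instance, then: ansible-playbook -i inventory site.yml
            - hosts: box
              become: true
              tasks:
                - name: Install the full package set
                  community.general.apk:
                    name: [git, vim, openssh]
                    state: present

                - name: Deploy sshd config from the repo
                  ansible.builtin.copy:
                    src: files/sshd_config
                    dest: /etc/ssh/sshd_config
                  notify: restart sshd

              handlers:
                - name: restart sshd
                  ansible.builtin.service:
                    name: sshd
                    state: restarted
            ```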

    • folkrav@lemmy.ca
      link
      fedilink
      arrow-up
      2
      ·
      5 months ago

      I don’t know of anyone with a modicum of experience with cloud solutions that would pretend it is making anything “simpler” lol

      The only closed parts of Docker are Docker Desktop, which isn’t required at all, and Docker Hub, which is a repo like any other — you can load images from anywhere. It’s hard to take anything you say regarding container technology seriously if you really think VMs & Ansible/Chef/Puppet answer the same problems as lightweight containers.

      MS did take some language servers and relicensed them, yes. Other language servers still exist, and the LSP protocol is still open, and used in many other editors.

      This reads like “Real Programmers Don’t Use Pascal”, minus the tongue in cheek tone…

      • TCB13@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        5 months ago

        You just missed the point. There are always alternatives, generally not as good, and unlike before, all tooling is now hostage to some big provider.

        • folkrav@lemmy.ca
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          Feel free to point out where I missed the point, because I don’t see it. Outside these “functions as a service” things like Lambda, I genuinely struggle to think of anything that’s truly “hostage” to a big provider, or just plain worse. Especially amongst the examples you’ve given.