the best way to learn is by doing!
I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
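If it helps, the whole thing boils down to surprisingly few moving parts. A minimal sketch of standing up one end of a tunnel (interface name, addresses, and keys are placeholders; assumes wireguard-tools is installed and you're running as root):

```bash
#!/usr/bin/env bash
# Generate a keypair and bring up a bare-bones WireGuard interface.
set -euo pipefail
umask 077

wg genkey | tee /etc/wireguard/wg0.key | wg pubkey > /etc/wireguard/wg0.pub

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = $(cat /etc/wireguard/wg0.key)

[Peer]
# Your laptop/phone -- paste its public key here.
PublicKey = <peer-public-key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0
```

Everything past that is just adding peers and deciding what AllowedIPs should cover.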
Vyatta and Vyatta-based systems (EdgeRouter, etc.) are, I would say, good enough for the average consumer. If we're deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs. Tailscale, I think we're certainly past accepting a budget consumer router as acceptably meeting these and other needs.
Also, you don't need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on network SSID. At home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.
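On the laptop side, a NetworkManager dispatcher hook is one way to do that kind of switching; something along these lines (SSID and profile name are made up, and it assumes the wg-quick profile already exists):

```bash
#!/usr/bin/env bash
# /etc/NetworkManager/dispatcher.d/90-wg-switch  (illustrative path)
# Bring the remote profile up when away from home, down when on the home SSID.
IFACE="$1" ACTION="$2"
HOME_SSID="MyHomeWifi"

[ "$ACTION" = "up" ] || exit 0

ssid="$(nmcli -t -f active,ssid dev wifi | awk -F: '$1 == "yes" {print $2; exit}')"

if [ "$ssid" = "$HOME_SSID" ]; then
    wg-quick down wg-remote 2>/dev/null || true   # local traffic, no tunnel needed
else
    wg-quick up wg-remote 2>/dev/null || true     # everything rides DDNS/port-forward
fi
```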
If you're really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself using an external VPS as a relay. That way you don't have to open anything directly, and internal traffic still routes when you don't have an internet connection at home. It's basically what Tailscale is, except in this case you control the keys and have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
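The shape of the relay setup, for the curious (every key, IP, and hostname below is a placeholder): the VPS is the only node with a public endpoint, and everything else dials out to it and keeps the NAT mapping alive.

```bash
# On the VPS (needs net.ipv4.ip_forward=1 so it can relay between peers):
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.9.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# Home server
[Peer]
PublicKey = <home-public-key>
AllowedIPs = 10.9.0.2/32, 192.168.1.0/24

# Phone
[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.9.0.3/32
EOF

# On each internal peer -- no open ports at home, just an outbound connection:
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.9.0.2/24
PrivateKey = <home-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.9.0.0/24
PersistentKeepalive = 25
EOF
```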
Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) does not have the same limitation. You should just deploy WireGuard yourself; it's not as scary as it sounds.
Fail2ban and containers can be tricky because, under the hood, you'll often have container policies automatically inserting themselves above host policies in iptables. The Docker documentation has a good write-up on how to solve it for their implementation:
https://docs.docker.com/engine/network/packet-filtering-firewalls/
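The short version is that bans have to land in the DOCKER-USER chain, because Docker's own forwarding rules get evaluated before anything you put in INPUT. Roughly like this (the IP and jail settings are just examples, not a drop-in config):

```bash
# Manual ban that actually applies to published container ports:
iptables -I DOCKER-USER -s 203.0.113.45 -j DROP

# For fail2ban itself, the usual trick is pointing its iptables ban action
# at that chain in jail.local (sketch only):
cat >> /etc/fail2ban/jail.local <<'EOF'
[myservice]
enabled = true
port    = 8080
chain   = DOCKER-USER
EOF
```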
For your use case specifically: if you're using VMs only, you could run it within any VM that is exposing traffic, but for containers you'll have to run fail2ban on the host itself. I'm not sure how LXC handles this, but I assume it's probably similar to Docker.
The simplest solution would be to just put something between your hypervisor and the internet physically (a Raspberry Pi-based firewall, etc.).
+1 for Checkmk. Been using it at work for an entire data center + thousands of endpoints, and I also use it for my 3-server homelab. It scales beautifully at any size.
Are you maybe thinking of https://distr1.org/ made by the i3 guy?
Generally the lifecycle with this sort of thing is old_thing becomes an alias to new_thing, and eventually old_thing gets dropped as an alias down the line.
It's still decent advice to learn the native dnf calls and to update scripts that use yum to those native calls.
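The translations are mostly one-to-one, e.g.:

```bash
sudo dnf check-update        # instead of: yum check-update
sudo dnf upgrade -y          # instead of: yum update -y
sudo dnf install -y htop     # instead of: yum install -y htop
sudo dnf autoremove          # clean out orphaned dependencies
```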
I'm a big fan of tiling window managers like i3 or awesome (AwesomeWM). Awesome is the one I use: it's tiling, and the entire interface is built from scripts that you're encouraged to modify. Steep learning curve, but once you get it how you like it, there's nothing like it.
I support your position in principle, but I canceled my own Nitro when they did the Android app redesign. It went from really snappy (respecting system animation scale settings) to completely ignoring them. It feels like molasses compared to every other phone app that operates at the system-set 0.25x animation scale.
They also completely broke foldable support, and if your device changes aspect ratios inside a chat, you have to restart your client to get it to behave correctly again.
The enshittification is real and I am voting with my wallet.
XMPP is, by design, an extensible protocol. There just doesn't seem to be any motivation to develop for it.
I certainly wasn't just born good at this. Unironically, if you want to learn how something works, try to automate it. By the time it's automated, you'll understand basically every part of it, at least at a high level.
I have condensed almost all of my workflows into pure bash scripts that will run on anything from bare metal to a VM to a Docker container (to set up and/or run an environment). My Dockerfiles mostly just run bash scripts to set up environments, and then run functions within those same bash scripts to do whatever they need to do. That process is automated by the bash scripts that built my main host.

For the very few workflows I have that aren't quite as appropriate for straight Docker (WireGuard, for example), I use libvirt to automate building and running virtual machines as if they were ephemeral containers. Once the abstraction between container and VM is standardized in bash, the automation doesn't really need to care which is which; it just calls start/stop functions that change based on what the underlying tech is.

Because of that, I can have the canary system build and run containers/VMs in a sandbox, run unit tests, and return whether or not they passed. It does that via cron once a week and then supplants all the running containers with the canary versions once unit tests pass.
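The abstraction layer itself is less impressive than it sounds. Stripped down to the pattern (these aren't my actual scripts, and the image name and test hook are made up), it's basically:

```bash
start_workload() {
    local name="$1" backend="$2"
    case "$backend" in
        docker)  docker run -d --name "$name" "local/${name}:canary" ;;
        libvirt) virsh start "$name" ;;
    esac
}

stop_workload() {
    local name="$1" backend="$2"
    case "$backend" in
        docker)  docker rm -f "$name" ;;
        libvirt) virsh shutdown "$name" ;;
    esac
}

canary_cycle() {
    # Build/start the canary copy, run its unit tests, and only return success
    # if they pass; the weekly cron promotes workloads that return 0 here.
    local name="$1" backend="$2" rc=1
    start_workload "${name}-canary" "$backend" &&
        run_unit_tests "${name}-canary" && rc=0   # run_unit_tests: per-workload hook
    stop_workload "${name}-canary" "$backend"
    return "$rc"
}
```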
Basically, I got sick of reinventing the wheel every time a new technology came out and eventually boiled everything down into bash so that it'll run on anything it needs to. Maybe Podman in userland becomes the new hotness next year, or maybe I run full-fat k8s like I do at work. Pure bash lets me have control over everything, see how everything goes together, and make minor modifications to accommodate anything I need it to.
It sounds more complicated than it really is; it took me like a week of evenings to write, and it's worked flawlessly for almost a year now. I also really, really, really hate clicking things by hand lol, so I automate anything I can. Since switching off Proxmox, this is the first environment that I have entirely automated, from bare metal to fully running, in a single command.
I’m incredibly lazy; it’s one of my best qualities.
Virtual machines also exist. I once got bit by a Proxmox upgrade, so I built a Proxmox VM on that Proxmox host, mirroring my physical setup, which ran a Debian VM inside the paravirtualized Proxmox instance. They were set to canary upgrade a day before my bare-metal host. If the canary Debian VM didn't ping back to my update script, the script would exit and email me, letting me know that something was about to break in the real upgrade process. Since then, even though I'm no longer using Proxmox, basically all my infrastructure mirrors the same philosophy: all of my containers/pods/workflows canary build and test themselves before upgrading the real ones I use in my homelab “production”. You don't always need a second physical copy of hardware to have an appropriate testing/canary system.
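The gate itself was nothing fancy; something in the spirit of this (flag path and mail address are placeholders, and it assumes a working local mail setup):

```bash
#!/usr/bin/env bash
# Runs on the bare-metal host a day after the canary upgrades itself.
CANARY_FLAG="/var/run/canary-upgrade-ok"   # the canary VM touches this over ssh when healthy

if [ ! -f "$CANARY_FLAG" ]; then
    echo "Canary never reported back; skipping the real upgrade." \
        | mail -s "Upgrade blocked" admin@example.com
    exit 1
fi

rm -f "$CANARY_FLAG"
apt-get update && apt-get -y full-upgrade   # the real upgrade only happens past the gate
```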
Generally, end-user applications like Firefox will be the latest/same version, but system libraries might be a few versions different. Security patches are usually written for a few major versions of libraries/daemons at the same time, so features might differ but it's all the same security for the most part.
That's the major distinction between one distro and another: they have different philosophies on what to include and which major versions to use. Debian, for example, is much more reluctant to upgrade something unless there's a large demand for a new feature; the theory is that it's more stable and consistent to use that way.
Ubuntu, on the other hand, ships much more modern versions of libraries because it wants to be more hip and current, expecting users to learn new things more often, because it thinks the new features are worth it and it wants to support all the things.
Yes, but they use different repositories with different maintainers. Think of a package manager like Steam, Epic, etc., except instead of games it's everything. Some package managers get different applications; some have different versions of the same applications. In the case of Debian/Ubuntu, it's more like Steam in China vs. Steam in the rest of the world: same Steam, different games, and different maintainers deciding which games get to go in which Steam.
This still doesn't solve the issue of underlying kernel feature and function compatibility. 99% of the time when I have an issue getting something to work, it's because of something like my LTS kernel not supporting flock(), etc.
This only solves competence issues; it does nothing to resolve the difficult compatibility problems.
How did they manage to just take the worst of both and put them together?
Rsync is more “copy on steroids” than “backup utility”. Many people use it as a backup tool because it allows very lightweight syncs between a source and a destination, but it has no concept of snapshots or restores; it's just copying files. You'd have to build a snapshot system around rsync. It's not the solution you think you're looking for, but by the time you figure out how to use it, it's the solution you probably always wanted, if that makes any sense.
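For reference, the usual way people build a snapshot system around rsync is hard-linked, dated copies via --link-dest, roughly like this (paths are just examples):

```bash
SRC="/home/"
DEST="/mnt/backup"
TODAY="$(date +%F)"

# Files unchanged since the last snapshot become hard links, so each
# snapshot only costs the space of what actually changed.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"

ln -sfn "$DEST/$TODAY" "$DEST/latest"
```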
I run Ubuntu Server's headless base install with a self-curated minimal set of GUI packages on top of it (X11, awesome, pulse, thunar), but there's no reason you couldn't install KDE with Wayland. Building the system yourself gets you really far in the anti-bloatware department, and the breadth of wiki/Google/GPT content based around Debian/Ubuntu means you can figure out just about any issue. I do this on a ~$200 random old Dell from eBay + a 3050 6GB (slot power only).
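For reference, that "self-curated minimal set" amounts to little more than something like this on top of the server install (package names from memory, adjust for your release, plus whatever nvidia-driver-* metapackage fits the 3050):

```bash
sudo apt install --no-install-recommends \
    xorg xinit awesome pulseaudio thunar
```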
For lighter gaming I'll use the Ubuntu PC directly, but for anything heavier I have a Win11 PC in the basement that has no other task than to pipe Steam over Sunshine/Moonlight.
It is the best of both worlds.