NixOS […] some packages are kinda old
Fair
that server will be going back to debian next summer.
I don’t think that will solve the “some packages are kinda old” issue.
I have full IPv6; none of the ports I haven't explicitly whitelisted in the firewall can be accessed from the Internet. I can open a host completely, but that's not the default. This is on the most common brand of consumer router here.
Just because it’s not NATted doesn’t mean there’s no firewall in place.
My router will still block all ports not explicitly allowed for each host, regardless of protocol; it's a firewall after all, not just NAT. Just because the host is addressable doesn't mean its ports are reachable.
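For anyone who wants to verify this themselves, here's a minimal sketch (Python, with a placeholder address and port, run from a machine *outside* your network) that checks whether a non-whitelisted port is actually reachable:

```python
import socket

# Hypothetical values: replace with your host's public IPv6 address and a
# port you have NOT whitelisted in the router's firewall.
HOST = "2001:db8::1"
PORT = 22

# Attempt a TCP connection from outside the network; on a router that
# firewalls by default, this should time out or be refused even though
# the host itself is globally addressable.
try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST} port {PORT} is reachable")
except OSError as exc:
    print(f"{HOST} port {PORT} is not reachable: {exc}")
```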
Testing is actually mandatory; what's not mandatory, though, is doing it before deploying.
what’s feurking
An optional step in the development process
Emacs? When there's ed? Talk about bloat…
Personally I'd love to see wider usage of S/MIME and/or PGP.
I'd rather see less. https://www.latacora.com/blog/2019/07/16/the-pgp-problem/ is a good summary of the problems, and they have a shorter follow-up post on why encrypting mail in general is a bad idea at https://www.latacora.com/blog/2020/02/19/stop-using-encrypted/
What I take issue with at Actalis is that they don't just sign your public key; you actually receive the private key from them. It then depends on how much you trust the issuer.
By definition, that key can no longer be considered “private”.
Could be the kernel itself
That wouldn't make sense to me, because the thread says GNU/Linux and others, though it could relate to Android or to distros not using any GNU components.
gnupg
It's usually not exposed to the network, though. But it's generally a mess, so I wouldn't be too surprised.
Another candidate I have in mind is ntpd, but again that is usually not easily accessible from outside and not used everywhere, as stuff like systemd-timesyncd exists.
Just want to stress that I'm not sure about it being OpenSSH; it was meant more as a fun guess than a certain prediction.
Since this affects Linux and others, I’m guessing this is about OpenSSH. But I’m not very certain. Just can’t think of another candidate.
But holy sh, if your software has been running on everything for the last 20 years…
This doesn’t sound like glibc as someone in the thread guessed.
I was also with a provider that didn't offer API access for the longest time. When they then increased prices, I switched; I'm now paying a third of their asking price per year at a very good provider.
I guess migrating is difficult if the provider doesn't offer a mechanism to either dump the DNS records to a file or perform a zone transfer (the latter being part of the standard).
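For reference, a zone transfer (AXFR, standardized in RFC 5936) can be scripted when the provider's name server allows it. A minimal sketch using dnspython, with placeholder server and zone names; note that most providers only permit AXFR from whitelisted addresses:

```python
import socket

import dns.query
import dns.zone

NAMESERVER = "ns1.example-provider.net"  # hypothetical provider name server
ZONE = "example.com"                     # hypothetical zone

# dns.query.xfr wants the server's address, so resolve the name first.
addr = socket.gethostbyname(NAMESERVER)

# Request a full zone transfer; this only succeeds if the server permits
# transfers from your IP, which many providers disable by default.
zone = dns.zone.from_xfr(dns.query.xfr(addr, ZONE))

# Dump the zone in standard master-file format, suitable for importing
# at another provider.
print(zone.to_text())
```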
Can only recommend INWX for domains, though my personal requirements aren’t the highest.
A lot of paid cert providers were not so great before LE put the spotlight on the issue; it was more of a scheme to extract money from operators who couldn't afford to not offer TLS / SSL. https://bugzilla.mozilla.org/show_bug.cgi?id=647959 was a famous post that made fun of / criticized the system before LE. This hurt security, and LE wouldn't have worked if it weren't free.
Also, wildcard certificates are more difficult to automate with Let's Encrypt.
They are trivial with a non-garbage domain provider.
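For context: a Let's Encrypt wildcard cert requires the DNS-01 challenge, meaning the client has to publish a TXT record at _acme-challenge.<domain> on every issuance, which is only painless if the provider exposes an API for record updates. A small sketch (dnspython, placeholder domain) to check whether such a record is currently visible:

```python
import dns.resolver

DOMAIN = "example.com"  # hypothetical domain

# During an active DNS-01 challenge, the CA expects a TXT record here
# containing the token digest it handed out; automation means creating
# and removing this record through the provider's API on every renewal.
for rdata in dns.resolver.resolve(f"_acme-challenge.{DOMAIN}", "TXT"):
    print(rdata.to_text())
```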
If you want EV certificates (where the cert company actually calls you up and verifies you're the company you claim to be), you also need to go the paid route.
The process however isn’t as secure as one might think: https://cyberscoop.com/easy-fake-extended-validation-certificates-research-shows/
In my experience, trustworthiness of certs is not an issue with LE. I sometimes check websites' certs, and if I see they're LE, I'm more like "Good for them".
Basically, an LE cert says "we were able to verify that the operator of this service you're attempting to use controls (parts of) the domain it claims to be part of". Nothing more, nothing less. In most cases that's enough to secure the connection. It's possibly even a stronger guarantee than what some sketchy cert providers offered in the past, which was more like "we were able to verify that someone sent us money".
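If you want to do that check from a script instead of the browser UI, here's a minimal sketch using only Python's standard library (placeholder hostname) that prints who issued a site's certificate:

```python
import socket
import ssl

HOST = "example.com"  # hypothetical target site

# Open a TLS connection and pull the validated peer certificate.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# The issuer field tells you who signed the cert (e.g. Let's Encrypt);
# the subject only ever attests to the domain name, as described above.
print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
print("subject:", dict(pair[0] for pair in cert["subject"]))
```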
I, a systems guy, have an easier time learning Go than Nix packages.
Go is a simple and elegant imperative language (which does come with its own downsides); Nix the DSL is a functional language, which requires a different way of thinking. Systems are usually operated imperatively, so it's normal that you'd find Go easier.
It's not an easy language at all, and one might ask whether another language wouldn't do the job better, which is what Guix System kind of explores, but Nix's design goals make a lot of sense.
NTSYNC is one example; I don't know what the current progress is: https://lore.kernel.org/lkml/20240124004028.16826-1-zfigura@codeweavers.com/
It was supposed to be in 6.10; I don't know if that actually happened.
For most network shares I use /mnt/$server.
I use /mnt/$proto/$server, though that level of organization was probably overkill. Whatever…
I do /volumX for additional hard drives.
A good first approximation.
So where in this setup would you mount a network share? Or an additional hard drive for storage? The latter is neither removable nor temporary. Also, /run is quite a bit more than what this makes it seem (e.g. user mounts can be located there), and there is practically only one system path for executables (/usr/bin)…
Not saying that the graphic is inherently wrong or bad, but one shouldn't think it's the be-all and end-all.
The title says “bcachefs-tools”, the linked kernel thread that the comment referred to was about the bcachefs kernel part and did not touch the bcachefs userspace tools. Debian says they can’t package with these pinned dependencies and explains why. Kent says relaxing dependencies breaks the programs.
The only hint at the other topic I see is this:
(not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit)
I guess this is about https://www.reddit.com/r/bcachefs/comments/1em2vzf/psa_avoid_debian/, and while I think the title is too broad, the actual message is
If you’re running bcachefs, you’ll want to be on a more modern distro - or building bcachefs-tools yourself.
I don't consider Kent's reasoning (also further down the thread) a rant; it might not be the most diplomatic, but he's not the only one who has problems with Debian's processes. The xscreensaver developer is another one, for similar reasons.
I think, in fairness, bcachefs and Debian currently aren't a good fit. bcachefs is also in the kernel so users can test it and report issues, but it wasn't meant to be stable; it's meant to not lose data unrecoverably.
Anyhow, while I think he's also not the easiest person on the LKML, I don't consider him to be ranting there; and since the author's judgement and mine differ on these points, I'm led to believe that we might also disagree on what qualifies as hostile.
Lastly, while I'm not a big fan of how Rust packaging works, it ensures that the program is built exactly the same on the developer's machine and on everyone else's (users and distributors); it is somewhat ironic to see Debian complain about that, since they understand the importance of reproducibility.
You must have missed the last half of the post then. Especially the last two paragraphs.
There isn't much more to that issue than that sentence, while all the other paragraphs cover the packaging. It's tangential at best.
Who hates ChromeOS? Never heard someone say that