Glorified network janitor. Perpetual blueteam botherer. Friendly neighborhood cyberman. Constantly regressing toward the mean. Slowly regarding silent things.

  • 0 Posts
  • 24 Comments
Joined 6 months ago
Cake day: December 27th, 2023


  • What else am I missing?

    Large-scale manufacturers pre-installing Linux? Readily available multi-language support for home users? A coherent UI regardless of the computer and distro underneath? Billions in lobbying money spent on politicians for favorable policy crafting? Billions spent on marketing campaigns to actually sell the idea to the masses, who simply don’t care about any of your points (or about technical reasons, privacy, or anything else that might be top of mind for the current Linux userbase)?

    I’d say Linux has a good chance of capturing 5-6% of the market in the coming years if it’s lucky (I believe we’re somewhere around 4% at the moment), unless one of the big tech monopolies decides to start throwing money into it (like Google did with Android).







  • A symlink is a file that contains a reference (a text string that the operating system automatically interprets and follows) to another file or directory on the system. It’s more or less like a Windows shortcut.

    If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink still points to the now non-existent file/directory (a dangling link). Symlinks can point to files or directories regardless of volume/partition (hardlinks can’t).

    Different programs treat symlinks differently. The majority of software just treats them transparently and acts as if it’s operating on the “real” file or directory. Sometimes this has unexpected results when a program tries to determine what the previous or current directory is.

    There’s also software that needs to be “symlink aware” (like shells) and able to identify and manipulate them directly.

    You can upload a symlink to Dropbox/Gdrive etc. and it’ll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, since it can create filename conflicts, and you need to make sure it’s restored as a symlink. Most backup software is symlink-aware.
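    A quick shell sketch of that behaviour (the paths are made up for illustration):

    ```bash
    # create a target file and a symlink pointing at it
    echo "hello" > /tmp/target.txt
    ln -s /tmp/target.txt /tmp/link.txt

    ls -l /tmp/link.txt      # shows: link.txt -> /tmp/target.txt
    cat /tmp/link.txt        # transparently reads the target: hello
    readlink /tmp/link.txt   # prints the stored path: /tmp/target.txt

    # deleting the symlink leaves the target alone
    rm /tmp/link.txt
    cat /tmp/target.txt      # still prints: hello

    # deleting the target leaves a dangling symlink behind
    ln -s /tmp/target.txt /tmp/link.txt
    rm /tmp/target.txt
    cat /tmp/link.txt        # fails: No such file or directory
    ```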




  • Kinda tired of the constant flow of endless “analysis” of xz at this point.
    There’s no really good solution to “upstream gets owned by an evil nation-state maintainer” - especially when they run it as a multi-year op.

    It simply doesn’t matter what downstream does if the upstream build systems get owned without anyone noticing. We’re screwed.

    Debian’s build chroots were running Sid, so they stopped everything. They analyzed the situation and there was some work done on reproducible builds (which is a good idea for distro maintainers). Pushing out security updates when you don’t trust your build system is silly. Yeah, fast security updates are nice, but it took multiple days to reverse-engineer the exploit; this wasn’t easy.
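    The reproducible-builds idea, roughly sketched (the file names here are just illustrative): build the same source twice in independent clean environments and compare the artifacts bit for bit.

    ```bash
    # checksums of the two independently built packages should be identical
    sha256sum build-A/xz-utils.deb build-B/xz-utils.deb

    # diffoscope (packaged in Debian) gives a readable report of any differences
    diffoscope build-A/xz-utils.deb build-B/xz-utils.deb
    ```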

    Bottom line: don’t run bleeding-edge distros in prod.

    We got very lucky with xz. We might not be as lucky with the next one (or the ones in the past).



  • 0xtero@beehaw.org to Linux@lemmy.ml · thinking of trying linux,
    edited · 4 months ago

    Just go ahead and try it. You don’t really need our permission to do that. Most distros support a “live” session booted directly from the installation media, without making any changes to your system. If you don’t like it, reboot and you’re back to whatever you had before.
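    If you still need to make the boot media, writing the ISO to a USB stick looks roughly like this (the ISO name and /dev/sdX are placeholders - double-check the device with lsblk so you don’t overwrite the wrong disk):

    ```bash
    # identify the USB stick first; /dev/sdX below is a placeholder
    lsblk

    # write the downloaded ISO to the stick (this wipes the stick)
    sudo dd if=~/Downloads/distro.iso of=/dev/sdX bs=4M status=progress oflag=sync
    ```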

    Have fun!

    And to answer your double negation questions, yes and yes.






  • I regularly remote into in order to manage, usually logged into KDE Plasma as root. Usually they just have several command line windows and a file manager open (I personally just find it more convenient to use the command line from a remote desktop instead of directly SSH-ing into the system)

    I’m not going to judge you (too much), it’s your system, but that’s an unnecessarily risky setup. You should never need to log in to a root desktop session like that, even for convenience reasons.

    I hope this is done over a VPN and that you have 2FA configured on the VPN endpoint? Please don’t tell me it’s just a port forward straight to a VNC server running on the machines, or something similar, because then you have bigger problems than a random ‘oops’.

    I do also remember using the browser in my main server to figure out how to set up the PiHole

    To be honest, you’re most probably OK - malicious ad campaigns aren’t normally running 24/7 globally. The chances of you randomly stumbling into a malicious drive-by exploit are quite small (normally they redirect you to install fake add-ons/updates, etc.), but of course it’s hard to tell because you don’t remember what sites you visited. Since most of this has gone through the PiHole filters, I’d say there’s an even smaller chance of getting insta-pwned.

    But have a look at the browser history on the affected root accounts; the sites, along with timestamps, should be there. You can also examine your system logs and correlate events with your browser history - look for weird login events or anything that doesn’t look like “normal usage”. If you’re really paranoid, you can set up some network monitoring (like Security Onion) on your router’s SPAN port and see if there are any anomalous connections when you’re not using the system. You could also consider setting up ClamAV and doing a scan.
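    A few concrete starting points, assuming a fairly standard systemd-based distro (unit and path names may differ on yours):

    ```bash
    # recent SSH/login activity - look for source IPs and times you don't recognize
    # (the unit is "ssh" on Debian/Ubuntu, "sshd" on most other distros)
    journalctl -u ssh --since "-30d" | grep -iE "accepted|failed"
    last -a | head -n 50     # recent logins recorded in wtmp
    lastlog                  # last login per account, including root

    # quick malware sweep with ClamAV
    sudo freshclam                          # update virus signatures first
    sudo clamscan -r -i /home /root /tmp    # -r recursive, -i print only infected files
    ```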

    You’re probably OK and that’s just paranoia.

    But… having mentioned paranoia… now you’ll always have that nagging lack of trust in your system that won’t go away. I can’t speak to how you deal with that, because it’s all about your own risk appetite and threat model.

    Since these are home systems, the potential monetary damage from downtime and a reinstall isn’t huge, so personally I’d just take the hit and wipe/reinstall. I’d learn from my mistakes and build it all up again with better routines and hygiene. But that’s what I’d do. You might choose to do something else, and that might be OK too.


  • 0xtero@beehaw.org to Linux@lemmy.ml · SystemD is still too raw
    edited · 6 months ago

    Just waiting for systemd-kernel to replace the “old archaic” Linux kernel. j/k j/k j/k don’t block me yet!

    I used to be very much against systemd, and I still don’t like its interdependencies with everything (as highlighted in the OP), but at the same time - this is a decade-old discussion, everything worth saying was already said back then, and nothing has really changed much.

    Most popular distros adopted systemd and that’s that; since then we’ve kept piling more eggs into that same basket. There are other options (distros) available if you don’t like it, but most of the “Linux community” chose systemd, and that’s where we’ve been for a decade.

    I don’t really have strong opinions these days - systemd boots my computer and most days I don’t need to know about it. I still have to check the manpages for usage because the flags are archaic as hell, but that’s more of a “me problem” than a problem with the software.
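    To illustrate what I mean, the kind of thing I end up looking up (the unit name is just an example):

    ```bash
    systemctl status nginx.service     # state + recent log lines for one unit
    systemctl list-units --failed      # anything that didn't come up
    journalctl -u nginx.service -b     # logs for that unit since the current boot
    systemd-analyze blame              # which units slowed down the boot
    ```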

    I am worried about IBM though. The steps Red Hat has been taking under their new mothership have been worrying, and I have a feeling we “parasites” (as the RH CEO called us) might have just seen the beginning of this new strategy.

    This isn’t a systemd-specific fear, but while “well, we’ll just fork it” is a nice thought, I’m not sure “we” have the resources and money to continue maintaining it.

    Anyway, that’s just idle speculation on my part. Systemd discussions tend to be about as constructive as “vi vs. emacs”. Sides have been picked. Time has passed.

    It is what it is.


  • Hmm… the ProtonVPN team solved this in a better way. They put the repo configuration stuff into a DEB file, so it’s just a matter of double-clicking it and clicking install.

    I was wondering how they’d handle signature checking and key installation - and looking at their page, they seem to recommend skipping package signature checks, which, to be honest, isn’t a super good practice - especially if you’re installing privacy software.

    Please don’t try to check the GPG signature of this release package (dpkg-sig --verify). Our internal release process is split into several parts and the release package is signed with a GPG key, and the repo is signed with another GPG key. So the keys don’t match.

    I get that it’s more user-friendly - and they provide checksums, so it’s not a huge deal, especially since these are not official Debian packages - but package signing has been around since 2000, so it’s a pretty well-established procedure at this point.
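    For what it’s worth, you can at least pin the download to their published checksum before installing (the filename below is a placeholder - use whatever their download page actually serves):

    ```bash
    # print the SHA-256 of the downloaded .deb and compare it against the checksum
    # published on the download page (or feed both to "sha256sum --check")
    sha256sum protonvpn-release.deb

    # install the repo-setup package; after that, apt verifies the repository
    # signature with the key that the package installed
    sudo apt install ./protonvpn-release.deb
    sudo apt update
    ```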