I’m the administrator of kbin.life, a general-purpose, tech-oriented kbin instance.

  • 0 Posts
  • 48 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • OK, one possibility I can think of: at some point, files may have been created at a location that is now a mount point, so the mount is hiding folders that still exist on the root partition.

    You can remount just the root partition elsewhere by doing something like

    mkdir /mnt/rootonly
    mount -o bind / /mnt/rootonly    # bind-mounts / alone; submounts like /home are not duplicated
    
    

    Then use du or similar to see whether the numbers more closely match the values df reports. I’m not sure whether the graphical filesystem viewer you used can see files hidden this way, so it’s probably worth checking just to rule it out.
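    For that comparison, something like this (assuming the bind mount above) sums each top-level directory while staying on the one filesystem:

```shell
# -x: don't cross filesystem boundaries, -s: summarise per argument,
# -h: human-readable sizes; sort largest-first
du -xsh /mnt/rootonly/* 2>/dev/null | sort -rh
```

    If the total here is close to df’s “used” figure but much bigger than what you measured before, something is being hidden under a mount.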

    Anyway, if you see bigger numbers in /mnt/rootonly, check the mount points (like /mnt/rootonly/home and /mnt/rootonly/boot/efi). They should be empty; if not, those are likely the files/folders being hidden by the mounts.

    When finished you can unmount the bound folder with

    umount /mnt/rootonly

    Just an idea that might be worth checking.


  • I would agree. It’s useful to know how all the parts of a GNU/Linux system fit together. But the maintenance burden, particularly security updates, can be quite heavy. So I’d advise doing it as a project, but not making real use of it unless you want to dedicate time to it going forwards.

    If you want a source-compiled system that’s actually usable day to day, Gentoo handles updates and does all of that work for you.




  • Well, yes and no. It depends on whether you consider the Linux kernel to be what makes an OS “Linux” or not.

    For any operating system there are the kernel components and user space components. The GUI in any operating system is going to be user space.

    They also suggest it’s a “minimalized” Linux microkernel. I kinda agree with this approach: why reinvent the wheel when you can cherry-pick parts of the existing Linux kernel to build your foundations? The huge caveat in my mind: the scheduler in modern OSes is what they were complaining about most, and I bet the scheduler is one of the things they took from the Linux kernel.

    As for the rest of the project, I don’t think there’s enough meat in this article to say much, and the very limited free version seems too limited to make a good assessment of how useful it would be.

    I’ll wait until I’m told I need to port X aspect of my job to DBOS to see if it became a thing or not. :P



  • But then what is a relay? If a relay doesn’t hold accounts and cannot directly ban/moderate the content it serves, then what exactly is it doing?

    I also wonder if it’s a bit of a legal minefield. See, I’m running mbin here. I get content from many other mbin/kbin/lemmy instances. Usually they have pretty good moderation, and removed content is removed on my instance too. But if someone raises a legal complaint with me directly, I’m required to act on it and moderate my own instance, which I can do. It seems like you’re suggesting that’s not directly possible with nostr? So if the main instance chooses to allow the content, it’s tough luck for me, and I’m required to host it?


  • Here’s why I think ActivityPub is probably better.

    Having multiple instances, hundreds or even thousands, spreads the load of the network. Smaller instances can curate the communities they want to subscribe to in order to limit traffic and storage. Communities can be hosted across the network too to reduce load on single instances.

    This means that, when things are done well, we could produce and serve Reddit/Twitter levels of content and availability on hobbyist-level hosting options spread across the world.



  • I mean, technically you could use an unsigned 32-bit value if you don’t need to handle dates before 1970. But yes, the best course of action now is to use 64 bits; the cost is pretty much nothing on modern systems.

    I’m just cautious of people judging software from a time with different constraints and expectations, with the current yardstick.

    I also wonder what the problem will actually be. People playing Ghost Recon in 2038 will be “retro” gaming it, and such problems should be expected by then. The question is whether it would prevent you from loading or saving the file at all.
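    The two cutoffs are easy to check with GNU date (assuming a 64-bit time_t on the machine running it):

```shell
# Signed 32-bit time_t runs out at 03:14:07 UTC on 19 January 2038:
date -u -d @2147483647
# An unsigned 32-bit counter would instead last until 06:28:15 UTC on 7 February 2106:
date -u -d @4294967295
```

    One second past the first value, a signed 32-bit counter wraps to December 1901; going unsigned only trades the pre-1970 range for another 68 years.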


  • Software isn’t poorly written just because it’s old. Likewise, the Y2K bug is often blamed on bad programming, but when the software carrying it was written, memory was measured in kilobytes, and a lot of accounting and banking software dates from when 64K was the norm. Oh, and I’ll tell you now: I know of at least some accounting software based on code written for the 8088 that has been wrapped and cross-compiled so many times it’s unrecognisable. But I know that 40-year-old code is still there.

    So two-digit years were best practice at the time; likewise, when the software now vulnerable to the 2038 bug was written, 32-bit epoch dates were best practice.

    Software written today doing the same could of course be considered bad, but it’s not a good blanket statement.


  • r00ty@kbin.life to Linux@lemmy.ml: Thoughts on this?
    6 months ago

    “…removed all of that. Linux desktop really could use a benevolent dictator that has some vision and understanding of what the average user wants.”

    It already has these. They’re called Linux distros. They decide the combination of packages that makes up the end-to-end experience, and they’re all aimed at different types of user.

    Why are none explicitly aimed at the average Windows user? I suspect there’s one major reason: the average Windows user is incapable of installing an operating system at all, and new PCs invariably come with Windows pre-installed. This isn’t a slight on them, by the way; it’s just that most computer users don’t want or need to know how anything works. They just want to turn it on, post some crap on Twitter/X, then watch cat videos. They don’t have an interest in learning how to install another operating system.

    Also, a distro aimed at an average Windows user would need to be locked down hard. No choice of window manager, no choice of X11/Wayland, no ability to install applications outside the distro’s carefully curated repository, plus MAYBE independently installed flatpaks or other pre-packaged things. Allowing otherwise creates a real risk of the system breaking on the next big upgrade. I don’t think most existing Linux users would want to use such a limiting distro.

    Unless Microsoft really crosses a line, to the extent that normal users actually want nothing to do with Windows, I cannot imagine things changing much.





  • I think in terms of GDPR, if you notify a site that provides service to EU countries (i.e. allows users from them to register, I guess) that you want something deleted, they need to comply.

    But I think in terms of federated content, you cannot be expected to do more than send information about the deletion out. If other instances don’t respect it, it’s not the originating instance’s job to police it.

    Now the user could go to these other instances and chase it up. But I wonder, if a third-party instance doesn’t allow users from EU countries, would it be required to comply at all? Federated content opens up an interesting set of scenarios that will surely test privacy laws.

    I also wonder what powers the EU has over sites in non-EU countries that allow EU users but don’t respect GDPR. What can they even do? Companies like Twitter, Facebook, and Reddit have presences in EU countries that can be pursued, but John Smith running a Lemmy instance on a $5 VPS might be out of reach.



  • I started with LinuxFT from a magazine coverdisk. I also installed it on an old 486 at the office, where it became the “internet box”. The company director at the time believed Bill Gates that the internet would be a fad and wasn’t worth investing in, so he would not put any money into the company’s internet connection. So it was an old 486 running LinuxFT, with a modem dialling out on demand, a Squid proxy, email boxes, etc. But it worked.

    After that I moved to Red Hat (before it was paid-for). I remember for sure installing RH5; it was definitely a smoother experience.

    Server-wise, I went through various distros. Once I got to Debian, I never really left the “apt” world for servers; management-wise, it’s just too easy to work with. I hop between Ubuntu and Debian even now.

    For firewalls I’ve been through ipfwadm (kernel 2.0.x), ipchains (kernel 2.2.x) and iptables (kernel 2.4.x). There is newer stuff now, nftables, but there hasn’t been a “you must change” situation like with the other two transitions, so I’ve generally stuck with iptables. Mainly because when I did try nftables, I had real trouble getting it to play nicely with QoS. Probably all fixed now, but I’m too lazy to change.
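    For what it’s worth, the two syntaxes side by side for a simple rule allowing inbound SSH (a sketch; the nftables version assumes an inet “filter” table with an “input” chain already exists, and both need root):

```shell
# iptables: append an accept rule to the INPUT chain
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables: roughly equivalent rule in the newer syntax
nft add rule inet filter input tcp dport 22 accept
```

    The nftables grammar is more uniform once tables and chains are set up, which is most of what the migration work amounts to.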

    Desktop-wise, I dual-boot Windows/Linux. The Linux side is Manjaro, and I like Manjaro because gaming generally just worked. However, I feel like after every major upgrade I’m chasing broken dependencies for far too long. When it works, though, Manjaro is great.

    I’ve also had several failed desktop experiments. I ran Gentoo way, way back, on an AMD Athlon I think. I thought it was great, building everything for my specific setup, a nice idea and all, but upgrades were so damn slow with everything compiling! I tried Ubuntu, but I never found the desktop to be any good. I also ran Red Hat back in the late 90s, but the desktop was just poor back then.