• 1 Post
• 6 Comments
Joined 1 year ago · Cake day: June 15th, 2023

• dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive? · +6 / −4 · edited 11 days ago

My post above is 376 characters, which would have required three tweets under the original 140-character limit.

Mastodon, for better or worse, has captured a bunch of people who are hooked on the original super-short posting style, which I feel is a form of Newspeak / 1984-style dumbing down of language and discussion that removes nuance. Yes, Mastodon has removed the limit and we have better tools for discussion today, but that doesn’t change the years of training (erm… untraining?) we need to do to de-program people off of this toxic style.

    Especially when Mastodon is trying to cater to people who are used to tweets.

    Your post could fit on Mastodon

    EDIT: and second, Mastodon doesn’t have the toxic-FOMO effect that hooks people into Twitter (or Threads, or Bluesky).

People post not because short sentences are good. They post and doom-scroll because they don’t want to feel left out of something. Mastodon is healthier for you, but also less intoxicating / less pushy. It’s somewhat doomed to failure, as the very point of this short-post / short-engagement stuff is basically crowd manipulation, FOMO, and algorithmic manipulation.

Without that kind of manipulation, we won’t get the same kind of engagement on Mastodon (or Lemmy, for that matter).


• dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive? · +118 / −6 · edited 11 days ago

Because Threads and Bluesky are effective competition for Twitter.

Also, short-form content with just a few sentences per post sucks. It’s become obvious that Twitter was mostly algorithm hype and FOMO.

Mastodon tries to be healthier, but I’m not convinced that microblogs in general are that useful, especially to a techie audience that knows RSS and other publishing formats.



• dragontamer@lemmy.world to Linux@lemmy.ml · *Permanently Deleted* · +1 · edited 1 year ago

Honestly, Docker is solving these problems in a lot of real-world use.

It’s kinda stupid that so many dependencies need to be kept track of that it’s easier to spin up a (VM-like) environment just to run Linux binaries properly. But… it does work. With a bit more spit-shine / polish, Docker is probably the way to move forward on this issue.

But Docker is just not there yet. Because Linux is open source, there are no license penalties for carrying an entire Linux distro along with your binaries everywhere you go. And who cares if a binary is like 4GB? Does it work? Yes. Does it work across many versions of Linux? Yes (for… the right versions of Linux with the right versions of Docker, but yes!! It works).

Get those Docker images a bit more long-term stability and compatibility, and I think we’re going somewhere with this. Hard drives these days are $150 for 12TB and SSDs are $80 for 2TB; we can afford to store those fat binaries, as inefficient as it all feels.


I did have a throw-away line about musl problems, but honestly, we’ve already got incredibly fat Docker images lying around everywhere. Why are the OSS guys trying to save like 100MB here and there when no one actually cares? Just use glibc and stop adding incompatibilities for what are, honestly, tiny amounts of space savings.
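To make the musl/glibc point concrete, here’s a minimal C sketch (my own illustration, not something from the thread). glibc exposes an identifying macro and a version API; musl deliberately exposes neither, so the same source file takes different code paths depending on which libc the image was built against:

```c
/* Sketch: compile-time libc detection. Assumes only standard glibc
 * headers; musl intentionally defines no identifying macro, so the
 * absence of __GLIBC__ is the usual (imperfect) heuristic. */
#include <stdio.h>

#ifdef __GLIBC__
#include <gnu/libc-version.h>   /* glibc-only header */
#endif

int main(void) {
#ifdef __GLIBC__
    /* gnu_get_libc_version() exists only in glibc. */
    printf("Built against glibc %s\n", gnu_get_libc_version());
#else
    printf("Built against a non-glibc libc (likely musl)\n");
#endif
    return 0;
}
```

Build the same file on a glibc distro and on Alpine’s musl toolchain and you get two genuinely different binaries from identical source, which is exactly how the “different bugs than glibc” problem creeps into slim container images.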


• dragontamer@lemmy.world to Linux@lemmy.ml · *Permanently Deleted* · +15 / −5 · edited 1 year ago

    Because it isn’t inferior.

Ubuntu can barely run programs from 5 years ago; backwards compatibility is terrible. Red Hat was doing well, but then it dropped the ball. To have any degree of consistency, you need to wrap every app inside a Docker container and carry all of its dependencies with you (but this leads to obscure musl bugs in practice, because musl has different bugs than glibc).

For better or worse, Windows programs that depend on kernel32.dll (at the C/C++ level) have deployed consistently since the early 1990s and rarely break. C# programs have had a good measure of stability too. DirectX 9, DirectX 10, DirectX 11, and DirectX 12 all made major changes to how the hardware works, and yet all of that hardware keeps functioning automatically on Windows. You can play StarCraft from 1998 without any problems despite it being a DirectX 6 game.
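To illustrate that kernel32.dll stability (a sketch of my own, assuming nothing beyond the classic Win32 API), code written against these entry points in the early 1990s still compiles and runs on current Windows unchanged:

```c
/* Sketch: decades-old kernel32.dll calls that still work today.
 * GetTickCount() and CreateFileA() have kept the same ABI since
 * the Win32 API was introduced. Compile with any Windows C compiler. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    printf("System uptime: %lu ms\n", (unsigned long)GetTickCount());

    /* "example.txt" is a placeholder path for this illustration. */
    HANDLE h = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        CloseHandle(h);
    }
    return 0;
}
```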

Switch back over to Ubuntu land, and Wayland is… maybe working? Eventually? Good luck reaching back to programs with X.org or systemd dependencies.


Windows is definitely a better experience than Ubuntu. I think Red Hat has the right idea, but IBM is seemingly killing all the goodwill built up around Red Hat and CentOS. SUSE Linux is probably our best bet moving forward as a platform that cares about binary stability.

The Windows networking stack is also far superior for organizations. Samba on Linux works best if you have… a Windows Server instance holding the group policies and ACLs on a centralized server. Yes, $1000 software works better than $0 software. Windows Server is expensive, but it’s what organizations need to handle ~50 to ~1000 computers inside a typical office building.

Good luck deploying basic security measures in an IT department with Linux alone. The only hope, in my experience, is to buy Windows Server and then run Samba (and deal with Samba bugs as appropriate). I’m not sure I ever got a Linux-as-Windows-server setup working well. It’s not like the Linux development community understands what an ACL is in practice.
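For what it’s worth, Linux does expose an ACL API, via the withdrawn POSIX.1e draft implemented by libacl. A minimal sketch of reading a file’s ACL (my own illustration; link with -lacl on Linux):

```c
/* Sketch: dump a file's POSIX draft ACL using libacl.
 * The model has no deny entries and no audit entries, which is
 * part of the expressiveness gap versus NTFS ACLs that Samba
 * has to paper over. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/acl.h>    /* from the libacl package */

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
    if (acl == NULL) {
        perror("acl_get_file");
        return 1;
    }
    char *text = acl_to_text(acl, NULL);  /* human-readable form */
    if (text != NULL) {
        printf("%s", text);
        acl_free(text);
    }
    acl_free(acl);
    return 0;
}
```

Run it against /etc/passwd and you get entries like `user::rw-`, `group::r--`, `other::r--`; compare that to a full NTFS security descriptor and the gap I’m complaining about is obvious.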