• 5 Posts
  • 89 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Your use-case and situation seem very close to mine, except that I specifically do not host communities.

    First of all, you can run as many services behind a single nginx as you want (or can handle). Usually you do this by giving each service its own (sub)domain and pointing them all at the same IP; nginx then proxies the requests to the corresponding service running locally on a given port (see nginx reverse proxy).
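
    For illustration, a minimal sketch of one such nginx server block (the subdomain and port are made up, TLS is omitted for brevity, and a real Lemmy config needs a few more proxy headers and routes):

    server {
        listen 80;                           # TLS setup omitted for brevity
        server_name lemmy.example.com;       # made-up subdomain for one service

        location / {
            # hand the request over to the service listening on a local port
            proxy_pass http://127.0.0.1:8536;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }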

    I would definitely recommend the docker images unless you have specific needs; afaik the ansible recipe installs and manages a docker compose project too (unless they have since added an official bare-bones ansible setup). I might be wrong here, I do docker and manage it myself, and updating is usually a file edit and two commands away.
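
    For illustration, with a docker compose setup an update usually looks roughly like this (assuming a compose file where the lemmy and lemmy-ui image tags are pinned; adjust names to your own setup):

    # edit docker-compose.yml and bump the pinned image tags, then:
    docker compose pull      # fetch the new images
    docker compose up -d     # recreate the containers with them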

    As for whether the VPS will be enough: from my monitoring, every subscribed foreign community increases the load, with bigger/more active communities increasing it more.
    The main limiting resource for my setup is disk space. Some time ago I calculated that my database grows by about 1 GB per month with about 500 subscribed communities, and that's only the PostgreSQL database size, without any media. The stats from my S3 provider (you can host images locally too) suggest I am gaining 1-5 GB of media per month.

    I don't have any metrics on how much active users drain the server, as my instance is intentionally small, but I can imagine that having 10, 100 or 1000 users active at the same time would drastically increase the load on at least postgres, as well as the bandwidth.

    And about my setup, for comparison: I am renting a dedicated server from Hetzner (AX41-NVMe) running a bunch of other services as well (minecraft server, factorio server, file sharing service, …), and over the last 30 days my monitoring reports the "average" load average (same for 1/5/15 m) at around 1 core (out of 12 threads, 6 cores * 2 SMT).
    Memory sits at about 50% monthly average out of 64 GB.
    Though, most of the services are really under-utilized (minecraft) or don't require much (factorio).

    Rule of thumb: if your users subscribe to a lot of outside communities, expect at least increased disk space consumption, and at worst also increased bandwidth and load.
    If any of your hosted communities gets popular on the wider fediverse, definitely expect increased bandwidth and load: more servers hitting your server with more data (upvotes, comments, edits, …) means nginx, lemmy and postgres also need to process more.
    At baseline there will be a lot of spiky but small chatter from other instances, and the biggest resource drain will be postgres.

    I wouldn't personally go into this with anything less than 4 vCPUs, 32 GB of RAM and non-shared/non-virtualized storage (disk latency kills postgres performance).


  • TL;DR: Not a bug, a feature; or: works as intended :)

    > …but few-to-none of the comments/votes did. Everything since subscribing is entirely in sync.

    That is by design. If every instance automatically synchronized (federated) every post and every comment from every other instance… the whole fediverse would explode? :) Well, it would at least require a lot more resources to host any/every instance.

    As for "loading history": if you take the true URL[1] of a post or comment and insert it into the search bar of your instance, it will load it (and it will become visible in the corresponding community). One problem is votes; afair lemmy does not even offer a mechanism to let other instances see all historical votes. Do not confuse this with votes that are already federated: the moment you subscribed is the moment the instance hosting that community started forwarding everything happening in that community from then on to lemmy.ml (your instance).

    [1] - "true URL" here means the one from the instance where the resource originates, i.e. whichever instance is hosting that comment/post/community; you can find it behind the little fediverse button on each non-local resource (comment/post/community).

    E: I see others beat me to it haha

  • Numbers from my instance, running for about a year and with an average of ~2 MAU. According to some quick db queries there are currently 580 actively subscribed communities (it was probably a lot fewer before I used the subscribe bot to populate the All tab).

    SELECT pg_size_pretty( pg_database_size('lemmy') ): 17 GB

    Backblaze B2 (S3) reports an average of 22.5 GB stored. With everything capped to a max of 1 USD, I pay cents; no idea how Backblaze does it but it's really super cheap, except for some specific transactions done on the bucket afaik, which pictrs does not seem to do.

    According to my zabbix monitoring, two months ago (I don't keep longer stats) the DB had only about 14 GB of data, so with this many communities I am gaining about 1.5 GB per month (it's probably a bit more, as I was recently pruning stuff from some dead instances).
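
    If you want to see where that space actually goes, a plain postgres query works (nothing lemmy-specific; the container, database and user names below are just guesses, use whatever your compose file defines):

    docker compose exec postgres psql -U lemmy -d lemmy -c "
      SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS size
        FROM pg_statio_user_tables
       ORDER BY pg_total_relation_size(relid) DESC
       LIMIT 10;"   # ten biggest tables by total size (indexes and TOAST included)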

    Prometheus says the whole lemmy service (I use traefik) is getting about 5 req/s (1 m average), though at a finer resolution it spikes a lot: up to 12 requests within a second, then nothing for a few.


  • How did you install jellyfin?

    It should not core-dump (read: hard crash, something has gone terribly wrong); at most you should be getting a configuration error or errors like that.

    You can see the logs of any systemd service/unit with journalctl -u <name of service>, so in this case journalctl -u jellyfin (tip: add -f to follow the output of a running service, useful for monitoring).

    Note that some programs log to their own files (and not to stdout), so if the above command comes out empty you should look into the /var/log/ directory.
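
    For example (the exact log path is just a guess, it depends on how jellyfin was installed and packaged):

    journalctl -u jellyfin -f     # follow the live journal output of the service
    ls /var/log/jellyfin/         # look for file-based logs if the journal comes out empty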



  • taaz@biglemmowski.win to Linux@lemmy.ml · Java uses double ram. (4 points, edited, 4 months ago)

    You could also (hard) limit the total (virtual) memory the process will use (but the system will hard-kill it if it tries to get more) with this:
    systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=0 prismlauncher

    You would have to experiment with how many Gs you want to specify as the max so that it does not get outright killed. If you remove MemorySwapMax, the system will not kill the process but will start aggressively swapping the process's memory, so if you do have swap it will keep working (and, depending on how slow the disk backing the swap is, start lagging).

    In my case I have a small swap partition on an M.2 disk (which might not be recommended?) so I didn't notice any lagging or stutters once it overflowed the max memory.
    So in theory, if you are memory starved and have swap on a fast disk, you could instead use the MemoryHigh flag to set a limit from which systemd will start the swapping, without any of the OOM killing (or use both; Max has to be higher than High, obviously).
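
    A sketch of that combined variant (the 6G/8G numbers are arbitrary, tune them for your game and machine):

    # above MemoryHigh the process gets throttled and its memory reclaimed (swapped),
    # only above MemoryMax does it get hard-killed
    systemd-run --user --scope -p MemoryHigh=6G -p MemoryMax=8G prismlauncher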


  • taaz@biglemmowski.win to Linux@lemmy.ml · Java uses double ram. (4 points, edited, 4 months ago)

    Fabric is one of many mod loaders, à la Forge. It's newer and less bulky than Forge (but afaik it already had its own drama, so now we also have a fork called Quilt; the same goes for Forge and NeoForge).

    The mods I've specified above can be considered a suite replacement for the (old) OptiFine.

    E: For example, this is all the mod loaders that modrinth (a mod hosting website, curseforge alternative) currently lists:


  • taaz@biglemmowski.win to Linux@lemmy.ml · Java uses double ram. (10 up / 1 down, edited, 4 months ago)

    As a side note and a little PSA: if you need to squeeze more overall performance out of MC (and you are playing vanilla or a Fabric modpack), I very much recommend using these Fabric mods: Sodium, Lithium, FerriteCore, and optionally Krypton (server-only), LazyDFU, Entity Culling, ImmediatelyFast.