The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it’s time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include (a minimal sketch of how they might fit together follows the list):

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
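
To make this concrete, here is a minimal sketch of how trust levels might gate user actions. This is not how Discourse (or any Lemmy fork) actually implements it; the level thresholds and permission sets below are purely illustrative.

```python
# Minimal sketch of a trust-level gate. Thresholds and permission sets are
# hypothetical, not Discourse's (or any Lemmy fork's) real values.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Actions unlocked at each trust level (illustrative only).
PERMISSIONS = {
    0: {"comment", "vote"},                                    # new user: sandboxed
    1: {"comment", "vote", "post"},
    2: {"comment", "vote", "post", "post_images", "edit_wiki"},
    3: {"comment", "vote", "post", "post_images", "edit_wiki", "moderate_discussions"},
}

@dataclass
class User:
    joined: datetime          # timezone-aware account creation time
    posts: int
    days_active: int
    mod_actions_against: int  # moderation actions taken against this user

def trust_level(user: User) -> int:
    """Promote users gradually by account age, activity, and behavior."""
    age = datetime.now(timezone.utc) - user.joined
    if user.mod_actions_against > 5:
        return 0              # repeat offenders go back to the sandbox
    if age >= timedelta(days=90) and user.days_active >= 30 and user.posts >= 50:
        return 3
    if age >= timedelta(days=30) and user.days_active >= 10:
        return 2
    if age >= timedelta(days=7) and user.posts >= 3:
        return 1
    return 0

def can(user: User, action: str) -> bool:
    """Gate an action on the user's current trust level."""
    return action in PERMISSIONS[trust_level(user)]
```

A brand-new account fails `can(user, "post_images")` until it ages past the (made-up) thresholds, which is the sandboxing idea above; federated reputation would amount to letting a trust level earned in another community count toward those thresholds.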

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it’s clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.

Related

  • taaz@biglemmowski.win · 14 points · 10 months ago (edited)

    I think your idea is not necessarily wrong, but it would be hard to get right, especially without making entry into the fediverse too painful for new (non-tech) people; I think that is still the number one pain point.

    I have been thinking about moderation and spammers on the fediverse lately too; these are some rough ideas I had:

    • Ability to set stricter/different rate limits for new accounts - accounts younger than X can do only A actions per N seconds [1] (with a better-explained rate-limit message on the frontend side) - a rough sketch of what I mean follows this list
    • Some ability to not “fully” federate with too fresh instances (as a solution to note [1])
    • Abuse reputation from modlog/modlog sharing/modlog distribution (not really federation) - this one is tricky. The theory is that if many moderation actions are taken against you, your “goodwill reputation” lowers (nothing to do with upvotes), and some instances could preemptively ban you or take mod action, either through automated means or (better) by giving the mods of other instances some kind of easy access to this information so that they can use it in their decisions.
      This has mostly nothing to do with bot spammers, but rather with recurring problem makers, bad-faith users, etc.
      This whole thing would require some kind of trust chain between instances, though, which is not easy development-wise (the idea could range from built-in algorithms taking in information like instance age, user count, user age and so on, to some kind of manual instance trust grading by admins).
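
    To make the first point concrete, here is a rough sketch of the kind of age-based rate limit I mean (the thresholds are made up, and this is not something Lemmy actually ships):

    ```python
    # Sliding-window rate limit that loosens as an account ages (made-up thresholds).
    import time
    from collections import defaultdict, deque

    # (minimum account age in days, (max actions, window in seconds))
    LIMITS = [
        (0,  (5, 600)),    # younger than 7 days: 5 actions per 10 minutes
        (7,  (30, 600)),   # 7-30 days old: 30 actions per 10 minutes
        (30, (120, 600)),  # older accounts: effectively normal limits
    ]

    recent_actions: dict[str, deque] = defaultdict(deque)

    def limits_for(account_age_days: int) -> tuple[int, int]:
        """Pick the loosest limit the account has aged into."""
        chosen = LIMITS[0][1]
        for min_age, limit in LIMITS:
            if account_age_days >= min_age:
                chosen = limit
        return chosen

    def allow_action(user_id: str, account_age_days: int) -> bool:
        now = time.time()
        max_actions, window = limits_for(account_age_days)
        actions = recent_actions[user_id]
        # Drop timestamps that have fallen out of the window.
        while actions and now - actions[0] > window:
            actions.popleft()
        if len(actions) >= max_actions:
            return False  # frontend should show a clear "new accounts are rate limited" message
        actions.append(now)
        return True
    ```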

    ~

    All this together, I wouldn’t be surprised if, in the future, there eventually emerge strata of instances: the free wild west with federate-to-any, and more closed-in bubbles of instances (requiring some kind of entry process for new instances).


    [1] This does not solve the other problem with federation currently being block-list based instead of allow-list based (for good reasons).
    One could write a few scripts/programs to simulate a federating instance and have tons of bots ready to go. While this exact scenario is probably not common, because most instances will defederate from the domain the moment they detect a larger amount of spam, it could still be dangerous for the stability of servers - though I couldn’t confirm whether the Lemmy federation API has any kind of limits, and I can’t really imagine how that would be implemented if federation traffic spikes a lot.

    (Also, in theory, one could have a ton of domains and subdomains prepared and just send tons of spam from those? Unless there are some limits already, afaik the only way to protect against this would be to switch to allow-list based federation.)
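
    Just to illustrate the block-list vs allow-list difference (hypothetical function names and domains, nothing from the actual Lemmy codebase):

    ```python
    # Block-list: open by default - a spammer only needs a fresh domain to get back in.
    BLOCKED_INSTANCES = {"spam.example"}

    def accepts_activity_blocklist(origin_domain: str) -> bool:
        return origin_domain not in BLOCKED_INSTANCES

    # Allow-list: closed by default - new instances need some entry process first.
    ALLOWED_INSTANCES = {"lemmy.example", "trusted.example"}

    def accepts_activity_allowlist(origin_domain: str) -> bool:
        return origin_domain in ALLOWED_INSTANCES
    ```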

    Lots of assumptions here, so tell me if I am wrong!
    Edit: Also, sorry for kind of piggy-backing on your post, OP - I wanted to get these ideas out here finally.

    • GBU_28@lemm.ee · 2 points · 10 months ago

      I’ve wondered about instance bombing before; it seems like a low-success but high-impact vector.