Offline communities rely on support workers, first responders, safety enforcement and so on. As online communities grow, we need to structure an equivalent human support system, one that functions transparently and outside of corporate monopolies.
But this isn’t easy to do. Moderating a global social network means accounting for different voices, different contexts, and different needs. Once the network is big enough, harmful content and behavior will inevitably appear and have to be dealt with.
Current self-hosted implementations like Diaspora, Hubzilla or Mastodon rely on the volunteer admins of each instance to do the right thing. This works as long as each instance stays small and the overall network doesn’t grow too large.
In addition to routine moderation tasks, we will also address harmful practices ranging from hate speech and bullying to human trafficking.
We will implement an API that allows instance admins to outsource moderation to us; a rough sketch of what that could look like follows below. Our moderation guidelines and principles will be informed by social research and accepted best practices. We will work transparently, with open and documented guidelines, under a trusted advisory board, to make sure that everyone has a safe place in this new online civic space.
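To make the idea concrete, here is a minimal sketch of what an instance-facing moderation API could look like, written with Flask. The endpoint paths, payload fields, and in-memory report store are illustrative assumptions, not a finished specification of our service.

```python
# Minimal sketch of an outsourced-moderation API.
# Endpoint names, fields, and storage are assumptions for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Reports forwarded by instance admins, keyed by an incrementing id.
reports = {}
next_id = 1


@app.route("/v1/reports", methods=["POST"])
def submit_report():
    """Accept a moderation report forwarded by an instance admin."""
    global next_id
    payload = request.get_json(force=True)
    report = {
        "id": next_id,
        "instance": payload.get("instance"),        # e.g. "social.example.org"
        "content_url": payload.get("content_url"),  # link to the reported post
        "category": payload.get("category"),        # e.g. "hate_speech", "bullying"
        "status": "open",                           # updated once reviewed
    }
    reports[next_id] = report
    next_id += 1
    return jsonify(report), 201


@app.route("/v1/reports/<int:report_id>", methods=["GET"])
def get_report(report_id):
    """Let the reporting admin check the status of a moderation decision."""
    report = reports.get(report_id)
    if report is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(report)


if __name__ == "__main__":
    app.run(port=8080)
```

In this sketch an admin would POST a report and later poll its status; in practice the service would sit behind authenticated, per-instance credentials and publish its decisions against the open, documented guidelines described above.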