This specific example is on my mind in part because of reading @kissane's article on Facebook's role in the genocide of the Rohingya in Myanmar. One of the things it mentions is that Facebook's internal apparatus for what we might call moderation was its "bullying-focused 'Compassion Team'". Like many social media platforms constructed by the sorts of people who construct social media platforms, Facebook construed the problem of moderation as one of preventing or discouraging interpersonal conflict on the platform.
But the problem unfolding in the Burmese-language parts of Facebook was not people disagreeing or expressing conflict with one another. It was their *agreeing* with one another.
Agreeing to go kill their neighbors.
Apparently, this was not even on Facebook's radar.
🧵