I really wish more people acknowledged that content moderation—whether here in the #fedi or on a corporate social network like #Bluesky—is service work that requires individuals to be exposed to terrible, terrible things. The people performing content moderation are providing care work for society and deserve to be compensated accordingly, but rarely are.
If your experience online is mostly free from racism, abuse, and other forms of violent or extreme content, there's a good chance that it's because someone, somewhere, filtered it out for you.
And "AI" will not change this truth, because these systems *still* need to be trained, and that training still requires badly paid people to be exposed to terrible, traumatic content.
So when people sing the praises of corporate social networks and how their "trust and safety" policies protect users, I hope they spare a second to think about the individuals doing the actual moderation.