what's especially infuriating is that this outcome is *totally obvious* to anyone who knows the first thing about language, namely that even the tiniest atom of language encodes social context. so of course any machine learning model trained on language becomes a social category detector (see Rachael Tatman's "What I Won't Build", https://slideslive.com/38929585/what-i-wont-build), & of course any such model put to use in the world becomes a social category *enforcer* (see literally any paper in the history of the study of algorithmic bias)
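
if you want to see the "detector" half of that claim for yourself, here's a minimal sketch (mine, not from the talk above): plain word embeddings, trained only to predict nearby words, already carry demographic associations you never asked for. it assumes gensim is installed and can download the `glove-wiki-gigaword-50` vectors; the word lists are just illustrative picks, not any standard test set.

```python
# Minimal sketch: social associations surfacing in off-the-shelf word embeddings.
# Assumes gensim is installed and can download "glove-wiki-gigaword-50" (~66 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Illustrative (hypothetical) word lists: occupation terms vs. name clusters
# that skew female / male in English-language text.
female_names = ["amy", "lisa", "sarah"]
male_names = ["john", "paul", "mike"]
occupations = ["nurse", "engineer", "homemaker", "programmer"]

def mean_similarity(word, names):
    # Average cosine similarity between one word and a cluster of names.
    return sum(vectors.similarity(word, n) for n in names) / len(names)

for occ in occupations:
    f = mean_similarity(occ, female_names)
    m = mean_similarity(occ, male_names)
    print(f"{occ:12s} female-assoc={f:.3f}  male-assoc={m:.3f}  gap={f - m:+.3f}")
```

nobody labeled those occupations by gender; the co-occurrence statistics of ordinary text did it for free, which is exactly the point.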