@ari @micr0 @Makura To give a concrete example: unless they were trained on data from which all gendered words were intentionally stripped, image labeling models will necessarily misgender some of the people in the images they describe, based on learned biases about gender roles or gender presentation.
They'll also make the inverse error, e.g. mislabeling a doctor as a nurse because the person presents as feminine. This direction of error is likely to have racist aspects too.
All of this is a very legitimate reason to hate "AI" even in "benign" and local contexts.