Rich Felker (dalias@hachyderm.io), Friday, 10-Jan-2025 12:18:46 JST:
@ari @micr0 @Makura These are all harmful in that they normalize a harmful practice and regurgitate biases (some of them hateful and oppressive) from the training data into the output. But some of the reactions (like DDoS) are disproportionate and unjustifiably harmful too.
Rich Felker (dalias@hachyderm.io), Friday, 10-Jan-2025 13:33:57 JST:
@ari @micr0 @Makura To give a concrete example: unless a model was trained on data where all use of gendered words was intentionally stripped, image-labeling models will necessarily misgender people in the images they describe, based on biases about gender roles and gender presentation.
They'll also err in the other direction, e.g. mislabeling a doctor as a nurse because of feminine appearance. That direction of error is likely to have racist aspects too.
All of this is a very legitimate reason to hate "AI" even in "benign" and local contexts.
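A minimal sketch of how this failure mode can be surfaced: caption a set of images with an off-the-shelf model and count the gendered terms it volunteers without ever being asked to gender anyone. It assumes the Hugging Face transformers library; the checkpoint name and image file names are placeholders, not anything referenced in the thread.

```python
# Audit the gendered words an image-captioning model emits unprompted.
# Model id and image paths below are illustrative assumptions.
import re
from collections import Counter

from transformers import pipeline  # pip install transformers pillow torch

# Any image-captioning checkpoint works here; this is a common public one.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

GENDERED = re.compile(
    r"\b(man|men|woman|women|boy|girl|he|she|his|her|male|female)\b", re.I
)

def audit(image_paths):
    """Count the gendered terms the model volunteers for each image."""
    counts = Counter()
    for path in image_paths:
        caption = captioner(path)[0]["generated_text"]
        for word in GENDERED.findall(caption):
            counts[word.lower()] += 1
        print(f"{path}: {caption}")
    return counts

# Hypothetical file names: photos of doctors of varying gender presentation.
# If captions skew toward "nurse" for feminine-presenting subjects, that is
# exactly the bias described in the post above.
# print(audit(["doctor_01.jpg", "doctor_02.jpg"]))
```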
artfulrobot (artfulrobot@fosstodon.org), Sunday, 19-Jan-2025 18:01:25 JST:
@dalias @ari @micr0 @Makura Also, quoting only the electricity consumed while a model is in use is misleading.
How much water and power went into creating Gemini? That needs accounting for too.
And when building dependency on billionaires' products, you have to consider the harm those companies will do thanks to projects normalising and (indirectly) funding them, e.g. Google recently abandoned its CO2 reduction targets because of AI.
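A back-of-the-envelope sketch of the accounting being asked for here: the energy attributable to one query should include an amortized share of the training run, not just the inference cost. All numbers below are hypothetical placeholders for illustration, not measurements of Gemini or any other real model.

```python
# Per-query energy including an amortized share of the training run.
# Every figure used here is a made-up placeholder, not real data.

def energy_per_query_wh(inference_wh: float,
                        training_mwh: float,
                        lifetime_queries: int) -> float:
    """Inference energy plus the training run amortized over every
    query the model is expected to serve, in watt-hours."""
    training_wh = training_mwh * 1_000_000  # 1 MWh = 1,000,000 Wh
    return inference_wh + training_wh / lifetime_queries

# Hypothetical: 0.3 Wh per query at inference, a 10,000 MWh training run,
# 10 billion lifetime queries. The amortized training share alone is
# 1 Wh per query, dwarfing the 0.3 Wh inference-only headline figure.
print(energy_per_query_wh(0.3, 10_000, 10_000_000_000))  # -> 1.3
```

The design point matches the post: an inference-only figure understates the true per-query footprint whenever the amortized training term is of comparable or larger magnitude.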