Adam Greenfield (adamgreenfield@social.coop)'s status on Tuesday, 26-Mar-2024 18:09:32 JST:

I’m going to make explicit something I’ve been practicing for a while now: I will not boost posts that contain machine learning- (“AI”) generated images, just as surely as I will not boost those without alt text, and I encourage you not to do so either.
Christine Malec (christinemalec@mstdn.ca)'s status on Tuesday, 26-Mar-2024 18:09:30 JST:

@adamgreenfield Genuinely curious, because I'm blind: is it always obvious that an image is generated by machine learning?
Adam Greenfield (adamgreenfield@social.coop)'s status on Tuesday, 26-Mar-2024 18:09:30 JST:

@ChristineMalec That is a super-interesting question. I would say that it is *not* always obvious, which is why such images can cause such social havoc – think of deepfake pornography, or faked images of politicians doing something problematic – but the high-fidelity ones tend to be generated by the paid, “pro” versions of the software. What we tend to see as post illustrations, on the other hand, does bear certain consistent telltales.
Adam Greenfield (adamgreenfield@social.coop)'s status on Tuesday, 26-Mar-2024 18:40:17 JST:

@ChristineMalec The best way I can describe it is by analogy to the image processing technique known as HDR, or high dynamic range: everything in the image is just a little too intense, unnaturally bright and so on. With the free versions of the software, as well, there are frequently even more obvious signs: figures with extra or missing fingers or limbs, text that looks like alien hieroglyphics, etc. Hope that’s helpful.