I still stand by the opinion that AI-generated alt-text can be worse than nothing in a lot of cases, as it can be (and often is) just straight-up inaccurate, or doesn't understand the context or focus of an image. For instance, it could focus on details that are not important or relevant to the purpose of the image.
Chloe (chloe@catwithaclari.net)'s status on Thursday, 09-Jan-2025 23:41:07 JST
SuperDicq (superdicq@minidisc.tokyo)'s status on Thursday, 09-Jan-2025 23:41:06 JST
@Chloe@catwithaclari.net I'm more concerned with these AI models usually being proprietary and not running on your local machine, stripping away the freedom from visually impaired computer users.
Christmas Sun (sun@shitposter.world)'s status on Thursday, 09-Jan-2025 23:55:52 JST
@Chloe If you're a government office or a human rights org, then you have a stronger moral imperative to tag images, since they're important and you're authoring the image. But almost none of the regular public images on the Fediverse are in the category where wrong or inaccurate alt text is very risky, in which case it feels better than nothing.
I did accessibility for websites at an educational company for a number of years, and blind people, for example, are used to the support being half-assed. Even broken support is better than nothing: with nothing you have no handle to grasp at all, while with bad information you may be able to use your brain to infer what's happening.