Just say it's a labor dispute. "Stolen datasets" rings hollow if you're using ML upscalers, ML translation software, and ML face tracking software - most of which have also been trained non-consensually and/or in violation of the expectations of the data's authors or rightsholders.
It's hard for me to agree with the current "AI" "art" hate train for this reason. I've never seen people react to literal art theft the way they react to LLM-generated content. I've heard of people complaining about "AI" on piracy websites that reproduce the work of the very independent artists they claim to stand with.
People are even complaining about use cases where using someone else's artwork en masse was already accepted - like inspiration and references. I don't understand it: to me, LLMs are best understood as highly imprecise, hallucinatory search engines, and I consider their output similarly "tainted" in a copyright sense.
Making the discourse about copyright will backfire. Adobe is already trying to get their LLMs cleared, and companies like Apple and Getty are in a position to do the same. The thing about copyright law is that the house always wins.
It's a LABOR dispute. The problem is the erosion of creative LABOR, not copyright. Almost nobody minded the copyright-violation potential (waifu2x, all the VTuber tracking programs derived from datasets restricted to non-commercial use, etc.) until the LABOR was threatened.