@mnl @sam @forestine Regardless of how they market the tool, unless they can clearly demonstrate that it was built with consent, it absolutely was not. That's just the truth of the industry.
And even if they've somehow managed to build a useful AI tool that isn't trained on plagiarized work, there's still the problem of LLMs consuming significantly more power than basically anything else we do with computers. Boiling the oceans, as it were.
I can certainly understand having an emotional reaction if you find that these tools provide genuine assistive value to you, and then see them being criticized. I think a lot of that criticism does assume the assistive argument is made in bad faith, not because it can't be true, but because it's so often used as a distraction from the actual issues at hand.
AI as assistive tech is used as a red herring by companies looking to launder their own unethical behaviors. The fact that it is also occasionally truly useful as an assistive technology doesn't negate the rest of the criticisms leveled against it.
(And AI-based technology is unreliable as assistive technology for the same reason it's unreliable everywhere else. It can absolutely be helpful occasionally, but at what cost?)