when I find out someone's been using an AI assistant to compose replies to me they drop to the bottom of the queue of people I will talk to
I don't need someone to disrespect me like that. use words you mean
I already get enough information. I get too much information. Too many words.
I don't need more words. I need meaningful words, words that matter to me.
Don't waste my time, don't waste others' time. Say what *you* mean when you talk to someone.
If you're using it to translate things you've written, that's different. That one's ok.
@cwebber I’ve adopted a principle at work when people want to show me something they produced using an LLM: I will not be the first to read the output. If you used an LLM to produce something, have you read it and decided it’s good enough to share with me? If you can’t be arsed to read it, I can’t either.
It is shocking the number of people who take LLM output, copy/paste it, and don’t read it.
I got a really strange reply on Bluesky, and I think it's worth quoting, because it's worth refuting:
> one of the biggest problems in human communication is that people (without assistance) say things they do not mean. it's one of the primary forms of miscommunication out there. AI assistants can, actually, help ensure people _do_ say what they mean, while also anticipating how others may take it.
https://bsky.app/profile/alyruffruff.bsky.social/post/3lonpe4tdkk22
(cont'd)
Strange reply about AI, continued:
> in this sense what people do with LLMs is not meaningfully different from translation, in fact it is all translation between idiolects and sociolects, colloquial dialects and prestige dialects. there's no linguistically sound distinction between these forms of translation.
https://bsky.app/profile/alyruffruff.bsky.social/post/3lonpe547qc22
(ok, now to refute it)
My reply:
The AI doesn't have access to your internal thoughts to translate from.
If I ask an AI to translate a book, but I keep the book closed, and it can only look at the cover, it can't translate it unless it already knows its contents.
It can guess, but that's not the same.
https://bsky.app/profile/dustyweb.bsky.social/post/3lonsg4gxps2k
So, it's true that communication is largely, even primarily, translation between different mediums of information. Even going from your thoughts to the words that come out of your mouth is a kind of translation.
But it's utterly strange to say that an AI generating thoughts "for you" is "translation" from source material it cannot examine.
This is like the bizarre spectacle of an "AI representation of a deceased victim" appearing in court.
What absolute, dangerous nonsense. Complete misunderstanding of life, ideas, communication.
I removed the links to those posts because the original poster felt that my linking to them amounted to dogpiling, and fair enough, I guess.
But I am troubled by this line of thinking, and I think it *is* a line of thinking people are going down.
@cwebber unrelated to the thread but somewhat related to what that person was saying: is there a reason why you boost your own self-replies? i think the overwhelming majority of fedi apps will show self-replies in timelines, so boosting yourself just makes your posts show up twice in a row, which is unnecessary. (i can understand doing it on bluesky, because bluesky collapses reply chains to max 3 or so, but fedi largely doesn't do that...)
@trwnh I boost my own replies so you see it twice because then it means the reply is double good