Cory Doctorow, Chris Hayes, and David Roberts: why do people hate AI i can't figure it out
the AI companies: we are going to take away your ability to think and sell it back to you, that's our plan, literally and explicitly, we are saying it from a stage.
He was an ass for ranting about "purity culture" and belittling real concerns people have (and why many people hate AI), to quote:
"Doubtless some of you are affronted by my modest use of an LLM. You think that LLMs are “fruits of the poisoned tree” and must be eschewed because they are saturated with the sin of their origins. I think this is a very bad take, the kind of rathole that purity culture always ends up in."
"Purity culture" is a highly derogatory term, implying that people who are anti-AI are some sort of religious cult.
I'm not going to argue the matter as if I perfectly understand and agree with Cory's position. He can defend himself. I'm just not sure that article says what was implied by @peter's post. It seems like Cory understands exactly why people hate AI, and he's trying to make a distinction between the technology and the application of it.
@jonne @thomasfuchs @malcircuit @peter If I have been reading his complaints correctly, Cory never hated AI unless it was burdening the user, i.e. the "reverse centaur."
I guess that's good enough to sell a lot of books, because it's a "functionally smart" position with a little pushback against the AI trend, and not enough people in the media are willing to give us even that. But -my- problems with AI go deeper than the interface. I still consider it (attempted) intellectual theft and a sanitized interface for environmental destruction, one that provides results a lot less helpful than all the stuff that came before AI -- all the stuff they deliberately took away to make AI seem useful.
(I recently stopped letting Cory in my feed after he started to share substack articles)
@malcircuit @RnDanger @jonne @peter spell checkers have been around for 45 years commercially, grammar checkers for 30; they're just lists of words with some stemming and grammar rules.
LLMs don't really add anything to this with regard to typos.
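For what "just lists of words with some stemming" means in practice, here is a minimal sketch of a classic dictionary-based spell checker. The tiny word list and suffix rules are made up for illustration; a real checker like Hunspell uses large dictionaries and richer affix rules.

```python
# Minimal sketch of a pre-LLM spell checker: a word list plus crude stemming.
# The dictionary and suffix list below are illustrative, not a real lexicon.
DICTIONARY = {"check", "spell", "word", "run", "write", "grammar"}
SUFFIXES = ["ing", "ed", "er", "s"]  # naive stemming rules

def is_known(word: str) -> bool:
    """A word is accepted if it, or its stem after stripping a common suffix, is listed."""
    w = word.lower()
    if w in DICTIONARY:
        return True
    for suffix in SUFFIXES:
        if w.endswith(suffix) and w[: -len(suffix)] in DICTIONARY:
            return True
    return False

def find_typos(text: str) -> list[str]:
    """Flag every whitespace-separated token the dictionary doesn't recognize."""
    return [w for w in text.split() if not is_known(w)]
```

No statistics, no training data: flagging a typo is just a lookup, which is the point being made about LLMs adding little here.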
Yeah, I know nothing of Ollama, so it's hard for me to take a position on it. At the same time, though, previous generations of spell checking and grammar checking are also arguably "AI" technology, and trained on similar sorts of datasets. I'm not sure whether a spellchecker based on lower-complexity neural nets and Markov chains is meaningfully different from one based on an LLM. It seems more like a matter of scale.
@malcircuit @jonne @thomasfuchs @peter He's running Ollama on a local computer. I haven't looked into the training for that model, but if I could be convinced that there's an ethical model to use, it might be that one.
So he's set up an AI at home. That's not "Big LLM", but it's also definitely not "anti AI", which a lot of his fans are.
I feel like it's important to keep in mind the context of the article. He's using a spellchecker LLM. It's not a chat bot. He's not asking it questions. He's not asking it to write for him. It's like criticizing someone for using a keyboard app with autocorrect.
I also have a deeper philosophical opposition to most uses of LLMs, but a spell checker is such a trivial application that I'm having a hard time thinking of a reason it's "bad".
@RnDanger fwiw many word processors have the option to do a full spellcheck on the whole text (without any LLMs); indeed that was the default until the 90s.
I agree that this is way less distracting than a letter-by-letter workflow.
Anyway, he should just hire a good editor, who could, for example, tell him not to insult his audience…
@malcircuit @jonne @thomasfuchs @peter Well, one difference is that he sends his whole work through at once and says "find errors" instead of looking for squiggles in the text as he goes, which sounds good to me because those squiggles distract me so much from actual writing.
Another is the training. Where's the data from? Did those people agree to provide it? How much energy did it take to train? I just don't know these things.
@malcircuit @thomasfuchs @RnDanger @jonne I don't care about how he uses AI personally; I care that in defending himself, he's diagnosed other people who criticize AI as having a "psychosis" and extended that to a defense of AI technology in general.
I think the point of it is more to identify parts of a sentence that, while being grammatically correct, are worded in a way that's "hard to read" or whatever. An LLM would be very good at suggesting alternative ways to say the same thing.
But as you have pointed out elsewhere, that's essentially what an editor does.
I'm not arguing it's a good use of the technology, just that it's such a trivial application that it's not really worth talking about.
For what it's worth, it's him talking about it, and he's using it as a springboard for what amounts to abusive behavior.
The way he gets preemptively angry and defensive, blaming people with generalizations and comparing them to a cult _before they even did anything_, is ringing my (tiny) alarm bells.
Of course that’s just my opinion, but I think I have sound reasoning.