It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion.
@codinghorror @simon Hey, Jeff. I think Simon is just saying that in computer science and software development we use "artificial intelligence" for a slew of techniques that are not actually human-equivalent general intelligence.
As the co-founder of Stack Overflow, you're already aware of that: there are thousands of questions tagged "artificial intelligence" on the platform.
Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
@simon Another use of the term AI is in video games, where it's effectively a glorified pathfinder, or something that can play a game like chess reasonably well. In fact, before the current LLM trend, that's what would pop into my head for "AI" in most contexts.
Meanwhile a tool like Siri is typically called something like a voice assistant, perhaps to make clear that it's not Skynet but rather something that has become a fairly common (and boring?) tool.
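A minimal sketch of what that kind of game "AI" usually amounts to in practice: breadth-first-search pathfinding on a tile grid. The map and coordinates here are invented purely for illustration:

```python
# Breadth-first search pathfinding on a small grid - the kind of
# routine that often passes for "AI" in a video game.
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None."""
    queue = deque([start])
    came_from = {start: None}  # visited set doubling as path memory
    while queue:
        current = queue.popleft()
        if current == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for neighbour in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = neighbour
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and neighbour not in came_from):
                came_from[neighbour] = current
                queue.append(neighbour)
    return None

# 0 = walkable, 1 = wall
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```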
@simon the mathematical structure of algorithms is as objective as it gets in terms of classifying them, and "#AI" in its current #llm form is an incremental evolution of a vast prior body of work that ultimately goes back to linear regression.
Fitting functions to data, extrapolating, and doing something with the outcome is bread and butter in many industries.
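A minimal sketch of that bread-and-butter workflow, with invented data points: fit a straight line by least squares, extrapolate beyond the observed range, and act on the prediction:

```python
# Ordinary least-squares linear regression: fit, extrapolate, use.
import numpy as np

# Made-up observations, roughly following y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit y = slope * x + intercept by least squares.
slope, intercept = np.polyfit(x, y, deg=1)

# Extrapolate to a point outside the data and do something with it.
prediction = slope * 10.0 + intercept
print(f"y = {slope:.2f}x + {intercept:.2f}; predicted y at x=10: {prediction:.1f}")
```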
I suspect that one factor reinforcing the (ab)use of the term "AI" is a desire to decouple any regulatory discussion from historically established norms.
@happyborg This objection is discussed in the article, so it's not new or original, but this exact reason has been my tipping point too.
"AI" was a fine term as an analogy or historical background before people on the street started taking it seriously, at face value.
Without going into technical terms like LLM (or now LMM) that possibly just confuse things, I name the tool after whatever the use case at hand is: text generation tools, decision automation systems, etc. Whether the implementation came from an AI lab or not is not where it gets its value.
@simon I think there is a point because something has changed. People are suddenly experiencing something uncannily like all the fictional AIs they've read about and watched in movies.
Many people, including plenty I'd expect to know better, are seeing a conversational UX with a black box behind it, as opposed to a few lines of BASIC, and then making wildly overblown assumptions about what it is, deliberately encouraged by those using deceptive framing such as 'hallucinations' to describe errors.
It's not just "people in the street" though. It includes career technologists who have dived deep into LLMs. I know one who says things like "perhaps humans aren't all that intelligent after all", reasoning that we're doing something very similar to an LLM.
I know but cannot prove that an LLM lacks a human-like mind, but so much of what is said implies that it's not that different and encourages conflation, and there are already dangerous legal precedents built on such lies. @simon
@serapath @simon Yeah, it’s polysemic. It means x to researchers, but y to laypeople who only know of ChatGPT. I honestly haven’t seen/heard anyone IRL immediately jumping into a conversation with “but it’s not actually intelligent!!”. What I have experienced is getting partway into a conversation and having to say it - because it has become obvious the other person DOES think “Intelligence” is human-like decision making.
@simon The word AI does not help anyone with anything, because you also can't tell which version or part I even mean when I say it, hence it is just confusing. 😁
@simon hm, yeah, no. I disagree. Mainstream people have as much right to their words as scientists, but the mainstream is in the majority, and "AI" will also continue to be abused by marketing to make outrageous claims. I don't think "AI" helps anyone, and I will continue to ignore anyone talking about AI.
@serapath@gamedev.place That's the exact position I'm arguing against
Yes, it's not "intelligent" like in science fiction - but we need to educate people that science fiction isn't real, not throw away a whole academic discipline and pick a different word!
@simon I do think "AI" gives it way too much credibility. People saw and read sci-fi movies/books and believe ChatGPT & co. despite all the confident bullshit it shares.
Also, image recognition is different from a large language model, so what are we even talking about when talking about AI?
It is way too broad a term to make useful statements with, beyond what we all saw in sci-fi movies at some point, imho.
@serapath I think refusing to accept the word at this point actively hurts our ability to have important conversations about it
Is there an argument that refusing to use the word Artificial Intelligence can have a positive overall impact on conversations and understanding? I'm open to hearing one!
@simon Maybe, but just because some scientists working on it called it that doesn't mean we have to accept the word. The more general term hides the more specific, nuanced, and informative details; also, once introduced into the mainstream vocabulary it might clash with other mainstream meanings, and it is easier for a small group to change its wording than for a large group.
I generally think scientists should strive to simplify their language, but some actually hide behind it.
@pieist yes, absolutely - I think the thing that's not OK here is fiercely arguing that people who call LLMs AI shouldn't do that to the point of derailing more useful conversations