It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger these systems pose comes primarily from the false sense of skill or fitness for purpose that people ascribe to them.
@Gargron This false sense of skill or fitness for purpose comes from its incomprehensibility. Not because it's complex, although it is that, too. More because it is not designed to be comprehended. Only to be consumed.
The danger of AI is not in that it's not intelligent, it is in that it's unintelligible.
@Gargron As usual, rms is telling the truth and nobody listens ;-)
"I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_."
@Gargron The people using the phrase "AI" to describe their weak products are the same people who used the word "algorithm" for everything ten years ago.
@Gargron It's a hybrid of Symbolic AI and Neural Networks, which is where this was always going. This current weak adaptation is already destroying egos left, right, and centre, and it's going to improve. More feedback. Deeper abstraction. Inference.
@Gargron yeah. The true test for AI, I believe, is when they’re able to self-learn and improve, so that successions of generations without human input can result in them being “better”. So far all the “AI” systems being pushed are known to become worse if fed their own output.
@Gargron There is no commonly accepted definition of intelligence with which you can prove that a human around you is really intelligent and not just acting like it.
@Gargron And let me guess...your definition of "intelligence" specifically excludes what people are calling AI. But here's the thing...there is nothing called "intelligence." Find out what the independent variables are that are relevant to the behavior we _call_ "intelligent" - even in nonhumans. And, actually, after the era of GOFAI ended, they did...sort of. AI people have ignored the natural science of behavior and it has hurt their efforts for AGI. Think "conditioning," people. Sheesh!
@Gargron I have long been thinking about this. Isn't our own brain electric pulses? We are social animals, we learn by imitating others, so, what is intelligence? Not trolling, I'd like to read your answer.
@Gargron Those systems behave intelligently even if they are not. They understand complex questions and complex code, for example. Even if it is all statistics in the end, the results are breathtaking. #AI #ChatGPT #OpenAI
@Gargron It's tempting to call it AI, but I'd rather refer to it as 'sophisticated pattern recognition software.' Ultimately it's just a fancy algorithm, or a glorified data matcher.
The best we get to call it is Augmented (Human) Intelligence. Like those glasses that overlay things in front of your eyes, current AI is mostly a tool that does stuff for you, and it just happens to do it better than tools before it. New types of problems bring in new tools to solve them. But it’s just brute-forcing an answer in the end, I agree.
@Gargron come now! This overstates our current knowledge of the nature of intelligence. LLMs are adaptive, they have memory, they use language to communicate, and they integrate disparate experiences to solve problems. They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps. Us knowing how they work is not a disqualifier for them being intelligent.
@Gargron LLMs are more sophisticated versions of the old ELIZA chatbot. At least ELIZA's creator Joseph Weizenbaum did not attempt to overhype what it was.
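For anyone who never poked at it: ELIZA was essentially a short list of pattern → template rules. A minimal sketch in Python (the rules below are hypothetical illustrations, not Weizenbaum's original 1966 script):

```python
import re

# ELIZA-style rules: regex pattern -> canned response template.
# Hypothetical examples, not Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    """Pure pattern matching, zero understanding."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when no rule matches

print(eliza_reply("I am worried about AI hype"))
# -> How long have you been worried about AI hype?
```

The whole trick is reflecting the user's own words back, and Weizenbaum was famously disturbed by how readily people read understanding into it anyway.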
@Gargron I liked the article "Sparks of AGI" which does systematic comparisons of intelligence metrics on GPT-4. There are also plenty of critiques worth reading.
@evan @Gargron that paper cites a definition of intelligence by racist eugenicists, and doesn't have any actual controls, only vibes. It is worth watching / listening, as is the linked Radiolab series on measuring intelligence.
@Gargron I've been telling people this forever and they don't want to hear it. They're so enamored with the science fiction of "AI" that they don't understand companies will lie for money.
@jalcine @KevinMarks @Gargron I see a lot of discussion of the Gottfredson definition of intelligence, which was removed. I've only read parts of the most recent version, which says "there's no generally agreed definition of intelligence." Which I think is still true, although I am not an expert in this field.
@evan @Gargron Not too long ago — in fact, roughly a year or two ago — "Artificial Intelligence" was a term used to describe computer systems which could perform tasks that historically required human cognition. Few people were offended that Chess- or Go-playing systems were considered "AI", and "real intelligence" was never a requirement. But, as we see time and time again, "AI is whatever hasn't been done yet."
@evan @Gargron I think it's historically incorrect to say that "technically calling it AI is buying into the marketing". Yes, marketing is capitalizing on it! But the nomenclature matches my CS education from the late 2000s, and it matches 70 years of how "AI" is used in research and literature. The recent obsession with asserting "theory of mind" or "intentions" or "originality" or "real intelligence" seems, well, recent.
@MattHodges @Gargron I think there are a lot of things GPT-4 is bad at. It's not very good at simple arithmetic. It is bad at geographical information -- which places are near others, or parts of each other. It also does a bad job at string manipulation -- words that start with a particular letter, or words that are anagrams of other words. I don't think you have to resort to mysticism to say why it is not yet human-equivalent. But that doesn't mean it's not intelligent.
@evan @Gargron I'd have to disagree. LLMs are primarily used for two things: parsing text and generating text.
The parsing functions of LLMs are truly incredible, and represent (IMHO) a generational shift in tech. But the world's best regex isn't intelligence in my book, even if it parses semantically.
I'll just add that having memory, being adaptive, and using language to communicate are all things that computer programmes that don't use LLMs do today.
LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.
I can't wait for the world to see what truly *useful* things LLMs can do other than be sometimes right on logic puzzles and write bad poetry.
@evan @Gargron The generating functions of LLMs are (again, IMHO) both the most hyped and the least useful.
While LLMs generate text that is coherent, that can elicit emotion or thought or any number of things, we're mostly looking into a mirror. LLMs don't "integrate" knowledge; they're just really, really, really big Markov chains (see the toy sketch after this post).
Don't get me wrong, "intelligent" systems most certainly will use an LLM, but generating text from prompts the way we do isn't intelligence.
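For the curious, here's roughly what that analogy means: a toy word-level Markov chain with a made-up corpus (a sketch of the idea, not of transformer internals; real LLMs condition on long contexts, not just the previous word):

```python
import random
from collections import defaultdict

# Made-up toy corpus; a real chain would be built from far more text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Transition table: for each word, every word observed to follow it,
# so random.choice reproduces the observed frequencies.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length - 1):
        candidates = follows.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat and the cat"
```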
@evan Hey, you may be unaware of the actual problem solving, social lives and intelligence of dolphins; they're far more adaptive to reality than an LLM is. And LLMs don't have experiences; that's projecting human sensory capabilities onto them that they simply don't have, since they're not embodied. (Experiences are far more than memories, and they don't just live in narratives/texts/memories; see current research into PTSD and memory, for instance.) @Gargron
This is a recurrent example that is starting to illustrate the difference between bare LLMs and the products built on top of them. Eg, ChatGPT is a product built on top of a system. That system has a lot of components. One of those components is a LLM. And another component is a Python interpreter. LLMs can write Python quite well, and Python can do math quite well.
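A minimal sketch of that product pattern, where `llm_complete` is a hypothetical stand-in for whatever model call the product actually makes (not any vendor's real API):

```python
import subprocess
import sys

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for the model call. For the sketch, pretend
    the model answered a math question by writing Python."""
    return "print(37 * 43)"

def answer_with_tools(question: str) -> str:
    code = llm_complete(f"Write Python that computes: {question}")
    # The surrounding product, not the LLM itself, executes the code.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=5,
    )
    return result.stdout.strip()

print(answer_with_tools("What is 37 times 43?"))  # -> 1591
```

The arithmetic gets delegated rather than done "inside" the model, which is exactly the bare-LLM vs. product distinction.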
@evan @Gargron Ya, I think that's the heart of the question :)
What I'm trying to communicate is that when I ask an LLM "what is on the inside of an orange", the programme isn't consulting some representation of the concept of "orange (fruit)". Rather, it's looking at all the likely words that would follow your prompt.
If you get a hallucination from that prompt, we think it made an error, but really the LLM is doing its job: producing plausible words. My personal bar for intelligence is higher.
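To make that concrete, a toy sketch of next-token sampling; the candidate words and probabilities below are invented for illustration:

```python
import random

# Invented distribution for the sketch; a real model scores its entire
# vocabulary, conditioned on the whole prompt.
next_word_probs = {
    "segments": 0.4,  # plausible and true
    "pulp": 0.3,      # plausible and true
    "seeds": 0.2,     # plausible and true
    "cotton": 0.1,    # plausible-sounding nonsense: a "hallucination"
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling can land on the nonsense answer; either way, the model
# "did its job": it produced a likely-looking continuation.
print(random.choices(words, weights=weights, k=1)[0])
```

Nothing in that loop checks the answer against the world; "cotton" and "segments" are just weights.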
@bikeshed I am not! I think you should go back and reread my post with fresh eyes. I said that LLMs do better on some of the measures of intelligence than chimps and dolphins. I didn't say that they are more intelligent than those animals, nor did I say that the measures of intelligence they excel at are more important than the intelligence necessary to survive in the world.
@evan but again, this is anthropocentric. You're defining language as language that is intelligible to humans and then saying that the tool designed by humans to output human language is better at human language than chimps! It's a silly game that plays into this very stratified view of what constitutes intelligence.
I certainly think that ranking LLMs over dolphins, whose communication we understand very little of, seems very bizarre.
@evan additionally, why is language use a more defining characteristic of intelligence than tool use? Chimps, bonobos, dolphins, octopi, corvids etc all can use tools and solve complex tasks but aren't good at language (to our definition of language). Does this matter?
@Gargron Oh, the irony. Aren't you just parroting Emily Bender? A chatbot could have said this. How can we know that what you say is more than just your cognitive statistics?
1. There is intelligence in current LLM-based AI. A different sort, but still intelligence: language competence without comprehension.
2. Most of what people say is pretty much at the level of parroting.
3. What you say is half true, half misleading.
Several people on this thread have mentioned this sort of idea.
@evan agree to disagree, but I struggle to read "They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps" as anything other than a kind of ranking.
@Gargron I always read "AI" in the news as "Artificial Idiocy." Although the terms "Augmented Idiocy" or "Amplified Idiocy" just came to mind. @etherdiver
@Gargron But to be fair, it is doing a better job at pretending to be intelligent than tons of humans voting nowadays… so I’m not sure I really care about the broad interpretation of “I” being used.
@Gargron I just went to a major US marketing conference in September, and every single vendor was touting the 'AI-fication' of their products. It's ridiculous, because we're the customers; we know they've just rebranded the ML stuff they were already doing!