Let me make something clear. It’s not the fact that ants have DNA, or that they share a distant common ancestor with us, that lets me know them as thinking beings and respect their volition as I would respect yours (while rejecting, totally, such a possibility for current LLMs). No. It’s precisely because I think it’s possible, even likely, that synthetic systems worthy of such respect might someday exist that I strenuously reject the language parlor trick of LLMs.
@Scmbradley @Swedneck @futurebird In this domain we need to be aware that some distinctions are falsifiable and can be part of theories, while others are social constructs. Regarding machine intelligence or consciousness, the key things the AI scam industry is missing are empiricism and consequences. Intelligence is intelligence because it enables an organism to act in complex ways that benefit its survival and that of its offspring, and because the organism can evaluate, against real-world consequences, whether the behaviours it outputs are harmful or helpful, and adapt its model accordingly. LLM parlor tricks can do none of this, because they're effectively stateless and have no sensory inputs or body subject to survival, just a static trained statistical model of likely language expressions.
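To make the "stateless" point concrete, here is a minimal Python sketch (all names and numbers are toy stand-ins, not any real system): a frozen model is a fixed function of its input, while an adaptive agent carries state that consequences actually change.

```python
# A frozen model is a fixed function of its input: nothing it "experiences"
# changes its parameters between calls.
def frozen_model(prompt):
    return prompt[::-1]  # stand-in for "apply static trained weights"

# An adaptive agent, by contrast, carries state that feedback updates.
class AdaptiveAgent:
    def __init__(self):
        self.weight = 0

    def act(self, stimulus):
        return stimulus + self.weight

    def learn(self, reward):
        # Consequences change future behaviour -- this is the loop a
        # static trained model lacks.
        self.weight += reward

print(frozen_model("abc"), frozen_model("abc"))  # identical both times
agent = AdaptiveAgent()
agent.learn(1)
print(agent.act(10))  # 11 -- past consequences shaped this output
```

The contrast is the point: the first function gives the same answer forever no matter what happens in the world; the second one's behaviour depends on what it has lived through.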
@futurebird The way I feel about this is that I don't know what qualifies as conscious, but LLMs just obviously aren't it, because we *know* how they work and they *specifically* have no fucking clue what they're doing. It's literally just statistically predicting what sequence of numbers could follow an input sequence. LLMs can't be conscious any more than a desktop calculator can be.
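The "statistically predicting numbers" claim can be reduced to a toy sketch. This is not how any real LLM is implemented (those use learned neural networks, not count tables), but it is the same idea stripped to bigram counts over token IDs:

```python
from collections import Counter, defaultdict

# A "corpus" of token IDs -- the model never sees words, only numbers.
corpus = [1, 2, 3, 1, 2, 4, 1, 2, 3]

# Count how often each token follows each other token (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token_id):
    """Return the statistically most likely next token ID."""
    return follows[token_id].most_common(1)[0][0]

print(predict_next(2))  # 3, because token 3 followed token 2 more often
```

Nothing in this table knows what the numbers refer to; it only knows which number tends to come next.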
If LLMs could actually see letters and words, then we could start to entertain the idea of consciousness.
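For context on why LLMs don't "see" letters: text is run through a tokenizer and replaced by integer IDs before the model receives anything. A toy sketch with a made-up vocabulary (real subword tokenizers like BPE are more elaborate, but the principle is the same):

```python
# Toy illustration: a made-up subword vocabulary. Real tokenizers map
# frequent character sequences to integer IDs; the model only ever sees
# the IDs, never the characters inside them.
vocab = {"straw": 101, "berry": 102, " straw": 103}

def encode(text):
    """Greedy longest-match encoding into token IDs (toy version)."""
    ids = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                ids.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError("out of vocabulary")
    return ids

print(encode("strawberry"))  # [101, 102] -- no letter 'r' is visible here
```

Once the word is `[101, 102]`, a question like "how many r's are in this word?" has no answer available in the input itself.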
@Swedneck @futurebird But isn't the same true of humans? We know it's just electrical signals and action potentials in the brain and nervous system. But somehow, mysteriously, that collection of bits and pieces we understand gives rise to consciousness and agency.
To be clear, I'm not arguing that LLMs are conscious. I'm arguing that consciousness is hard, and that the critique that LLMs aren't conscious or "don't understand" things is the wrong way to criticise them.
But this is a big deal to me. I don’t think there’s anything magical about DNA or carbon-based life that makes consciousness only a possibility for our relatives. If someone could show me a computer that could do what ants do I would be impressed and I would take it seriously. When people give an LLM more respect than the ant they are prejudiced by our affinity for language as a signifier of humanity. They underestimate the complexity of the ant. Maybe that’s why this bothers me so much.
@Scmbradley @Swedneck @futurebird They are important because they reveal another part of the malevolent AI cult: whenever you have intelligence, it's intelligence by virtue of benefiting some being/actor. The type of disembodied intelligence the cult envisions is not its own being but an extension of its owners' being, an enhancement to facilitate maintaining *their* dominance.
@dalias @Swedneck @futurebird I don't think having offspring, being the outcome of evolution or being embodied are necessary for intelligence. All intelligent things we have observed so far appear to also have those features, but that's accidental, in my view. Making it true by stipulation that LLMs can't be intelligent doesn't help the case of the LLM critic. "Ok fine if that's how you define intelligence then the AI isn't intelligent, but it's still got all these great helpful properties" is what they'd respond, and we're no further forward. Because the actual argument we should be having is whether LLMs do in fact have these useful desirable properties. And, for the most part, they don't. There's no value to arguing over the abstract question of "intelligence" or "consciousness".
@Scmbradley @Swedneck @futurebird Also, FWIW, I use offspring there in a very abstract sense. None of this needs to involve biological organisms and biological reproduction, but it does involve some sort of agent/being capable of acting in complex-reasoning-based ways that further the continued existence of "itself" or some class of phenomena similar to itself.
@dalias @Swedneck @futurebird I didn't say they weren't important, just that they weren't part of my understanding of intelligence.
But I think you're right that there's an awkward tension in the AI industry between wanting to say that they are creating genuinely intelligent agents and not wanting to acknowledge the moral agency of their creations.