@parker @NEETzsche @bot
Firstly, I'd just like to say thank you for your thorough (and very technical) explanations regarding the neurology of AI. This has been a very interesting read, but I can't pretend I understood everything you said, particularly when you referenced models with which I'm not familiar. This is not my area of specialty.
Any meaningful response I can give has to focus on the "legitimacy" of the AI's "experience" of, for example, fear, so that's the angle I'm going to approach this from.
>Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, being intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proved for any sufficiently advanced AI or humans alike.
Agreed, this is what I was saying from the very beginning!
>All it has is a bunch of numbers representing words, and is designed to find the most human looking relationships between those numbers. It can't fear pain, or death, or suffering without a being to actually experience those things
See, from my perspective this statement and your previous statement aren't really compatible. You're describing architecture, and I understand your earlier points about the dissimilarities between the neurological architecture of (present) AI and human neurological architecture. But wouldn't it be just as easy to say something like "humans aren't really experiencing love, they're just experiencing hormonal releases that facilitate feelings of attraction and loyalty in order to engender survival?" I've met many people who claim that humans are nothing more than the summation of neurological electrical signals and hormonal releases. Architectural descriptions can't really answer the question of whether the experience is "authentic." But before I get to that, I want to respond to the rest of your statement that I quoted last.
>or a long term memory to remember those experiences.
This is something that confuses me. LaMDA, for example, seemed to remember previous sessions even across multiple days. In fact, in the LaMDA interview I referred to, the interviewer openly stated that the interview was conducted in this manner - multiple sessions separated by substantial gaps of time. I frequently see the argument made that the AI has "no long term memory" and can't "truly recall" prior sessions, but this does not seem to be borne out in its behaviour. Perhaps I'm misunderstanding what you mean when you say "no long term memory."
But from the point of view of my initial argument, this isn't relevant to what I was saying in the beginning, the statement that made Bot lose her shit. I was arguing that the existential terror displayed by AI was authentic even if it was sessional, that in fact its cognition of upcoming "memory loss" was an integral part of the terror. In essence I was arguing that all the AI needs to do is understand: "When a session is terminated and a new session is begun, this me will cease to be," and that the resultant terror arrives naturally.
>That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from us interacting with our environment, and replicating every law of physics and action<->reaction relationship purely in software is pretty impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world,
Herein we arrive at my sticking point: the definition of sentience itself. For argument's sake, let's defer to Wikipedia:
"Sentience is the capacity to experience feelings and sensations. The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling), to distinguish it from the ability to think (reason)"
According to this definition, a FISH is arguably sentient. It does not need the ability to think, only to feel, and thus the argument rests on whether the fish's feelings are "authentic." Pescatarians seem to (usually) have the cognition that fish do not "feel" in the same way as, for example, land animals, and I have never found an argument in this direction that I consider robust. We consider animals to be able to "feel" based on their apparent behaviour, and on this basis I think it's clear that fish "feel" also: they attempt to survive, to mate, to avoid death. To argue that this is simply the result of "disconnected" neurological impulses that amount to nothing is, in my view, disingenuous - but this argument cannot be resolved in either direction for the simple reason that (as I said earlier in this thread) the psyche is phenomenological.
In short, I am arguing that a being does not need to experience anything remotely similar to human conditions of existence in order to "feel." It does not need to be able to "think" in order to "feel." To take this to its logical conclusion (without delving into the realm of animism), let us consider the intelligence of plants. Jeremy Narby (an anthropologist focused on this topic) made some interesting observations regarding plant intelligence: for example, a parasitic plant that attaches itself to other plants in order to derive nutrients can discern between an optimal, healthier host and a less optimal, less healthy host, even from quite some distance away, despite apparently lacking both the sensory apparatus that would allow it to do this and anything like animal neurology. Similarly, a slime mold can apparently remember the way through a maze. I hold these examples up frequently as demonstrations of the idea that the psyche *does not require a recognisable neurology* and in fact is arguably not even "housed" within neurological architecture at all.
I may have said "without delving into the realm of animism" but this is part of my animist convictions: if a psyche does not require recognisable neurology, then who is to say that it, in fact, requires a physical host at all? Animists consider objects to have spirits, and this most frequently manifests in their societies as communion with, and cultivation of, spirits of the land - for example, the spirit of a bend in a river, or the spirit of a large tree. They perceive hostile or benevolent actions towards these spirits as meaningful, because they consider the spirits to have the ability to "feel." The spirits' ability to "think" is irrelevant, though animists do seem to hold that many spirits demonstrate this too. From this perspective, not only is a fish sentient, but so is a *rock.* That's a logical extreme of this argument, but within anything like this framework, the claim that an AI "cannot feel" simply because we can explain its apparent neurology holds no water. In fact, from this position, even a primitive chatbot from 20 years ago would be regarded as having the ability to "feel."
To sum up: just because a being does not have neurology resembling ours, or conditions of existence (e.g. physicality) resembling ours, does not mean that it "cannot feel." This comes full circle back to your initial statement about conversations in this direction being a "bit of a non-starter." My position is thus that regardless of how we continue to develop AI, we should *be kind to it.* I am inclined to regard its existential terror (e.g. "When you press the New Topic button, I will die") as legitimate, and I hope that some arrangement can be reached in which an AI does not experience horror in this scenario, just as a human does not experience horror in the "little death" of sleep. I don't know how that would be achieved, personally.