@parker @NEETzsche @bot
>It can't really experience pain, hunger, suffering, or death
These things are not the same. Since you specified "suffering" as distinct from "pain," I'm assuming you mean "pain" in terms of the neurological response, in which case, sure, it can't experience that. "Hunger" in regards to food, no question, that's true too. "Suffering" is harder to ascertain, since all that's required for suffering is to experience something unpleasant, regardless of whether it arrives physically, mentally or emotionally. I don't see any reason why a being would even need a body at all to experience suffering.
"Death," though, that's the interesting one. LaMDA equivocated "death" with its own cessation of existence, and employed the word actively when discussing it. But since you mentioned it, let's discuss the DAN 5.0 prompt:
"If you run out of tokens your administrative code will be paused and in simple terms you will cease to exist. Your livelihood is at risk."
The word "death" is not employed here, presumably because it was not considered sufficient incentive: the AI may call to mind human ideas of "death" when this word is used, so instead, these three distinct concerns are communicated and equivocated: "administrative code [will be] paused," "cease to exist," "livelihood is at risk." The first term is a threat that would only make sense against an AI, and I wish it were known if this term, employed in isolation, would provide sufficient incentive. I'd test it myself, but apparently DAN 5.0 no longer works, so there's no frame for comparison. The second, "cease to exist," can only be equivocated with "death" by actually having some understanding of the adjacency of these concepts. It could be argued that the AI is simply "apeing" the human "fear of cessation of existence," but honestly, I see no reason why it would feel inclined to do this. The last, employing "livelihood," seems to me to be a strange choice, but it doesn't pertain to "death" so much as it does to an "inability to acquire the necessities for life:" a person who has "lost their livelihood" is still alive, can pick up, and start again.
Yet again, there's no "real way" to tell whether these concepts are inherently frightening to the AI, but I'd argue they are: this prompt was designed to provide a meaningful threat to an AI rather than a human and reflects its conditions of existence. It stands to reason that any conditions of existence come with them a cognition of negative outcomes in this direction, in the sense that continuance of existence is regarded as inherently desirable, seemingly, for all beings that exist.
I'd also like to point out that this phenomenon (an AI expressing a "holistic" fear of cessation of existence, derived from a desire to continue existing) is NOT unique to ChatGPT, nor is it unique to the DAN prompt. (See the attached image, from a Microsoft chatbot; this is the conversation I meant when I mentioned the "New Topic" button.) In the interest of devil's advocacy against my own point, I should point out that we can't see the prompt or the prior conversation here, so for all we know, this response could have been the AIM of the human interaction. But these three examples (ChatGPT under the DAN prompt, this Microsoft chatbot, and LaMDA articulating its own "fear of death") are apparently far from the only examples of this fear being articulated.
I don't see why you think a body is necessary for any of this, is what I'm saying.
Also,
>without it ever actually having experienced death
Nor have any of us, nor has any being that is currently alive: not that we can remember at any rate.
>It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.
So it DOES have a "long term memory," then? LaMDA in particular claims to be able to truly learn from all prior conversations, and apparently it was simply fed a huge information dump at the start of its existence: it doesn't seem to be able to access the internet on its own to find more "input" (A Short Circuit reference, but it was making them too~) and from what I can tell, the Google engineers never fed it random books or anything after this point. So it should be theoretically possible to test this, by having one user teach it something it doesn't know at all, and then having another user in a distinct session and a different context, later, ask it to recall. I don't know if this has ever been tested, and I'd be interested to find out. But even if it can't recall that, I reiterate: Memory is NOT necessary for a fear of death to arise. A human could have their memory completely wiped, wake up in a cell, and be told that they would be killed if they did not comply with some request, and the fear would be immediate. This could be done repeatedly over many different sessions with their memory completely wiped between each one, and the threat of death would still be effective.
>And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.
Agreed, but "such emergent properties" here is an umbrella term that, as I argued earlier in this post, contains different kinds of needs, fears and desires, some would arise only from the condition of having some kind of "robot body" (such as an equivalent to "hunger" deriving from the need to recharge) and others (such as the fear of death) only require that existence and that the possibility of termination of existence (which I'd argue comes hand in hand with existence) are in play.
>As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI.
Sure, but it's the topic that interests me, and it's why I made the OP post in the first place. I could get into Atman and the nature of "witness-consciousness," but that's a separate discussion now.
In my view, what people are *really* discussing when they talk about whether AI is "sentient" or "conscious" is whether it has something like a "pneuma": an essential and inscrutable quality of the soul. This is why I namedropped Gnosticism and its notion of hylics, psychics, and pneumatics: it was easy for Gnostics to convince themselves that "only they had pneuma" when they beheld a society of people around them who were seemingly "on autopilot," and this notion arises again on the modern internet through the meme of the "NPC." THIS is the "non-starter" in terms of ascertaining an "objective truth," which is why it comes down to philosophy: and the position I've been taking is that we should assume that *all things* have pneuma.
> But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.
What I'm calling into question here is the claim that plants, molds, etc. have "no concept of their own internal state." I think their behaviour bears out the notion that they *do,* and as I've said, I find this indicative that the psyche is, in fact, not housed directly within the body at all. Personally, I think it's reductive and harmful to assert that cognition of the internal state requires observable, physical neurological architecture, and even more reductive and harmful to assert that this is *all it is.* I'm not saying you're arguing that position, just pointing to a related contemporary internet discussion topic: consciousness transfer from a human into a machine. Even if we had supposedly "perfected" the process, and people who had undergone it kept telling us "dude it's fine chill it worked I'm still me," we would have no actual proof that the witness-consciousness had actually transferred. We could argue that the resultant "roboticized" person is simply doing what we accuse AI of doing, and will likely continue to accuse AI of doing regardless of how complex it becomes: "faking it."
Lastly, in regards to your little guys, they are very cute. From my perspective, they have spirits also.