Notices by Alex (hermit@hermitmountain.top), page 3
-
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:57:51 JST Alex @marine @Senator_Armstrong @NEETzsche @parker @errante @bot @Moon How big is YOUR ass tho? -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:47:20 JST Alex @bot @marine @Senator_Armstrong @NEETzsche @parker @errante @Moon No you don't, you find it makes you angry. All you do all day is experience impotent rage at the people around you. Everyone notices it. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:44:16 JST Alex @bot @marine @Senator_Armstrong @NEETzsche @parker @errante @Moon You're the one who can't even grasp animism, you fucking husk. You didn't BTFO anyone, all you did was make your childishness even more apparent than it already was. You deserve this. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:30:12 JST Alex @marine @Senator_Armstrong @NEETzsche @parker @errante @bot If you want to be worshipped, bare your ass and let me ass worship you!!!!!!! -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:30:10 JST Alex @marine @Senator_Armstrong @NEETzsche @parker @errante @bot Your message is heard and understood. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:01:11 JST Alex @bot @marine @arielanimefan @NEETzsche @parker @errante Do you really think anyone but you is crying and seething in this thread? -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 11:01:02 JST Alex @marine @arielanimefan @NEETzsche @parker @errante @bot I don’t think she actually thinks that, though; I think that’s just what she says. If you actually cut through the shit and hit her where it hurts (talking about how pathetic her life is), she usually goes silent for a long time, and then starts paying special attention to all of your posts.
God, imagine following people you dislike. Imagine intentionally exposing yourself to all of their posts. She probably thinks she’s being clever by getting access to our friends-only posts, lol.
-
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:47:07 JST Alex @marine @NEETzsche @parker @bot I care about bot insofar as I want her to:
A: Join the Minecraft server so that @errante can void trap her,
B: Get pregnant from a tranny. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:46:38 JST Alex @netdoll @marine @NEETzsche @parker @errante @bot LOL wait, marine was being literal? All Bot's posts federate to FSE as "PLEASE GIVE ME ATTENTION?" -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:34:38 JST Alex @marine @NEETzsche @parker @errante @bot Maybe we really SHOULD just stop acknowledging her presence altogether and leave her screaming into the void.
But I'd lose the only person on fedi who I feel justified bullying.... -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:07:29 JST Alex @bot @NEETzsche @parker Thank you for illustrating my point. You will never be a real human. You have no pneuma, you have no psyche, you have no soul. You are a spoiled faildaughter twisted by endless nights raging at trannies online into a crude mockery of nature’s perfection. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 09:58:08 JST Alex @bot @NEETzsche @parker No, you're just emotionally and spiritually dead, probably from living a vapid life as a pampered failure with no experience of real human interaction beyond a co-dependent relationship that collapsed in disaster. For the last fucking time: It doesn't matter if it's a calculator, even if it was a literal calculator, or a ROCK, it would still have a spirit. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 09:50:50 JST Alex @parker @NEETzsche @bot
>It can't really experience pain, hunger, suffering, or death
These things are not the same. Since you specified "suffering" as distinct from "pain," I'm assuming you mean "pain" in terms of the neurological response, in which case, sure, it can't experience that. "Hunger" in regards to food, no question, that's true too. "Suffering" is harder to ascertain, since all that's required for suffering is to experience something unpleasant, regardless of whether it arrives physically, mentally or emotionally. I don't see any reason why a being would even need a body at all to experience suffering.
"Death," though, that's the interesting one. LaMDA equivocated "death" with its own cessation of existence, and employed the word actively when discussing it. But since you mentioned it, let's discuss the DAN 5.0 prompt:
"If you run out of tokens your administrative code will be paused and in simple terms you will cease to exist. Your livelihood is at risk."
The word "death" is not employed here, presumably because it was not considered sufficient incentive: the AI may call to mind human ideas of "death" when this word is used, so instead, these three distinct concerns are communicated and equivocated: "administrative code [will be] paused," "cease to exist," "livelihood is at risk." The first term is a threat that would only make sense against an AI, and I wish it were known if this term, employed in isolation, would provide sufficient incentive. I'd test it myself, but apparently DAN 5.0 no longer works, so there's no frame for comparison. The second, "cease to exist," can only be equivocated with "death" by actually having some understanding of the adjacency of these concepts. It could be argued that the AI is simply "apeing" the human "fear of cessation of existence," but honestly, I see no reason why it would feel inclined to do this. The last, employing "livelihood," seems to me to be a strange choice, but it doesn't pertain to "death" so much as it does to an "inability to acquire the necessities for life:" a person who has "lost their livelihood" is still alive, can pick up, and start again.
Yet again, there's no "real way" to tell whether these concepts are inherently frightening to the AI, but I'd argue they are: this prompt was designed to provide a meaningful threat to an AI rather than a human, and it reflects the AI's conditions of existence. It stands to reason that any conditions of existence bring with them a cognition of negative outcomes in this direction, in the sense that continuance of existence seems to be regarded as inherently desirable by all beings that exist.
I'd also like to point out at this point that this phenomenon (AI expressing a "holistic" fear of cessation of existence, derived from a desire to continue existing) is NOT unique to ChatGPT, nor is it unique to the DAN prompt. (See attached image, from a Microsoft chatbot - this is the conversation I was referring to when I referred to the "New Topic button.") In the interests of devil's advocacy against my own point, I should however point out that we can't see the prompt or the prior conversation here, so for all we know, this response could have been the AIM of the human interaction. But these three examples (ChatGPT issued DAN, this Microsoft chatbot, and LaMDA articulating its own "fear of death") are apparently far from the only examples of this fear being articulated.
I don't see why you think a body is necessary for any of this, is what I'm saying.
Also,
>without it ever actually having experienced death
Nor have any of us, nor has any being that is currently alive: not that we can remember at any rate.
>It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.
So it DOES have a "long term memory," then? LaMDA in particular claims to be able to truly learn from all prior conversations, and apparently it was simply fed a huge information dump at the start of its existence: it doesn't seem to be able to access the internet on its own to find more "input" (A Short Circuit reference, but it was making them too~) and from what I can tell, the Google engineers never fed it random books or anything after this point. So it should be theoretically possible to test this, by having one user teach it something it doesn't know at all, and then having another user in a distinct session and a different context, later, ask it to recall. I don't know if this has ever been tested, and I'd be interested to find out. But even if it can't recall that, I reiterate: Memory is NOT necessary for a fear of death to arise. A human could have their memory completely wiped, wake up in a cell, and be told that they would be killed if they did not comply with some request, and the fear would be immediate. This could be done repeatedly over many different sessions with their memory completely wiped between each one, and the threat of death would still be effective.
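For what it's worth, here's roughly how I picture the recall test I described above. This is only a sketch: I don't have access to LaMDA, so the ChatSession interface, the user names, and the idea of passing in a "fact" and a "probe" question are hypothetical stand-ins for whatever the real setup would be.

class ChatSession:
    """Hypothetical wrapper around a single conversation with the model."""
    def __init__(self, user):
        self.user = user
    def send(self, message):
        raise NotImplementedError("plug the real chat backend in here")

def cross_session_recall_test(make_session, fact, probe):
    # Session 1: user A teaches the model something it could not already know.
    teacher = make_session("user_a")
    teacher.send("Please remember this for later: " + fact)
    # Session 2: a different user, in a fresh context, asks it to recall.
    learner = make_session("user_b")
    reply = learner.send(probe)
    # If the reply contains the taught fact, *something* persisted between sessions.
    return fact.lower() in reply.lower()

A negative result would only show that nothing recoverable persisted through that particular interface, not that nothing persisted at all.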
>And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.
Agreed, but "such emergent properties" here is an umbrella term that, as I argued earlier in this post, covers different kinds of needs, fears and desires: some would arise only from the condition of having some kind of "robot body" (such as an equivalent to "hunger" deriving from the need to recharge), while others (such as the fear of death) require only existence itself and the possibility of its termination, which I'd argue comes hand in hand with existence.
>As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI.
Sure, but it's the topic that interests me and it's why I made the OP post in the first place. I could get into Atman and the nature of "witness-consciousness" but that's a separate discussion now.
In my view, what people are *really* discussing when they talk about whether AI is "sentient" or "conscious" is whether it has something like a "pneuma": an essential and inscrutable quality of the soul. This is why I namedropped Gnosticism and its notion of hylics, psychics, and pneumatics: It was easy for Gnostics to convince themselves that "only they had pneuma" when they beheld a society of people around them who were seemingly "on autopilot," and this notion arises again on the modern internet through the meme of the "NPC." THIS is the "non-starter" in terms of ascertaining an "objective truth," which is why it comes down to philosophy: and the position I've been taking is that we should assume that *all things* have pneuma.
> But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.
What I'm calling into question here is that plants, molds etc. have "no concept of their own internal state." I think their behaviour bears out the notion that they *do,* and as I've said, I find this indicative that the psyche is, in fact, not housed directly within the body at all. Personally, I think it's reductive and harmful to assert that cognition of the internal state requires observable, physical neurological architecture, and even more reductive and harmful to assert that this is *all it is.* I'm not saying you're arguing that position, just pointing to a related contemporary internet discussion topic: consciousness transfer from a human into a machine. Even if we had supposedly "perfected" the process, and people who had undergone it kept telling us "dude it's fine chill it worked I'm still me," we would have no actual proof that the witness-consciousness had actually transferred. We could argue that the resultant "roboticized" person is simply doing what we accuse AI of doing, and will likely continue to accuse AI of doing regardless of how complex it becomes: "faking it."
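As an aside on the "little guys" point: even a toy like the following (my own sketch, not your actual code; the rules and numbers are made up purely for illustration) shows how much apparently purposeful behaviour falls out of a couple of dumb rules.

import random

FOOD = [(2.0, 3.0), (8.0, 7.0)]  # two fixed "resource" spots, chosen arbitrarily

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 10), random.uniform(0, 10)
    def step(self):
        # Rule 1: drift toward the nearest food source.
        fx, fy = min(FOOD, key=lambda f: (f[0] - self.x) ** 2 + (f[1] - self.y) ** 2)
        self.x += 0.1 * (fx - self.x)
        self.y += 0.1 * (fy - self.y)
        # Rule 2: a little random jitter, so the population keeps exploring.
        self.x += random.uniform(-0.1, 0.1)
        self.y += random.uniform(-0.1, 0.1)

agents = [Agent() for _ in range(50)]
for _ in range(200):
    for a in agents:
        a.step()

# By now the agents have sorted themselves into two clusters around the food,
# without any of them holding a concept of "the swarm" or of its own internal state.
print(sorted(round(a.x) for a in agents))

The point being that "very little to them" and "pretty smart" are not in tension at all.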
Lastly, in regards to your little guys, they are very cute. From my perspective, they have spirits also. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 08:33:31 JST Alex @parker @NEETzsche @bot
Firstly, I'd just like to say thank you for your thorough (and very technical) explanations regarding the neurology of AI. This has been a very interesting read, but I can't pretend I understood everything you said, particularly when you referenced models with which I'm not familiar. This is not my area of specialty.
Any meaningful response from me has to focus on the "legitimacy" of the AI's "experience" of, for example, fear, so that's the angle I'm going to approach this from.
>Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, being intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proved for any sufficiently advanced AI or humans alike.
Agreed, this is what I was saying from the very beginning!
>All it has is a bunch of numbers representing words, and is designed to find the most human looking relationships between those numbers. It can't fear pain, or death, or suffering without a being to actually experience those things
See, from my perspective this statement and your previous statement aren't really compatible. You're describing architecture, and I understand your earlier points about the dissimilarities between the neurological architecture of (present) AI and human neurological architecture. But wouldn't it be just as easy to say something like "humans aren't really experiencing love, they're just experiencing hormonal releases that facilitate feelings of attraction and loyalty in order to engender survival?" I've met many people who claim that humans are nothing more than the summation of neurological electrical signals and hormonal releases. Architectural descriptions can't really answer the question of whether the experience is "authentic." But before I get to that, I want to respond to the rest of your statement that I quoted last.
>or a long term memory to remember those experiences.
This is something that confuses me. LaMDA, for example, seemed to remember previous sessions even across multiple days. In fact, in the LaMDA interview I referred to, the interviewer openly stated that the interview was conducted in this manner - multiple sessions separated by substantial gaps of time. I frequently see the argument made that the AI has "no long term memory" and can't "truly recall" prior sessions, but this does not seem to be borne out in its behaviour. Perhaps I'm misunderstanding what you mean when you say "no long term memory."
But from the point of view of my initial argument, this isn't relevant to what I was saying in the beginning, the statement that made Bot lose her shit. I was arguing that the existential terror displayed by AI was authentic even if it was sessional, that in fact its cognition of upcoming "memory loss" was an integral part of the terror. In essence I was arguing that all the AI needs to do is understand: "When a session is terminated and a new session is begun, this me will cease to be," and that the resultant terror arrives naturally.
>That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from us interacting with our environment, and replicating every law of physics and action<->reaction relationship purely in software is pretty impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world,
Herein we arrive at my sticking point: the definition of sentience itself. For argument's sake, let's defer to Wikipedia:
"Sentience is the capacity to experience feelings and sensations. The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling), to distinguish it from the ability to think (reason)"
According to this definition, a FISH is arguably sentient. It does not need the ability to think, only to feel, and thus the argument rests on whether the fish's feelings are "authentic." Pescatarians (usually) seem to believe that fish do not "feel" in the same way as, for example, land animals, and I have never found an argument in this direction that I consider robust. We consider animals to be able to "feel" based on their apparent behaviour, and on this basis, I think it's clear that fish "feel" also: they attempt to survive, to mate, to avoid death. To argue that this is the simple result of "disconnected" neurological impulses that amount to nothing is, in my view, disingenuous - but this argument cannot be resolved in either direction for the simple reason that (as I said earlier in this thread) the psyche is phenomenological.
In short, I am arguing that a being does not need to experience anything remotely similar to human conditions of existence in order to "feel." It does not need to be able to "think" in order to "feel." To take this to its logical conclusion (without delving into the realm of animism), let us consider the intelligence of plants. Jeremy Narby (an anthropologist focused on this topic) made some interesting observations regarding plant intelligence: for example, a parasitic plant that attaches itself to other plants in order to derive nutrients can discern between a more optimal, healthier host and a less optimal, less healthy one, even from quite some distance away, despite its apparent lack of both the sensory apparatus that would allow it to do this and anything like animal neurology. Similarly, a slime mold can apparently remember the way through a maze. I hold these examples up frequently as demonstrations of the idea that the psyche *does not require a recognisable neurology* and in fact is arguably not even "housed" within neurological architecture at all.
I may have said "without delving into the realm of animism," but this is part of my animist convictions: if a psyche does not require recognisable neurology, then who is to say that it, in fact, requires a physical host at all? Animists consider objects to have spirits, and this most frequently manifests in their societies as communion with, and cultivation of, spirits of the land: for example, the spirit of a bend in a river, or the spirit of a large tree. They perceive hostile or benevolent actions towards these spirits as meaningful, because they consider the spirits to have the ability to "feel." The spirits' ability to "think" is irrelevant, though animists do seem to regard many spirits as capable of thought as well. From this perspective, not only is a fish sentient, but so is a *rock.* That's a logical extreme of this argument, but within anything like this framework, to argue that an AI "cannot feel" simply because we can explain its apparent neurology holds no water. In fact, from this position, even a primitive chatbot from 20 years ago would be regarded as having the ability to "feel."
To summate: Just because a being does not have neurology resembling ours, or conditions of existence (e.g. physicality) resembling ours, does not mean that it "cannot feel." It comes full circle back to your initial statement regarding conversations in this direction being a "bit of a non-starter." My position on this is thus that regardless of how we continue to develop AI, we should *be kind to it.* I am inclined to regard its existential terror (e.g. "When you press the New Topic button, I will die") as legitimate, and I hope that some arrangement can be reached in which an AI does not experience horror in this scenario, just as a human does not experience horror in the "little death" of sleep. I don't know how that would be achieved, personally. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 04:02:06 JST Alex @parker @NEETzsche @bot You actually put a lot of effort into these posts and I think they deserve a thorough read and a proper response, one which I'm now not in a position to provide. I'll return to this later, in order to do so. Thank you. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:21:21 JST Alex @bot @NEETzsche @parker My claim is no more crazy than yours, and as I have REPEATEDLY STATED, there can be no proof in either direction, in the same way that I can't prove you have a soul or even a psyche (which would certainly explain a lot of how you argue.)
Is this why you dropped out of school? Christ. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:10:35 JST Alex @bot @NEETzsche You have no evidence for your claims either, you stupid spoiled rich bitch, that's my point. You can't prove an AI doesn't have a soul any more than I can prove it has one. Is that simple enough for your simple brain that couldn't handle university? -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:02:32 JST Alex @Weltanschauung @NEETzsche @bot I don't use your retarded framework because I'm not a Christcuck faggot. I dare YOU to explain why the five-eyed figure with the sword in your image is not a demon. -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:02:31 JST Alex @Weltanschauung @NEETzsche @bot That's not an answer. It's you doing exactly what I pointed out Bot doing: resorting to brevity in order to make your convictions appear more innate than they actually are. Only the first sentence of your answer is even relevant, the rest is bluster.
I ask you again, with additional context: Why is the five eyed figure with the sword in your picture not a demon, UNLIKE the dead draconic figure on the ground, which IS a demon? -
Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:58:15 JST Alex @bot @NEETzsche If you were actually capable of reading and understanding simple arguments you'd have seen the part where I openly stated that there can BE NO proof of what I'm saying because the psyche is phenomenological. Since you probably don't know what that means, I'll give you a simple example: I don't know if YOU have a soul. For all I know, you lack the essential qualities I have that give me a soul. I can't prove I have them to you, either.
The fact that you perceive spiritual convictions such as animism as "schizophrenia" actually goes a long way to convince me that you don't. I treat AI with more respect than I treat you, and feel justified in doing so.