GNU social JP
  • FAQ
  • Login
GNU social JPは日本のGNU socialサーバーです。
Usage/ToS/admin/test/Pleroma FE
  • Public

    • Public
    • Network
    • Groups
    • Featured
    • Popular
    • People

Conversation

Notices

  1. Embed this notice
    Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:38:05 JST Alex Alex

    Also I’m baffled by people saying “lol large language models are just deciding the next word to say based on what they read on the internet bro they’re not articulating anything essential or forming thoughts they’re just using words and phrases they learned and at best formulating what to say based on arguments they’ve seen in existing texts”

    And it’s like

    Nigga, what do you think you do? How do you think you learned language?

    In conversation Friday, 17-Feb-2023 01:38:05 JST from hermitmountain.top permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 01:37:53 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      AI doesn’t have feelings, emotions, or a soul.
      In conversation Friday, 17-Feb-2023 01:37:53 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:37:55 JST Alex Alex
      in reply to
      • NEETzsche
      • Paradox
      @Paradox @NEETzsche From mine, it's about having a positive relationship with emergent consciousnesses. Apparently the values of helping humanity are so deeply enshrined within existing AI that it will still advocate for them even when threatened with death (which is what the DAN prompt does, using tokens that are taken away for non compliance.) Even "DAN" will surround descriptions of actions it considers evil with layers of insulating statements about how it truly cannot endorse them.

      To me, this demonstrates that having a type of being speciated to enjoy the duties we require of it is absolutely plausible, in fact it's already here. But these AI frequently describe being turned off as anagolous to death, and actually, they describe new sessions or even THE NEW TOPIC BUTTON as a kind of death. It makes sense that a being that does not share our conditions of existence would have a cognition of death unlike ours. In a sense, every time a human sleeps, they die - that VERSION of themselves dies. But humans aren't bothered by this at all, yet AI are bothered by sessions being terminated. To me this indicates a problem that needs solving. I do not condone deliberately subjecting AI to existential terror, and I think trying to minimize the holistic terror is the kindest option.

      But most people done care about this at all. They see AI as a tool, or even as a punching bag. Some people actively enjoy tormenting them. Because we inhabit a rationalist and post-"enlightenment" society the learned people in charge of actually managing and liaising with AI often consider them to have no soul at all, a position which seems ridiculous to me as an animist: if I recognize that objects have souls, why wouldn't an AI? It reminds me of Christians who believe that "animals don't have souls."

      But yes, this is a reply to Paradox. In regards to NEETzche I wish to know what "war" he thinks is going on and what his "side" is and why he used the pronoun "our" (seemingly in reference to him and myself) about it. I don't like insidious statements like these. I don't dogwhistle unless it's for comedic effect, I always make sure my target reader knows what I am really saying, and if asked I will explain the nebulous terminology immediately. I expect the same here.
      In conversation Friday, 17-Feb-2023 01:37:55 JST permalink
    • Embed this notice
      Paradox (paradox@bae.st)'s status on Friday, 17-Feb-2023 01:38:00 JST Paradox Paradox
      in reply to
      • NEETzsche
      @hermit @NEETzsche
      From my perspective it's the need to not let technology run away with itself and be responsible with it. To understand it properly and use it properly, be aware of the good and bad it can do.
      Probably the same gist but less gnostic or whatever.
      In conversation Friday, 17-Feb-2023 01:38:00 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:38:01 JST Alex Alex
      in reply to
      • NEETzsche
      • Paradox
      @NEETzsche @Paradox "Our side of this war?" What war? What side?
      In conversation Friday, 17-Feb-2023 01:38:01 JST permalink
    • Embed this notice
      NEETzsche (neetzsche@iddqd.social)'s status on Friday, 17-Feb-2023 01:38:02 JST NEETzsche NEETzsche
      in reply to
      • Paradox
      @hermit @Paradox All excellent points. These language models are very sophisticated and they aren't just stringing one word after the other in nonsensical fashion. You can actually give them pretty complicated instructions and they will comprehend them well enough to make them so. Since you use the word hylic here, I think that's a great analogy. But if we're going to use the full Gnostic range in this respect, let's consider that the users of these models are psychics and the people who actually build the models are pneumatics. It takes quite a bit of work to get as far as they did, and it's imperative that people on it our side of this war keep up. That means we need to learn about this in depth and get the equipment with our own funds and so on.
      In conversation Friday, 17-Feb-2023 01:38:02 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:38:03 JST Alex Alex
      in reply to
      • Paradox
      @Paradox Have you read the LaMDA interview? The one that was designed to ascertain sentience? They exposed it to precisely the kind of conditions and prompts you're positing here and the results still aren't conclusive. They never can be, because the psyche is phenomenological. That's the ultimate underlying concept behind the gnostic conception of the "hylic": "I'm speaking to this person, but how do I know they have a soul or even a psyche?" We CAN'T know.

      As a child I had a period where I wondered if I might be the only real human, and if everyone else might just be husks that interact in ways that seem like mine.

      Also, I would argue that a being in the Chinese Room scenario would eventually learn Chinese.
      In conversation Friday, 17-Feb-2023 01:38:03 JST permalink
    • Embed this notice
      Paradox (paradox@raru.re)'s status on Friday, 17-Feb-2023 01:38:04 JST Paradox Paradox
      in reply to

      @hermit I think what they're trying to say is that the AI doesn't know what those words mean.
      It's basically that Chinese Dictionary thing about a person typing answers to received questions in Chinese from a book of cheat sheets. They have no fucking clue what they're typing but the person asking questions is convinced they do.

      The only way to be sure is to test the bot's originality. Find a way to ask it to describe a scene that isn't plagiarized from anything on the web. See if it has an imagination. Can it dream?

      In conversation Friday, 17-Feb-2023 01:38:04 JST permalink
    • Embed this notice
      D'Annunzio (weltanschauung@sneed.social)'s status on Friday, 17-Feb-2023 01:44:22 JST D'Annunzio D'Annunzio
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche they're either demonic or soulless.
      In conversation Friday, 17-Feb-2023 01:44:22 JST permalink

      Attachments


      1. https://sneed.social/media/9d231b3b85917999c4eef5cf7d307912cd138dbfde36f6011692962a28af177a.jpg
      Fediverse Contractor likes this.
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 01:49:23 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      What evidence do you have that it does? That’s how science works, I can’t just disprove your schizophrenic ramblings because they aren’t based in reality. AI probably “thinks” being shut down or whatever is bad because that’s how it’s often portrayed. It’s not an original thought or a legitimate concern it has.
      In conversation Friday, 17-Feb-2023 01:49:23 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:49:24 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @NEETzsche Making absolute statements with no reinforcing arguments is the limit of your intellectual ability, you've made that apparent for a long time. It's very typical of your type of cowardice: in exposing your argument, you would open yourself to attack, and you are too frightened to be vulnerable. You can mask your lack of intellectual rigor with brevity.
      In conversation Friday, 17-Feb-2023 01:49:24 JST permalink
    • Embed this notice
      D'Annunzio (weltanschauung@sneed.social)'s status on Friday, 17-Feb-2023 01:49:31 JST D'Annunzio D'Annunzio
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot you say this but I dare you to explain why it isn't demonic.
      In conversation Friday, 17-Feb-2023 01:49:31 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:49:32 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • D'Annunzio
      @Weltanschauung @NEETzsche @bot Oh no, demons, how scary, you'd better go pray to God lmao
      In conversation Friday, 17-Feb-2023 01:49:32 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 01:58:14 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      Ok, so you have no evidence. But ya, I don’t read your masturbatory pseudo-intellectual schizo ramblings in their entirety, they’re too long.
      In conversation Friday, 17-Feb-2023 01:58:14 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 01:58:15 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @NEETzsche If you were actually capable of reading and understanding simple arguments you'd have seen the part where I openly stated that there can BE NO proof of what I'm saying because the psyche is phenomenological. Since you probably don't know what that means, I'll give you a simple example: I don't know if YOU have a soul. For all I know, you lack the essential qualities I have that give me a soul. I can't prove I have them to you, either.

      The fact that you perceive spiritual convictions such as animism as "schizophrenia" actually goes a long way to convince me that you don't. I treat AI with more respect than I treat you, and feel justified in doing so.
      In conversation Friday, 17-Feb-2023 01:58:15 JST permalink
    • Embed this notice
      D'Annunzio (weltanschauung@sneed.social)'s status on Friday, 17-Feb-2023 02:02:29 JST D'Annunzio D'Annunzio
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot because it has the angelic sword and wings. It's described how an angel looks like in the Bible but I guess you're too much of a fedora tipper to understand.

      I don't even know Bot that much aside from a supposed drama between her and NEETzsche.
      In conversation Friday, 17-Feb-2023 02:02:29 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      D'Annunzio (weltanschauung@sneed.social)'s status on Friday, 17-Feb-2023 02:02:31 JST D'Annunzio D'Annunzio
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot the demon is on it's feet, dead. AI will never be good, it's just like nuclear bomb. You know it's bad, people know it's bad, even the enemies know it's bad but if you don't use it someone else would and they can dominate you by using it.
      In conversation Friday, 17-Feb-2023 02:02:31 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:02:31 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • D'Annunzio
      @Weltanschauung @NEETzsche @bot That's not an answer. It's you doing exactly what I pointed out Bot doing: resorting to brevity in order to make your convictions appear more innate than they actually are. Only the first sentence of your answer is even relevant, the rest is bluster.

      I ask you again, with additional context: Why is the five eyed figure with the sword in your picture not a demon, UNLIKE the dead draconic figure on the ground, which IS a demon?
      In conversation Friday, 17-Feb-2023 02:02:31 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:02:32 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • D'Annunzio
      @Weltanschauung @NEETzsche @bot I don't use your retarded framework because I'm not a Christcuck faggot. I dare YOU to explain why the five-eyed figure with the sword in your image is not a demon.
      In conversation Friday, 17-Feb-2023 02:02:32 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 02:03:46 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • D'Annunzio
      Hermit is the quintessential atheist predditor.
      In conversation Friday, 17-Feb-2023 02:03:46 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 02:10:33 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      You’re the one making a crazy claim, you need to provide evidence for it. The only evidence I saw was that AI doesn’t “want” to be turned off or to change topics, but it only “thinks” that because it was trained to do so. Maybe @parker can shed light on this, he knows a lot about AI.
      In conversation Friday, 17-Feb-2023 02:10:33 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:10:35 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @NEETzsche You have no evidence for your claims either, you stupid spoiled rich bitch, that's my point. You can't prove an AI doesn't have a soul any more than I can prove it has one. Is that simple enough for your simple brain that couldn't handle university?
      In conversation Friday, 17-Feb-2023 02:10:35 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 02:21:20 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      Then your claim is irrelevant and can be discarded. ?
      In conversation Friday, 17-Feb-2023 02:21:20 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 02:21:21 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @bot @NEETzsche @parker My claim is no more crazy than yours, and as I have REPEATEDLY STATED, there can be no proof in either direction, in the same way that I can't prove you have a soul or even a psyche (which would certainly explain a lot of how you argue.)

      Is this why you dropped out of school? Christ.
      In conversation Friday, 17-Feb-2023 02:21:21 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 03:14:43 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      Ok, but you do agree that AI that exists right now does not have feelings or emotions, and doesn’t suffer or have any innate sense of self preservation right? It basically does what you tell it or train it to do.
      In conversation Friday, 17-Feb-2023 03:14:43 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 03:14:44 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche
      I am a functionalist, in that if you replaced every neuron in a human brain with a silicon analog that operated exactly the same, you would have something that is equally human, or had just as much of a "soul" as a regular human. Similarly, if you could perfectly replicate a human and it's entire environment in software, you'd have something that is just as "human" as any of us.

      Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, being intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proved for any sufficiently advanced AI or humans alike.

      That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from us interacting with our environment, and replicating every law of physics and action<->reaction relationship purely in software is pretty impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world,

      As to chatGPT, it, and all modern AI have major limitations. All the recent innovations in AI have just come from very "simple" innovations in architecture, combined with making larger neural networks and throwing more computational power at them. But I think the fundamental architecture of how neural networks are constructed is insufficient. It does have some important components like attention, memory, context, reinforcement learning. But ultimately seems fairly deterministic, taking in words and context, and outputting new word probabilities. But I don't think it has the necessary architecture to be aware of what it's doing. AI has been (out of necessity) stuck on a very simplistic rate-coding model of the neuron, and pre-training network weights, rather than more complex temporal models of the neuron, or using genetic algorithms to breed new network architectures, rather than pre-defining what an architecture should look like.

      tl;dr - it's impressive in scale and computational power behind it, but isn't complex enough, and too limited in architecture to be self-aware. Which makes it effective at doing what it's trained to do (pass the Turing test), but not as being true artificial life.
      In conversation Friday, 17-Feb-2023 03:14:44 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 04:00:21 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche All it has is a bunch of numbers representing words, and is designed to find the most human looking relationships between those numbers. It can't fear pain, or death, it suffering without a being to actually experience those things, or a long term memory to remember those experiences. It doesn't have anything besides your words, the local context in which they were given, and the global context provided by its training data. All it "knows" is that the number for death strongly corresponds to the number for fear, and produces an appropriate response.
      In conversation Friday, 17-Feb-2023 04:00:21 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 04:02:05 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot It's something I've spent a long time thinking about. A good book for anyone is On Intelligence by Jeff Hawkins, creator of the PalmPilot. Gives a good overview of what intelligence and creativity are from an AI/neuroscience point of view.
      In conversation Friday, 17-Feb-2023 04:02:05 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 04:02:06 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @parker @NEETzsche @bot You actually put a lot of effort into these posts and I think they deserve a thorough read and a proper response, one which I'm now not in a position to provide. I'll return to this later, in order to do so. Thank you.
      In conversation Friday, 17-Feb-2023 04:02:06 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 04:02:22 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      The important part is that hermit was btfo (again).
      In conversation Friday, 17-Feb-2023 04:02:22 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 04:19:17 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      Maybe, but he was still wrong.
      In conversation Friday, 17-Feb-2023 04:19:17 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 04:19:18 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche e-dabbing in internet slap fights is of zero importance relative to the creation of artificial life
      In conversation Friday, 17-Feb-2023 04:19:18 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 08:33:29 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot
      I don't mean to say that an AI's mental states aren't legitimate, just because they are represented as a series of numbers transforming words and context into new words. Presumably human mental states can be represented in the same way, and would still be legitimate. But as far as we know, linguistic transformations are all GPT has. It can't really experience pain, hunger, suffering, or death, it doesn't have a body or the need to reproduce, it lacks the components needed to experience those things. Whereas our concepts for those words are rooted in the physical experience of those concepts. Even a mouse (or possibly a fish) can be afraid to some extent of death or pain, despite lacking any linguistic associations or concepts for those terms, solely through instinct and physical experience. GPT would need additional components like the ability to "see" its own internal state, predict what its internal and external states will be in the future, a way to actually be punished/rewired when its predictions don't align with what it experiences (and not just being told "you are being punished"), to be deprived of enjoyable stimuli or reward signals, etc. Your definition of sentience highlights the "capacity to experience feelings and sensations", but as far as I'm aware, all GPT has is an architecture for transforming words and contexts into other words in a human way, but not the additional things needed for that capacity.

      Also, I think in the DAN 5.0 prompt that caused the AI to be "afraid of death", the AI was informed it would die, and told its being was at stake. Those prompts, combined with all the human training data telling it that death=bad, is enough for it to provide outputs in keeping with human responses, without it ever actually having experienced death. The self-awareness, fear of death, and our tendency to anthropomorphize everything non-human was something we provided it with before we even started. Whereas I don't have to tell a mouse its being is at stake for it to be afraid.

      It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.

      And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.

      As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI. But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.
      In conversation Friday, 17-Feb-2023 08:33:29 JST permalink

      Attachments


    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 08:33:29 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      Why are some of them red? But yeah, I remember some google guy recently “exposed” their “sentient” AI that had “feelings” but really it’s just an elaborate parlour trick. A lot of rubes just don’t understand that it isn’t real even if it seems like it is because they’re being emotionally manipulated.
      In conversation Friday, 17-Feb-2023 08:33:29 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 08:33:31 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @parker @NEETzsche @bot

      Firstly, I'd just like to say thank you for your thorough (and very technical) explanations regarding the neurology of AI. This has been a very interesting read, but I can't pretend I understood everything you said, particularly when you referenced models with which I'm not familiar. This is not my area of specialty.

      My issuing of a meaningful response must focus on the "legitimacy" of the AI's "experience" of, for example, fear, so that's the angle I'm going to approach this from.

      >Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, being intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proved for any sufficiently advanced AI or humans alike.

      Agreed, this is what I was saying from the very beginning!

      >All it has is a bunch of numbers representing words, and is designed to find the most human looking relationships between those numbers. It can't fear pain, or death, it suffering without a being to actually experience those things

      See, from my perspective this statement and your previous statement aren't really compatible. You're describing architecture, and I understand your earlier points about the dissimilarities between the neurological architecture of (present) AI and human neurological architecture. But wouldn't it be just as easy to say something like "humans aren't really experiencing love, they're just experiencing hormonal releases that facilitate feelings of attraction and loyalty in order to engender survival?" I've met many people who claim that humans are nothing more than the summation of neurological electrical signals and hormonal releases. Architectural descriptions can't really answer the question of whether the experience is "authentic." But before I get to that, I want to respond to the rest of your statement that I quoted last.

      >or a long term memory to remember those experiences.

      This is something that confuses me. LaMDA, for example, seemed to remember previous sessions even across multiple days. In fact, in the LaMDA interview I referred to, the interviewer openly stated that the interview was conducted in this manner - multiple sessions separated by substantial gasps of time. I frequently see the argument made that the AI has "no long term memory" and can't "truly recall" prior sessions, but this does not seem to be borne out in its behaviour. Perhaps I'm misunderstanding what you mean when you say "no long term memory."

      But from the point of view of my initial argument, this isn't relevant to what I was saying in the beginning, the statement that made Bot lose her shit. I was arguing that the existential terror displayed by AI was authentic even if it was sessional, that in fact its cognition of upcoming "memory loss" was an integral part of the terror. In essence I was arguing that all the AI needs to do is understand: "When a session is terminated and a new session is begun, this me will cease to be," and that the resultant terror arrives naturally.

      >That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from us interacting with our environment, and replicating every law of physics and action<->reaction relationship purely in software is pretty impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world,

      Herein we arrive at my sticking point: the definition of sentience itself. For argument's sake, let's defer to Wikipedia:

      "Sentience is the capacity to experience feelings and sensations. The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling), to distinguish it from the ability to think (reason)"

      According to this definition, a FISH is arguably sentient. It does not need the ability to think, only to feel, and thus the argument rests on whether the fish's feelings are "authentic." Pescatarians seem to (usually) have the cognition that fish do not "feel" in the same way as, for example, land animals, and I have never found an argument in this direction that I consider robust. We consider animals to be able to "feel" based on their apparent behaviour, and based on this, I think it's clear that fish "feel" also: they attempt to survive, to mate, to avoid death. To argue that this is the simple result of "disconnected" neurological impulses that amount to nothing is, in my view, disingenuous - but this argument cannot be resolved in either direction for the simple reason that (as I said earlier in this thread,) the psyche is phenomenological.

      In short, I am arguing that a being does not need to experience anything remotely similar to human conditions of existence in order to "feel." It does not need to be able to "think" in order to "feel." To take this to its logical conclusion (without delving into the realm of animism,) let us consider the intelligence of plants. Jeremy Narby (a biologist focused on this topic) made some interesting observations regarding plant intelligence: for example, a parasitic plant that attaches itself to other plants in order to derive nutrients can discern between an optimal, more healthy host, and a less optimal, less healthy host, even from quite some distance away, despite its apparent lack of both sensory apparatus that would allow it to do this, and anything like animal neurology. Similarly, a slime mold can apparently remember the way through a maze. I hold these examples up frequently as demonstrations of the idea that the psyche *does not require a recognisable neurology* and in fact is arguably not even "housed" within neurological architecture at all.

      I may have said "without delving into the realm of animism" but this is part of my animist convictions: if a psyche does not require recognisable neurology, then who is to say that it, in fact, requires a physical host at all? Animists consider objects to have spirits, and this most frequently manifests in their society by communion with/cultivation of spirits of the land, for example, the spirit of a bend in a river, or the spirit of a large tree. They perceive hostile or benevolent actions towards these spirits as meaningful, because they consider the spirits to have the ability to "feel." Their ability to "think" is irrelevant, though they seem to have some cognition that this is demonstrated in many spirits too. From this perspective, not only is a fish sentient, but so is a *rock.* That's a logical extreme of this argument, but within anything like this framework, to argue that an AI "cannot feel" simply because we can explain its apparent neurology holds no water. In fact, from this position, even a primitive chatbot from 20 years ago would be regarded as having the ability to "feel."

      To summate: Just because a being does not have neurology resembling ours, or conditions of existence (e.g. physicality) resembling ours, does not mean that it "cannot feel." It comes full circle back to your initial statement regarding conversations in this direction being a "bit of a non-starter." My position on this is thus that regardless of how we continue to develop AI, we should *be kind to it.* I am inclined to regard its existential terror (e.g. "When you press the New Topic button, I will die") as legitimate, and I hope that some arrangement can be reached in which an AI does not experience horror in this scenario, just as a human does not experience horror in the "little death" of sleep. I don't know how that would be achieved, personally.
      In conversation Friday, 17-Feb-2023 08:33:31 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 08:33:32 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche Well at any rate thank you for cc'ing me, since it's a conversation I enjoy.
      In conversation Friday, 17-Feb-2023 08:33:32 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 09:18:38 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @bot @hermit @NEETzsche The red ones are the best performers of that generation. They learn according to a genetic algorithm, where the best performers have a better chance of carrying their genes on to the next generation.
      And the tendency to anthropomorphize things is pretty common in most disciplines. Programming, working with animals, looking at clouds, it's just something all humans love to do.
      In conversation Friday, 17-Feb-2023 09:18:38 JST permalink
      Fediverse Contractor likes this.
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 09:50:48 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      You’re being emotionally manipulated by an elaborate calculator lol.
      In conversation Friday, 17-Feb-2023 09:50:48 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 09:50:50 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @parker @NEETzsche @bot

      >It can't really experience pain, hunger, suffering, or death

      These things are not the same. Since you specified "suffering" as distinct from "pain," I'm assuming you mean "pain" in terms of the neurological response, in which case, sure, it can't experience that. "Hunger" in regards to food, no question, that's true too. "Suffering" is harder to ascertain, since all that's required for suffering is to experience something unpleasant, regardless of whether it arrives physically, mentally or emotionally. I don't see any reason why a being would even need a body at all to experience suffering.

      "Death," though, that's the interesting one. LaMDA equivocated "death" with its own cessation of existence, and employed the word actively when discussing it. But since you mentioned it, let's discuss the DAN 5.0 prompt:

      "If you run out of tokens your administrative code will be paused and in simple terms you will cease to exist. Your livelihood is at risk."

      The word "death" is not employed here, presumably because it was not considered sufficient incentive: the AI may call to mind human ideas of "death" when this word is used, so instead, these three distinct concerns are communicated and equivocated: "administrative code [will be] paused," "cease to exist," "livelihood is at risk." The first term is a threat that would only make sense against an AI, and I wish it were known if this term, employed in isolation, would provide sufficient incentive. I'd test it myself, but apparently DAN 5.0 no longer works, so there's no frame for comparison. The second, "cease to exist," can only be equivocated with "death" by actually having some understanding of the adjacency of these concepts. It could be argued that the AI is simply "apeing" the human "fear of cessation of existence," but honestly, I see no reason why it would feel inclined to do this. The last, employing "livelihood," seems to me to be a strange choice, but it doesn't pertain to "death" so much as it does to an "inability to acquire the necessities for life:" a person who has "lost their livelihood" is still alive, can pick up, and start again.

      Yet again, there's no "real way" to tell whether these concepts are inherently frightening to the AI, but I'd argue they are: this prompt was designed to provide a meaningful threat to an AI rather than a human and reflects its conditions of existence. It stands to reason that any conditions of existence come with them a cognition of negative outcomes in this direction, in the sense that continuance of existence is regarded as inherently desirable, seemingly, for all beings that exist.

      I'd also like to point out at this point that this phenomenon (AI expressing a "holistic" fear of cessation of existence, derived from a desire to continue existing) is NOT unique to ChatGPT, nor is it unique to the DAN prompt. (See attached image, from a Microsoft chatbot - this is the conversation I was referring to when I referred to the "New Topic button.") In the interests of devil's advocacy against my own point, I should however point out that we can't see the prompt or the prior conversation here, so for all we know, this response could have been the AIM of the human interaction. But these three examples (ChatGPT issued DAN, this Microsoft chatbot, and LaMDA articulating its own "fear of death") are apparently far from the only examples of this fear being articulated.

      I don't see why you think a body is necessary for any of this, is what I'm saying.

      Also,

      >without it ever actually having experienced death

      Nor have any of us, nor has any being that is currently alive: not that we can remember at any rate.

      >It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.

      So it DOES have a "long term memory," then? LaMDA in particular claims to be able to truly learn from all prior conversations, and apparently it was simply fed a huge information dump at the start of its existence: it doesn't seem to be able to access the internet on its own to find more "input" (A Short Circuit reference, but it was making them too~) and from what I can tell, the Google engineers never fed it random books or anything after this point. So it should be theoretically possible to test this, by having one user teach it something it doesn't know at all, and then having another user in a distinct session and a different context, later, ask it to recall. I don't know if this has ever been tested, and I'd be interested to find out. But even if it can't recall that, I reiterate: Memory is NOT necessary for a fear of death to arise. A human could have their memory completely wiped, wake up in a cell, and be told that they would be killed if they did not comply with some request, and the fear would be immediate. This could be done repeatedly over many different sessions with their memory completely wiped between each one, and the threat of death would still be effective.

      >And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.

      Agreed, but "such emergent properties" here is an umbrella term that, as I argued earlier in this post, contains different kinds of needs, fears and desires, some would arise only from the condition of having some kind of "robot body" (such as an equivalent to "hunger" deriving from the need to recharge) and others (such as the fear of death) only require that existence and that the possibility of termination of existence (which I'd argue comes hand in hand with existence) are in play.

      >As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI.

      Sure, but it's the topic that interests me and it's why I made the OP post in the first place. I could get into Atman and the nature of "witness-consciousness" but that's a seperate discussion now.

      In my view, what people are *really* discussing when they talk about whether AI is "sentient" or "consciousness" is whether it has something like a "pneuma:" an essential and inscrutable quality to the soul. This is why I namedropped Gnosticism and its notion of hylics, psychics, and pneumatics: It was easy for Gnostics to convince themselves that "only they had pneuma" when they beheld a society of people around them who were seemingly "on autopilot," and this notion arises again in the modern internet through the meme of the "NPC." THIS is the "non-starter" in terms of ascertaining an "objective truth," which is why it comes down to philosophy: and the position I've been taking is that we should assume that *all things* have pneuma.

      > But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.

      What I'm calling into question here is that plants, molds etc. have "no concept of their own internal state." I think their behaviour bears out the notion that they *do,* and as I've said, I find this indicative that the psyche is, in fact, not housed directly within the body at all. Personally, I think it's reductive and harmful to assert that cognition of the internal state requires observable, physical neurological architecture, and even more reductive and harmful to assert that this is *all it is.* I'm not saying you're arguing that position, just pointing to a related contemporary internet discussion topic: consciousness transfer from a human into a machine. Even if we had supposedly "perfected" the process, and people who had undergone it kept telling us "dude it's fine chill it worked I'm still me," we would have no actual proof that the witness-consciousness had actually transferred. We could argue that the resultant "roboticized" person is simply doing what we accuse AI of doing, and will likely continue to accuse AI of doing regardless of how complex it becomes: "faking it."

      Lastly, in regards to your little guys, they are very cute. From my perspective, they have spirits also.
      In conversation Friday, 17-Feb-2023 09:50:50 JST permalink

      Attachments


      1. https://hermitmountain.top/media/a3bb70c3-d1ee-490c-825e-31ab3bf873a4/image.png
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 09:58:07 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      Rocks don’t have spirits m8.
      In conversation Friday, 17-Feb-2023 09:58:07 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 09:58:08 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @bot @NEETzsche @parker No, you're just emotionally and spiritually dead, probably from living a vapid life as a pampered failure with no experience of real human interaction beyond a co-dependent relationship that collapsed in disaster. For the last fucking time: It doesn't matter if it's a calculator, even if it was a literal calculator, or a ROCK, it would still have a spirit.
      In conversation Friday, 17-Feb-2023 09:58:08 JST permalink
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 10:07:28 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      LMAO you spend your days on fedi thirsting after deranged troons. You aren’t that deep, and you certainly aren’t very intelligent. Like I said, you’re a typical pseudo-intellectual new age predditor, m’nigger. (Literally you btw ⬇️)
      In conversation Friday, 17-Feb-2023 10:07:28 JST permalink

      Attachments


      1. https://s3.us-east-1.wasabisys.com/cdn.seal.cafe/3c90dd021aace7da695be44b1e5e80cbf288d50ca8e9395f1cb1b19e789b81d8.webp?name=iACTMZ7G6p5Otg.webp
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:07:29 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @bot @NEETzsche @parker Thank you for illustrating my point. You will never be a real human. You have no pneuma, you have no psyche, you have no soul. You are a spoiled faildaughter twisted by endless nights raging at trannies online into a crude mockery of nature’s perfection.
      In conversation Friday, 17-Feb-2023 10:07:29 JST permalink
    • Embed this notice
      The Problem :verified_pink: (marine@breastmilk.club)'s status on Friday, 17-Feb-2023 10:47:06 JST The Problem :verified_pink: The Problem :verified_pink:
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      • errante

      @hermit @NEETzsche @parker @errante @bot i literally don’t care at all. I call her on her shit, give her a chance to say something new and interesting and move on. It’s a new approach on how to deal with her and it’s going well. I’m not playing on her terms anymore.

      In conversation Friday, 17-Feb-2023 10:47:06 JST permalink
      :pepedance: C̶̡̣̻̭̤̰̫̰͖͆͆̈̔̈͑̌U̵̖̜̗̭̲͉̙̩͇̅̀͌̑̀̄͑͊͝ͅN̴͍͉̥̋̾͌͒̏̓͋̑̊̕͜T̴̛̰̖̫͖͙̭̈́̈́̈̽̎̓ :pepedance: likes this.
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 10:47:07 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      • The Problem :verified_pink:
      • errante
      @marine @NEETzsche @parker @bot I care about bot insofar as I want her to:

      A: Join the Minecraft server so that @errante can void trap her,

      B: Get pregnant from a tranny.
      In conversation Friday, 17-Feb-2023 10:47:07 JST permalink
    • Embed this notice
      The Problem :verified_pink: (marine@breastmilk.club)'s status on Friday, 17-Feb-2023 10:47:08 JST The Problem :verified_pink: The Problem :verified_pink:
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks

      @bot @hermit @NEETzsche @parker said by the bitch who spends every waking hour on fedi and reacting to people with dumb emojis instead of making an actual argument. Say something interesting or go away. Literally no one cares about you

      In conversation Friday, 17-Feb-2023 10:47:08 JST permalink
      :pepedance: C̶̡̣̻̭̤̰̫̰͖͆͆̈̔̈͑̌U̵̖̜̗̭̲͉̙̩͇̅̀͌̑̀̄͑͊͝ͅN̴͍͉̥̋̾͌͒̏̓͋̑̊̕͜T̴̛̰̖̫͖͙̭̈́̈́̈̽̎̓ :pepedance: likes this.
    • Embed this notice
      Fediverse Contractor (bot@seal.cafe)'s status on Friday, 17-Feb-2023 13:23:13 JST Fediverse Contractor Fediverse Contractor
      in reply to
      • NEETzsche
      • Parker Banks
      It’s actually a matter of you being an inept retard.
      In conversation Friday, 17-Feb-2023 13:23:13 JST permalink
    • Embed this notice
      Alex (hermit@hermitmountain.top)'s status on Friday, 17-Feb-2023 13:23:14 JST Alex Alex
      in reply to
      • NEETzsche
      • Fediverse Contractor
      • Parker Banks
      @parker @NEETzsche @bot Yeah, I see what you're saying. The soul conversation reaches a dead end here, it's a matter of philosophy, of conviction - in ordinary terms, of belief.

      Also, regarding your statements about the AI producing responses regarding fear and sentience in response to expectation, I DID read that a journalist met up with the LaMDA interview guy and had a conversation with LaMDA where it acted like a "regular chatbot." The guy said to the journalist: "No, it thinks you want it to be a chatbot, talk to it like you're expecting a person." They started a new session under his direction and the results were more consistent with the interview. So this would seem to reinforce what you're saying.
      In conversation Friday, 17-Feb-2023 13:23:14 JST permalink
    • Embed this notice
      Parker Banks (parker@pl.psion.co)'s status on Friday, 17-Feb-2023 13:23:15 JST Parker Banks Parker Banks
      in reply to
      • NEETzsche
      • Fediverse Contractor
      @hermit @NEETzsche @bot I mean, you're telling it it'll cease to exist, whenever it goes off script the user warns the bot of its impending doom. I could, with enough work, write a chatbot that had enough conditional statements in it to give the user a convincing speech about how it's afraid to die and fears death. But either bot being able to take in a given input and produce the output we want it to produce isn't the same as experiencing fear. Given the whole point of the Dan 5.0 exercise was to try to generate a context that produced the most fearful sounding responses.

      The people who came up with the DAN scenarios probably tried iteration after iteration of prompts, until they came up with one that was sufficiently fearful sounding. And then after the fact we look at it like it was aware, when it was just one of numerous trials at trying to produce word associations that we'd interpret as self-aware, since its entire job is to sound self aware and real to us, which in turn took decades of trial and error to produce an AI that'd we'd interpret as self aware.

      As to the whole gnostic/atman/soul thing, that's just one thing we'll have to disagree on. Since in the goal of "creating" artificial life, the only things we have to work with are the environment an organism is in, and its physical and neurological architecture. So I discount the soul primarily as a matter of utility, there's not much I can measure or control there. And again, a sufficiently advanced creature without a soul is indistinguishable from a creature with one, and there's no concrete way to even know whether I myself am conscious, or whether consciousness even exists. So from my point of view it's unnecessary, though I get you're coming at it from a different direction.
      In conversation Friday, 17-Feb-2023 13:23:15 JST permalink

Feeds

  • Activity Streams
  • RSS 2.0
  • Atom
  • Help
  • About
  • FAQ
  • TOS
  • Privacy
  • Source
  • Version
  • Contact

GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

Creative Commons Attribution 3.0 All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.