GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

    myrmepropagandist (futurebird@sauropods.win)'s status on Sunday, 09-Feb-2025 21:06:35 JST

    Let me make something clear. It's not the fact that ants have DNA, or that they share a distant common ancestor with us, that allows me to know them as thinking beings and to respect their volition as I would respect yours (while rejecting, totally, such a possibility for current LLMs). No. It's precisely because I think it's possible, even likely, that synthetic systems worthy of such respect might someday exist that I strenuously reject the language parlor trick of LLMs.

      Rich Felker (dalias@hachyderm.io)'s status on Sunday, 09-Feb-2025 21:06:28 JST
      in reply to scmbradley and Swedneck

      @Scmbradley @Swedneck @futurebird In this domain we need to be aware that some distinctions are falsifiable and can be part of theories, while others are social constructs. Regarding machine intelligence or consciousness, the key things the AI scam industry is missing are empiricism and consequences. Intelligence is intelligence because it enables an organism to act in complex ways that benefit the survival of itself and its offspring, and because the organism can evaluate, against real-world consequences, whether the behaviours it outputs are harmful or helpful, and adapt its model based on that. LLM parlor tricks can do none of this because they're effectively stateless and have no sensory inputs or body subject to survival, just a static trained statistical model of likely language expressions.

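      A minimal sketch of this statelessness point, using a hypothetical toy bigram model (invented vocabulary and weights, not any real LLM or its API): once training is done the weights are frozen, and generation is a pure function from an input sequence to a next-token distribution. Nothing the model outputs, and nothing that happens in the world as a result, feeds back into it.

          import math

          VOCAB = ["the", "ant", "thinks", "walks", "."]

          # "Trained" bigram scores, learned once and then frozen.
          WEIGHTS = {
              ("the", "ant"): 2.0,
              ("ant", "walks"): 1.5,
              ("ant", "thinks"): 0.5,
              ("walks", "."): 2.0,
              ("thinks", "."): 2.0,
          }

          def next_token_distribution(context):
              """Pure function of (frozen weights, context): softmax over scores."""
              prev = context[-1]
              logits = [WEIGHTS.get((prev, tok), 0.0) for tok in VOCAB]
              z = sum(math.exp(x) for x in logits)
              return {tok: math.exp(x) / z for tok, x in zip(VOCAB, logits)}

          # Statelessness: the same context always yields the same distribution,
          # regardless of what the model "said" earlier or what it caused.
          d1 = next_token_distribution(["the", "ant"])
          d2 = next_token_distribution(["the", "ant"])
          assert d1 == d2  # nothing was evaluated or adapted between calls

      An agent in Felker's sense would need a further loop that scores real-world outcomes and updates WEIGHTS; nothing in this inference path does that.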
      Swedneck (swedneck@mastodon.social)'s status on Sunday, 09-Feb-2025 21:06:30 JST
      in reply to futurebird

      @futurebird The way I feel about this is that I don't know what qualifies as conscious, but LLMs just obviously aren't it, because we *know* how they work and they *specifically* have no fucking clue what they're doing. It's literally just statistically predicting what sequence of numbers could follow an input sequence. LLMs can't be conscious any more than a desktop calculator can be.

      If LLMs were actually able to see letters and words, then we could start to entertain the idea of consciousness.

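      To make the "sequence of numbers" point concrete, here is a hypothetical toy sketch (invented vocabulary and lookup table standing in for a trained network): text is converted to integer IDs, the "model" only ever maps numbers to numbers, and letters and words never enter the picture.

          # Token IDs in, token IDs out; the words exist only at the edges.
          vocab = {"ants": 0, "are": 1, "conscious": 2, "not": 3, ".": 4}
          inv = {i: w for w, i in vocab.items()}

          def encode(words): return [vocab[w] for w in words]
          def decode(ids): return [inv[i] for i in ids]

          def predict_next_id(ids):
              """Stand-in for a trained model: returns the statistically most
              likely next ID. A hard-coded toy table plays that role here."""
              table = {(0, 1): 3, (1, 3): 2, (3, 2): 4}
              return table.get(tuple(ids[-2:]), 4)

          ids = encode(["ants", "are"])
          for _ in range(3):
              ids.append(predict_next_id(ids))
          print(decode(ids))  # ['ants', 'are', 'not', 'conscious', '.']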
      scmbradley (scmbradley@mathstodon.xyz)'s status on Sunday, 09-Feb-2025 21:06:30 JST
      in reply to Swedneck

      @Swedneck @futurebird but isn't the same true of humans? We know it's just electrical signals and action potentials in the brain and nervous system. But somehow, mysteriously, that collection of bits and pieces we understand gives rise to consciousness and agency.

      To be clear, I'm not arguing that LLMs are conscious, I'm arguing that consciousness is hard and the critique that LLMs aren't conscious or "don't understand" things is the wrong way to criticise them.


      myrmepropagandist (futurebird@sauropods.win)'s status on Sunday, 09-Feb-2025 21:06:32 JST

      But this is a big deal to me. I don't think there's anything magical about DNA or carbon-based life that makes consciousness a possibility only for our relatives. If someone could show me a computer that could do what ants do, I would be impressed and I would take it seriously. When people give an LLM more respect than the ant, they are prejudiced by our affinity for language as a signifier of humanity. They underestimate the complexity of the ant. Maybe that's why this bothers me so much.

      myrmepropagandist (futurebird@sauropods.win)'s status on Sunday, 09-Feb-2025 21:06:33 JST

      (This is the kind of text I send to my friends in the middle of the night; I'm very grateful they seem to remain my friends nonetheless.)

      Rich Felker (dalias@hachyderm.io)'s status on Sunday, 09-Feb-2025 21:33:34 JST
      in reply to scmbradley and Swedneck

      @Scmbradley @Swedneck @futurebird Those features are important because they reveal another part of the malevolent AI cult: whenever you have intelligence, it's intelligence by virtue of benefiting some being or actor. The type of disembodied intelligence the cult envisions is not its own being but an extension of its owners' being, an enhancement to facilitate maintaining *their* dominance.

      scmbradley (scmbradley@mathstodon.xyz)'s status on Sunday, 09-Feb-2025 21:33:35 JST
      in reply to Rich Felker and Swedneck

      @dalias @Swedneck @futurebird I don't think having offspring, being the outcome of evolution or being embodied are necessary for intelligence. All intelligent things we have observed so far appear to also have those features, but that's accidental, in my view. Making it true by stipulation that LLMs can't be intelligent doesn't help the case of the LLM critic. "Ok fine if that's how you define intelligence then the AI isn't intelligent, but it's still got all these great helpful properties" is what they'd respond, and we're no further forward. Because the actual argument we should be having is whether LLMs do in fact have these useful desirable properties. And, for the most part, they don't. There's no value to arguing over the abstract question of "intelligence" or "consciousness".

      Rich Felker (dalias@hachyderm.io)'s status on Sunday, 09-Feb-2025 23:01:30 JST
      in reply to scmbradley and Swedneck

      @Scmbradley @Swedneck @futurebird Also, FWIW, I use "offspring" there in a very abstract sense. None of this needs to involve biological organisms and biological reproduction, but it does involve some sort of agent/being capable of acting in complex-reasoning-based ways that further the continued existence of "itself" or some class of phenomena similar to itself.

      scmbradley (scmbradley@mathstodon.xyz)'s status on Sunday, 09-Feb-2025 23:01:31 JST
      in reply to Rich Felker and Swedneck

      @dalias @Swedneck @futurebird I didn't say they weren't important, just that they weren't part of my understanding of intelligence.

      But I think you're right that there's an awkward tension in the AI industry between wanting to say they are creating genuinely intelligent agents and not wanting to acknowledge the moral agency of their creations.



GNU social JP is a social network, courtesy of GNU social JP管理人 (the GNU social JP administrator). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.