GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

    novatorine 🏴🏳️‍⚧️ (anarchopunk_girl@kolektiva.social), Tuesday, 08-Aug-2023 22:00:00 JST

    Everyone: AI is going to replace us!

    AI:


    Attachments


    1. https://kolektiva.social/system/media_attachments/files/110/854/109/198/039/150/original/bac153ddf5fa73e8.png
      SlightlyCyberpunk (admin@mastodon.slightlycyberpunk.com), Tuesday, 08-Aug-2023 22:39:24 JST

      @anarchopunk_girl I'm not afraid of the AI taking over; I'm afraid of the politicians trusting (or abusing) its output. What happens when the AI -- trained on keyboard warriors and Hollywood scripts -- is asked by the next Trump if he should preemptively nuke Beijing?

      Jaime Herazo (jherazob@mastodon.ie), Tuesday, 08-Aug-2023 22:39:28 JST

      @anarchopunk_girl
      Every time somebody defends the usefulness of AI for actual work, I reply that it's very unreliable, and they always downplay it: "Humans do that too!" and other shit. Yeah, right.

      novatorine 🏴🏳️‍⚧️ (anarchopunk_girl@kolektiva.social), Tuesday, 08-Aug-2023 22:47:08 JST
      in reply to Jaime Herazo

      @jherazob The crucial difference is that while humans may make mistakes or occasionally make things up, we still have actual knowledge, memories, and understandings of concepts and principles and how they relate, which we can consult to at least attempt to say accurate things. We are also usually *trying* to be correct, and the reward function our brains have been trained under since birth is a connection to a real, objective world that rewards actually understanding how things work and punishes not understanding: if you're wrong about something, it simply won't fly when you try it in the real world. So we are still systemically capable of, and oriented towards, truth.

      AIs, by contrast, have no understanding of concepts or principles and no actual knowledge or memories; it's all thrown into a statistical blender. There is no memory-storage portion of their neural network, and their reward function only rewards the plausibility (in the sense of looking superficially, statistically right) of an assemblage of words given a context. In other words, they aren't a system designed to be consistently accurate, or even to "care" whether they're accurate. That they're right sometimes is purely orthogonal or incidental to what the AI is trying to do; it's an accident.

      It's like saying pathological liars (who only care whether something "sounds right") and regular people are the same because regular people sometimes make mistakes or exaggerate. No: one is fundamentally not oriented toward producing accuracy, and the other at least is.

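The "statistical blender" point above can be illustrated with a toy next-word model. This is an editorial sketch, not how any production LLM is built (real models use neural networks, not bigram counts), but it shows the same failure mode in miniature: the model emits whatever continuation is most statistically plausible in its training text, with no representation of truth anywhere in the system.

```python
from collections import Counter, defaultdict

# Toy corpus: the only "knowledge" the model ever sees is word adjacency.
corpus = ("the capital of france is paris . "
          "the capital of spain is madrid . "
          "the capital of france is beautiful .").split()

# Count bigrams: for each word, how often each successor follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=6):
    """Greedily emit the statistically most plausible next word.

    Nothing here checks whether the output is *true* -- plausibility
    relative to the training text is the only criterion.
    """
    out = [start]
    for _ in range(length):
        successors = bigrams.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

When the model happens to say something accurate, it is because the training text happened to pair those words often, not because anything in the system knows or checked the fact; that is the "accident" described in the post above.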


GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.