GNU social JP

GNU social JP is a Japanese GNU social server.

Conversation

Notices

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:01:23 JST

    People overestimate what "AI" of today is and what it can actually do, because memes are spreading that describe any machine-assisted process in a highly glossed-over form, ignoring the required human effort to make it work.

    The naive impression is that someone just gave a generative engine a prompt and the result came out fully formed. The actual process is that the people behind the project used multiple purpose-built engines, iterated on prompts for each engine until it output something semi-coherent, and then used human effort to tie the results together.

    This is currently spreading as "this AI-generated pizza commercial" with no further explanation, but Tom's Hardware interviewed the actual people who made it work:

    www.tomshardware.com/news/ai-p…

    #PizzaAd #AIVideo #AI #PepperoniHugSpot #GPT #LLM #Runway #RunwayGen1 #RunwayGen2 #MidJourney #ElevenLabs #AIVoice #soundraw

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:06:32 JST
      > Pizza Later told us that they used five different models to make various assets for the video and then spent some time using Adobe After Effects to stitch the video, dialog, music and some custom images together. Overall, it took them 3 hours to complete the project.


      Three hours to create a video to a script, with scenery, uncanny-valley people, props and so on, is impressive and couldn't have been done five years ago, but it's a far cry from "an AI made this".

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:13:28 JST
      Tools used:
      - "GPT-4 [ . . . ] to come up with a name for the fictional pizza joint [ . . . ] and to write the script"
      - "Runway Gen-2, a text-to-video model that's in private beta"
      - "MidJourney to generate some images that appear in the video, including the restaurant exterior and some pizza patterns"
      - "Soundraw to create background music"
      - "ElevenLabs Prime Voice AI to provide realistic narration with a male voice"
      - and, of course, as already quoted above, "Adobe After Effects to stitch the video, dialog, music and some custom images together"
  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:16:35 JST

      The harm here is that you have people in the comments going "wow, this looks like lucid dreaming", "is this how AIs dream", etc.

      These comments are not fringe; this is the expected and intended reaction, and these commenters represent the voting public's understanding of the technology, which means we as a society will draw wrong conclusions and make wrong decisions.

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:21:08 JST

      We are not in a century that has thinking, dreaming machines. We have statistical models with massive amounts of human-generated input doing statistical correlation on the forms of text and the forms of images. That's why you get funny numbers of fingers and a cook pouring ingredients into his arm.

      There are no cognitive objects here, no model of a "hand", "person" or "pizza".
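      The "statistical correlation on the forms of text" point can be illustrated with a toy word-level bigram generator (a hypothetical minimal sketch, nothing like a real LLM in scale or architecture): it produces plausible-looking word sequences purely from co-occurrence counts, with no internal concept of a "hand" or a "pizza".

```python
# A toy sketch (purely illustrative, not any real model): a word-level
# bigram "language model" that generates text from co-occurrence
# statistics alone, with no model of what the words mean.
import random
from collections import defaultdict

corpus = (
    "the cook pours the sauce on the pizza "
    "the cook slices the pizza with a knife "
    "a hand holds the pizza"
).split()

# Count which word follows which: pure surface-form correlation.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

      Every word it emits is statistically licensed by the corpus, yet nothing in the program represents a cook, a hand, or a pizza; scaled up by many orders of magnitude, that is the same kind of artifact that produces seven-fingered hands.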

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 15:28:32 JST

      I've always been a fan of anthropomorphizing computers and programs, because first of all, they hate it when you do that. Second, it's funny and cute, and third it's convenient shorthand when you talk to people who are in on the joke.

      I've been thinking that people have a stick up their ass when they complain it's imprecise, misleading, etc. Relax, it's just a sleight of hand, a figure of speech.

      But now I see the harm. People who are not in on the joke – and it's becoming clear that that's most of the 99% who are not programmers, and even some of the programmers – take it at face value, and then we throw billions of dollars into "A(G)I safety" that could have been spent on automation ("AI") ethics, or on things that aren't hyped as AI at all.

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 16:41:55 JST

      I've listened to multiple interviews with Timnit Gebru and Emily M. Bender lately, recorded at any time in the last year or so, but this recent one with Bender (by Paris Marx / Tech Won't Save Us) is my favorite so far, because it goes deeper into how exactly GPT is emphatically *not* thinking and how we can know that, and why thinking it thinks or will think is a harmful distraction:

      techwontsave.us/episode/163_ch…

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 16:42:54 JST

      This very recent interview with both of them (by Adam Conover / Factually!) is my second favorite and really sums up all the themes out there so far and not least of all has lots of references to further listening and/or reading:

      cms.megaphone.fm/channel/STA72…

      farside.link/invidious/watch?v…

      youtube.com/watch?v=jAHRbFetqI…

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 17:48:32 JST, in reply to Digital Mark λ 📚 🕹 💾 🥃
      > The problem is we don't know what Human intelligence is


      @mdhughes Right, and then the follow-up problem is people claiming "... but we are definitely reproducing it for some reason".

  • Digital Mark λ 📚 🕹 💾 🥃 (mdhughes@appdot.net)'s status on Monday, 01-May-2023 17:48:33 JST

      @clacke So yeah, any one piece doesn't make AGI. An LLM is just a blithering idiot Markov chain. But at some point, if you wire up enough of these stupid things together, you get something that acts intelligently, whether or not it's "self-aware". And that's very dangerous to us, just as hominids were very dangerous to every other form of life on this planet.

  • Digital Mark λ 📚 🕹 💾 🥃 (mdhughes@appdot.net)'s status on Monday, 01-May-2023 17:48:34 JST

      @clacke The problem is we don't know what Human intelligence is. There are multiple subsystems in the brain, some going back billions of years, mostly fast signal processing/simplification. There's a very, very recently added language model, which might even work quite a lot like LLMs. There's some level of abstract reasoning that most mammals, birds, and cephalopods can do, which therefore can't require language. And then whatever makes us think we're smart, probably a pretty small loop.

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 18:21:28 JST, in reply to Digital Mark λ 📚 🕹 💾 🥃

      @mdhughes If we build systems that have an actual known use case and a defined scope, we won't accidentally produce sentience.

      Even today's loosely defined "keep up a conversation" system isn't even programmed to understand text, so there is no reason to believe it suddenly will, much less acquire the experience required to make sense of that meaning, just because we feed more text into it.

      What's fascinating about LLMs is how far you can take the illusion given enough input and how much humans are willing to model a mind that isn't there.

  • Digital Mark λ 📚 🕹 💾 🥃 (mdhughes@appdot.net)'s status on Monday, 01-May-2023 18:21:29 JST

      @clacke Not understanding it means we don't know where the line is, not that the last connections can't be made.

      We're like idiots who can refine uranium, throwing it on a pile. Eventually that's gonna go critical, but nobody knows when.

  • Jake Miller (jakemiller@federate.social)'s status on Monday, 01-May-2023 20:02:32 JST

      @clacke I do think that anthropomorphizing computers helps us create mental models of them, which is useful. However, that has been used as a critical marketing trick by AI promoters. Today’s tech is both amazing and also overhyped, deliberately so. Even if we continue to talk about loops as conscious intention, I propose that we intentionally stop using this language in the “AI” context.

  • clacke (clacke@libranet.de)'s status on Monday, 01-May-2023 20:06:16 JST, in reply to Jake Miller

      @jakemiller Thank you for putting "AI" in scare quotes. That's another one of those "yeah, we know it's just 21st-century Eliza" in-jokes that just ... not everybody knows that, clearly.

      Less than a year ago I told someone "[yes, machine learning counts as AI, that's just how language happened, that's what that means now, move on]" and now here I am.


GNU social JP is a social network, courtesy of the GNU social JP administrator (GNU social JP管理人). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.