GNU social JP
GNU social JP is a Japanese GNU social server.
Conversation

Notices

  1. Gerry McGovern (gerrymcgovern@mastodon.green)'s status on Monday, 29-Jul-2024 15:41:50 JST

    "If AI were revolutionizing the economy, we would see it in the data. We're not seeing it. I could talk about the fact that AI companies have yet to find a killer app and that perhaps the biggest application of AI could be, like, scams, misinformation and threatening democracy. I could talk about the ungodly amount of electricity it takes to power AI and how it's raising serious concerns about its contribution to climate change."

    https://www.npr.org/transcripts/1197967800

    In conversation about 10 months ago from mastodon.green

    Attachments

    1. Is AI overrated? : The Indicator from Planet Money
      Are the promises made by AI boosters mostly hype, or are we actually underappreciating the transformative potential of AI? This week, The Indicator hosts a two-part debate on the hype around generative AI. Today, the second episode: Despite the tech world's love affair with the technology, is AI overrated? Related episodes: Is AI underrated? (Apple / Spotify) For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org. Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:33 JST
      in reply to Josh and Steve Hayes

      @stevehayes @krnlg @gerrymcgovern

      Don't mix up neural networks and large language models. Neural networks have a number of useful applications, image recognition being one of them.

      Large language models are a tool based on neural-network design that produces a parody of the source data as a plausible continuation of the prompt. This is useful for passing the Turing test and generating spam. It is not, however, a reasoning system or a viable path towards AI.

      In conversation about 10 months ago
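The "plausible continuation of the prompt" mechanic described in the post above can be sketched with a toy bigram model. This is purely illustrative; the corpus, function names, and count-based sampling are invented for the sketch, and real LLMs learn neural representations rather than tabulating word counts:

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in a tiny corpus,
# then continue a prompt by repeatedly sampling a frequent next word.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(prompt, n=4, seed=0):
    """Append up to n words, each sampled in proportion to how often
    it followed the previous word in the corpus."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        nxt = rng.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(continue_prompt("the cat"))
```

Whatever the sampler emits is "plausible" only in the narrow sense that every word pair was seen in the corpus; nothing checks the result against reality, which is the point being made in the thread.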
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:41 JST
      in reply to Josh and Resuna

      @resuna @krnlg @gerrymcgovern Of course neural networks are being applied to things other than LLMs too. For example image recognition. Not always successfully, for example when Teslas drive straight into fire engines with all their lights flashing. That at least is something that humans generally avoid doing.

      In conversation about 10 months ago
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:43 JST
      in reply to Josh and Steve Hayes

      @stevehayes @krnlg @gerrymcgovern

      Large language models are a dead-end digression that is sucking all the oxygen out of actual AI research. There is huge potential in this area, and it has been hijacked by con artists.

      If you want a stupid historical analogy, it's like someone was trying to build a mechanical horse and everyone was excited by the idea of regular carriages being pulled by steam horses.

      But they were just painted canvas.

      In conversation about 10 months ago
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:44 JST
      in reply to Josh and Resuna

      @resuna @krnlg @gerrymcgovern I'm sure there were people who looked at those shuddering, juddering, noisy, smelly things and said they'd never go as fast as a good horse. Do you believe there's some holy spirit that can infuse a lump of protoplasm but not a lump of silicon? I'm not saying more and more powerful AI is something we want or should have, but unless we decide to do something to stop it, or unless it turns out not to be what it rather looks like being, it's on the way.

      In conversation about 10 months ago
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:45 JST
      in reply to Josh and Steve Hayes

      @stevehayes @krnlg @gerrymcgovern

      That's not the same thing, and you know it's not the same thing. I think you are just carrying on a meme that you think is funny a bit too far.

      In conversation about 10 months ago
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:47 JST
      in reply to Josh and Resuna

      @resuna @krnlg @gerrymcgovern Yes indeed. Whole university courses are based on identifying which painters or whatever influenced which other painters or whatever.

      In conversation about 10 months ago
    • Josh (krnlg@mastodon.social)'s status on Friday, 02-Aug-2024 05:25:48 JST
      in reply to Resuna and Steve Hayes

      @stevehayes @resuna @gerrymcgovern I don't think that's quite right: the principle of operation of an LLM is not a mystery, it is just that it is hard to take a specific output and work back to exactly why the model gave it. I think? I mean, LLMs have been an active research area for some time; people made these things. They don't exhibit mysterious emergent intelligent properties afaik, they just seem like they do at a glance.

      In conversation about 10 months ago
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:48 JST
      in reply to Josh and Steve Hayes

      @krnlg @stevehayes @gerrymcgovern

      Exactly, there's no magic, and in some cases we can even track down the exact training text or images that led to a particular generated result.

      In conversation about 10 months ago
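The attribution mentioned above, tracing a generated result back to particular training text, can in the simplest case be a similarity search over the training set. The sketch below uses word-overlap (Jaccard) similarity on an invented three-snippet corpus; real attribution work relies on embedding search or influence methods, not exact word overlap:

```python
# Toy attribution: find the training snippet closest to a generated
# sentence, using word-overlap (Jaccard) similarity.
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Invented stand-in "training set", purely for illustration.
training = [
    "the quick brown fox jumps over the lazy dog",
    "large language models predict the next token",
    "neural networks excel at image recognition",
]

def most_similar(generated):
    """Return the training snippet with the highest overlap score."""
    return max(training, key=lambda t: jaccard(t, generated))

print(most_similar("language models predict tokens"))
```

Here the query shares the most words with the second snippet, so that is the one returned; at scale, the same idea is done over embeddings rather than word sets.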
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:49 JST
      in reply to Josh and Resuna

      @krnlg @resuna @gerrymcgovern My point is that we don't really know what's going on in there. It's not like a traditional program where we can point to lines of code. At the same time, there's nothing we can point to in an animal's brain and say that's where the magic happens, and that's the thing AI doesn't have and can never have.

      In conversation about 10 months ago
    • Josh (krnlg@mastodon.social)'s status on Friday, 02-Aug-2024 05:25:50 JST
      in reply to Resuna and Steve Hayes

      @stevehayes @resuna @gerrymcgovern The AI wasn't having a tantrum; it was surely just reproducing plausible answers to a repeated question based on its test data. It doesn't know what a tantrum is.

      That's the difference, however much humans make mistakes: the AI isn't making mistakes; it doesn't have any concept of a mistake, let alone knowledge or thinking.

      In conversation about 10 months ago
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:51 JST
      in reply to Resuna

      @resuna @gerrymcgovern Occasional human mistakes? Think of the millions of MAGA followers. The point is that we don't really know what's going on in that AI simulation of an animal brain's neural network. Maybe we can never know - we'll let philosophers work on that one. But we can observe. The first one I remember reading about was an AI having a tantrum if it was asked the same question 15 times.

      In conversation about 10 months ago
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:52 JST
      in reply to Steve Hayes

      @stevehayes @gerrymcgovern

      I kind of hate the comparison between the routine failures of this kind of software and humans' occasional mistakes, because they really aren't all that similar.

      In conversation about 10 months ago
    • Steve Hayes (stevehayes@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:53 JST
      in reply to Resuna

      @resuna @gerrymcgovern It's the failings of AI that are the most interesting aspect. Especially when we look around and see humans doing much the same things. I'm sure that 90% of the columns in The Guardian could be written by AI and nobody would notice. Maybe they already are. They're just a churning mass of memes and tropes.

      In conversation about 10 months ago
    • Resuna (resuna@ohai.social)'s status on Friday, 02-Aug-2024 05:25:54 JST
      in reply to Gerry McGovern

      @gerrymcgovern

      Hallucinations are the whole way they work. They make up plausible text; if it happens to be accurate, that's accidental.

      In conversation about 10 months ago
    • Gerry McGovern (gerrymcgovern@mastodon.green)'s status on Friday, 02-Aug-2024 05:25:55 JST

      ROSALSKY: Given that hype, should we expect AI to usher in revolutionary changes for the economy in the next decade?

      ACEMOGLU: No. No, definitely not. I mean, unless you count a lot of companies overinvesting in generative AI and then regretting it as a revolutionary change.

      ROSALSKY: Many AI researchers are saying we cannot end the problem of hallucinations any time soon, if not ever, with these models. That's because they don't know what's true or false.

      https://www.npr.org/transcripts/1197967800

      In conversation about 10 months ago

      Attachments

      1. Is AI overrated? : The Indicator from Planet Money


GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.