GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 00:20:11 JST
    in reply to
    • Clive Thompson
    • Thomas 🔭🕹️
    • Gergovie

    @Gergovie @clive @thomasfuchs I think that's way too reductive. LLMs absolutely do something that *looks* like understanding and reasoning.

    The problem is that we don't have great ways to characterize what it is they *do*, so it's really hard to know when their output is good enough to use in place of actual logic and interpretation.

    In conversation about 11 months ago from hachyderm.io
    • Gergovie (gergovie@piaille.fr)'s status on Wednesday, 24-Jul-2024 00:20:13 JST
      in reply to
      • Clive Thompson
      • Thomas 🔭🕹️

      @clive @thomasfuchs

      They have NO understanding NOR reasoning.
      They are only text generators.

      In conversation about 11 months ago
    • Clive Thompson (clive@saturation.social)'s status on Wednesday, 24-Jul-2024 00:20:14 JST
      • Thomas 🔭🕹️

      @thomasfuchs

      Yeah, they really are not trustworthy in this regard

      It was one of the things I actually hoped they would do well!

      But I haven’t had much luck with it, and research metrics like this haven’t either

      In conversation about 11 months ago
    • Clive Thompson (clive@saturation.social)'s status on Wednesday, 24-Jul-2024 00:32:43 JST
      • Thomas 🔭🕹️
      • Gergovie

      @thomasfuchs @acjay @Gergovie

      Yeah, that’s it

      In conversation about 11 months ago
    • Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 01:23:00 JST
      • Clive Thompson
      • Thomas 🔭🕹️
      • Gergovie

      @thomasfuchs @clive @Gergovie A similar argument could be made to debunk the notion that the human brain is capable of actual thinking. After all, it's just a bunch of neurons, preconfigured by genetics, trained on sensory data.

      To be clear, I don't think that LLMs "think" in exactly the same way as humans, but I do believe there's a very fuzzy boundary.

      In conversation about 11 months ago

      Attachments

      1. URL shortening service X.gd
        from x.gd
        X.gd is a completely free URL shortening service, with no registration required, that converts long URLs into short ones. It also supports QR code generation and access analytics.
    • Clive Thompson (clive@saturation.social)'s status on Wednesday, 24-Jul-2024 02:45:07 JST
      in reply to
      • Thomas 🔭🕹️
      • Gergovie

      @Gergovie @acjay @thomasfuchs

      It can definitely be useful in a bunch of areas for sure

      I do wonder what’ll happen in a year or so from now — the enormous expense of training and inferencing on the foundation models doesn’t seem likely to produce profits anywhere close to recouping those costs, to say nothing of 10xing them

      I suspect there’ll be some hard conversations

      In conversation about 11 months ago
    • Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 02:45:08 JST
      in reply to
      • Clive Thompson
      • Thomas 🔭🕹️
      • Gergovie

      @clive @thomasfuchs @Gergovie It reminds me of Prolog a bit. When I first learned it, I was like "holy shit, this is incredible". But then you learn the fundamental limitations, and how the workarounds to those limitations undermine all the good parts. Then you understand why it remains a niche technology.

      It's possible we're already pretty close to the local maximum of LLMs as a technology. If so, I still do think it's pretty impressive.

      In conversation about 11 months ago

    • Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 02:45:09 JST
      in reply to
      • Clive Thompson
      • Thomas 🔭🕹️
      • Gergovie

      @clive @thomasfuchs @Gergovie I think we pretty much agree. It's mimicry of those things. It's extremely unclear whether you can even compose LLMs with other subsystems in a rigorous way to address those shortcomings.

      In conversation about 11 months ago
    • Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 02:45:10 JST
      in reply to
      • Clive Thompson
      • Thomas 🔭🕹️
      • Gergovie

      @Gergovie @clive @thomasfuchs But because LLMs are so internally complex, we're reduced to discussing them by analogy, and I think that chronically leads to over- and underestimating their utility.

      In conversation about 11 months ago
    • Clive Thompson (clive@saturation.social)'s status on Wednesday, 24-Jul-2024 02:45:10 JST
      in reply to
      • Thomas 🔭🕹️
      • Gergovie

      @acjay @thomasfuchs @Gergovie

      I think over- and underestimating is a good way of putting it

      I’m not as confident as you that the statistical approach that underpins LLMs produces anything like what we could reasonably call understanding, though

      It may well be a *component* of understanding — making associations is key — but it’s not at all clear that it can produce other elements of reasoning: logic, math, semantics, etc.

      In conversation about 11 months ago
    • Alan Johnson (acjay@hachyderm.io)'s status on Wednesday, 24-Jul-2024 02:45:11 JST
      in reply to
      • Clive Thompson
      • Thomas 🔭🕹️
      • Gergovie

      @Gergovie @clive @thomasfuchs The text that LLMs are trained on is an artifact of understanding and reasoning processes. And to the extent that the text outputs can capture the essence of those processes, LLMs mimic the processes themselves.

      In conversation about 11 months ago
    • Clive Thompson (clive@saturation.social)'s status on Wednesday, 24-Jul-2024 04:58:43 JST
      in reply to
      • Thomas 🔭🕹️
      • Gergovie

      @Gergovie @thomasfuchs @acjay

      though to be fair, “bacon-topped ice cream” is something McDonald’s probably should, in reality, have on the menu

      In conversation about 11 months ago
    • Gergovie (gergovie@piaille.fr)'s status on Wednesday, 24-Jul-2024 04:58:44 JST
      • Clive Thompson
      • Thomas 🔭🕹️

      @thomasfuchs @clive @acjay

      https://www.bbc.com/news/articles/c722gne7qngo

      In conversation about 11 months ago

      Attachments

      1. McDonald's removes AI drive-throughs after order errors
        The voice recognition system seems not to have recognised what customers were really ordering.
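
A note on the thread above: Clive's point about "the statistical approach that underpins LLMs" being good at making associations, while it remains unclear whether it can produce logic or semantics, is easier to picture against a deliberately tiny baseline. The sketch below is not how an LLM works internally (real models learn neural representations over subword tokens at enormous scale); it is a toy bigram chain with an invented corpus and hypothetical function names, included only to make concrete what "generating text purely from observed statistics of which word follows which" looks like.

```python
import random
from collections import defaultdict

# Invented toy corpus, standing in for training text (illustration only).
corpus = (
    "the model predicts the next word from the previous word "
    "the model learns associations between words "
    "associations between words are not the same as reasoning about words"
).split()

# Record every observed transition: which words have followed each word.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=12, seed=0):
    """Sample a continuation purely from observed word-to-word statistics."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:              # no observed continuation: stop
            break
        word = rng.choice(followers)   # weighted by raw frequency, nothing more
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

The output is locally fluent because the associations are real, but nothing in the loop represents the meaning of a sentence, which is roughly the distinction being argued over in the thread; an LLM replaces this lookup table with a learned distribution over a huge vocabulary and context window, which is what makes the mimicry so much more convincing.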


GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.