GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Rich Felker (dalias@hachyderm.io)'s status on Tuesday, 26-Nov-2024 20:46:22 JST

    @felipe @zak @futurebird @ronaldtootall @hannu_ikonen The world model you speak of corresponds to empirically testable things and is updated when it fails to do so. The language models don't and aren't.

    In conversation about 6 months ago from hachyderm.io
    • Wyatt H Knott (whknott@mastodon.social)'s status on Wednesday, 27-Nov-2024 08:50:34 JST

      @dalias @felipe @zak @futurebird @ronaldtootall @hannu_ikonen This. The evidence of your senses is correlated to the effectiveness of your behaviors. Since LLMs don't HAVE behaviors, they don't have the functionality to create the feedback loops necessary for understanding.

      In conversation about 6 months ago
    • crazyeddie (crazyeddie@mastodon.social)'s status on Wednesday, 27-Nov-2024 08:54:26 JST

      @dalias @felipe @zak @futurebird @ronaldtootall @hannu_ikonen They do and are though.

      That's the training part. The model is trained and then used. It may or may not be training while it's used.

      That training is fed a context, just like you do with experimentation. The model is tested against that context, as you do empirically. The model is then adjusted if it needs to be. This is exactly the empirical process.

      In conversation about 6 months ago
    • Rich Felker (dalias@hachyderm.io)'s status on Wednesday, 27-Nov-2024 08:54:26 JST

      @crazyeddie @felipe @zak @futurebird @ronaldtootall @hannu_ikonen No, it's not. This is a grossly inaccurate description of how LLMs are trained and used. The models users interact with are completely static. They are only changed when their overlords decide to change them, not by self-discovery that they were wrong. They don't even have any conception of what "wrong" could mean, because there is no world model, only a language model.

      In conversation about 6 months ago
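
A minimal sketch, in PyTorch, of the train-test-adjust loop crazyeddie describes above: a context is fed to a toy next-token model, the prediction is scored against what actually came next, and the weights are adjusted. The model, vocabulary size, and training_step function here are illustrative assumptions, not any real LLM's code. Crucially, this loop only runs during training.

    import torch
    import torch.nn as nn

    VOCAB = 256    # toy vocabulary size (assumption)
    CTX_LEN = 8    # toy context length (assumption)

    # A deliberately tiny "language model": embed the context tokens,
    # flatten, and predict logits for the next token.
    model = nn.Sequential(
        nn.Embedding(VOCAB, 64),
        nn.Flatten(),
        nn.Linear(64 * CTX_LEN, VOCAB),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(context, next_token):
        """Test the model against a context; adjust it if it was wrong."""
        logits = model(context)             # the model's prediction
        loss = loss_fn(logits, next_token)  # how wrong was it?
        opt.zero_grad()
        loss.backward()                     # compute the adjustment
        opt.step()                          # apply it: weights change here
        return loss.item()

    # One toy batch: 4 contexts of 8 tokens, with their true next tokens.
    context = torch.randint(0, VOCAB, (4, CTX_LEN))
    next_token = torch.randint(0, VOCAB, (4,))
    print(training_step(context, next_token))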
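
By contrast, here is a sketch of the deployment Rich Felker describes, reusing the toy model above: the served model is frozen, inference computes no gradients and runs no update step, so being wrong in a conversation changes nothing. The weights only change if the operators retrain and redeploy offline.

    model.eval()                        # inference mode: weights are frozen
    with torch.no_grad():               # gradients are never even computed
        logits = model(context)
        guess = logits.argmax(dim=-1)   # next-token prediction
    # No loss, no backward(), no step(): a wrong guess updates nothing.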


GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.