GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Joseph Nuthalapati :fbx: (njoseph@social.masto.host)'s status on Friday, 13-Dec-2024 12:55:49 JST

    Some thoughts about LLMs

    1. "LLMs confidently give wrong answers/lies" - anybody who has tried to get information from an LLM.

    LLMs have no feelings or emotions. So, they cannot "feel confident". We perceive their lack of doubt to be confidence.

    (1/3)

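The point above can be made concrete with a toy sketch. A language model's output layer only produces a probability distribution over next tokens; what reads as "confidence" is the fluent emission of the highest-probability token, with no internal belief state behind it. The logits and token strings below are invented for illustration and do not come from any real model.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens after a prompt like "The capital of X is"
vocab = ["Paris", "London", "Berlin"]
logits = [2.0, 1.0, 0.5]  # invented numbers, not from a real model

probs = softmax(logits)
best = vocab[probs.index(max(probs))]

# The model emits the top token with equal fluency whether or not the
# underlying claim happens to be true -- there is no "felt" confidence,
# only a distribution and a sampling rule.
print(best)
```

The absence of hedging in the output is a property of the decoding step, not of any attitude the model holds toward its answer.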
    • alcinnz (alcinnz@floss.social)'s status on Friday, 13-Dec-2024 12:55:36 JST
      in reply to Baldur Bjarnason

      @njoseph The mention of astrology & LLMs in the same toot brought my mind to @baldur's blogpost providing a theory as to why so many people find them so convincing: https://softwarecrisis.dev/letters/llmentalist/


      Attachments

      1. The LLMentalist Effect: how chat-based Large Language Models rep…
         The new era of tech seems to be built on superstitious behaviour
    • Joseph Nuthalapati :fbx: (njoseph@social.masto.host)'s status on Friday, 13-Dec-2024 12:55:40 JST
      in reply to

      3. "When the LLM made up some stuff you didn't like, it was a hallucination" - Sam Altman probably?

      LLMs cannot think. Also, they had no sense organs until they recently became multi-modal. They have no sense of right/wrong/meaning/nonsense etc. All of that comes from the human interpreting the signals coming from the LLM. Does this make their users computer astrologers? It doesn't matter. Like a stopped clock, they are sometimes right and can do novel things so people keep using them.

      (3/3)

    • Joseph Nuthalapati :fbx: (njoseph@social.masto.host)'s status on Friday, 13-Dec-2024 12:55:44 JST
      in reply to

      2. "The ability to speak doesn't make you intelligent" - from a recently popular toot.

      The main ability of LLMs is to place token after token (a token is a part of a word) very quickly. To speak is to transform your thoughts into a form that others can understand. LLMs have no thoughts. Heck, they don't even understand words because they deal in tokens. Jar Jar Binks actually meets this definition of intelligence.

      (2/3)

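The "place token after token" description can be illustrated with a deliberately tiny sketch: a bigram model that generates text purely by sampling which token followed the current one in its training data. This is not how a real LLM works internally (those use learned neural networks over far larger contexts), but it shows the same generation loop — next-token placement with no thought or meaning involved. The corpus and tokens are invented for illustration.

```python
import random

# Invented toy corpus; each word stands in for a token.
corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which: the entire "knowledge" of this model.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Generate up to n continuation tokens, one after another."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(n):
        choices = bigrams.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        # Pick the next token by sampling; no meaning is consulted.
        out.append(rng.choice(choices))
    return out

print(" ".join(generate("the", 5)))
```

Every "sentence" this produces is locally plausible and globally meaningless, which is the distinction the post is drawing between emitting tokens and speaking.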
    • Joseph Nuthalapati :fbx: (njoseph@social.masto.host)'s status on Saturday, 14-Dec-2024 23:51:51 JST
      in reply to alcinnz, Baldur Bjarnason

      @alcinnz It's possible I read some of @baldur's writing last year and that it influenced my understanding of LLMs, along with the "stochastic parrots" paper.

      I wasn't aware that Baldur wrote an entire book on this subject. "The Intelligence Illusion" is such a perfect title. That's what I would've called it too.



GNU social JP is a social network, courtesy of GNU social JP管理人 (the site administrator). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.