    Lorenzo Stoakes (ljs@social.kernel.org)'s status on Thursday, 09-May-2024 00:12:11 JST
    in reply to
    • Vlastimil Babka
    • Petr Tesarik
    • Pavel Machek
    • ben 🇵🇸 ui
    @ptesarik @ben @pavel @vbabka the big problem is that people are very very bad at picking up on the kind of errors that an algorithm can generate.

    We all implicitly assume errors are 'human-shaped', i.e. the kind of errors a human would make.

    An LLM can have a very good grasp of the syntax, but then interpolates results in effect randomly, because the missing component is a dynamic understanding of the system.

    As a result, they can introduce very very subtle bugs that'll still compile/run etc.
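
    To make that concrete, here is a hypothetical illustration (invented for this example, not taken from the thread or any real patch): a small bounded string copy of the sort an LLM might plausibly produce. It compiles cleanly and survives casual testing, but the bounds handling is off by one.

        #include <stddef.h>

        /* Hypothetical illustration: looks like a careful bounded copy,
         * but when strlen(src) >= dstlen the loop exits with i == dstlen
         * and the terminating write lands one byte past the end of dst. */
        static void copy_name(char *dst, size_t dstlen, const char *src)
        {
            size_t i;

            for (i = 0; i < dstlen && src[i] != '\0'; i++)
                dst[i] = src[i];

            dst[i] = '\0';  /* out-of-bounds write when i == dstlen */
        }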

    People are also incredibly bad at assessing how much cost this incurs in practice.

    Having something that can generate such errors for only trivial tasks strikes me as being worse than having nothing at all.

    And the ongoing 'emperor's new clothes' issue with LLMs is that this problem is insoluble. Hallucination is an unavoidable part of how they work.

    The whole machinery of the thing is trying to infer patterns from a dataset, so at a fundamental level it's broken by design.

    That's before we get on to the fact that it needs human input to work (start feeding it LLM-generated input and it completely collapses), so the whole thing couldn't work anyway on any long-term scale.

    That's before we get on to the fact that it steals software and ignores licenses, the carbon and monetary costs of compute, and a myriad of other problems...

    The whole problem with all this is it's a very very convincing magic trick and works so well that people are blinded to its flaws.

    See https://en.wikipedia.org/wiki/ELIZA_effect?useskin=vector

    Attachments

    1. ELIZA effect
      In computer science, the ELIZA effect is the tendency to project human traits, such as experience, semantic comprehension or empathy, into computer programs that have a textual interface. The effect is a category mistake that arises when the program's symbolic computations are described through terms such as "think", "know" or "understand."

      History

      The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient"'s replies as questions:

      Human: Well, my boyfriend made me come here.
      ELIZA: Your boyfriend made you come here?
      Human: He says I'm depressed much of the time.
      ELIZA: I am sorry to hear you are depressed.
      Human: It's true. I'm unhappy.
      ELIZA: Do you think coming here will help you not to be unhappy?

      Though designed strictly as a mechanism to support "natural language conversation" with a computer, ELIZA's DOCTOR script was found to be surprisingly successful in eliciting...
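
    The rephrasing the article describes is mechanically trivial, which is part of why the effect is so striking. A minimal sketch in C of that keyword-and-echo approach, using an invented rule table rather than Weizenbaum's actual DOCTOR script (which also reflected pronouns such as "me" to "you"):

        #include <stdio.h>
        #include <string.h>

        /* Illustrative keyword-and-echo rephraser; rules and wording are
         * made up for this sketch, not Weizenbaum's original script. */
        struct rule {
            const char *keyword;
            const char *prefix;
        };

        static const struct rule rules[] = {
            { "I am ",   "How long have you been " },
            { "I feel ", "Why do you feel " },
            { "My ",     "Tell me more about your " },
        };

        int main(void)
        {
            char line[256];

            while (fgets(line, sizeof(line), stdin)) {
                const char *reply = "Please go on.";  /* default reply */
                char buf[512];
                size_t i;

                line[strcspn(line, "\n")] = '\0';

                /* Echo back whatever follows the first matching keyword. */
                for (i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
                    const char *hit = strstr(line, rules[i].keyword);

                    if (hit) {
                        snprintf(buf, sizeof(buf), "%s%s?", rules[i].prefix,
                                 hit + strlen(rules[i].keyword));
                        reply = buf;
                        break;
                    }
                }
                printf("ELIZA: %s\n", reply);
            }
            return 0;
        }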