Conversation

Notices

  1. David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Wednesday, 01-Jan-2025 20:57:44 JST

    A lot of the current hype around LLMs revolves around one core idea, which I blame on Star Trek:

    Wouldn't it be cool if we could use natural language to control things?

    The problem is that this is, at the fundamental level, a terrible idea.

    There's a reason that mathematics doesn't use English. There's a reason that every professional field comes with its own flavour of jargon. There's a reason that contracts are written in legalese, not plain natural language. Natural language is really bad at being unambiguous.

    When I was a small child, I thought that a mature civilisation would evolve two languages. A language of poetry, that was rich in metaphor and delighted in ambiguity, and a language of science that required more detail and actively avoided ambiguity. The latter would have no homophones, no homonyms, unambiguous grammar, and so on.

    Programming languages, including the ad-hoc programming languages that we refer to as 'user interfaces', are all attempts to build languages like the latter. They allow the user to unambiguously express intent so that it can be carried out. Natural languages are not designed and end up being examples of the former.

    When I interact with a tool, I want it to do what I tell it. If I am willing to restrict my use of natural language to a clear and unambiguous subset, I have defined a language that is easy for deterministic parsers to understand with a fraction of the energy requirement of a language model. If I am not, then I am expressing myself ambiguously and no amount of processing can possibly remove the ambiguity that is intrinsic in the source, except a complete, fully synchronised, model of my own mind that knows what I meant (and not what some other person saying the same thing at the same time might have meant).
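
    (A minimal sketch of that point, using two hypothetical commands: a restricted, unambiguous subset is just a small formal language, and a plain deterministic parser can handle it without any statistical model.)

        /* Sketch: a deterministic parser for a tiny, unambiguous command
         * subset ("play <name>", "stop"). Every input either parses or is
         * rejected; nothing is guessed. */
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char line[128], arg[100];
            while (fgets(line, sizeof line, stdin)) {
                if (sscanf(line, "play %99[^\n]", arg) == 1)
                    printf("playing: %s\n", arg);
                else if (strncmp(line, "stop", 4) == 0)
                    printf("stopping\n");
                else
                    printf("unrecognised command\n");
            }
            return 0;
        }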

    The hard part of programming is not writing things in some language's syntax, it's expressing the problem in a way that lacks ambiguity. LLMs don't help here: they pick an arbitrary, nondeterministic option for the ambiguous cases. In C, compilers do this for undefined behaviour and it is widely regarded as a disaster. LLMs are built entirely out of undefined behaviour.
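
    (A minimal sketch of the C analogy: signed integer overflow is undefined behaviour, so a compiler may resolve it however it likes, and different compilers or optimisation levels can legitimately disagree about the same source.)

        /* Sketch: x + 1 < x can only become true through signed overflow,
         * which is undefined behaviour in C. An optimising compiler may
         * fold this function to "return 0", while an unoptimised build
         * typically wraps and returns 1 for INT_MAX. */
        #include <limits.h>
        #include <stdio.h>

        int will_wrap(int x) {
            return x + 1 < x;
        }

        int main(void) {
            printf("%d\n", will_wrap(INT_MAX)); /* 0 or 1, compiler's choice */
            return 0;
        }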

    There are use cases where getting it wrong is fine. Choosing a radio station or album to listen to while driving, for example. It is far better to sometimes listen to the wrong thing than to take your attention away from the road and interact with a richer UI for ten seconds. In situations where your hands are unavailable (for example, controlling non-critical equipment while performing surgery, or cooking), a natural-language interface is better than no interface. It's rarely, if ever, the best.

    • Haelwenn /элвэн/ :triskell:, Doughnut Lollipop 【記録係】:blobfoxgooglymlem: and clacke and 2 others like this.
    • lfa :emacs: :tux: :freebsd: repeated this.
    • Wolf480pl (wolf480pl@mstdn.io)'s status on Thursday, 02-Jan-2025 01:20:44 JST
      in reply to
      • jarkman

      @jarkman @david_chisnall
      Because people actually do have a synchronized model of how other humans' brains work, and those who know you have a model that matches your particular brain. I think it's called "mirror neurons".

      Doughnut Lollipop 【記録係】:blobfoxgooglymlem: likes this.
    • jarkman (jarkman@chaos.social)'s status on Thursday, 02-Jan-2025 01:20:46 JST

      @david_chisnall I'm not so sure. I often express myself in natural language to ask people to do things, and that usually works out pretty well.

      So it's possible in principle, it's just not something that computers can do yet. Maybe one day they will.

      feld likes this.
      Visikde, Valdus and Mer-fOKxTOwl repeated this.
    • David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Thursday, 02-Jan-2025 07:15:35 JST
      in reply to
      • Balthazar

      @baltauger In linguistics, the Whorf-Sapir hypothesis, also known as the Linguistic Relativity hypothesis, argues that language constrains thought. This was the idea behind Orwell's Newspeak. The strong variant argues that you cannot think an idea that your language cannot express (the goal of Newspeak), the weak variant argues that language guides thought. The strong variant is largely discredited because it turns out that humans are really good at just making up new language for new concepts. The weak variant is supported to varying degrees.

      I keep trying to persuade linguists to study it in the context of programming languages, where humans are limited in the things that they can extend because a compiler / interpreter also needs to understand the language. I think there are some very interesting research results to be found there.

      Haelwenn /элвэн/ :triskell:, clacke and Alexandre Oliva (moving to @lxo@snac.lx.oliva.nom.br) like this.
    • Balthazar (baltauger@mastodon.gamedev.place)'s status on Thursday, 02-Jan-2025 07:15:41 JST

      @david_chisnall
      There is a recent game called Chants of Sennaar that explores this "use-based language": a city where each caste (priests, soldiers, artists, scientists) has its own language that you need to learn, each language fit for expressing concepts that are alien to the other castes.

      Haelwenn /элвэн/ :triskell: likes this.
    • eribosot (eribosot@mastodon.social)'s status on Thursday, 02-Jan-2025 07:32:13 JST

      @david_chisnall

      "I'm sorry, I don't understand what you said, can you clarify?"

      See how that works?

      Yes, language is ambiguous, but when humans use it, it is also a two-way street, in which both parties are continuously tossing requests for information back and forth to remove the ambiguity and reach a common understanding.

    • Rich Felker (dalias@hachyderm.io)'s status on Thursday, 02-Jan-2025 12:31:13 JST

      @david_chisnall 🔥🔥🔥 indictment of LLMs 👆

      "In C, compilers do this for undefined behaviour and it is widely regarded as a disaster. LLMs are built entirely out of undefined behaviour."

      Haelwenn /элвэн/ :triskell: likes this.
    • Fluchtkapsel (fluchtkapsel@nerdculture.de)'s status on Thursday, 02-Jan-2025 12:33:12 JST

      @david_chisnall Natural language interfaces are akin to magic. It's all under the assumption that the intent is somehow magically recognised. Funnily, there are cautionary tales about magic that address exactly this aspect. Almost all stories involving wishes turn the phrasing against the one expressing their wishes (see djinns, fairies, etc.).

      Haelwenn /элвэн/ :triskell: likes this.
    • Jimmy Havok (jhavok@mastodon.social)'s status on Thursday, 02-Jan-2025 12:34:44 JST
      in reply to
      • Resuna
      • Erik Jonker

      @ErikJonker @resuna @david_chisnall I deal with people's questions every day, and much of the time they aren't even sure what they are asking for. It takes a good deal of drilling down to get to what they need to know. A lot of what is involved is figuring out what they need to know to find out what they want to know. It's difficult for a human with shared experience; I'm skeptical that an LLM could manage it.

      Haelwenn /элвэн/ :triskell: likes this.
    • Erik Jonker (erikjonker@mastodon.social)'s status on Thursday, 02-Jan-2025 12:34:45 JST
      in reply to
      • Resuna

      @resuna @david_chisnall COBOL is not a monstrosity as a programming language; it’s of course legacy

    • Erik Jonker (erikjonker@mastodon.social)'s status on Thursday, 02-Jan-2025 12:34:45 JST
      in reply to
      • Resuna

      @resuna @david_chisnall …a natural language interface for a computer can have enormous benefits. A good example is an educational context: you can interact, ask questions, etc. in a way not possible before

    • Resuna (resuna@ohai.social)'s status on Thursday, 02-Jan-2025 12:34:46 JST
      in reply to
      • Erik Jonker

      @ErikJonker @david_chisnall

      That's the point. Giving directions to a machine requires math. Using the term "language" to refer to machine instructions has led people down the wrong path over and over again and led to monstrosities like COBOL and Perl and "Congratulations, you have decided to clean the elevator!"

    • Erik Jonker (erikjonker@mastodon.social)'s status on Thursday, 02-Jan-2025 12:34:47 JST

      @david_chisnall ...that's also one of its strengths; language is a completely different "beast" than math. Comparing them is useless. Language fulfills different functions than math, but it is just as important for human beings.

      feld likes this.
    • Alexandre Oliva (moving to @lxo@snac.lx.oliva.nom.br) (lxo@gnusocial.jp)'s status on Thursday, 02-Jan-2025 16:59:51 JST
      in reply to
      • jarkman
      my experience, shared with many neurodivergents, is that neurotypicals and even other neurodivergents very often misunderstand us, and vice-versa, and the misunderstandings are occasionally very hard to recover from, becoming another source of discrimination against minorities. AFAICT minds that work in one way build thoughts in ways that don't carry over very well to minds that work in other ways, especially when there isn't awareness of and tolerance for the differences. I've known people who can understand and "translate" expressions of thoughts in ways that enable people with different mind structures to communicate more effectively. it's an amazing skill. I wonder if LLMs extend the experience of facing frequent misunderstandings to a majority of the people, or if they could help people translate between different mind structures and different perceptions of context, and avoid triggers
      Mer-fOKxTOwl, jarkman and Valdus like this.
    • Fish of Rage (sun@shitposter.world)'s status on Thursday, 02-Jan-2025 17:26:31 JST
      @david_chisnall I’m using LLMs for a variety of uses every day now and they’re great; it turns out you don’t need perfection a lot of the time
    • clacke (clacke@libranet.de)'s status on Thursday, 02-Jan-2025 19:30:11 JST
      in reply to
      • CommitStrip

      Time to share this lovely @CommitStrip again!

      commitstrip.com/en/2016/08/25/…

      @david_chisnall


      Attachments


      1. https://web.archive.org/web/20241124140212if_/https://www.commitstrip.com/wp-content/uploads/2016/08/Strip-Les-specs-cest-du-code-650-finalenglish.jpg
    • feld (feld@friedcheese.us)'s status on Friday, 03-Jan-2025 02:21:25 JST
      @david_chisnall

      >> Wouldn't it be cool if we could use natural language to control things?
      > The problem is that this is, at the fundamental level, a terrible idea.

      This is a terrible take and you should really know better. It's not different than chastising people who use higher level programming languages or Dreamweaver to make a website instead of studying HTML.

      We can all agree that, e.g., sitting down a person with no development experience and asking them to design a missile defense system for your country using natural language is a terrible idea.

      We should all be able to agree that giving people a way to use natural language to build little apps, tools, and automations that solve problems nobody is going to build a custom solution for is a good thing.
    • SnowBlind2005 (snowblind2005@social.vivaldi.net)'s status on Friday, 03-Jan-2025 02:27:14 JST

      @david_chisnall

      If it is not cool to use natural language to control things, we should stop listening to you.

      feld likes this.
    • Yogthos (yogthos@social.marxist.network)'s status on Friday, 03-Jan-2025 02:29:13 JST

      @david_chisnall and yet, most human communication is done using natural language. Software developers talk to project managers every single day without using code to communicate what's happening.

      The notion that it's not possible to do clear communication without using a formal system doesn't hold water.

      There are also lots of solutions that can mitigate the problems you're describing. For example, the system can output the work plan in steps, and you can revise individual steps.

      feld likes this.
    • George Lund (georgelund@urbanists.social)'s status on Friday, 03-Jan-2025 02:31:45 JST

      @david_chisnall this argument doesn't make sense to me: graphical user interfaces exist because we don't expect ordinary people to have to program in order to interact with computers. Programming as general UX is rubbish. So it goes that a good conversational interface is inevitable and will one day be a very common machine interface indeed.

      feld likes this.
    • David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Friday, 03-Jan-2025 05:16:52 JST
      in reply to
      • feld

      @feld

      This is a terrible take and you should really know better. It's not different than chastising people who use higher level programming languages or Dreamweaver to make a website instead of studying HTML.

      I feel like you didn’t read past the quoted section before firing off a needlessly confrontational reply.

      It is very different. If you give someone a low-code end-user programming environment, they have a tool that helps them to unambiguously express their intent. It lets them do so concisely, often more concisely than a general-purpose language (at the expense of generality), which empowers the user. This is a valuable thing to do.

      We should all be able to agree that giving people a way to use natural language to build little apps, tools, and automations that solve problems nobody is going to build a custom solution for is a good thing.

      No, I disagree with that. Give them a natural-language interface and you remove agency from them. The system, not the user, is responsible for filling in the blanks. And the system does so in a way that does not permit the user to learn. Rather than the user using the tool badly and then improving as a result of their failure, the system fills in the blanks in arbitrary ways.

      A natural-language interface and an easy-to-learn interface are not the same thing. There is enormous value in creating easy-to-learn interfaces that empower users, but giving them interfaces that use natural language is not the best (or even a very good) way of doing this.

      Haelwenn /элвэн/ :triskell: likes this.
    • feld (feld@friedcheese.us)'s status on Friday, 03-Jan-2025 05:16:52 JST
      @david_chisnall > No, I disagree with that. Give them a natural-language interface and you remove agency from them.

      This just seethes with "no, everyone should learn to code" energy. People should not need to learn to code to do these things.

      You are thinking about this from inside your bubble. Perfect is the enemy of good.
    • Alexandre Oliva (moving to @lxo@snac.lx.oliva.nom.br) (lxo@gnusocial.jp)'s status on Friday, 03-Jan-2025 06:31:37 JST
      there's an argument to be made about processes that enable free, ambiguous expression of ideas in natural language, with progressive removal of ambiguities. figuring out what questions to ask to help users understand and remove the ambiguities is a trainable skill, and perhaps it can even be machine-learned. there's a risk that users would then find such processes very hard to use, because of the huge number of questions they need to understand and figure out how to answer. but as they learn how to express themselves unambiguously, the system becomes easier and easier to use. at the end, users who survive the process enough times may have learned a programming language.
    • silverwizard (silverwizard@convenient.email)'s status on Friday, 03-Jan-2025 06:45:50 JST
      in reply to
      • feld
      @david_chisnall @feld "The system, not the user, is responsible for filling in the blanks" is such an important and valuable idea that explains all of the issues with NLP systems! Thanks - I need this specific idea!
    • feld (feld@friedcheese.us)'s status on Friday, 03-Jan-2025 06:50:03 JST
      in reply to
      • silverwizard
      @silverwizard @david_chisnall This assumes the system won't be able to recognize the existence of these gaps and ask you what actions it should take if they're encountered. There's no rule that says the system should parse your natural language prompt and return a final result immediately; I expect a mature system will converse with you about a complex problem before emitting the final result.
    • I am Water (slicerdicer@friedcheese.us)'s status on Friday, 03-Jan-2025 07:08:23 JST
      in reply to
      • silverwizard
      • feld
      @feld @silverwizard @david_chisnall I have a feeling that in a decade people will look back at all these conversations much like e-commerce, saying "what were we thinking?" The problems are actually not what we think, and the solutions are far more impactful.

      In all my testing of AI it’s only getting better, and yes, you have to have a conversation with it, much like this entire thread, to figure out what is what. That’s very natural to lots of humans. It’s not a leap for this to become how these systems work.
      feld likes this.
