GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:41:15 JST
    • Jenniferplusplus

    This essay from @jenniferplusplus is very good, and very important.

    It’s good enough and important enough that I’m just going to QFT the heck out of it here on Mastodon until I annoy you into reading the whole thing.

    https://jenniferplusplus.com/losing-the-imitation-game/

    This essay isn’t the last word on AI in software — but what it says is the ground level for having any sort of coherent discussion about the topic that isn’t all hype and panic.

    1/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:43:59 JST

      “Artificial Intelligence is an unhelpful term. It serves as a vehicle for people's invalid assumptions. It hand-waves an enormous amount of complexity regarding what ‘intelligence’ even is or means.”

      “Our understanding of intelligence is a moving target. We only have one meaningful fixed point to work from. We assert that humans are intelligent. Whether anything else is, is not certain. What intelligence itself is, is not certain.”

      2/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:44:27 JST

      “While the capabilities are fantasy, the dangers are real. These tools have denied people jobs, housing, and welfare. All erroneously. They have denied people bail and parole, in such a racist way it would be comical if it wasn't real.

      👇👇👇
      “And the actual function of AI in all of these situations is to obscure liability for the harm these decisions cause.”

      3/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:46:44 JST

      “What [LLM] parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is over stating the case, because it doesn't •know• things. It •models• how those tokens are used.

      “…The model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.”

      4/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:47:22 JST

      “The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code.”

      Do you see where this is going? Have I convinced you to read the whole thing yet?

      https://jenniferplusplus.com/losing-the-imitation-game/

      5/

      Aral Balkan repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:49:34 JST

      Here it is: the One Weird Thing that people who aren’t programmers (or are bad programmers) just don’t understand about writing software. This is it. If you miss this, you’ll miss what LLMs can and can’t do for software development.

      6/

      Attachments

      1. https://media.hachyderm.io/media_attachments/files/112/006/828/590/073/101/original/e424061d714cadf5.png
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:50:46 JST

      “They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it.”

      7/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:51:04 JST

      “No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do.

      “Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.”

      8/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 11:56:17 JST

      You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely useful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

      Alas, that does not remotely resemble how people are pitching this technology.

      9/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 12:00:02 JST

      I love, for example, this student’s reaction to having ChatGPT try to write some of her paper:
      https://hachyderm.io/@inthehands/109491316523726437

      Indignant outrage is a powerful thought-sharpening tool!

      Alas, AI vendors are not pitching LLMs as indignant outrage generators.

      10/

      Attachments

        1. Paul Cantrell (@inthehands@hachyderm.io)
        My favorite outcome so far: a student remarked (paraphrasing here) that she didn’t realize how much she had to say in her paper until she saw how wrong the AI was, how much it missed the point. Observing her own reaction to BS about her topic made her realize she’d underestimated the extent of her own newly-forming knowledge. That…that is the sort of outcome an educator dreams of. #ai #chatgpt #education #writing #highered
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 12:05:11 JST

      I’ve heard from several students that LLMs have been really useful to them in that “where the !^%8 do I even start?!” phase of learning a new language, framework, or tool. Documentation frequently fails to share common idioms; discovering the right idiom in the current context is often difficult. And “What’s a pattern that fits here, never mind the correctness of the details?” is a great question for an LLM.

      Alas, the AI hype is around LLMs •replacing• thought, not •prompting• it.

      11/
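
    A minimal Python sketch of the pattern-discovery use described above, assuming a hypothetical ask_llm helper (nothing below comes from the thread or from any real API): the model's answer is treated as an unverified draft to check against the documentation, not as a finished answer.

        # Illustrative sketch: an LLM as an idiom-discovery aid, not a source of truth.
        # ask_llm is a hypothetical stand-in for whatever model client is actually used.

        def ask_llm(prompt: str) -> str:
            """Hypothetical model call; swap in a real client."""
            raise NotImplementedError

        def sketch_idiom(task: str, framework: str) -> str:
            """Ask for a *typical* pattern, explicitly deferring correctness.

            The value is the shape of the answer (names, structure, conventions);
            every detail still has to be verified against the framework's docs.
            """
            prompt = (
                f"Show a typical, idiomatic way to {task} using {framework}. "
                "A rough sketch is fine; I will check the details myself."
            )
            draft = ask_llm(prompt)
            return "# DRAFT from LLM -- verify every call against the docs\n" + draft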

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 12:23:51 JST

      The hard part of programming is •thinking about what you’re doing•, because the computer that runs your code isn’t going to do that.

      And as Jennifer points out in the essay, we do that by thinking about code. Not just about our abstract mental models, not just about our natural language descriptions of the code, but about the code itself. Where human understanding meets machine interpretation, •that’s• where the real work is, •that’s• what makes software hard:

      12/

      Attachments

      1. https://media.hachyderm.io/media_attachments/files/112/006/963/358/673/132/original/b77ad0bcaeaba6c7.png
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 12:49:54 JST

      Code is cost. It costs merely by •existing• in any context where it might run. Code is a burden we bear because (we hope) the cost is worth it.

      What happens if we write code with a tool that (1) decreases the cost-per-complexity of •generating• code while (2) vastly increasing the cost-per-complexity of •maintaining• that code? How do we use such a tool wisely? Can we?

      Useful conversation about that starts on this ground floor:

      https://jenniferplusplus.com/losing-the-imitation-game/

      /end

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 13:51:00 JST
      in reply to
      • Servelan

      @servelan
      I’ve had people argue to me that it basically doesn’t matter how bad AI-generated code is, because you just fiddle with the prompt / the output until it passes the tests, and we all know how to test, and…I’m just like, have you ever •created• actual software?
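
    A toy Python illustration of the point above (invented example, not from the thread): a plausible-looking function can satisfy a small test suite while still being wrong in general, so "it passes the tests" means little without someone who understands the code.

        # A "close enough" implementation an autocomplete might plausibly produce:
        # it omits the 400-year rule, so 2000 is misclassified as a common year.
        def is_leap_year(year: int) -> bool:
            return year % 4 == 0 and year % 100 != 0   # missing: or year % 400 == 0

        # All three asserts pass; the suite simply never probes the broken case.
        def test_is_leap_year() -> None:
            assert is_leap_year(2024)
            assert not is_leap_year(2023)
            assert not is_leap_year(1900)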

    • Servelan (servelan@newsie.social)'s status on Wednesday, 28-Feb-2024 13:51:01 JST

      @inthehands Using AI seems like it would make it easy to say 'by design' and ignore a bug. And bugs can be anything from annoyances to security flaws... I've tested software, and trying to be a pseudo-user and advocate for a better product ran up against developer/PM reluctance to fix the code, and that's *without* AI...

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 13:51:45 JST
      in reply to
      • hwestiii

      @hwestiii
      I mean, I give some ideas downthread about how having a bullshit generator can be useful. Just people aren’t talking about it in remotely those terms.

    • hwestiii (hwestiii@lor.sh)'s status on Wednesday, 28-Feb-2024 13:51:46 JST

      @inthehands this is my thing. nobody can present a coherent story about how it will be useful beyond gulling the public with deepfakes.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Wednesday, 28-Feb-2024 13:59:12 JST
      in reply to
      • Justin Myers

      @myersjustinc
      Very, very similar, yes. “Good writing is good thinking,” my English prof mom always says.

    • Justin Myers (myersjustinc@mastodon.sdf.org)'s status on Wednesday, 28-Feb-2024 13:59:13 JST

      @inthehands
      This reminds me exactly of the stuff I would say to people about "automated journalism" back when there was much wailing and gnashing of teeth over basic templated text. "The writing is the easy part!"

      (This was almost 10 years ago, and LLMs hadn't hit the mainstream yet. I was at The Associated Press as "news automation editor", which meant I got lots of panicked questions from across the news industry about what this meant for reporters.)

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Thursday, 29-Feb-2024 01:22:37 JST
      • datarama

      @datarama
      Yup. There’s a side industry of useful tools, probably larger than “code editor autocomplete” and smaller than “web search”…but that’s not what investors are looking for.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Thursday, 29-Feb-2024 01:24:53 JST
      in reply to
      • John Quiggin
      • Jenniferplusplus

      @jenniferplusplus @johnquiggin
      I don’t have anything to add to what Jennifer wrote above and what I wrote downthread, except to keep in mind that GitHub link is marketing material.

    • Jenniferplusplus (jenniferplusplus@hachyderm.io)'s status on Thursday, 29-Feb-2024 01:24:54 JST
      in reply to
      • John Quiggin

      @johnquiggin @inthehands
      I have links, too
      https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx

      Also, I am an expert. And you're essentially saying that getting rid of maps and road signs makes drivers more productive.

    • John Quiggin (johnquiggin@aus.social)'s status on Thursday, 29-Feb-2024 01:24:55 JST
      in reply to
      • Jenniferplusplus

      @jenniferplusplus @inthehands That may be, as I'm not an expert. But lots of developers seem to feel the same way

      https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/

    • Jenniferplusplus (jenniferplusplus@hachyderm.io)'s status on Thursday, 29-Feb-2024 01:24:56 JST
      in reply to
      • John Quiggin

      @johnquiggin @inthehands It sounds like you're conceptualizing programmer productivity in an entirely backwards way.

    • John Quiggin (johnquiggin@aus.social)'s status on Thursday, 29-Feb-2024 01:24:57 JST
      in reply to
      • Jenniferplusplus

      @inthehands @jenniferplusplus

      Agree with the main point; "non-trivial" is doing a fair bit of work here. For simple tasks, ChatGPT does a great job of turning conceptual sketches into code. That's already increasing programmer productivity and making basic coding accessible to non-programmers.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Thursday, 29-Feb-2024 01:27:05 JST
      in reply to
      • OddOpinions5

      @failedLyndonLaRouchite
      Why do all you reply guys not actually read the things you’re replying to up to the point of comprehension?

    • OddOpinions5 (failedlyndonlarouchite@mas.to)'s status on Thursday, 29-Feb-2024 01:27:07 JST

      @inthehands

      Right now, today, people are using AI to help draft written documents

      why do you all downplay this real-world thing?

      Corporate lawyers bill billions of dollars for work writing documents that could be partly replaced by AI

      etc
      etc
      you critics keep focusing on red herrings, IMO

    • Felix Neumann (fxnn@hachyderm.io)'s status on Monday, 04-Mar-2024 20:34:57 JST

      @inthehands

      Huh, this is a bit as if you accused a hammer of not being able to drive in a nail by itself.

      LLMs are tools, not miracles. If you want them to generate more than a single LOC in a useful way, you need to embed this in a larger process, in a program of its own, which uses the LLM to reason about a solution to the problem, in many small steps.

      Projects like https://github.com/gpt-engineer-org/gpt-engineer are steps into this direction.

      Attachments

        GitHub - gpt-engineer-org/gpt-engineer: Specify what you want it to build, the AI asks for clarification, and then builds it.
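
    A minimal Python sketch of the loop described in the notice above (generate a draft, run external checks, feed the failures back, repeat); ask_llm and run_checks are hypothetical stand-ins, and this is not gpt-engineer's actual design.

        from typing import Callable

        def ask_llm(prompt: str) -> str:
            """Hypothetical model call; swap in a real client."""
            raise NotImplementedError

        def iterate_on_solution(spec: str,
                                run_checks: Callable[[str], list[str]],
                                max_rounds: int = 3) -> str:
            """Draft, check, revise: the surrounding process does the heavy lifting,
            and a human still has to review whatever survives the checks."""
            draft = ask_llm(f"Write code for this specification:\n{spec}")
            for _ in range(max_rounds):
                failures = run_checks(draft)   # e.g. compile, lint, run the tests
                if not failures:
                    break
                draft = ask_llm(
                    "Revise the code below to fix these problems:\n"
                    + "\n".join(failures)
                    + "\n---\n"
                    + draft
                )
            return draft
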
    • Exandra (alexbbrown@hachyderm.io)'s status on Tuesday, 05-Mar-2024 01:59:23 JST
      in reply to
      • Jenniferplusplus

      @inthehands @jenniferplusplus thanks very much for this thread. It helped clarify my understanding of why I'm not worried about my job going away. My job is not to write code, but to understand it, its consequences, and then what we might do about those.
