Conversation

Notices

  1. Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:16:31 JST
    • Kevin Riggle

    Agreed with everything @kevinriggle wrote here. Another angle on this to try with people who simply do not understand what software engineering •is•:

    AI can generate code fast. Often it’s correct. Often it’s not, but close. Often it’s totally wrong. Often it’s •close• to correct and even appears to work, but has subtle errors that must be corrected and may be hard to detect.

    1/ https://ioc.exchange/@kevinriggle/113641234199724146


    Attachments

    1. Kevin Riggle (@kevinriggle@ioc.exchange)
      from Kevin Riggle
      What I’m taking from this is that software engineers spend most of our time on engineering software, and writing code is (as expected) a relatively small portion of that work. Imagine this for other engineering disciplines. “Wow structural engineers seem to spend most of their time on meetings and CAD and relatively little time physically building bridges with their hands! This is something AI can and should fix. I am very smart” https://fortune.com/2024/12/05/amazon-developers-spend-hour-per-day-coding/
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:18:09 JST
      in reply to

      All the above is also true (though perhaps in different proportions) of humans writing code! But here’s the big difference:

      When humans write the code, those humans are •thinking• about the problem the whole time: understanding where those flaws might be hiding, playing out the implications of business assumptions, studying the problem up close.

      When AI writes the code, none of that happens. It’s a tradeoff: faster code generation at the cost of reduced understanding.

      2/

      Rich Felker repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:22:13 JST
      in reply to

      The effect of AI is to reduce the cost of •generating code• by a factor of X at the cost of increasing the cost of •thinking about the problem• by a factor of Y.

      And yes, Y>1. A thing non-developers do not understand about code is that coding a solution is a deep way of understanding a problem — and conversely, using code that’s dropped in your lap greatly increases the amount of problem that must be understood.

      3/

      Rich Felker repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:27:09 JST
      in reply to

      Reduce the cost of generating code by a factor of X; increase the cost of understanding by a factor of Y. How much bigger must X be than Y for that to pay off?

      Check that OP again: if software engineers spend on average 1 hr/day writing code, and assuming (optimistically!) that they only work 8-hour days, then a napkin sketch of your AI-assisted cost of coding is:

      1 / X + 7 * Y

      That means even if X = ∞ (and it doesn’t, but even if!!), the AI-assisted day beats the 8-hour baseline only when 7Y < 8, so Y cannot exceed ~1.14.
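
      A minimal sketch of that arithmetic (hypothetical Java; the 1-hour/7-hour split comes from the OP, everything else is illustration only):

        // Napkin model: an AI-assisted day costs 1/X (coding) + 7*Y (thinking) hours,
        // and it only pays off if that beats the plain 8-hour baseline.
        public class NapkinMath {
            public static void main(String[] args) {
                double coding = 1.0, thinking = 7.0;         // hours per day, from the OP
                double baseline = coding + thinking;         // the 8-hour day
                for (double x : new double[]{2, 10, 1e9}) {  // 1e9 stands in for "X = infinity"
                    double breakEvenY = (baseline - coding / x) / thinking;
                    System.out.printf("X = %.0f -> pays off only if Y < %.3f%n", x, breakEvenY);
                }
            }
        }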

      Hey CXO, you want that bet?

      4/

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:31:21 JST
      in reply to

      This is a silly thumbnail-sized model, and it’s full of all kinds of holes.

      Maybe devs straight-up waste 3 hours a day, so the break-even becomes Y < 1.25 instead! Maybe the effects are complex and nonlinear! Maybe this whole quantification effort is doomed!

      Don’t take my math too seriously. I’m not actually setting up a useful predictive model here; I’m making a point.

      5/

      Rich Felker repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:34:17 JST
      in reply to

      Though my model is quantitatively silly, it does get at the heart of something all too real:

      If you see the OP and think it means software development is on the cusp of being automatable because devs only spend ≤1/8 of their time actually typing code, you’d damn well better understand how they spend the other ≥7/8 of their time — and how your executive decisions, necessarily made from a position of ignorance* if you are an executive, impact that 7/8.

      /end (with footnote)

      Rich Felker repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:38:13 JST
      in reply to

      * Yes, executive decisions are •necessarily• made from a position of ignorance. The point of having these high-level roles isn’t (or at least should not be) amassing power, but rather having people looking at things at different zoom levels. Summary is at the heart of good management, along with the humility to know that you are seeing things in summary. If you know all the details, you’re insufficiently zoomed out. If you’re zoomed out, you have to remember how many details you don’t know.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:43:07 JST
      in reply to
      • Ruth Malan — Systems
      • Russell Garner

      @rgarner @RuthMalan
      Yup. All that.

      And Brooks’s maxim that there is no silver bullet still stands undefeated.

    • Russell Garner (rgarner@mastodon.social)'s status on Friday, 13-Dec-2024 06:43:08 JST
      in reply to
      • Ruth Malan — Systems

      @inthehands @RuthMalan every dev wants a greenfield project. LLMs shade even greenfield projects brown.

      But then it's not the devs that are asking for this* so much as a managerial class looking for the sort of silver bullet that brings down both pay and the amount of time dealing with a type of worker they find difficult.

      *not the ones who are any good, anyway

    • Jeff Miller (orange hatband) (jmeowmeow@hachyderm.io)'s status on Friday, 13-Dec-2024 06:44:00 JST
      in reply to

      @inthehands I appreciate how your line of argument chimes with Fred Brooks' "No Silver Bullet", along the lines of essential complexity of the problem and the solution matching up, except the comparison here being the embedded understanding (hopefully) in code written specifically for the problem, versus the embedded unchecked assumptions in generated code.

      Novel software libraries have a similar problem: easy to adopt, not necessarily easy to evaluate.

    • Jeff Miller (orange hatband) (jmeowmeow@hachyderm.io)'s status on Friday, 13-Dec-2024 06:44:53 JST
      in reply to

      @inthehands <3 NSB

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:47:29 JST
      in reply to
      • Seth Richards

      Yes, this from @sethrichards is what I’m talking about:
      https://mas.to/@sethrichards/113642032055823958

      I had a great moment with a student the other day dealing with some broken code. The proximate problem was a Java NullPointerException. The proximate solution was “that ivar isn’t initialized yet.”

      BUT…


      Attachments

      1. Seth Richards (@sethrichards@mas.to)
        from Seth Richards
        @inthehands@hachyderm.io I agree with all of this, and I'd add: When I'm writing code, I'm *learning* about the problem as well through the process. When I fix a bug in my code, I (hopefully) learn not to make the same mistake again. When I help someone on the team fix a bug in their code, we both learn something. If we write documentation or a unit test to make sure the bug doesn't happen again, the organization "learns" something too. It's unclear to me whether AI is even capable of learning in this way.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:49:30 JST
      in reply to

      …It wasn’t initialized because they weren’t thinking about the lifecycle of that object because they weren’t thinking about when things happen in the UI because they weren’t thinking about the sequence of the user’s interaction with the system because they weren’t thinking about how the software would actually get used or about what they actually •wanted• it to do when it worked.

      The technical problem was really a design / lack of clarity problem. This happens •constantly• when writing code.
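
      A hypothetical reconstruction of that bug in Java (the class and names are invented; only the shape of the error comes from the story):

        // The proximate problem: `cart` is read before anything in the UI lifecycle
        // assigns it, because nobody decided *when* it should come into existence.
        public class OrderScreen {
            private Cart cart;  // the uninitialized ivar

            void onSubmit() {
                System.out.println("Total: " + cart.total());  // NullPointerException here
            }

            public static void main(String[] args) {
                new OrderScreen().onSubmit();  // crashes: the design question was never asked
            }
        }

        class Cart {
            double total() { return 0.0; }
        }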

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 06:52:24 JST
      in reply to
      • Russell Garner

      A good point from @rgarner, looking at this through the lens of Brooks’s Law:
      https://mastodon.social/@rgarner/113642040777621582


      Attachments

      1. Russell Garner (@rgarner@mastodon.social)
        from Russell Garner
        @inthehands@hachyderm.io @RuthMalan and that thing about adding people to projects. An LLM isn't a person quite so much as the average of some.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 07:00:15 JST
      in reply to
      • Wulfy

      @n_dimension
      Please note the hypothetical downthread where I ask what happens if AI transcends human coding performance by a factor of infinity.

      For your line of reasoning to work, this thing we call AI has to do something that LLMs simply do not do, do not approach doing, are not on a trajectory toward doing, and are fundamentally architecturally incapable of doing. It’s like saying “eventually if this car goes fast enough, it will be a time machine!”

    • Wulfy (n_dimension@infosec.exchange)'s status on Friday, 13-Dec-2024 07:00:16 JST
      in reply to

      @inthehands

      You are 100% correct...

      ...right now.

      But consider this: the upper limits of AI transcend human performance constraints.

      One of the possible paths for AI development is to have all these multiple agents working in concert.
      One is responsible for overall architecture.
      One for environmental impact.
      Etc etc.

      Thought provoking post.
      Thanks

    • Dr Andrew A. Adams #FBPE 🔶 (a_cubed@mastodon.social)'s status on Friday, 13-Dec-2024 07:16:19 JST
      in reply to

      @inthehands
      Coding is teaching a really, really dumb student how to solve a problem.
      Teaching something is the best way to understand it properly.

    • Raphael (omegapolice@hachyderm.io)'s status on Friday, 13-Dec-2024 07:44:07 JST
      in reply to

      @inthehands Oh, why hello, Amdahl!

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:08:54 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      Yeah, I’ve heard that thought too. It’s tantalizing nonsense. I could write about this at length, and maybe one day I will, but the very short version is that automation is not even remotely the same thing as abstraction.

    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Friday, 13-Dec-2024 09:08:55 JST
      in reply to

      @inthehands The bet that a lot of these CXOs are making implicitly is that this will be like the transition from assembly to higher-level languages like C (I think most of them are too young and/or too disconnected to make it explicitly). And I'm not 100% sold on it but my 60% hunch is that it's not.

      Paul Cantrell repeated this.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:16:24 JST
      in reply to
      • Raphael

      @OmegaPolice
      That’s it: Amdahl’s Law, except the optimization actually creates large costs in the other parts of the system!
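
      A small sketch of that twist on Amdahl’s Law (hypothetical Java; p = 1/8 comes from the thread’s napkin model, the formulation is illustration only):

        // Classic Amdahl: speedup = 1 / ((1 - p) + p/X), where p is the accelerated fraction.
        // The twist here: the other (1 - p) of the work also gets slower, by a factor Y.
        public class AmdahlWithCosts {
            static double speedup(double p, double x, double y) {
                return 1.0 / ((1 - p) * y + p / x);
            }

            public static void main(String[] args) {
                double p = 1.0 / 8.0;  // coding is ~1 hour of an 8-hour day
                System.out.println(speedup(p, 1e9, 1.0));  // ~1.14: the ceiling even with infinite X
                System.out.println(speedup(p, 1e9, 1.2));  // ~0.95: a modest Y already erases the win
            }
        }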

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:19:34 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      That’s exactly the line of thought, yes. And the thing that makes abstractions useful, if they are useful, is that they make good decisions about what doesn’t matter, what can be standard, and what requires situation-specific thought. Those decisions simultaneously become productivity boost, safety, and a metaphor that is a tool for thought and communication.

    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Friday, 13-Dec-2024 09:19:35 JST
      in reply to

      @inthehands Yes! Yes. This is it exactly.

      One can imagine a version of these systems where all the "source code" is English-language text describing a software system, and the Makefile first runs that through an LLM to generate C or Python or whatever before handing it off to a regular compiler, which would in some sense be more abstraction, but this is like keeping the .o files around and making the programmers debug the assembly with a hex editor.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:20:53 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      What happens when the semantics of your model are defined by probabilistic plagiarism, and may change every single time you use it? That might be good for something, I guess??? But it doesn’t remotely resemble what a high-level language does for assembly.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:23:58 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      Yeah, one of the troubles with these systems is that basically every metaphor we can think of for what they are is misleading.

    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Friday, 13-Dec-2024 09:24:00 JST
      in reply to

      @inthehands Another bull-case argument about LLMs is that they are a form of autonomation (autonomy + automation), in the sense that the Toyota Production System uses it, the classic example being the automated loom which has a tension sensor and will stop if one of the warp yarns break. But we already have many such systems in software, made out of normal non-LLM parts, and also that's ... not really what's going on here, at least the way they're currently being used.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Friday, 13-Dec-2024 09:27:31 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      Even if that gave semantically stable answers, which I’m not convinced it would, it still skips that all-important step where there’s communication and reflection and consensus building.

      I suppose there’s some help in approaches where an LLM generates plausible answers and then some semantically reliable verification checks that the results aren’t nonsense. But we’re really stretching it.

    • Kevin Riggle (kevinriggle@ioc.exchange)'s status on Friday, 13-Dec-2024 09:27:32 JST
      in reply to

      @inthehands One could imagine using a fixed set of model weights and not retraining, using a fixed random seed, and keeping the model temperature relatively low. I'm imagining on some level basically the programming-language-generating version of Nvidia's DLSS tech here. But that's not what people are doing, and I'm not convinced it would be useful if we did.

      Rich Felker repeated this.
    • Eleanor Saitta (dymaxion@infosec.exchange)'s status on Saturday, 14-Dec-2024 00:39:09 JST
      in reply to
      • Kevin Riggle

      @kevinriggle
      The primary job of a development team is the creation and maintenance of a shared mental model of what the software does and how it does it. Periodically, they change the code to implement changes to the mental model that have been agreed upon, or to correct places where the code does not match the model.

      An LLM cannot reason and does not have a theory of mind, and as such cannot participate in the model process or meaningfully access that model — written documentation is at best a memory aid for the model — and thus cannot actually do anything that matters in the process.

      The executive class would prefer that other people in the org not be permitted to think, let alone paid for it, and therefore willfully confuses the output with the job.
      @inthehands

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Saturday, 14-Dec-2024 00:39:09 JST
      in reply to
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus

      @dymaxion @kevinriggle

      That’s all well said, and gets to what @jenniferplusplus was talking about here: https://jenniferplusplus.com/losing-the-imitation-game/


      Attachments

      1. Losing the imitation game
        AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.
    • Shafik Yaghmour (shafik@hachyderm.io)'s status on Saturday, 14-Dec-2024 05:14:59 JST
      in reply to
      • Kevin Riggle

      @inthehands @kevinriggle

      What I am seeing from studies is pretty troubling e.g. https://hachyderm.io/@shafik/113391494588227091

      Folks feel more productive but actual objective measures say otherwise.

      It is never about lines of code produced; that is a silly measure by itself.

      It is really about how many errors make it into production. If using AI code generators means a lot more bad code makes it into production, you are making a very poor tradeoff.


      Attachments

      1. Shafik Yaghmour (@shafik@hachyderm.io)
        from Shafik Yaghmour
        Attached: 1 image "DORA 2024: AI and Platform Engineering Fall Short": https://thenewstack.io/dora-2024-ai-and-platform-engineering-fall-short/ This graph looks pretty bad. What I think it shows, is that more subjective measures are positive but then when we get to measures that are more objective things look really bad. We have seen other research that shows this big split in data as well.
    • Paul Cantrell (inthehands@hachyderm.io)'s status on Saturday, 14-Dec-2024 07:45:21 JST
      in reply to
      • Old Fucking Punk

      @lwriemen
      This is an overly pedantic quibble, though I agree with the underlying sentiment that people rush into coding too fast without thinking.

      Filling in what was left between the lines for the reader to infer above: coding something •while thinking• — assessing the results, letting the ideas talk back and surprise you, treating design and implementation problems as prompts to think about goals and context — is a deep way of understanding a problem.

    • Old Fucking Punk (lwriemen@social.librem.one)'s status on Saturday, 14-Dec-2024 07:45:22 JST
      in reply to

      @inthehands My 30+ years as a dev disagree with the statement, "coding a solution is a deep way of understanding a problem", on semantics. Analyzing a solution is a deep way of understanding a problem. I've found that too many devs are in a rush to code and give short shrift to the analysis.

    • Rich Felker (dalias@hachyderm.io)'s status on Wednesday, 25-Dec-2024 16:49:20 JST
      in reply to
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus
      • Sébastien Roccaserra 🐿️

      @sroccaserra @inthehands @dymaxion @kevinriggle @jenniferplusplus This is absolutely what it is and what we've been telling folks it is for years. Of course they won't listen. See: my display name.

    • Sébastien Roccaserra 🐿️ (sroccaserra@mastodon.social)'s status on Wednesday, 25-Dec-2024 16:49:21 JST
      in reply to
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus

      @inthehands @dymaxion @kevinriggle @jenniferplusplus This is interesting. I have another concern: what if the goal of the GenAI narrative is mostly an accounting goal? Meaning, tech is / was worth investing in because the huge promise of future growth lets you boost the balance sheet and pay nice dividends every year. So you have to regularly renew the narratives that justify promises of future growth. GenAI is just that; it doesn't even *need* to be a real productivity boost.

    • Paul Cantrell (inthehands@hachyderm.io)'s status on Thursday, 26-Dec-2024 04:23:06 JST
      in reply to
      • Rich Felker
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus
      • Sébastien Roccaserra 🐿️

      @dalias @sroccaserra @dymaxion @kevinriggle @jenniferplusplus

      A shockingly large portion of the world’s current problems trace back to an excess of investment money desperate for more things to invest in.

    • Jonathan Yu (jawnsy@mastodon.social)'s status on Thursday, 26-Dec-2024 11:34:37 JST
      in reply to
      • Rich Felker
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus
      • Sébastien Roccaserra 🐿️

      @inthehands @dalias @sroccaserra @dymaxion @kevinriggle @jenniferplusplus I'm not sure if the problem is excess money or an inefficient market (lots of dumb ideas get lots of funding while lots of good ideas don't). Or maybe the problem is that we have lots of money but we're insufficiently creative about how we allocate it?

      Probably, an issue is that people with control of capital chase returns for their own sake, instead of returns being a byproduct of doing something useful.

    • Rich Felker (dalias@hachyderm.io)'s status on Thursday, 26-Dec-2024 11:34:37 JST
      in reply to
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus
      • Sébastien Roccaserra 🐿️
      • Jonathan Yu

      @jawnsy @inthehands @sroccaserra @dymaxion @kevinriggle @jenniferplusplus The problem is excess money combined with wanting obscene returns on it, and without driving down the value of other holdings. This necessitates perpetual scams.

    • Rich Felker (dalias@hachyderm.io)'s status on Thursday, 26-Dec-2024 11:50:50 JST
      in reply to
      • Eleanor Saitta
      • Kevin Riggle
      • Jenniferplusplus
      • Sébastien Roccaserra 🐿️
      • Jonathan Yu

      @jawnsy @inthehands @sroccaserra @dymaxion @kevinriggle @jenniferplusplus For example, a prosocial business activity for obscene amounts of money would be building ultra-affordable housing at scale and renting or selling it at minimal profit. At scale this would still be very profitable. But it would drive down the value of the billionaire class's assets.

