We spend roughly 10x as much time reading code as we do writing it. A tool or technique that makes you twice as "productive" at writing code *at best* makes you 5% more productive overall. Making your code easier to understand will have 10x the impact. But that doesn't sell tools or put developers out of work, so you won't be reading about it in Forbes.
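The 5% figure can be sanity-checked with back-of-the-envelope arithmetic, assuming the stated 10:1 reading-to-writing split (a sketch, not anyone's formal model):

```python
# Assume time is split 10:1 between reading and writing, as claimed above.
reading, writing = 10.0, 1.0
baseline = reading + writing          # 11 units of total work time

# A tool that doubles writing "productivity" halves writing time;
# reading time is untouched.
with_tool = reading + writing / 2     # 10.5 units

speedup = baseline / with_tool
print(f"overall productivity gain: {(speedup - 1) * 100:.1f}%")  # ≈ 4.8%
```

So even a tool that doubles raw writing speed moves the needle by under 5% on these assumptions, which is the point being made.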
@jasongorman That it doesn’t sell tools, I’m not so sure about. A tool that would help me ensure readable code, however that would work, would be a very interesting proposition to me.
@thirstybear That's the thing: if it doesn't have the information, it can't generate an explanation. It excels at explaining code that's easy to understand. The real selling point is brownie points for comments. Like at university.
@jasongorman I'll give it that, but it really depends on what you define as *explaining* the code. I saw that technically all its observations were correct, but none of them were helpful. If I asked a real intelligence (a.k.a. a person), they'd (hopefully) explain the function in broad strokes and then go deeper and deeper until I understood what I needed. 🙂
@jasongorman I hate to say it (because I very much dislike the LLM/AI hype), but ... they're surprisingly good at answering questions about code one shows them too. Not perfect, but they can provide inspiration and places to start looking. (If one knows how to ask a question, obvs.)
@jasongorman Your old waterfall post is hilarious; I had not seen that. For one's own sanity, a refuctoring transpiler could be handy (a sort of "enfuctor" or "fuctorizer" or "infucorator" or "undefuctorizer") so one could maintain two copies of code with minimal work, like a fraudulent business might maintain two sets of books. This is like the old code obscuring tools, except of course it should appear that the code is not *intentionally* obscure.
Additionally, the transpiler could be designed to make the generated tests more fragile and interdependent, so all the tests still pass, but minimal changes that really should not break the tests break substantially all of them.
@jasongorman It just occurred to me that when a programmer takes the clear ideas in their head and creates code with bad variable and function names, a lossy transformation is taking place. It is not possible for a person or an LLM to subsequently offset that loss by processing the code. The information has been lost.
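A toy illustration of that lossiness (hypothetical function names, sketched in Python): both versions compute exactly the same thing, but the second carries none of the intent, and nothing downstream can recover that intent from the code alone.

```python
# With intent-carrying names, the domain knowledge lives in the code.
def net_price(gross_price: float, discount_rate: float) -> float:
    return gross_price * (1 - discount_rate)

# After the lossy transformation: behaviour is identical, but the "why"
# has been discarded. A reader (or an LLM) can only guess at it.
def f(a: float, b: float) -> float:
    return a * (1 - b)

assert net_price(100.0, 0.2) == f(100.0, 0.2) == 80.0
```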
@jasongorman This might be confounding multiple things. We might spend so much time reading code both because the code is hard to read and because writing code is hard too.
I think that LLMs actually help for both cases, but they need us to rethink how we write and read code. LLMs help a lot to make code consistent stylistically (API patterns and documentation phrasing) and they can generate a lot of code that can be thrown away on the path to consistent abstractions.
@mnl The only drawback I see is what I've experienced recently of junior devs interacting with those tools, unaware that it's feeding them BS. It's the same problem as copy/paste programming. *Exactly* the same, in fact.
@jasongorman That approach centers the developer, instead of expecting the machine to write and comprehend code. The LLM is there to allow the developer to be more mindful about the creation and consumption of their code, as a linter for documentation and specification as well as a quick prototype playground.
I hope we see more UX going in that direction instead of pretending that a chatbot can write something decent based on a single line prompt.
@guusdk @jasongorman I was going to ask the same thing. I'm really not sure this is true; it certainly doesn't "feel" true, unless you count reading the code you are actively writing. Certainly this depends on the kind of project.
@jasongorman There's some maths missing here. If I double my writing rate then I double the amount I have to read, so I become 90% worse overall, not 5% better.
@jasongorman @leeg Yeah. Our job is not to write code. Code in itself is not an asset. Code is a liability we take on in order to solve business problems.