@feld
maybe not yet. or maybe you're asking it to produce exceedingly trivial code. either way, you must at least be reading the output and checking it for bugs -- something that's a lot easier to do with code one has written oneself.
that said, you've completely missed the point of my post.
the fact that LLMs perform well enough on code generation that *anybody* wants to use them instead of coding means that we are doing coding in a fundamentally antihuman, gatekeepy way.
statistical models like LLMs only work well within a narrow novelty range (since they are novelty-minimizing engines). that range sits well below where we'd want programming to live.
we're all supposed to be refactoring code to avoid duplication, using third-party libraries to avoid duplicating effort, and writing code that's dense enough to be understood and consulted without overloading short-term memory, yet spread out enough that new maintainers can reason about it. if we were actually doing that, LLMs couldn't write code from arbitrary prompts: most lines of code would be so specific to their top-level requirements that the only patterns an LLM could learn would be generic ones -- it would do no better than a typeahead that checks the current token against a list of reserved words.
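to make that floor concrete, here's a minimal sketch of that kind of typeahead in python -- the function name is mine, and it leans on the stdlib `keyword` module, but the point is how little it knows:

```python
# a toy "typeahead" that only knows the language's reserved words --
# the ceiling an LLM would hit if code had no learnable surface patterns
# beyond the language grammar itself.
import keyword

def complete(prefix: str) -> list[str]:
    """return the reserved words that start with the typed prefix."""
    return [kw for kw in keyword.kwlist if kw.startswith(prefix)]

print(complete("wh"))  # ['while']
print(complete("i"))   # ['if', 'import', 'in', 'is']
```

that's all a model could offer if every other token in a codebase were genuinely specific to its requirements.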