the fact that some people find LLMs useful for writing code is not a credit to LLMs but an indictment of the average signal-to-noise ratio of code: it means most code is confusing boilerplate -- boilerplate because a statistical model can only reliably reproduce patterns that recur many times across its training corpus, and confusing because otherwise-intelligent people, capable of convincing a hiring manager that they are competent programmers, find it easier to ask a statistical model for an incorrect prototype they must then debug than to write the code themselves. we all know from experience that most code is bad, but the fact that LLMs can write more-or-less working code at all suggests that code is much worse than we imagine, and that even what we consider 'good' code is, from a broader perspective, totally awful. (in my opinion, it is forced to be this way by how we design programming languages and libraries.)