@johnpettigrew ...true, but there are also some tasks, like analysing code (in various respects), which carry quite a low risk, although of course you still can't rely on it fully.
Essentially, after a while even a good programmer will become less and less critical of the LLM's output, even when they shouldn't. That has the potential to lead to very serious bugs. You could probably catch them with a very thorough code review step, but good code review is hard to do.
@ErikJonker This seems to be the live question: which subset of the tasks that coders do is actually made quicker or better by using LLMs, given that you have to thoroughly check every character and every step of logic in what they suggest? I know several folk who swear by it but, personally, I'm too much of a control freak to take the chance.
If you haven't seen it, @pluralistic's essay on the Generative AI Bubble makes a nice distinction between low- and high-value applications of AI, and between fault-tolerant and fault-intolerant ones.
Fault-intolerant applications are more expensive to deploy safely, and it is hard to replace workers in fault-intolerant jobs: supervising the AI is itself a full-time job.