I love making hard things easy, using abstractions to automate complex tasks. But that's not what AI offers. I can't peel back its complexity, and I can't use the output of an AI to generalize to unanticipated cases. The boilerplate code emitted by an LLM will always be boilerplate, rather than a code generator that abstracts out what made that code boilerplate in the first place.
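
To make the contrast concrete, here's a small, hypothetical sketch (the field names and the `field_getter` helper are my own invention, not the output of any particular tool): the first half is the kind of repetition an LLM happily emits, the second is the abstraction that captures the pattern once.

```python
# Boilerplate, as an LLM might emit it: one near-identical function per field.
def get_user_name(record):
    value = record.get("name")
    if value is None:
        raise KeyError("missing field: name")
    return value

def get_user_email(record):
    value = record.get("email")
    if value is None:
        raise KeyError("missing field: email")
    return value

# The abstraction: a generator that makes the repeated structure explicit,
# so a new field needs no new code at all.
def field_getter(field):
    def getter(record):
        value = record.get(field)
        if value is None:
            raise KeyError(f"missing field: {field}")
        return value
    return getter

get_user_name = field_getter("name")
get_user_email = field_getter("email")
get_user_signup_date = field_getter("signup_date")  # an unanticipated case, handled for free
```

The second version is what I mean by peeling back complexity: the pattern lives in one place, and cases nobody thought of when the code was written come along for the ride.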