My personal theory is that (long-term, at least) the clearest value-prop for LLMs will be rewriting highly structured output generated by explainable/reproducible systems into conversational/stylistically tailored language. Humans will still have to be in the loop, but this is already where the RAG/KG stuff feels like it’s heading.
Tons of energy, right now at least, seems to be focused on making LLMs do “analysis” or “research,” etc., where their essential nature is a detriment.