@MolemanPeter I'd lean towards yes. Even LLM-based systems that aim for truth/accuracy (as opposed to inoffensiveness/risk-mitigation) do so by integrating other tools and approaches: maintaining a parallel "repository of truth" in the form of documentation or ontological relationships, building answers with that system, and then using the LLM to turn them into conversational text. Roughly the pattern sketched below.
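
For concreteness, here's a minimal sketch of that pattern, assuming a toy structured store; the `FACTS` dict and `llm_rephrase()` are hypothetical stand-ins, not any particular system's API. The key point is the division of labor: facts come only from the verified store, and the LLM just handles the phrasing.

```python
# Hypothetical "repository of truth": a structured store of verified facts.
FACTS = {
    "boiling_point_of_water_c": 100,
    "speed_of_light_m_s": 299_792_458,
}

def lookup(key: str):
    """Answer from the structured store; refuse rather than guess."""
    if key not in FACTS:
        raise KeyError(f"no verified fact for {key!r}")
    return FACTS[key]

def llm_rephrase(fact_key: str, value) -> str:
    """Placeholder for a real LLM call (e.g. any chat-completion API).
    It receives an already-verified fact and only rewords it."""
    return f"According to our records, the {fact_key.replace('_', ' ')} is {value}."

def answer(key: str) -> str:
    value = lookup(key)               # truth comes from the repository
    return llm_rephrase(key, value)   # the LLM only makes it conversational

print(answer("speed_of_light_m_s"))
```

If `lookup()` fails, the system declines instead of letting the LLM improvise, which is where the accuracy guarantee actually comes from.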