@zenkat @glyph All your verbose LLM apologia doesn't change the fact that LLMs work statistically, always, and *don't know what facts are*. They can't not hallucinate, because hallucinating is the *only* thing they're capable of - even when the result happens to look correct.