Every output of an LLM is reached through the same operation. Labelling some outputs as hallucinations falsely privileges certain outputs over others. Every output is, in fact, a computational hallucination; whether we perceive it as aligned with our world view or not is purely a hermeneutic imposition, an anachronistic interpretation of a string of statistically predicted tokens.
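To make the point concrete, here is a minimal toy sketch, not any real model or API: the probability table, prompts, and token names are invented for illustration. It shows that whether a continuation reads to us as "fact" or "hallucination", the generating procedure is identical, namely estimating a distribution over next tokens given the context and sampling from it.

```python
# Toy illustration (invented table, not a real model): every output,
# "factual" or "hallucinated", comes from the same operation --
# look up P(next token | context) and sample.

import random

# Hypothetical toy "model": next-token probabilities per context.
TOY_MODEL = {
    ("The", "capital", "of", "France", "is"): {"Paris": 0.7, "Lyon": 0.3},
    ("The", "capital", "of", "Atlantis", "is"): {"Poseidonia": 0.6, "Paris": 0.4},
}

def next_token(context, rng):
    """One and the same operation for every output: sample from P(token | context)."""
    dist = TOY_MODEL[tuple(context)]
    tokens, probs = zip(*dist.items())
    return rng.choices(tokens, weights=probs, k=1)[0]

rng = random.Random(0)
for prompt in TOY_MODEL:
    completion = next_token(list(prompt), rng)
    print(" ".join(prompt), completion)

# Calling one continuation a fact and the other a hallucination is our
# reading of the output string; the procedure that produced each is
# indistinguishable.
```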
Some now propose that we accept LLM hallucinations by explaining how they are reached: https://www.arxiv.org/pdf/2505.17120