@tchambers the issue is that LLMs aren't being used the way they're meant to be used.
They're literally just guessing the next word by probability, based on patterns in their training data.
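To make that concrete, here's a toy Python sketch of what probability-based next-word guessing looks like. The vocabulary and scores here are made up for illustration; a real model assigns learned scores to tens of thousands of tokens, not a hand-written list of four:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after the prompt "The capital of France is":
vocab = ["Paris", "Lyon", "London", "pizza"]
logits = [6.0, 2.5, 1.0, -3.0]

probs = softmax(logits)
# The model doesn't "know" the answer; it samples a statistically likely word.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 4) for p in probs])))
print("sampled:", next_word)
```

"Paris" usually comes out on top, but not because anything was verified; it's just the highest-probability continuation. When the training data is thin or ambiguous, the same mechanism confidently samples something wrong.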
They're good for summarizing a bounded amount of text, checking grammar, etc. (word-focused activities).
If they were used correctly, "hallucination" wouldn't even exist as a complaint, because everyone would know they aren't supposed to be used as a source of verifiable information.
To be fair, they're marketed correctly: "ARTIFICIAL Intelligence" is exactly that, artificial. It's not "Actual Intelligence", because there's no intelligence there; it's ALL probability-based guessing.