telling an llm 'do not hallucinate' does not work b/c not hallucinating would require it to be able to
(1) examine its own output
(2) extract the semantic meaning of that output
(3) compare that meaning to some external body of data
when the reason it 'hallucinates' is precisely that it does not have those capabilities
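
to make the point concrete, here's a rough python sketch of the kind of *external* verification loop steps (1)-(3) would need. everything in it (naive sentence splitting, substring matching, a hard-coded corpus) is a toy placeholder for real semantic extraction and retrieval, not an actual method -- the point is that all of this machinery has to live outside the model, so a prompt instruction can't summon it:

```python
# toy sketch: the verification loop 'do not hallucinate' quietly presupposes.
# the splitting, matching, and corpus below are placeholders, not a real
# fact-checking system -- none of this exists inside the model's forward pass.

def extract_claims(output_text: str) -> list[str]:
    # (1) + (2): re-read the output and pull out checkable claims.
    # a real system would need semantic parsing; this just splits on periods.
    return [s.strip() for s in output_text.split(".") if s.strip()]

def is_supported(claim: str, corpus: list[str]) -> bool:
    # (3): compare each claim against some external body of data.
    # a real system would need retrieval + entailment; this is substring matching.
    return any(claim.lower() in doc.lower() for doc in corpus)

def verify(output_text: str, corpus: list[str]) -> list[tuple[str, bool]]:
    return [(claim, is_supported(claim, corpus)) for claim in extract_claims(output_text)]

if __name__ == "__main__":
    corpus = ["The Eiffel Tower is in Paris"]
    generated = "The Eiffel Tower is in Paris. The Eiffel Tower was built in 1850."
    for claim, ok in verify(generated, corpus):
        print("supported" if ok else "unsupported", "->", claim)
```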
— haliey welch-fargo (esvrld@normal.style), 03-May-2025