and likewise for anything else you might want to tell an llm, like 'follow this style guide', 'do not claim you have capabilities you don't', or even telling it not to use a certain word or expression in its answer. because it's not a machine that 'knows' anything. what you're doing — all you're doing — when prompting an llm is requesting it output a string of tokens that would be statistically likely to follow from your prompt, per its corpus. and that's all it does.
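
to make 'statistically likely to follow' concrete, here's a rough sketch of the sampling loop that sits under every llm. gpt2 and the hugging face transformers library are just stand-ins for illustration (the prompt string is arbitrary too, not anyone's real system prompt): the model scores every token in its vocabulary, you sample one, append it, and go again.

```python
# rough sketch of autoregressive sampling, not any provider's actual pipeline.
# assumes torch + transformers are installed; gpt2 is used purely as an example.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "do not claim you have capabilities you don't."  # illustrative prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]            # scores for the next token only
        probs = torch.softmax(logits, dim=-1)                 # scores -> probability distribution
        next_id = torch.multinomial(probs, num_samples=1)     # sample one token from it
        input_ids = torch.cat([input_ids, next_id], dim=-1)   # append and repeat

print(tokenizer.decode(input_ids[0]))
```

nothing in that loop reads your instruction and 'decides' to obey it. the instruction is just more tokens in the prompt, and all it can do is shift which continuations come out as likely.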