@inthehands @paninid It may be manual intervention, because it's a common fault.
Or it might be that, because it's a common fault, there's a lot of content on the Internet joking about it. The LLM's training data is newer and contains that content, and in some magical way this leads the LLM to score this combination of words as more relevant to the question.
The important thing in the end is: The LLM doesn't _understand_ anything.
Philipp Weiß (thewhite969@chaos.social)'s status on Friday, 25-Apr-2025 21:24:22 JST