@Flux I tend to think it's more like "What output would best fool this human into thinking I understood the question and know the answer?"
Funny thing: this is exactly how an incompetent human would behave in a job interview, which is why I call LLMs "Artificial Incompetence."