There is no reason we can't attempt to model these things. But with LLMs, people react to the generated content as if all of the structure that normally sits behind a human-written paragraph were actually there.
We infer and project emotions, goals, logic, and the ability to categorize onto the text. But none of these things are present, and the result is deeply strange errors that not even an ant would make.