@inthehands @twsh I’m not convinced this framing is useful.
I agree with points 1 and 2, and even accept 3 as a logical conclusion. But there’s a counterexample: AIs produce coherent language. It’s often hard, sometimes impossible, to tell what kind of author produced a piece of text generated by an LLM. So from the reader’s point of view, the text says things. The fact that the AI has no communicative intent, or no coherent world model, is irrelevant.
This is not unique to AI. People regularly glean meaning that was not intended by the speaker or writer.