@emilymbender The obvious counter to this is that LLMs are not strictly given _text_ but also _communications_, and in particular, through RLHF, _actual communications and feedback on their "utterances"_.
I agree with the basic point that LLMs are far more appearance than substance, and that people are very good at finding meaning where there is none, but there are pathways open to LLMs, as they exist or might exist, that are not present in your example.
@emilymbender I wonder whether, in general, it is fair to conclude that just because we cannot imagine how something could be done, some other learning system (such as an LLM) cannot do it.
Also, this reminds me of the time an astronomer thought he could see canals on Mars that appeared to have been created by an intelligence. There are none, of course, so another observer later quipped that the lines he thought he saw in his telescope were indeed signs of intelligence, just at the opposite end of the scope...