@only_ohm This is a common AI fan fallacy, and yes, we are quite sure of that.
The topic is a bit too large for a toot, but the important ingredients are: the ability to experiment on the physical world and build a model from the results of those experiments, consequences (that is, how the above impacts survival or future capabilities), concepts that are not modelable as patterns of language tokens, and so on.