@markgritter
All well said. And re this:
“we are very far off from any sort of expert-level LLMs”
Combine that with the observation that modern LLMs are trained on orders of magnitude more text than any human brain ever takes in, and it suggests to me that “very far off” in that sentence is about intrinsic structure, not just scale. LLMs are at best doing some proper subset of whatever “intelligence” is in humans; something huge is missing. Larger models won’t change that.