I think over- and underestimating is a good way of putting it.
I’m not as confident as you that the statistical approach that underpins LLMs produces anything like what we could reasonably call understanding, though.
It may well be a *component* of understanding (making associations is key), but it’s not at all clear that it can produce the other elements of reasoning: logic, math, semantics, etc.