A Chinese Room computer would still lack an understanding of the concept behind a word, symbol, or term.
By "concept" I mean the blurry network of associations our brain activates each time we hear a word. (Sorry, not a native speaker; in German, it's the distinction between Symbol (symbol), Begriff (concept), and Ding (the real or abstract thing).)
LLMs could reproduce analogies if they had them in their training set, but since they don't have the blurry abstractions (concepts) we have, every analogy or pun they'd produce would have the same probability. They have no real-world reference against which to weigh its sense.
While a (very sparse) kind of concept might be present in their probability matrix, I'm also sceptical whether LLMs could ever introspect these layers, neither when asked nor as an inner "thought process".