@Gergovie @clive @thomasfuchs I think that's way too reductive. LLMs absolutely do something that *looks* like understanding and reasoning.
The problem is that we don't have good ways to characterize what they actually *do*, so it's really hard to know when their output is reliable enough to stand in for genuine logic and interpretation.