The assertion that LLMs are "capable of surprisingly sophisticated reasoning" is supported with a link to an article @willknight wrote on the "Sparks of AGI" paper + criticism of it.
Extruding synthetic text is not reasoning. If the extruded text looks like something sensible, it is because we have made sense of it. I find it dismaying that even critical journalists like @willknight feel the need to repeat these tropes.