Apparently Apple has published a paper on how LLMs don't do reasoning. This isn't a surprise if you know what LLMs are, but it might be helpful in defusing some of the mainstream idiocy going on around them. https://arxiv.org/pdf/2410.05229
Heard about it via https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and