No shit: “this decline [in the capability of LLMs to perform logical reasoning through multiple clauses] is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”
A paper from Apple researchers (the “GSM-Symbolic” study) showing that genAI models don’t actually understand what they read; they just regurgitate what they’ve seen, like a puppy wanting to please its owner:
https://arxiv.org/pdf/2410.05229
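As I read it, the paper’s trick is to take grade-school math problems, swap out the names and numbers, and optionally slip in a clause that sounds relevant but changes nothing about the arithmetic (what they call GSM-NoOp); accuracy drops anyway. Here’s a toy sketch just to make that kind of perturbation concrete — the template, names, and `make_problem` helper are my own illustration, not the authors’ code or benchmark:

```python
# Toy sketch (not the paper's code): generate variants of a grade-school
# math problem by swapping names/numbers and optionally injecting a
# distractor clause that looks relevant but does not change the answer.
import random

NAMES = ["Maya", "Omar", "Lena"]

def make_problem(inject_distractor: bool = False, seed: int = 0):
    rng = random.Random(seed)
    name = rng.choice(NAMES)
    friday = rng.randint(30, 60)    # kiwis picked on Friday
    saturday = rng.randint(10, 30)  # kiwis picked on Saturday
    distractor = (
        f" Five of the kiwis {name} picked were a bit smaller than average."
        if inject_distractor else ""
    )
    question = (
        f"{name} picked {friday} kiwis on Friday and {saturday} more on Saturday."
        f"{distractor} How many kiwis did {name} pick in total?"
    )
    answer = friday + saturday  # the distractor never affects the math
    return question, answer

if __name__ == "__main__":
    for flag in (False, True):
        q, a = make_problem(inject_distractor=flag, seed=42)
        print(q, "->", a)
```

A model that genuinely reasons should give the same answer with or without the distractor clause; the paper reports that models instead tend to act on it (e.g. subtracting the “smaller” kiwis), which is exactly the pattern-matching-over-reasoning point.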