The core strength of LLMs is that they can synthesize text that sounds convincing because it is well-formed, follows appropriate communication conventions, and echoes relevant fragments of information.
A major weakness of LLMs is that they are fundamentally incapable of guaranteeing factual or logical correctness — or even of improving correctness relative to their inputs. They are bullshit generators.
2/3