@eviltofu Disclaimer: This is the simplified version that leaves out a lot of nuance.
Usually there are prebuilt prompts asking the model to confirm its own work. It can also generate its own follow-up prompts to probe other aspects of the original line of reasoning. There are several variants of this "chain of reasoning" idea, such as Chain-of-Thought, Tree of Thoughts, and self-consistency prompting. (Mixture of Experts often gets lumped in with these, but it's a model architecture, not a prompting technique.)
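To make the "ask it to confirm its own work" part concrete, here's a minimal sketch of that loop in Python. The `llm()` function is a hypothetical stand-in for a real model call (stubbed with canned replies so the control flow is visible); the prompt wording is illustrative, not any specific product's actual prompts:

```python
def llm(prompt: str) -> str:
    # Stand-in for a real model call; returns canned text so the
    # loop below is self-contained and runnable.
    if "Check the reasoning" in prompt:
        return "VERDICT: OK"
    return "Step 1: ... Step 2: ... Answer: 42"

def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
    # First pass: elicit step-by-step reasoning (Chain-of-Thought style).
    draft = llm(f"Think step by step, then answer: {question}")
    for _ in range(max_rounds):
        # Prebuilt verification prompt: the model critiques its own draft.
        verdict = llm(
            "Check the reasoning below for errors.\n"
            f"{draft}\n"
            "Reply 'VERDICT: OK' or list the problems."
        )
        if "VERDICT: OK" in verdict:
            break
        # Fold the critique back in and revise.
        draft = llm(
            "Revise this answer using the critique.\n"
            f"Answer: {draft}\nCritique: {verdict}"
        )
    return draft

print(answer_with_self_check("What is 6 * 7?"))
```

Tree of Thoughts extends the same idea by branching into several candidate reasoning paths and scoring them, rather than revising a single draft.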
But at its core, it's still an LLM predicting the next most likely token (roughly, the next word or word fragment).