@amszmidt If you try to explain LLMs in a Markov chain context, an LLM is a very complex Markov chain where the "states" are words or tokens, and the "transitions" are the probabilities of predicting the next word based on the current state and context.
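For illustration, here is a bare-bones version of that idea as a toy Python sketch. The corpus and names are invented for the example, and a real LLM is nothing like this simple:

```python
import random
from collections import defaultdict

# A first-order Markov chain over words: the "state" is just the previous
# word, and "transitions" are empirical next-word frequencies from a corpus.
# The corpus here is a made-up toy example.
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Loop the corpus back on itself so every word has an outgoing transition.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    transitions[prev].append(nxt)

def markov_next(word: str) -> str:
    # Sample the next word from the transitions observed after `word`.
    return random.choice(transitions[word])

# Generate a short sequence; the model only ever "sees" one word back.
word = "the"
output = [word]
for _ in range(6):
    word = markov_next(word)
    output.append(word)
print(" ".join(output))
```

To make that framing exact for an LLM, the "state" would have to be the entire context window rather than one word, at which point the transition table is astronomically large and has to be learned rather than tabulated.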
Note how many qualifiers it takes to push that description far beyond a Markov chain. The similarity is kilometres wide but only centimetres deep.
One could argue that a person editing a document is doing exactly this.