I'm sorry if this is upsetting to some people, but yes, LLMs are in fact just spicy autocomplete. I get it, machine learning models look like magic. They can do things that until now only humans have ever been able to do, so the natural impulse is to treat them as human-like and ascribe things like memory, learning, understanding, creativity, even self-awareness to them.
Of those things, machine learning models only have two: memory and learning. But even those happen before any of us ever interact with the model. The memory and learning, as much as you can call them that, happen during training on datasets, but by the time anyone interacts with an LLM, training is done and the weights are set in stone. It will never learn anything new while responding to you, and it will never truly remember anything.
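
You can check this yourself. Here's a minimal sketch (assuming PyTorch and the Hugging Face transformers library are installed; "gpt2" just stands in for any checkpoint): fingerprint the weights, have the model generate some text, and confirm that nothing changed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no gradients, no weight updates

def weight_fingerprint(m: torch.nn.Module) -> float:
    # Sum of every parameter value; any weight update would change it
    return sum(p.sum().item() for p in m.parameters())

before = weight_fingerprint(model)

inputs = tokenizer("The model will remember this, right?", return_tensors="pt")
with torch.no_grad():  # generation is a pure forward pass
    outputs = model.generate(**inputs, max_new_tokens=20)

after = weight_fingerprint(model)
print(tokenizer.decode(outputs[0]))
print(before == after)  # True: the parameters are unchanged
```

Training is the only step that ever changes those parameters; generation just reads them.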