@malwaretech yeah, careful with the analogies: there's evidence that LLMs do more than just retrieve from memory. At least, there's evidence of reasoning and compositionality. Besides, you could literally implement the retrieval model you describe, and its QA performance would be far worse than that of modern LLMs.