It's not that some epistemic model has gone wrong; the whole technical problem they solve is how to output *plausible*-sounding text. If they happen to solve that by plagiarizing correct text, that's as much a solution as grinding up a bunch of unrelated texts to make a response.
To put it bluntly, if an LLM tells you that something is true, you have received precisely no evidence either way as to the correctness of that claim.