Over longer periods of time LLMs may actually get worse: they’ll inadvertently hoover up the shit other LLMs write and use it as training data for their own neural networks.
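This feedback loop is often called "model collapse," and the core mechanism can be sketched with a toy simulation: repeatedly sample from a token distribution, refit on those samples, and repeat. Once a rare token fails to show up in one generation's sample, it's gone for good, so the distribution can only lose diversity over time. Everything below (vocabulary, sample sizes, generation count) is a made-up illustration, not a model of any real training pipeline.

```python
import random
from collections import Counter

def fit_and_sample(dist, n, rng):
    """Draw n tokens from dist, then 'train' the next generation by
    taking the empirical frequencies of what was drawn."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    draws = rng.choices(tokens, weights=weights, k=n)
    counts = Counter(draws)
    return {t: c / n for t, c in counts.items()}

rng = random.Random(0)

# Hypothetical vocabulary: 50 tokens with a Zipf-like long tail of rare ones.
dist = {f"tok{i}": 1 / (i + 1) for i in range(50)}
total = sum(dist.values())
dist = {t: w / total for t, w in dist.items()}

# Each generation trains only on the previous generation's output.
history = [len(dist)]
for gen in range(30):
    dist = fit_and_sample(dist, n=200, rng=rng)
    history.append(len(dist))

print(f"distinct tokens: generation 0 = {history[0]}, "
      f"generation 30 = {history[-1]}")
```

The key invariant: a token absent from one generation's sample has probability zero in the next, so the vocabulary shrinks monotonically. The tail dies first, which is exactly the worry about LLMs feeding on each other's output.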