This is interesting, but I don't quite agree. I don't think this is model collapse, per se. When you do "search" with an LLM, what you're actually doing is RAG: the provider isn't constantly re-training the model on the online content added to its index over the last 48 hours. Instead, it queries a vectorized index of that content with your vectorized search terms, dumps the retrieved context into the LLM's prompt, and returns a long, chatty result. https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/
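To make the distinction concrete, here's a minimal sketch of that RAG flow. Everything in it is illustrative, not any provider's actual API: a toy bag-of-words "embedding" stands in for a real embedding model, and the final generation call is omitted since it varies by vendor. The point is that retrieval happens at query time against a pre-built index, with no weight updates anywhere.

```python
# Minimal RAG sketch: embed the query, rank indexed documents by
# similarity, and stuff the winners into the LLM prompt. The embed()
# function is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "index": recently crawled content, embedded once at index time.
docs = [
    "Model collapse happens when models train on synthetic data.",
    "Retrieval-augmented generation queries an index at answer time.",
    "The weather today is sunny with light winds.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=2):
    # Vectorize the query and rank documents by cosine similarity.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def build_prompt(query):
    # Dump the retrieved context into the prompt. The actual LLM call
    # would go after this; the model's weights never change.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("how does retrieval-augmented generation work?"))
```

Note that nothing here touches training: the model only ever sees the retrieved text in its context window, which is why stale or synthetic index content is a retrieval-quality problem rather than model collapse.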