@thedodger Yes, that has been demonstrated:
https://www.nature.com/articles/s41586-024-07566-y
> We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear.
and
> The described process demonstrates that fine-tuning of language models does not curb the effects of model collapse
Put concretely: through AI inbreeding, all dogs end up as Golden Retrievers.
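
To make the "disappearing tails" bit tangible, here's a minimal toy sketch (my own illustration, not the paper's code or setup; breed names and numbers are made up): each generation "trains" on a finite sample drawn from the previous generation's output distribution. Rare breeds that happen to draw zero samples are gone for good, so the tail of the distribution vanishes and the common breed takes over.

```python
import numpy as np

rng = np.random.default_rng(0)

breeds = ["Golden Retriever", "Labrador", "Beagle", "Puli", "Otterhound"]
probs = np.array([0.50, 0.30, 0.15, 0.04, 0.01])  # generation 0: the "real" mix

n = 200  # finite training set per generation
for gen in range(31):
    counts = rng.multinomial(n, probs)  # training data sampled from the current model
    probs = counts / n                  # next model = empirical distribution of that data
    if gen % 10 == 0:
        dist = ", ".join(f"{b}: {p:.2f}" for b, p in zip(breeds, probs))
        print(f"gen {gen:2d}: {dist}")
```

Once a breed's count hits zero it can never come back, which is exactly the irreversibility the paper describes.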