The Curse of Recursion: Training on Generated Data Makes Models Forget

arXiv:2305.17493v2 [cs.LG]

[Submitted on 27 May 2023 (v1), last revised 31 May 2023 (this version, v2)]

Authors: Ilia Shumailov and 5 other authors

Abstract: Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
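
The collapse mechanism the abstract describes is easy to reproduce in miniature. The sketch below (not code from the paper; the sample size, generation count, and seed are arbitrary assumptions) fits a one-dimensional Gaussian by maximum likelihood to samples drawn from the previous generation's fit, the simplest case of a model trained on its own output:

import numpy as np

rng = np.random.default_rng(0)

n_samples = 100       # finite sample drawn per generation (assumed value)
n_generations = 1000  # number of fit-on-own-output rounds (assumed value)

mu, sigma = 0.0, 1.0  # generation 0: the "real" data is standard normal

for _ in range(n_generations):
    # Sample from the current generative model ...
    data = rng.normal(mu, sigma, n_samples)
    # ... then refit the model on its own output (Gaussian MLE).
    mu, sigma = data.mean(), data.std()

print(f"mu = {mu:.4f}, sigma = {sigma:.6f}")
# sigma typically collapses towards zero: each refit from a finite
# sample underestimates the spread on average, the errors compound
# multiplicatively, and the fitted distribution's tails vanish.

With these settings sigma typically shrinks by orders of magnitude while mu drifts in a slow random walk; this single-Gaussian toy is the degenerate analogue of the GMM, VAE, and LLM collapse the paper studies.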

Submission history

From: Ilia Shumailov
[v1] Sat, 27 May 2023 15:10:41 UTC (1,773 KB)
[v2] Wed, 31 May 2023 10:39:26 UTC (1,847 KB)



https://arxiv.org/abs/2305.17493v2