AI Models Are Cannibalizing Each Other—and It Might Destroy Them.

  • 2025-07-01 08:00:00
  • Vice

The "Model collapse” is a phenomenon in which models used in machine learning, such as the Large Language Models so much in vogue nowadays, gradually degrade due to errors resulting from training based on unsuitable content, such as the results of another model, or previous versions of the same subject. Considering the massive spread of artificial intelligence, therefore, the tech world seems to be getting closer and closer to a model collapse of frightening proportions.

Because of the sheer scale at which this technology is used, the most widely deployed Large Language Models have begun to cannibalize one another, consuming content generated by their counterparts rather than the human-made material they are, technically, supposed to be able to replicate "perfectly."

The consequences of such a phenomenon include a general, ongoing malfunctioning of these AI systems, with results that grow less and less accurate and increasingly offensive. Do we really need a model collapse to remind the world what life was like before AI?