# Model Collapse & Data Poisoning

Simulates the catastrophic decay of AI model intelligence as models begin training on their own synthetic output, leading to "Model Collapse" and loss of information diversity.

## The Inbreeding of Intelligence

Train a model on text written by an AI and it will learn that AI's quirks and errors. Train a *third* model on the output of the *second*, and those errors compound with each generation. This is known as 'Model Collapse.'

### FAQ

**Q: Why does synthetic data 'poison' the well?**
A: Loss of distributional diversity. AI models are probability engines; they gravitate toward the most likely outputs, trimming away the 'tails' of the distribution. Over generations of self-training, the model loses the ability to reproduce rare, creative, or complex information. It becomes a 'Habsburg AI': functionally degraded and riddled with semantic defects. This tool models the looming 'Clean Data Crisis,' in which verifying that training data was written by a human may become the most expensive part of the AI supply chain.
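The tail-trimming dynamic can be sketched with a toy simulation (an illustrative assumption, not the tool's actual implementation): each "generation" fits a Gaussian to the previous generation's data, samples from it, and keeps only the most probable region, mimicking a model's bias toward likely outputs. The spread of the data collapses rapidly.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data with healthy variance
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

spreads = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # The next model trains on samples from the fitted distribution,
    # but mode-seeking bias keeps only outputs within one standard
    # deviation of the mean -- the 'tails' are discarded each time.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(10_000))
            if abs(x - mu) <= sigma]

print(f"gen 0 spread: {spreads[0]:.3f}, gen 9 spread: {spreads[-1]:.4f}")
```

Each truncation shrinks the standard deviation by roughly half, so after a handful of generations the "model" can only produce near-identical outputs: the rare and unusual regions of the original distribution are gone.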