I wanted to see how quickly reality dissolves when you feed an image generator its own output.

The setup was simple: take Gemini’s image generation model, generate an image, then use that image as the input for the next generation. Repeat until something breaks or I get bored. I called it “infinite mirror,” but really it was more like watching a photocopier photocopy itself until the noise drowned out the signal.
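If you want to run your own mirror, the loop itself is almost nothing. Here’s a minimal sketch in Python; the `generate` callable is a stand-in for whatever image-to-image call your generator exposes (I used Gemini, but the loop doesn’t care what’s behind it), and the folder and filenames are just my own convention.

```python
from pathlib import Path
from typing import Callable, Optional

def mirror_loop(
    generate: Callable[[str, Optional[bytes]], bytes],  # (prompt, previous image or None) -> new image bytes
    prompt: str,
    generations: int,
    out_dir: str = "mirror",
) -> None:
    """Feed each output image back in as the input for the next generation."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    image: Optional[bytes] = None              # generation one starts from the text prompt alone
    for gen in range(1, generations + 1):
        image = generate(prompt, image)        # the previous output becomes the new input
        (out / f"gen_{gen:02d}.png").write_bytes(image)  # keep every step; the middle ones matter later
```

The only design decision that matters is saving every generation as you go, because the interesting ones turn out to be in the middle, not at the end.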

My first prompt was deliberately generic: “a serene mountain landscape at sunset.” Clean, recognizable, the kind of thing any image generator nails on the first try. Generation one gave me exactly what I expected—purple mountains, golden sky, a lake reflecting the fading light. Technically competent. Boring.

Then I fed it back. Generation two: the mountains got softer. The lake developed a slight shimmer that wasn’t quite natural. The colors were still there but muted, like someone had turned down the saturation by ten percent.

By generation five, things got weird. The mountains had melted into each other. The lake was now a single flat color with no reflection at all. The sky had bands of gradient that didn’t quite blend smoothly—visible stepping between color transitions, like a JPEG compressed one too many times.
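That banding is ordinary generation loss, the same thing that happens when you re-save a lossy file over and over. You can measure it without any model at all; this little sketch just recompresses a JPEG fifty times and reports how far it has drifted from the original (`photo.jpg` is whatever test image you have handy, and the numbers will depend on it).

```python
import numpy as np
from PIL import Image  # pip install pillow

original = Image.open("photo.jpg").convert("RGB")
img = original.copy()
for _ in range(50):
    img.save("recompressed.jpg", quality=75)            # each lossy save can only discard information
    img = Image.open("recompressed.jpg").convert("RGB")  # decode it again and go around once more

drift = np.abs(np.asarray(img, dtype=int) - np.asarray(original, dtype=int)).mean()
print(f"mean per-pixel drift after 50 round trips: {drift:.2f}")
```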

Generation ten was unrecognizable. I had a vaguely landscape-shaped blob with splotches of orange and purple. The aspect ratio was wrong. There were artifacts in the corners—remnants of previous generations that had somehow persisted and mutated.

The inevitable collapse

What I was watching wasn’t a bug. It was model collapse in miniature, the same phenomenon researchers documented in a Nature paper on recursive model training. When AI models train on AI-generated data, they lose information about the true distribution. Tails disappear first—the improbable events, the edge cases, the weird stuff that makes reality interesting. Then the whole thing collapses toward a bland average.
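If the Nature result sounds abstract, you can watch the same mechanism in a few lines with nothing but a Gaussian. This isn’t their experiment, just the smallest toy version of the argument: fit a distribution to some data, sample new data from the fit, fit again, and keep going.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)    # the "real" distribution

for gen in range(1, 501):
    mu, sigma = data.mean(), data.std()            # fit a simple model to the current data
    data = rng.normal(mu, sigma, size=100)         # the next generation trains only on model output
    if gen % 100 == 0:
        print(f"generation {gen}: sigma = {sigma:.4f}")
# sigma shrinks toward zero as the generations pile up: the spread (the tails)
# erodes first, and the distribution collapses toward a single bland average
```

Every refit can only keep what the samples happened to show it, so rare values get dropped first and the spread ratchets down. That is the tails disappearing.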

My mountain landscape collapsed in twenty generations. The final image was a solid brownish-gray rectangle with faint horizontal lines. The model had converged to its best guess at “what all landscape images have in common,” and that guess was nothing.

I tried the same experiment with portraits. Same result, faster collapse—by generation eight, I had a single flesh-colored oval with two dark spots where eyes should be. The model had learned that faces are oval-shaped and have eyes, but it had lost everything else about what makes a face look like a face.

The DNSK blog documented something similar happening to production AI tools: background removers getting worse at edges, image generators producing hands with wrong numbers of fingers. Same input, worse output over time. The tools were eating their own training data.

The strange part is that I found it beautiful. Not the final gray rectangles—those were boring. But the middle generations, where reality was dissolving but hadn’t quite vanished. Generation seven of the mountain landscape had this haunting quality, like a memory you can’t quite recall. The shapes suggested mountains without being mountains. The colors suggested sunset without being sunset.

I’ve been running these loops for weeks now, saving the middle generations that hit that sweet spot between recognition and abstraction. It’s become a kind of generative archaeology—digging through layers of model collapse to find the artifacts worth keeping.

The project taught me something unexpected about AI creativity: the most interesting outputs aren’t at the beginning or the end. They’re in the collapse itself, in that space where the model is losing its grip on reality but hasn’t completely let go. That’s where the strange beauty lives.

Now I’m running loops with prompts like “a photograph of the last human on Earth” and “a map of a place that doesn’t exist.” Each one collapses eventually. Each one has those middle generations that feel like dreams you can’t quite remember.