When Mark Zuckerberg announced Vibes in late September, the platform seemed designed to answer a question nobody had asked. Users would scroll through an endless feed of AI-generated videos, short-form content synthesized entirely by machines, created from text prompts and remixed without human hands ever touching a camera.
The response was immediate and nearly universal. “Gang nobody wants this,” read one of the top comments on Zuckerberg’s announcement. TechCrunch’s headline deployed the term that had been circulating in creative communities for months: “Meta launches Vibes, a short-form video feed of AI slop.” On The Daily Show, comedian Michael Kosta put it more bluntly, describing Vibes as a feed for “fat little pigs” and suggesting that Meta wanted to “watch you eat yourself to death.”
The disgust is visceral, and the term captures it. “AI slop,” the label for low-effort, machine-generated content churned out at scale, feels less like art than like runoff, the waste product of systems optimized for volume rather than quality. AI slop is the uncanny valley applied across entire ecosystems of content.
Yet five days after Vibes launched, OpenAI released its Sora video generation app to the public. Within forty-eight hours, Sora hit number one on the App Store. The backlash was identical; the adoption was immediate. Whatever people said they wanted, whatever revulsion they expressed, tech executives were betting that consumer behavior would diverge from stated preference. And the economics were too compelling: AI content costs a fraction of what traditional video production does, can be generated continuously, and keeps users scrolling. Whether it’s good, in any meaningful sense, has become beside the point.
But what if that initial revulsion, the response before the rationalization, represents genuine wisdom? Twenty-four centuries ago, Plato warned that consuming imitations of truth corrupts our capacity to recognize actual truth; repeated exposure to copies of copies trains us to prefer shadows over reality. His theory of “mimesis” rests on a hierarchy of distance from reality, with each removal representing not just aesthetic degradation but a kind of spiritual pollution, a corruption of what he called the soul’s capacity for understanding.
The warning seems abstract. But recent research in computer science suggests that Plato may have been diagnosing something that is now measurable. AI models trained recursively on their own outputs undergo irreversible degradation, losing rare patterns while converging toward statistical averages. The mathematics confirms what the ancient hierarchy predicted: copies of copies collapse toward mediocrity, and the collapse is built into the imitation process itself. The world is being degraded by its own copies, and we are growing comfortable with the flight from reality.
Imitations All the Way Down
In Book X of the Republic, written around 375 BC, Plato makes a claim that sounds almost petty in its specificity: “All poetical imitations are ruinous to the understanding of the hearers, and that the knowledge of their true nature is the only antidote to them.” Not that poetry is sometimes misleading, or that bad poetry corrupts, but that consuming imitations of something good inherently damages our understanding of what good means. The structure of Plato’s argument helps explain something peculiar about machine-generated content: why the medium itself, independent of quality, might matter.
Plato’s theory rests on his hierarchy of reality. Whatever one makes of his metaphysics—and philosophers have spent millennia debating whether his Forms actually exist—the structure proves useful for understanding what happens when machines learn from machines. At the top are the Forms: perfect, eternal ideas accessible through philosophical inquiry. Below that are physical objects, imperfect copies of these Forms. A carpenter crafting a bed imitates the Form of bed-ness, working from an understanding of what makes a bed a bed. At the bottom are artistic representations: the painter’s image of a bed, which captures only the appearance of one particular bed from one particular angle. This makes the painting “thrice removed from truth,” or an imitation of an imitation of the ideal.
The distance from the original Form is essential. The carpenter needs to understand the function, structure, and purpose of a bed to transform a piece of wood into something sturdy, reliable, and comfortable to lie on. The painter needs only to capture how light falls on wood, how fabric drapes, and what the eye sees from a single perspective. Art, in Plato’s framework, imitates appearances rather than engaging with reality. Each imitation means less understanding and less connection to what makes something what it is.
AI-generated content extends this descent in ways Plato couldn’t have imagined but his hierarchy anticipates. Machine learning models don’t train on physical objects or even on direct observations. They learn from digital datasets of photographs, descriptions, and prior representations that are themselves already copies. When an AI generates an image of a bed, it isn’t imitating appearances the way a painter does; it is extracting statistical patterns from millions of previous copies: photographs taken by photographers already working at one remove from the physical object, processed through compression algorithms, tagged with descriptions written by people looking at the photographs rather than the beds. The AI imitates imitations of imitations.
And then these AI-generated outputs, what the field calls synthetic data, become training data for the next generation of models. AI training on AI: copies of copies of copies of copies. Each iteration moves further down Plato’s hierarchy, further from anything resembling reality, a mathematical severing from the real.
Last year, Nature published research that reads like experimental confirmation of Platonic metaphysics. Ilia Shumailov and colleagues at Cambridge and Oxford tested what happens under recursive training—AI learning from AI—and found a universal pattern they termed model collapse. The results were striking in their consistency. Quality degraded irreversibly. Rare patterns disappeared. Diversity collapsed. Models converged toward narrow averages.
A language model trained on Wikipedia text degraded after nine generations into mechanical nonsense: “architecture. In addition to being home to some of the world’s largest populations of black-tailed jackrabbits, white-tailed jackrabbits, blue-tailed jackrabbits, red-tailed jackrabbits, yellow-.” The sentence trails off into absurdity, the model having lost any capacity for coherent continuation. Image generation showed the same pattern: distinct handwritten digits blurred into indistinguishable forms as the model averaged everything toward prototypes. The digits didn’t just become worse; they became the same.
The researchers showed that this degradation is not a quirk of implementation but a consequence of recursive training itself. Models learn by reproducing the most probable patterns and discarding outliers, so with each generation the rare, and often the most meaningful, data becomes impossible to generate. Plato warned that consuming such copies of reality, no matter how well crafted, habituates us to mediocrity.
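The mechanism is simple enough to watch in miniature. The sketch below is a toy illustration rather than a reproduction of the Nature experiments: it fits the crudest possible model, a single Gaussian, to its own synthetic output over and over, with a sample size and generation count chosen purely for demonstration.

```python
# A toy model-collapse loop: fit a Gaussian to data, sample synthetic data
# from the fit, then fit the next "generation" to those samples.
# Sample size and generation count are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_generations = 50, 200

data = rng.normal(0.0, 1.0, size=n_samples)       # generation 0: "real" data
for g in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std()            # the model: a fitted Gaussian
    data = rng.normal(mu, sigma, size=n_samples)   # the next model trains on this
    if g % 40 == 0:
        print(f"generation {g:3d}: fitted spread = {sigma:.3f}")
# Each fit is a slightly biased, noisy copy of the last, and resampling locks
# the error in: over the run the fitted spread shrinks toward zero, and the
# tails of the original distribution are the first thing to go.
```

In one dimension the collapse is just a shrinking bell curve; in a language or image model, something like the same arithmetic plays out across millions of dimensions at once.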
Other researchers have found variations on the theme. Rice University scientists called the phenomenon “Model Autophagy Disorder,” invoking mad cow disease as a metaphor. The comparison is apt: both involve recursive self-destruction through corrupted copying mechanisms, prions in one case and statistical patterns in the other. After five generations of synthetic training, their face generation models produced images that all looked like the same person, with bizarre gridlike artifacts spreading across the features like digital corruption. Researchers at Stanford and Berkeley found that GPT-4’s code generation ability dropped 81 percent over three months, the same period in which AI-generated content began proliferating online and, presumably, entering training datasets.
This addresses a common objection: that the medium doesn’t matter, that art is art regardless of how it’s produced. But with AI, the medium determines what can be created, because the process is recursive imitation. Statistically, AI cannot produce genuine outliers; rare patterns get averaged away by design. A photographer can seek unusual subjects and strange angles, deliberately working against convention. In optimizing for what is most probable, AI learns to forget what is least expected. The black swans disappear first. The sheer volume of AI-generated content compounds the problem: AI produces a thousand outputs per hour at near-zero cost while a human produces one. The cheap doesn’t just compete with the expensive; it floods quality out entirely.
As Plato’s hierarchy explains, the painter engaging with a physical bed is at least working from something real, however imperfectly perceived. The AI training on images of beds never touches reality but only patterns extracted from previous representations. When AI trains on AI, the connection to the real world diminishes.
Habituation as Education
Plato’s deeper concern wasn’t about epistemology, but culture. Repeated exposure to bad imitation, Plato argues, corrupts the soul through habit. The claim appears in Book III of the Republic (395d), where he’s discussing education in his ideal city. “Did you never observe,” he writes, “how imitations, beginning in early youth and continuing far into life, at length grow into habits and become a second nature, affecting body, voice, and mind?”
Culture, for Plato, is education (Republic 514a; Laws 653b-c). Music, poetry, visual art, and theatrical performance aren’t neutral entertainment but formative experiences that train character. What we repeatedly encounter shapes who we become. Exposure to artistic forms, whether ordered or chaotic, simple or complex, truthful or imitative, trains the soul toward corresponding dispositions. Plato’s claim is about human formation: simplified, homogenized, imitative forms train preferences for simplification, homogenization, and imitation; complex, rare, truthful engagement trains a capacity for complexity, an appreciation of rarity, and an orientation toward truth.
Culture isn’t a mirror that reflects existing values but the medium through which values and preferences are initially formed. The ethical dimension emerges here. If culture educates, then what we consume matters not just for pleasure or aesthetic judgment but for who we become capable of being. The question stops being whether AI-generated content is “as good as” human-created content in some abstract aesthetic sense. The question becomes what consuming content that is mathematically constrained to exclude novel output does to our capacity to perceive, appreciate, and desire anything else.
In model collapse, the tails of distributions disappear first: low-probability events, rare patterns, edge cases, minority data, outliers. The Cambridge researchers explicitly note that “low-probability events are often relevant to marginalized groups” and are “also vital to understand complex systems.” Rare medical conditions may be forgotten by diagnostic AI. Minority consumer preferences disappear in favor of bestsellers. Image generators asked for “dog” will produce golden retrievers and labs instead of rare breeds, because golden retrievers and labs appear most frequently in training data. Long-tail scientific papers, despite potential importance, may be excluded from model understanding because those papers are cited less frequently than mainstream work.
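The dog-breed example can be made just as concrete. In the sketch below, the “model” is nothing more than a table of breed frequencies refit, generation after generation, to samples drawn from the previous version of itself; the breeds and starting probabilities are invented for illustration.

```python
# Illustrative only: a "model" that is just a table of breed frequencies,
# retrained each generation on samples drawn from the previous generation.
import numpy as np

rng = np.random.default_rng(1)
breeds = ["golden retriever", "labrador", "beagle", "xoloitzcuintli", "otterhound"]
probs = np.array([0.45, 0.40, 0.10, 0.03, 0.02])      # the last two are the tail

for generation in range(1, 31):
    sample = rng.choice(len(breeds), size=50, p=probs)    # next model's training set
    counts = np.bincount(sample, minlength=len(breeds))
    probs = counts / counts.sum()                         # refit on synthetic data
    if generation % 10 == 0:
        alive = [b for b, c in zip(breeds, counts) if c > 0]
        print(f"generation {generation}: still generated -> {alive}")
# A breed that draws zero samples in any generation is refit to probability
# zero and can never come back; the rare breeds typically vanish within a few
# dozen generations while the common ones absorb their share of the mass.
```

Nothing in the loop is malicious; the tail simply never gets sampled often enough to survive being refit.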
But the deeper problem is what consumption of this reduced content does to us. If we habitually encounter mediocre representations, we learn to prefer average representations. Not through conscious choice or explicit persuasion but through the mechanism Plato identified as habituation: repeated exposure training the soul, or, in contemporary neuroscience terms, the neural architecture, toward corresponding dispositions.
The mere-exposure effect, documented across hundreds of studies, demonstrates that repeated presentations create preference without conscious cognition. Simply encountering something multiple times makes us like it more, with the effect reaching maximum strength within ten to twenty presentations. Processing fluency research shows that averaged, prototypical features feel immediately more pleasing than distinctive ones, with effects operating within seventeen to fifty milliseconds of viewing, faster than conscious awareness. The brain prefers what it can process easily, and prototypes are, by definition, what the brain has learned to process most easily. Work on perceptual narrowing finds that environmental exposure reshapes neural discrimination abilities through synaptic pruning: populations lose the capacity to perceive distinctions they don’t regularly encounter. It’s not just that we prefer what we see; we become unable to fully perceive what we don’t see.
Most concerning, research specifically examining human-AI feedback loops found that AI systems amplify biases through mechanisms operating below conscious awareness. In an emotion recognition task, humans showed a 53 percent bias toward certain categories. An AI trained on this data amplified the bias to 65 percent. Then, when humans interacted with the biased AI, their own bias rose to 61 percent over time. The conclusion: “AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgment escalate into much larger ones.” Crucially, participants underestimated the AI’s influence on their judgments even when explicitly warned about the effect.
What Plato called habituation, neuroscience measures as synaptic pruning and preference formation. The process isn’t neutral. AI-generated content systematically purged of long-tail rarity and optimized for processing fluency creates feedback loops: AI-averaged content leads to repeated exposure, which creates preference for convergent features, which generates demand for more AI content, which trains future models on even more homogenized data, which accelerates collapse. The cycle compounds.
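The compounding is easy to caricature. The toy loop below is not a validated model of anything; its coupling constants are invented, and its only purpose is to show how two mutually reinforcing declines, narrowing content and narrowing taste, accelerate each other.

```python
# A deliberately crude toy of the feedback loop: content diversity and the
# audience's taste for novelty each decay in proportion to the other.
# The 0.7 / 0.3 coupling constants are invented purely for illustration.
content_diversity = 0.90   # feeds already slightly narrowed by automated generation
taste_for_novelty = 1.00   # audience appetite for unusual work, today's baseline

for year in range(1, 11):
    content_diversity *= 0.7 + 0.3 * taste_for_novelty   # models train on narrower data
    taste_for_novelty *= 0.7 + 0.3 * content_diversity   # audiences adapt to narrower feeds
    print(f"year {year:2d}: diversity={content_diversity:.2f}, "
          f"taste for novelty={taste_for_novelty:.2f}")
# The early losses look negligible; by the end of the run each year's decline
# is steeper than the last, because each side's loss feeds the other's.
```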
Plato emphasizes “beginning in early youth” because formation during development has outsized effects. Children encountering predominantly AI-generated content from early ages are being trained in preferences for the averaged, the prototypical, and the easily processed. They’re not learning to appreciate complexity, rarity, or difficulty, and they’re not developing the capacity to discriminate subtle differences or to value what is unusual. The soul, in Plato’s framework, becomes shaped to desire what it repeatedly encounters. Neural architecture, in the contemporary neuroscience framework, becomes pruned to discriminate what it regularly processes. Either way, the result is the same: populations trained to prefer shadows.
A Harmful Miseducation
So, is AI slop bad for me? Yes, but the answer requires precision. Not all AI-generated content is equally harmful. Human-curated, AI-assisted work can maintain or even enhance quality through active collaboration, preserving cognitive engagement and creative agency.
When humans generate options with AI, select thoughtfully, and refine substantially, the results often surpass what either could produce alone. Early research at OpenAI demonstrated the same pattern. In 2022, the team behind InstructGPT showed that a 1.3 billion parameter model trained with reinforcement learning from human feedback outperformed the original 175 billion parameter GPT-3 model without it. Users preferred the smaller model’s responses across a wide range of tasks, illustrating that human guidance, even applied to a smaller system, can outweigh sheer computational scale.
Empirical studies of AI-assisted artists found similar effects. Examining over four million artworks from fifty thousand users, researchers Eric Zhou and Dokyun Lee found that artists who adopted AI tools produced pieces rated about fifty percent more favorably than their pre-AI work, but only when they actively curated. While a subset of artists generated genuinely novel work by selecting and refining from multiple AI outputs, average novelty declined as most users passively accepted automated generations.
Controlled writing experiments published in Science Advances confirmed the same tendency. Writers given curated AI suggestions produced stories rated 8 to 26 percent higher in quality and creativity than those using unfiltered generations or none at all. The effect was strongest for less experienced writers. Critically, only writers who actively selected from multiple AI suggestions showed improvements, while participants passively accepting single outputs gained no advantage; cognitive engagement, not automation, amplifies creativity.
But human-curated AI content is not primarily what you will encounter. You will encounter seemingly infinite feeds of unfiltered, fully automated generation optimized for engagement rather than quality. The economic incentives are overwhelming. Inference costs for some AI systems have fallen by more than ninety percent through hardware optimization, such as AWS Inferentia’s ability to run models far more efficiently than standard GPUs. Once trained, generative models operate with vanishingly low marginal costs, drawing only on electricity and compute rather than human labor, a dynamic Andreessen Horowitz describes as bringing the marginal cost of creation toward zero. And unlike human creators, AI systems can generate text, images, and video at speeds many orders of magnitude beyond human capacity. Platforms optimize for this scale because the economics reward output over originality: each new imitation costs less to generate than to resist.
Platforms choose automation not because they misunderstand the quality difference but because the costs of lower quality are externalized to users while the benefits of scale accrue to shareholders. The result is simple: the cheap overwhelms the expensive, the automated drowns out the curated, the collapsed replaces the diverse.
The high-volume AI feeds you actually encounter, not the carefully curated, human-guided work that exists in niche or premium contexts, train your preferences toward sameness through mechanisms faster than conscious thought. Processing fluency makes average content feel pleasing within fifty milliseconds. Perceptual narrowing reshapes neural discrimination through synaptic pruning. The mere-exposure effect peaks within ten to twenty presentations.
You will learn to prefer what you are given, and what you are given is recursive imitation, content systematically purged of rarity and optimized for immediate engagement.
So yes, AI slop is bad for you. Not because AI-generated content is immoral to consume or inherently inferior to human creation, but because the act of consuming AI slop reshapes your perception. It dulls discrimination, narrows taste, and habituates you to imitation. The harm lies less in the content itself than in the long-term training of attention and appetite.
Plato warned that imitations corrupt the soul unless we recognize them for what they are. That awareness, he believed, was the only antidote to deception. In our case, recognition may be all that remains.
You can curate carefully, seek out human-made or human-guided work, and limit exposure to automated feeds. These choices matter. They preserve awareness, the capacity to notice the difference between what is real and what is merely fluent. But such choices exist within systems built to maximize engagement, where each new imitation costs less to generate than to resist.
The window for resistance is now: the moment before habituation completes, before the average becomes preferable to the original. You may understand precisely how and why AI slop degrades perception, and still be unable to avoid it. That, perhaps, is the deeper cruelty of the present: our loss will not come through ignorance but through recognition that arrives too late to matter. The danger was never ignorance; it’s the quiet comfort of knowing something is synthetic and scrolling anyway.