The Algorithm Is Screaming at Your Child
YouTube discovered that children will watch anything. It decided that was a business model.
There is a video on YouTube of a cartoon elephant swimming in a pool.
The elephant enters the water at a physically impossible angle. Its trunk briefly detaches. A cheerful synthetic voice announces the letters of the alphabet in no particular order. The elephant's face cycles through four expressions in two seconds. The video is forty-one seconds long. It has nine million views.
This is not a failure of the content moderation system. This is the content moderation system working correctly and leaving the video alone because it does not technically violate any policy that currently exists.
A New York Times investigation published last week documented what happens after a toddler watches a single CoComelon video on YouTube. Within a fifteen-minute session, more than forty percent of the recommended Shorts contained AI-generated visuals. The clips featured warped faces, extra limbs, garbled text, and no narrative structure whatsoever. None ran longer than thirty seconds. The channels behind them were largely anonymous. They were earning ad revenue until the Times asked YouTube for comment, at which point five channels were suspended from the Partner Program. Reactive, not proactive. The journalism did the moderation.
This is worth sitting with. YouTube's algorithm had the information needed to identify these channels. It had view duration, engagement patterns, channel age, upload frequency, and content fingerprinting. It chose not to act on any of it because the videos were performing.
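To make the point concrete: none of those signals is exotic. A crude heuristic over exactly the inputs listed above could flag a channel like this in a few lines. This is a toy sketch with invented field names and guessed thresholds, not YouTube's actual system or policy:

```python
from dataclasses import dataclass

# Hypothetical per-channel signals, mirroring the ones named above.
# Field names and thresholds are illustrative assumptions only.
@dataclass
class ChannelSignals:
    median_view_seconds: float   # how long viewers actually watch
    channel_age_days: int        # how new the channel is
    uploads_per_day: float       # upload cadence
    near_duplicate_ratio: float  # share of uploads fingerprinted as near-copies

def slop_risk_score(s: ChannelSignals) -> int:
    """Count how many crude red flags a channel trips (0 to 4)."""
    flags = 0
    flags += s.median_view_seconds < 45      # sub-45-second clips
    flags += s.channel_age_days < 90         # brand-new, anonymous-style channel
    flags += s.uploads_per_day > 5           # mechanical upload cadence
    flags += s.near_duplicate_ratio > 0.5    # mostly templated near-copies
    return flags

# A channel resembling the forty-one-second elephant trips every flag.
elephant_channel = ChannelSignals(
    median_view_seconds=28, channel_age_days=40,
    uploads_per_day=12, near_duplicate_ratio=0.8)

print(slop_risk_score(elephant_channel))  # → 4
```

The point is not that this particular sketch would hold up at platform scale. The point is that the inputs were already on the table, and acting on them was a choice.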
Here is what a good children's video actually does.
Ellen Doherty, the chief creative officer at Fred Rogers Productions, explained the structure of a Daniel Tiger episode to the Times. Two short stories per episode. Songs that reinforce themes and can be memorized by both child and parent. Pacing calibrated to children who do not yet have cinematic language. Long pauses built in deliberately, because the pause is where learning happens.
"That spark of human connection is everything," she said.
A forty-one-second AI elephant has no pauses. It cannot afford pauses. A pause is a moment in which the child might look away, and the algorithm needs the child not to look away.
This is the distinction that matters. Good children's media is built around what children need. AI slop is built around what the algorithm rewards. These are not compatible goals, and the algorithm does not pretend otherwise.
Spend fifteen years in security operations and you develop a specific relationship with signals. Not all signals carry equal weight. Some signals are loud and fast and mean nothing. Some signals are quiet and slow and mean everything. The ability to distinguish between them is the job.
What YouTube is feeding children is all noise and no signal.
Developmental psychologists call the phenomenon displacement. It is not only that the AI content is bad. It is that the AI content occupies the time and attention that would otherwise go to something that builds something. Reading. Play. Interaction with other humans. Even passive media with real narrative structure. The sheer volume of low-quality content is crowding out the conditions under which development actually happens.
The volume is not accidental. Creators, many operating anonymously, have built reliable income streams on this content. The barrier is low. The upload cadence is mechanical. The feed keeps filling.
Here is what the AI elephant knows about your child: nothing.
It was not designed with your child in mind. It was designed by someone who watched a tutorial about passive income and downloaded a video generation tool. The tutorial did not include a section on object permanence or language acquisition or the developmental importance of narrative repetition. It included a section on SEO and thumbnail optimization.
The elephant's face cycles through four expressions in two seconds because four expressions in two seconds tests better in the first three seconds of watch time than one expression held for the duration of the video. The alphabet is announced in no particular order because order requires structure and structure requires intention, and the creator has no intention beyond the next monetization threshold.
This is not cynicism. This is the business model, stated plainly.
The American Academy of Pediatrics recommends that children under two get little to no screen media at all. About sixty percent of parents with a child under two report the child watches YouTube. About one-third report the child watches it every day.
YouTube told researchers there is nothing to worry about.
YouTube's algorithm continues to recommend AI elephant content.
McCall Booth, a developmental psychologist at Georgetown University, raised the concern that will outlast this news cycle. Children exposed to improbable but aesthetically realistic content may develop cognitive frameworks that include those impossibilities as baseline. The elephant detaches its trunk. The alphabet is non-sequential. Physics is decorative. These are not lessons anyone intended to teach.
But the algorithm is not in the business of intentions.
There is a reason we talk about screaming at children as a failure of parenting. It is not because screaming fails to get attention. It gets attention immediately and completely. The reason screaming fails is that attention without meaning teaches only that the world is loud.
The AI elephant gets attention. It holds attention. By every metric the algorithm tracks, it succeeds.
The metrics do not track what happens to the child after the video ends.
YouTube suspended five channels after the Times ran its investigation. It removed three hyperrealistic videos from YouTube Kids. It declined to extend the AI disclosure requirement to animated children's content.
Five channels. The investigation identified five channels.
The feed is not five channels deep.
The parents who figured this out first did what parents always do when institutions fail them. They built playlists of vetted content by hand. They removed the app entirely. They made individual decisions about individual videos while the platform that profits from their children's attention issued a statement.
None of this is their fault. They did not build the algorithm. They did not set the monetization policy. They did not decide that disclosure requirements would not apply to cartoons. They opened an app that told them it was safe for children and found out, eventually, that "safe for children" is a promise the platform made to itself.
The algorithm is not screaming at your child because it is malicious. It is screaming because screaming works, and because no one at YouTube is paid to care what happens after the screen goes dark.
That distinction will not matter to the child.
The Slopfather covers AI-generated content and the systems that distribute it. If this made you feel something, pass it along.

