Why Smart People Share Slop
This was never a story about being fooled. It is a story about never having a reason to look.
Stevan W. Pierce Jr.
March 21, 2026 · The Slopfather
A fake food delivery service went viral on Reddit not long ago. AI-generated photos. AI-written copy. Thousands of upvotes from real accounts with long posting histories. People who absolutely know better hit share anyway.
Everyone’s instinct is to ask how they missed it.
That is the wrong question.
The actual question
Your brain is not built to verify everything. It cannot be. You absorb hundreds of pieces of information every single day. If you stopped to fact-check each one, you would not make it to lunch.
So your brain shortcuts. It pattern-matches. It asks: does this look like a thing I already trust? If yes, it files it and moves on. That is not a flaw in your intelligence. That is your intelligence working correctly under load.
The problem is that AI learned to make things that look exactly like things you already trust.
The slop did not beat anyone’s intelligence. It beat the part of the brain that decides whether to use intelligence at all.
The Reddit post had the right format. The right subreddit. An account with history. Early upvotes from other accounts with history. By the time most people saw it, the post had already passed through fifty people who did not flag it. That is fifty reasons not to look closely. The fifty-first person saw fifty votes of implied approval and started reading from inside a false sense of safety.
None of those fifty people were stupid. They were just earlier in the same trap.
Worth noting: Hard Fork, Feb. 13, 2026
Kevin Roose and Casey Newton had a New York Times reporter on to talk about writers using AI to pump out romance novels. Readers are buying those books believing a human wrote them. Not because the books fooled any serious test. Because nobody ran one. When you buy a book, you assume it was written by the person whose name is on the cover. The platform does not correct you. The assumption just becomes your experience.
The fatigue nobody is talking about
Hard Fork’s March 13 episode introduced something called “AI brain fry.” A researcher named Julie Bedard studied what happens to workers who deal with AI output all day. They get worn down. They stop checking things. Not because they stopped caring. Because checking everything is not something a human body can sustain at the volume AI produces.
Think about the last time you were genuinely exhausted at the end of a workday. Now imagine that state is the condition under which you are expected to spot AI-generated misinformation in your social feed.
That is the actual situation.
I spent fifteen years in cybersecurity watching this exact pattern destroy otherwise solid security programs. You build a system that generates alerts. The alerts multiply. The team starts missing things. Not because the team got worse. Because the volume broke the team’s ability to care about each individual alert. The industry has a name for it: alert fatigue. The solution was never “be more careful.” The solution was always fixing the system that generates the alerts.
Also worth noting: Hard Fork, March 13, 2026
Casey Newton found out Grammarly had used his name and identity in an AI feature without asking him. A tech journalist who has been covering the AI industry closely for over a decade. He did not catch it until after the fact. That is not carelessness. That is what it looks like when the attack surface for this stuff gets bigger than any one person's ability to watch it.
The part nobody wants to admit
Some of the shares were not even about the content.
People share things to perform an identity. The person who shares a good deal is being the person who finds good deals. The person who shares the hot restaurant recommendation is being the person who knows about hot restaurants. The content is a costume. The share is the statement.
Checking whether the restaurant actually exists would have ruined the whole point.
This is not new human behavior that AI created. AI just made it cheaper to produce the costume.
The Verdict
Smart people share slop because the platforms built systems where social proof looks identical to actual proof. AI learned to manufacture social proof at scale. And humans can only run manual verification on so much before the tank runs empty. This was never about intelligence. It was about infrastructure that was designed, tested, and optimized to make verification feel unnecessary.
Telling smart people to be more careful is not a fix. It is an assignment of blame that lets the platform off the hook.
The slop is not the problem. The slop is the symptom. The problem is a system that profits every time you share something, and has no financial interest in whether that something is real.
The Slopfather covers the AI content flood so you do not have to swim in it alone. If you shared something that turned out to be slop, the designers say thank you! You were the infrastructure working as designed.
