What Is AI Slop? The Patterns AI Agents Leave Behind
What even counts as AI slop? Some say anything an agent wrote. Some say only the stuff that breaks. Some do not care as long as it compiles. Here is what we landed on after 25 real projects, and the specific patterns we look for.
You know it when you see it. A PR where everything technically works but reading it feels like wading through filler. That is AI slop. It is not broken code. It is code that is slightly worse than what a human would write, repeated across hundreds of lines, compounding with every merge.
Ask anyone, or any team, shipping (or vibecoding, as we call it now) with Claude Code, Cursor, Opencode, Codex, or whatever agent they are on. The patterns are the same everywhere.
The patterns
Comments that say nothing. // Initialize the database connection sitting above initDB(). These are not helpful. They are visual noise. Worse, they drift. Six months from now the function gets renamed but the comment does not, and now it is actively lying to you.
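The drift is easy to reproduce. A minimal sketch, with invented function names: the function below started life as initDB, got renamed during a refactor, and the comment stayed behind.

```typescript
// Slop: the comment restates what the code already says, so nobody
// feels responsible for it.
// Initialize the database connection
function setupCacheLayer(): string {
  // Hypothetical rename: this was initDB() until a refactor moved the
  // logic to the cache layer. The comment above now describes code
  // that no longer exists.
  return "cache ready";
}
```

A comment that explains why (a workaround, a non-obvious constraint) survives renames; a comment that explains what is dead weight from day one.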
Silent failures. Empty catch blocks, or catch blocks with just console.log(err). The agent generates these because it is trying to make the code compile, not because it thought about error handling. You end up with apps that fail silently in production and nobody knows why. "Oh sh**, it works now. But I can tell it's not scalable." Right.
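Here is the shape of the problem, sketched with a hypothetical config parser. The slop version swallows the error and hands back undefined; the fix surfaces the failure with context so the caller has to decide.

```typescript
// Slop: the error is logged once and discarded. Callers get
// `undefined` with no explanation, and production fails silently.
function parseConfigSlop(json: string): Record<string, unknown> | undefined {
  try {
    return JSON.parse(json);
  } catch (err) {
    console.log(err); // the whole error-handling strategy
    return undefined;
  }
}

// Better: rethrow with context. The failure is loud and points at
// the actual cause.
function parseConfig(json: string): Record<string, unknown> {
  try {
    return JSON.parse(json);
  } catch (err) {
    throw new Error(`config is not valid JSON: ${(err as Error).message}`);
  }
}
```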
Naming that gives up. data, data2, result, temp, helper_func. These are placeholders that never got replaced. When you are debugging at 2am and every variable is called data, the naming is not a style issue. It is a productivity issue.
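Both functions below compute the same thing; the names are the only difference. The example is invented, but the pattern will look familiar.

```typescript
// Slop: placeholder names everywhere. A stack trace through this
// function tells you nothing about what went wrong.
function calc(data: number[]): number {
  const data2 = data.filter((temp) => temp > 0);
  const result = data2.reduce((a, b) => a + b, 0);
  return result;
}

// Better: the names carry the domain, so the 2am debugging session
// starts with information instead of archaeology.
function sumPositiveReadings(readings: number[]): number {
  const positiveReadings = readings.filter((reading) => reading > 0);
  return positiveReadings.reduce((total, reading) => total + reading, 0);
}
```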
Type system escape hatches. as any is the agent's way of saying "I don't know the type." as unknown as HTMLElement is the agent's way of saying "I really don't know the type." Both throw out the safety TypeScript is supposed to give you.
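A sketch of the difference, using a made-up property lookup. The slop version silences the compiler; the fix narrows the type so the compiler keeps checking everything downstream.

```typescript
// Slop: `as any` answers the compiler's question by telling it to
// stop asking. Typos in `key` and null objects both sail through.
function getValueSlop(obj: unknown, key: string): unknown {
  return (obj as any)[key];
}

// Better: narrow `unknown` step by step. The cast at the end is
// justified by the checks before it, not by giving up.
function getValue(obj: unknown, key: string): unknown {
  if (typeof obj === "object" && obj !== null && key in obj) {
    return (obj as Record<string, unknown>)[key];
  }
  return undefined;
}
```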
And that is before you get to the rest. Dead code. Unused functions. Useless variables. Duplicate helpers that should have been one reusable function. console.log statements sitting in production (for real). Half renamed variables. JSDoc paragraphs nobody asked for. Your agents are leaving all of it behind.
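The duplicate-helper pattern deserves one more sketch, with invented names, because it is the one that compounds fastest: the agent cannot see the helper it wrote last week, so it writes it again.

```typescript
// Slop: two copies of the same logic, generated in separate sessions.
// They will drift apart one bug fix at a time.
function formatUserName(first: string, last: string): string {
  return `${first} ${last}`.trim();
}
function formatAuthorName(first: string, last: string): string {
  return `${first} ${last}`.trim();
}

// Better: one helper, one place to fix.
function formatFullName(first: string, last: string): string {
  return `${first} ${last}`.trim();
}
```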
Why it is hard to catch in review
Each instance is small. A trivial comment here. A generic name there. Nobody rejects a PR over one comment. But across a 60-file PR, these add up. And since AI-generated PRs tend to be large, the reviewer is already overwhelmed. "I don't even review anymore, I just deploy. I pushed 10K lines yesterday." That is the reality for most teams now.
The other problem is that it looks professional. The indentation is perfect, the imports are sorted, the function signatures are reasonable. It looks like good code at a glance. You have to read carefully to notice the catch block is empty. Nobody is reading carefully. Nobody is reading at all.
What to do
Do not stop using AI agents. They are genuinely useful. But treat their output the way you would treat a junior developer's code. Review it properly. Rename the variables. Fill in the error handling. Delete the useless comments. Or run npx aislop scan and let it flag the obvious stuff so you can focus on the logic. Your coding agent should have guardrails. AGENTS.md is not enough. You have to hold your agent accountable.
Try it on your repo
Run it. You will see the patterns in your own code. Star the AI Slop CLI on GitHub if you want the next release in your feed.