Run one command. See the damage.
aislop is a CLI that scores the code your agent wrote. Open source, MIT. Works in any Node/JS/TS/Python/Go/Rust/Ruby/PHP repo.
Copy this and run it in your repo:
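A minimal invocation sketch — the bare `npx aislop` command is an assumption, inferred from the `npx aislop fix` hint shown in the scan output below:

```shell
# One-off scan of the current repo; npx fetches the CLI, no install or config needed
npx aislop
```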
You'll get something like this:
```
[ok] Formatting:   done (0 issues, 426ms)
[ok] Linting:      done (0 issues, 396ms)
[!]  Code Quality: done (2 warnings, 812ms)
[!]  AI Slop:      done (4 warnings, 455ms)
[ok] Security:     done (0 issues, 1.3s)

> Code Quality
  [WARN] [auto] Unused export (2)
         src/lib/format-bytes.ts:12
         src/utils/retry.ts:8
  [WARN] Function 'buildFixRender' has 127 lines (max: 80)
         src/commands/fix-render.ts:14

> AI Slop
  [WARN] [auto] Narrative comment block (JSDoc preamble) (2)
         src/lib/auth.ts:86
         src/routes/views.ts:9
  [WARN] [auto] Unused import 'Cache'
         src/services/users.ts:3
  [WARN] 'as any' bypasses type safety
         src/api/normalize.ts:47

87 / 100  Healthy
0 errors · 6 warnings · 4 fixable
84 files · 5 engines · 1.9s

→ Run npx aislop fix to auto-fix 4 issues
→ Run npx aislop fix --claude to hand off the rest to an agent
```
Score, grouped issues per engine, count of auto-fixable items, and a pointer to what to do next. No config needed — defaults Just Work.
Fix what's safe
Applies the mechanical fixes — formatters, unused imports, trivial and narrative comments, dead patterns — then re-scans so you see the new score. Anything that needs judgment (like `as any`) is left for you or your agent.
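The two commands from the scan output cover that split:

```shell
npx aislop fix            # apply the mechanical fixes, then re-scan for a new score
npx aislop fix --claude   # hand the judgment calls (e.g. 'as any') off to an agent
```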
Gate it in CI
Interactive wizard. Writes `.aislop/config.yml` and (optionally) drops `.github/workflows/aislop.yml`. Commit both. From then on, every PR runs through the gate and can't merge below your threshold. Full CI setup in CI / CD.
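The generated workflow is presumably along these lines — a sketch, not the wizard's actual output; only the `npx aislop` invocation and its failure behavior are assumptions:

```yaml
# .github/workflows/aislop.yml — illustrative sketch
name: aislop
on: pull_request
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Assumed behavior: exits non-zero when the score falls below the threshold
      # in .aislop/config.yml, which blocks the merge when the check is required.
      - run: npx aislop
```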
Who this is for
- Solo devs shipping with agents — Claude Code / Codex / Cursor generate a lot of code, most of which you won't read line by line. aislop is the second pair of eyes that does read it.
- Teams standardizing on AI-assisted code — when everyone on the team prompts differently, the output diverges. A shared score + required CI check keeps the bar in one place.
- Tech leads who want a "quality gate" that isn't just tests — tests prove the code runs. aislop proves the code isn't slop: unused deps, 1200-line files, narrative JSDoc, vulnerable packages, `eval` fallbacks.
If you read PRs by scrolling to the bottom and clicking approve, this is for you.
What's actually checked
Six engines, 0–100 score. Each engine delegates to best-in-class tooling where it exists (Biome, oxlint, ruff, knip, cargo, gofmt, rubocop, govulncheck, pip-audit, …) plus aislop's own detectors for the AI-slop patterns:
- Dead code, unused imports, unused exports, unused declarations (function / variable / class)
- Narrative JSDoc preambles, decorative separators, cross-reference commentary, trivial "// Import X" comments
- Oversized files and functions (with a soft tolerance; template-literal-dominated functions are exempt)
- `as any` / `@ts-ignore`, swallowed catches, `console.log` leftovers, `TODO` stubs
- Dependency vulnerabilities, `eval`, SQL / shell injection patterns
- Your own architecture rules (opt-in): forbidden imports, layer separation, required patterns
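As an illustration of what an opt-in architecture rule could express — the YAML shape below is hypothetical, not aislop's documented schema; only the rule categories (forbidden imports, layer separation) come from the list above:

```yaml
# Hypothetical shape — check the real config reference for actual keys
rules:
  - forbid-import:
      from: "src/ui/**"
      import: "src/db/**"   # UI layer must not reach into the data layer
```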