Announcement · 5 min read

Why We Built scanaislop: The Problem with Five Linters and No Standard


Why build another linter when every team already has five? Fair question. Some teams chain ESLint, Prettier, Biome, oxlint, and ruff together and call it a standard. Some only run whatever the IDE runs and ship. Some have stopped noticing. Here is what we landed on, and the PR that made it obvious nothing we had was catching the problem.

I was reviewing a PR last year. Sixty files. TypeScript and Python. The code worked, tests passed, CI was green. Something was off. Half the catch blocks were empty. There were comments like // get the user sitting above getUser(). Three different variables called data, data2, and result. The author had clearly accepted most of the agent's suggestions without touching them. I left about forty comments. It took an hour. The next PR was the same.
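The frustrating part is that none of it was invalid code. A sketch of the pattern, in Python for brevity (hypothetical and illustrative, not taken from the actual PR):

```python
# Hypothetical example of AI-slop code: valid, test-passing, still a mess.
# (Illustrative only; not from the PR described above.)

def fetch(user_id):
    return {"id": user_id, "name": "Ada"}

# get the user  <- comment restates the function name, adds nothing
def get_user(user_id):
    try:
        data = fetch(user_id)   # vague name #1
    except Exception:
        pass                    # swallowed error: on failure, data is unbound
                                # and the next line raises NameError anyway
    data2 = data                # vague name #2, pointless copy
    result = data2              # vague name #3
    return result

print(get_user(1))
```

Every line here parses, formats cleanly, and passes tests that only check the happy path, which is why style-focused tooling stays quiet about it.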

The tooling gap

We already had ESLint, Prettier, and ruff in CI. None of them flagged any of this. That is not a knock on those tools; they do what they are designed to do. ESLint catches syntax and style issues. Prettier formats. ruff lints Python fast. But none of them ask whether a comment is actually useful, or whether a catch block is doing anything. Those are judgment calls, and they are exactly the calls AI code generators get wrong most often.

Five tools, five configs

The other problem was simpler. Our monorepo had TypeScript, Python, Go, and some Rust. Each language had its own linter, its own formatter, its own CI step. Onboarding a new dev meant explaining four different tool configs before they could push code. I wanted one command that could scan everything and hand me a number. Not a dashboard. Not a platform. Just a CLI that runs locally and in CI.

What aislop does

npx aislop scan walks your project, detects the languages, runs the right checks in parallel, and outputs a score from 0 to 100. No config to start. No API keys. Runs offline except for optional dependency audits. npx aislop fix auto-resolves what it can. Formatting. Unused imports. Dead code. The rest it reports so you can clean it up yourself or hand it to an agent with aislop fix --claude.

The scoring is deterministic. Same code, same score, every time. No LLM in the loop. That matters. The score is the artifact. If an LLM were in the middle of it, the number would drift and the gate would be useless.
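A deterministic score is just a pure function of the source: same input in, same number out, no sampling anywhere. A toy sketch of the idea, with hypothetical penalty rules that are not aislop's actual checks:

```python
import re

# Toy deterministic scorer: a pure function of the source text, so the
# same code always yields the same score. (Hypothetical rules; not
# aislop's actual checks.)
PENALTIES = [
    # swallowed errors: except blocks that just pass
    (re.compile(r"except\s*(Exception)?\s*:\s*\n\s*pass"), 15),
    # vague identifiers like data, data2, result
    (re.compile(r"\bdata2?\b|\bresult\b"), 5),
]

def score(source: str) -> int:
    total = 100
    for pattern, penalty in PENALTIES:
        total -= penalty * len(pattern.findall(source))
    return max(0, total)

snippet = "try:\n    x = 1\nexcept Exception:\n    pass\n"
assert score(snippet) == score(snippet)  # identical runs always agree
```

Because there is no model in the loop, the number can only change when the code or the rules change, which is what makes it usable as a hard CI gate.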

Where it's going

Architecture rules are next. Import bans, layering enforcement, that kind of thing. After that, better Expo and React Native support and deeper Python checks. It is MIT licensed and on GitHub. If you have ideas, open an issue. If it breaks on your repo, open an issue; we will add your repo to the validation matrix for the next release.

Try it on your repo

$ npx aislop scan

Run it. You will get a number. Star the AI Slop CLI on GitHub if you want the next release in your feed.