AI-SLOP Detector v2.6.1: It now audits itself (and that’s the point) #31
flamehaven01 announced in Announcements
AI code can look “production-ready” while implementing almost nothing.
It passes lint.
It follows clean architecture.
It even ships.
And yet—when you ask what logic is actually here—the answer is often surprisingly thin.
That gap is what I call AI slop.
AI-SLOP Detector is a deterministic static analyzer that doesn’t ask “is this safe?”
It asks: “Is there real logic here—or just convincing scaffolding?”
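To make "real logic vs. convincing scaffolding" concrete, here is a minimal sketch of one such heuristic: an AST pass that flags functions whose bodies contain nothing but filler. This is my illustration of the idea, not the detector's actual implementation.

```python
# Hypothetical sketch (not the actual AI-SLOP Detector code): flag functions
# whose bodies are pure scaffolding -- only docstrings, `pass`, `...`, or
# `raise NotImplementedError` -- i.e. "convincing emptiness".
import ast

def stub_functions(source: str) -> list[str]:
    """Return names of functions with no real logic in their bodies."""
    def is_filler(stmt: ast.stmt) -> bool:
        if isinstance(stmt, ast.Pass):
            return True
        if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Constant):
            return True  # docstring or a bare `...`
        if isinstance(stmt, ast.Raise):
            exc = stmt.exc
            target = exc.func if isinstance(exc, ast.Call) else exc
            return getattr(target, "id", None) == "NotImplementedError"
        return False

    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and all(is_filler(s) for s in node.body)
    ]

code = '''
def real(x):
    return x * 2

def fake(x):
    """Production-ready handler."""
    ...
'''
print(stub_functions(code))  # ['fake']
```

`fake` passes lint and has a confident docstring, yet implements nothing, which is exactly the gap the detector targets.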
What’s new in v2.6.1
v2.6.1 is a trust release.
Less marketing, more auditability.
The theme is simple:
1) Configuration Sovereignty (YAML-driven dependency intents)
In v2.6.1, the dependency "meaning" layer is no longer hardcoded.
It was externalized into:
src/slop_detector/config/known_deps.yaml
Why this matters: the hallucinated-dependency detector now loads this YAML dynamically, instead of relying on implicit assumptions embedded in code.
Example (trimmed):
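The post's example is trimmed; as an illustration only, a dependency-intent file could take a shape like this (field names and entries are my invention, not the actual known_deps.yaml schema):

```yaml
# Hypothetical shape for a dependency-intent layer -- illustrative only.
requests:
  intent: http-client
  expect_usage: [get, post, Session]
numpy:
  intent: numerical-computing
  expect_usage: [array, ndarray]
torch-quantum-blockchain:   # not a real package: should be flagged as hallucinated
  intent: unknown
```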
2) Quality improvements (test suite + coverage push)
This release is heavily test- and coverage-driven:
✅ 165 tests
✅ 85% overall coverage
✅ CI Gate coverage: 0% → 88% (with 37 new tests)
Translation: the detector is no longer just “smart” — it’s harder to regress.
3) Question Generator is now pinned by tests (stable review UX)
The Question Generator turns findings into actionable code-review prompts.
In v2.6.1, it gained a dedicated test suite:
8 new test cases
significantly improved module coverage
If you ship review UX, it must be deterministic.
This patch locks its behavior release-to-release.
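The determinism requirement can be made concrete: the same findings must always yield the same prompts, in the same order. A minimal sketch of that idea, with invented rule names and templates, not the project's actual Question Generator:

```python
# Hypothetical deterministic question generator: findings are sorted by
# (file, line, rule) so identical input always produces identical prompts,
# release to release. Rule names and templates are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    rule: str

TEMPLATES = {
    "bare-except": "What exceptions does {file}:{line} intend to swallow?",
    "stub-body": "What should {file}:{line} actually do before it ships?",
}

def questions(findings: list[Finding]) -> list[str]:
    # Sorting is what pins the output order; dict/set iteration would not.
    ordered = sorted(findings, key=lambda f: (f.file, f.line, f.rule))
    return [TEMPLATES[f.rule].format(file=f.file, line=f.line)
            for f in ordered if f.rule in TEMPLATES]

fs = [Finding("b.py", 7, "stub-body"), Finding("a.py", 3, "bare-except")]
print(questions(fs))
# ['What exceptions does a.py:3 intend to swallow?',
#  'What should b.py:7 actually do before it ships?']
```

Pinning this with tests means any change to ordering or wording shows up as a diff, rather than as silent review-UX drift.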
4) VS Code extension sync
The VS Code extension was synchronized to v2.6.1
so editor feedback stays aligned with the core analyzer.
Real-world proof: I ran the detector on itself
If a slop detector can’t survive its own criteria, it’s just a mascot.
So I ran AI-SLOP Detector against the ai-slop-detector codebase.
Executive result
This is the point:
What it still flagged (because it should)
Even in a CLEAN repo, it surfaced real issues and anti-patterns, including:
bare except: (swallows KeyboardInterrupt/SystemExit)
pass, ..., and TODO/FIXME patterns (in test fixtures)
And yes, this includes calling out my own words:
it flagged jargon terms like “production-ready” inside the CLI layer.
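The bare-except finding is worth seeing concretely. A small demonstration of why `except:` is flagged, and what the narrower fix looks like:

```python
# A bare `except:` catches BaseException, which includes KeyboardInterrupt
# and SystemExit -- so Ctrl-C and sys.exit() get silently swallowed.
def risky_bare():
    try:
        raise KeyboardInterrupt
    except:            # catches everything, including KeyboardInterrupt
        return "swallowed"

def risky_fixed():
    try:
        raise KeyboardInterrupt
    except Exception:  # lets KeyboardInterrupt/SystemExit propagate
        return "swallowed"

print(risky_bare())        # 'swallowed'
try:
    risky_fixed()
except KeyboardInterrupt:
    print("propagated")    # KeyboardInterrupt escapes the fixed version
```

`KeyboardInterrupt` subclasses `BaseException` but not `Exception`, which is why the `except Exception:` form is the usual fix.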
Synthetic slop fixtures: the detector should explode
The repo contains intentionally-bad “slop fixtures” for validation.
Example:
generated_slop.py scored a Deficit Score of 96.77 and triggered heavy jargon inflation (“neural”, “state-of-the-art”, “optimized”, “transformer”, etc.).
A good analyzer should:
Quick start
CI Gate modes (soft / hard / quarantine)
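The post doesn't spell out the mode semantics here, so the following is a hypothetical sketch of how soft / hard / quarantine modes commonly map to CI exit codes; the behavior shown is my assumption, not the project's documented gate.

```python
# Hypothetical gate-mode logic -- "soft" warns, "hard" fails the build,
# "quarantine" passes but emits a list of files to isolate. These semantics
# are an assumption for illustration, not AI-SLOP Detector's actual gate.
def ci_gate(mode: str, flagged: list[str], threshold: int = 0) -> int:
    """Return a process exit code for the CI step."""
    if len(flagged) <= threshold:
        return 0
    if mode == "soft":
        print(f"WARNING: {len(flagged)} flagged file(s): {flagged}")
        return 0                      # never blocks the build
    if mode == "hard":
        print(f"ERROR: {len(flagged)} flagged file(s)")
        return 1                      # fails the pipeline
    if mode == "quarantine":
        print("QUARANTINE:", *flagged)
        return 0                      # passes, but files are set aside
    raise ValueError(f"unknown mode: {mode}")

print(ci_gate("hard", ["generated_slop.py"]))  # prints ERROR line, then 1
```

The point of having three modes is adoption: start soft to measure noise, quarantine to contain legacy slop, and go hard once the rules fit your codebase.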
If you try it
Run it on a repo where AI-written code “looks complete”.
If the detector:
misses convincing emptiness, or
creates noisy false positives
open an issue with a sanitized snippet + your expectation.
I’m actively tuning the rules + fixtures to match real-world review pain.
## RFC
What's the fastest reliable signal, on your team, that a PR is scaffolding dressed up as implementation?