AI-SLOP Detector v3.5.0 — Every Claim, Verified Against Source Code #41
flamehaven01 announced in Announcements
I published a LinkedIn post about AI-SLOP Detector's self-calibration system and download numbers. Someone asked the reasonable question: "Can you actually back that up?"
Yes. Here's the source.
This isn't a feature announcement. It's a line-by-line audit of seven claims against the actual codebase. Every VERDICT links to a real file and real line numbers. The repo is public — go check it yourself.
What was claimed
All seven check out. No fabrications. No inflated numbers. Here's the proof.
Claim 1: "Every scan is recorded"
Source: `src/slop_detector/history.py`, lines 116–180

Auto-invoked on every CLI run; the only opt-out is `--no-history`. Each scan writes to SQLite at `~/.slop-detector/history.db` and stores: `deficit_score`, `ldr_score`, `inflation_score`, `ddc_usage_ratio`, `n_critical_patterns`, `fired_rules`, `git_commit`, `git_branch`, `project_id`. The schema is now at v5, auto-migrated on startup through every release from v2.9.0 to v3.5.0.
VERDICT: TRUE. The record() call is real. The schema is versioned. The behavior is not optional.
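For concreteness, the recording pattern the claim describes can be sketched in a few lines. This is a minimal sketch under assumptions: the table layout below is simplified and hypothetical, and the real v5 schema in `history.py` carries the full column list given above.

```python
import sqlite3

# Minimal sketch of append-only scan history in SQLite.
# The schema here is illustrative; the real table carries more columns.
def record_scan(db_path: str, scan: dict) -> None:
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS scans (
                   file_path     TEXT,
                   deficit_score REAL,
                   git_commit    TEXT,
                   git_branch    TEXT
               )"""
        )
        conn.execute(
            "INSERT INTO scans VALUES (?, ?, ?, ?)",
            (scan["file_path"], scan["deficit_score"],
             scan["git_commit"], scan["git_branch"]),
        )
        conn.commit()
    finally:
        conn.close()
```

Because every CLI run appends a row, the database accumulates a per-file timeline for free, which is what Claim 2 builds on.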
Claim 2: "Every re-scan becomes signal"
Source: `src/slop_detector/history.py`, lines 221–246
Source: `src/slop_detector/ml/self_calibrator.py`, lines 301–309

Single-scan files produce no calibration events. Only repeat scans generate `improvement` or `fp_candidate` labels. The threshold is hardcoded in SQL, not assumed.

VERDICT: TRUE. The repeat-scan requirement is enforced at the query level, not in documentation.
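The query-level enforcement can be illustrated with a toy schema. A sketch under assumptions: the table and column names here are hypothetical, not the project's actual SQL, but the `HAVING COUNT(*) > 1` mechanism is the point.

```python
import sqlite3

# Files with a single scan never enter the calibration signal:
# the HAVING clause filters them out inside the query itself,
# so no Python-side logic can forget to apply the rule.
def repeat_scan_deltas(conn: sqlite3.Connection):
    return conn.execute(
        """SELECT file_path,
                  MIN(deficit_score) AS best,
                  MAX(deficit_score) AS worst,
                  COUNT(*)           AS n_scans
           FROM scans
           GROUP BY file_path
           HAVING COUNT(*) > 1"""
    ).fetchall()
```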
Claim 3: "Updates only when the signal is strong enough"
Source: `src/slop_detector/ml/self_calibrator.py`, lines 37–54 (constants) and 251–262 (enforcement)

Gate 1 is a confidence-gap check (line 251). Gate 2 is a score-delta check (line 262). Two independent guards; both must pass before any weight update applies.

VERDICT: TRUE. Ambiguous signal is rejected twice before touching configuration.
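The shape of such a double gate is easy to sketch. Constant names and thresholds below are illustrative assumptions, not the actual values from lines 37–54.

```python
# Two independent guards; both must pass before a weight update applies.
# Names and thresholds are illustrative, not the project's constants.
MIN_CONFIDENCE_GAP = 0.15  # Gate 1: reject ambiguous signal
MIN_SCORE_DELTA = 0.05     # Gate 2: reject negligible improvement

def should_apply_update(confidence_gap: float, score_delta: float) -> bool:
    if confidence_gap < MIN_CONFIDENCE_GAP:  # Gate 1: confidence gap check
        return False
    if abs(score_delta) < MIN_SCORE_DELTA:   # Gate 2: score delta check
        return False
    return True
```

The two checks are deliberately independent: a confident signal about a tiny improvement fails just as surely as a large but ambiguous one.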
Claim 4: "Leaves behind a visible policy every time it changes"
Source: `src/slop_detector/ml/self_calibrator.py`, docstring lines 17–18:

"Return CalibrationResult; optionally write to .slopconfig.yaml via --apply-calibration"

When `--apply-calibration` is passed and `status == "ok"`, optimal weights are written to `.slopconfig.yaml`. Plain-text YAML. Human-readable. Git-versionable. Every calibration change is a diff.

VERDICT: TRUE. The policy artifact is explicit. You can `git blame` it.

Claim 5: "Explicit limits govern calibration"
Source: `src/slop_detector/ml/self_calibrator.py`, lines 37–54

No ML model. No learned bounds. Every constraint is a named constant with a comment explaining why it exists. The calibration space is a bounded grid, not an open optimization landscape.
VERDICT: TRUE. Every limit is auditable. Nothing is opaque.
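What a bounded grid of named, commented constants looks like in practice, as a hedged sketch: the constant names, values, and two-metric example below are assumptions for illustration, not the project's actual limits.

```python
import itertools

WEIGHT_MIN = 0.5    # floor: no metric can be silenced entirely
WEIGHT_MAX = 2.0    # ceiling: no metric can dominate the score
WEIGHT_STEP = 0.25  # resolution: keeps the search space finite

def candidate_weights(n_metrics: int = 2):
    """Enumerate every point on the bounded calibration grid."""
    steps = int(round((WEIGHT_MAX - WEIGHT_MIN) / WEIGHT_STEP)) + 1
    axis = [round(WEIGHT_MIN + i * WEIGHT_STEP, 2) for i in range(steps)]
    # Every candidate is a point on a finite grid, so the whole
    # search space can be printed, audited, and reasoned about.
    return list(itertools.product(axis, repeat=n_metrics))
```

The design trade-off is deliberate: a grid search cannot wander outside its bounds, so "what could calibration possibly do?" has a finite, enumerable answer.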
Claim 6: "Detects empty implementations, phantom dependencies, disconnected pipelines"
These are the three canonical defect patterns AI code generation produces at scale. Each has a dedicated module.
- `src/slop_detector/metrics/ldr.py`: LDRCalculator detects `pass`, `...`, `raise NotImplementedError`, `TODO`
- `src/slop_detector/metrics/hallucination_deps.py`: AST-based import-vs-usage analysis via the `HallucinatedDependency` dataclass
- `src/slop_detector/metrics/ddc.py`: DDC (Declared Dependency Completeness) usage ratio
- `src/slop_detector/patterns/python_advanced.py`: Jensen-Shannon Divergence on 30-dim AST histograms; JSD < 0.05 = clone

The clone detection is worth noting. JSD on AST histograms catches structural duplication that string similarity misses entirely. LLMs produce a lot of this: same function logic, slightly renamed.
VERDICT: TRUE. Each defect class has a named module with a working implementation.
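The JSD-on-histograms idea fits in a few lines. This is an illustrative implementation, not the code in `python_advanced.py`: the 0.05 threshold comes from the claim above, while the histogram contents and function names here are assumptions.

```python
import math

def _normalize(hist):
    total = sum(hist)
    return [x / total for x in hist]

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two histograms."""
    p, q = _normalize(p), _normalize(q)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return (kl(p, m) + kl(q, m)) / 2

def looks_like_clone(hist_a, hist_b, threshold=0.05):
    # Near-identical AST-node distributions imply a structural clone,
    # even if every identifier in the function was renamed.
    return jsd(hist_a, hist_b) < threshold
```

Because the comparison runs over AST-node distributions rather than token strings, renaming every variable leaves the histograms, and therefore the verdict, unchanged.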
Claim 7: "~1.4K downloads in the past week"
Source: pypistats.org API (`mirrors=false`), queried 2026-04-15

"~1.4K" is within 0.5% of the actual 1,407. Mirrors excluded means bot traffic is stripped; these are real install invocations.
VERDICT: TRUE. Verified against pypistats in real time. The number is not rounded up.
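Anyone can reproduce the check against the public pypistats.org JSON API. A minimal sketch: the post queried with `mirrors=false`; this sketch uses the simpler `recent` endpoint, which reports mirror-free counts. The package name and the injectable `fetch` hook (there only to make the sketch testable offline) are assumptions.

```python
import json
from urllib.request import urlopen

RECENT_URL = "https://pypistats.org/api/packages/{pkg}/recent"

def weekly_downloads(pkg: str, fetch=None) -> int:
    """Return the last-week download count from pypistats' recent endpoint."""
    url = RECENT_URL.format(pkg=pkg)
    raw = fetch(url) if fetch else urlopen(url).read()
    return json.loads(raw)["data"]["last_week"]
```

The numbers in the test below are fabricated stand-ins for offline testing; live counts will differ.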
Why this format exists
Most open-source project posts make claims. Few back them up with file paths and line numbers.
That gap is the same problem AI-SLOP Detector is built to close. AI-generated code makes claims too — functions that look complete, imports that look used, pipelines that look connected. Static analysis finds the gap between what the code says and what it does.
This post applies the same standard to the project's own marketing copy. If a claim can be verified, it should be. If it can't, it shouldn't be made.
The codebase is public: github.com/flamehaven01/AI-SLOP-Detector
Pull requests welcome. Audits welcome more.
Verified by static code analysis + pypistats API, 2026-04-15