AI concentration audit service for articles with:
- multi-input ingestion (text, txt, docx, pdf)
- multi-agent reflection chain (Detector, Challenger, Arbiter)
- adaptive early-stop policy with configurable max rounds
- multi-provider model routing via LiteLLM
- single-page frontend (React + Vite + Tailwind + Zustand)
Backend:
- Python 3.11+
- FastAPI
- LangGraph
- LiteLLM
- Pydantic / Pydantic Settings
- python-docx / pypdf
Frontend:
- React + TypeScript
- Vite
- Tailwind CSS
- Zustand
- Axios
Testing:
- Pytest (backend unit tests)
python -m venv .venv
. .venv/Scripts/activate
pip install -e .[dev]

Create .env from .env.example.
For real model calls:
- set AI_LENS_MOCK_MODE=false
- configure provider credentials in .env:
  AI_LENS_OPENAI_API_BASE
  AI_LENS_OPENAI_API_KEY
  AI_LENS_DASHSCOPE_API_BASE
  AI_LENS_DASHSCOPE_API_KEY
Example for an OpenAI-compatible proxy (such as cliproxyapi):
AI_LENS_MOCK_MODE=false
AI_LENS_PROVIDER_MODELS=openai/your-model-name
AI_LENS_OPENAI_API_BASE=https://your-proxy-base-url/v1
AI_LENS_OPENAI_API_KEY=your-sk

uvicorn app.main:app --reload

Key endpoints:
GET /api/v1/health
GET /api/v1/options
POST /api/v1/audit

Audit request is multipart/form-data with:
- text (optional)
- file (optional, txt/docx/pdf)
- max_rounds (optional)
- provider (optional, must be in /api/v1/options)
- genre (optional: auto|academic|general_media)
Example:
curl -X POST http://127.0.0.1:8000/api/v1/audit \
-F "text=This is the article content to audit." \
-F "max_rounds=1"

cd web
npm install
npm run dev

Frontend dev server: http://127.0.0.1:5173
Proxy target: http://127.0.0.1:8000
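The audit endpoint can also be called from Python. A stdlib-only sketch that builds the same multipart request as the curl example (the helper names here are illustrative, not part of the project):

```python
import json
import urllib.request
import uuid


def build_multipart(fields: dict[str, str]) -> tuple[bytes, str]:
    """Encode plain text fields as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f'{value}\r\n'
        )
    parts.append(f"--{boundary}--\r\n")
    body = "".join(parts).encode("utf-8")
    return body, f"multipart/form-data; boundary={boundary}"


def audit(text: str, max_rounds: int = 1) -> dict:
    # Same fields as the curl example: text + max_rounds.
    body, content_type = build_multipart(
        {"text": text, "max_rounds": str(max_rounds)}
    )
    req = urllib.request.Request(
        "http://127.0.0.1:8000/api/v1/audit",
        data=body,
        headers={"Content-Type": content_type},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(audit("This is the article content to audit."))
```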
- app/services/extraction.py: file extraction and parse errors
- app/services/segmentation.py: language detect and paragraph split
- app/agents/graph.py: LangGraph loop and stop conditions
- app/agents/executor.py: role execution (LiteLLM + mock mode)
- app/services/audit_service.py: backend orchestration
- web/src/store/useAuditStore.ts: frontend state and request flow
- web/src/App.tsx: single-page layout and result presentation
flowchart TD
A[User Input: text or file] --> B[FastAPI /api/v1/audit]
B --> C[Extraction: txt/docx/pdf to plain text]
C --> D[Segmentation: language + paragraph IDs]
D --> E[Genre Resolver: auto or manual]
E --> F[LangGraph Coordinator]
F --> G[Detector Agent]
G --> H[Challenger Agent]
H --> I[Arbiter Agent]
I --> J{Early Stop?}
J -- No --> G
J -- Yes --> K[Aggregate document score + risk]
K --> L[Response JSON: document + paragraphs + trace]
L --> M[Frontend: summary cards + paragraph details]
Runtime steps:
- User submits pasted text or uploads a document.
- Backend extracts and normalizes text, then splits into paragraphs.
- System resolves genre (academic or general_media) and selects role-specific prompts.
- Detector -> Challenger -> Arbiter runs in adaptive reflection rounds.
- Stop when the early-stop policy is satisfied or max rounds are reached.
- Return document-level score/risk and paragraph-level trace for UI rendering.
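The reflection rounds above can be sketched as a plain loop. This is a schematic, not the project's actual LangGraph code, and the role callables are hypothetical stand-ins:

```python
from typing import Callable


def run_reflection(
    paragraph: str,
    detector: Callable[[str], dict],
    challenger: Callable[[str, dict], dict],
    arbiter: Callable[[str, dict, dict], dict],
    should_stop: Callable[[list[dict]], bool],
    max_rounds: int = 3,
) -> list[dict]:
    """Run Detector -> Challenger -> Arbiter rounds until early stop or max_rounds."""
    history: list[dict] = []
    for round_no in range(1, max_rounds + 1):
        detection = detector(paragraph)                      # initial assessment
        challenge = challenger(paragraph, detection)         # counter-arguments
        verdict = arbiter(paragraph, detection, challenge)   # final call this round
        verdict["round"] = round_no
        history.append(verdict)
        if should_stop(history):  # early-stop policy satisfied
            break
    return history
```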
Early stop requires both:
- stable decisions (small score delta and unchanged label signature)
- confidence threshold reached (document_confidence >= AI_LENS_CONFIDENCE_THRESHOLD)
The graph also includes dispute-stagnation protection to avoid unproductive loops when the agents keep disagreeing without the decision changing.
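The two-part early-stop condition can be sketched as a single predicate; the threshold and delta values below are illustrative defaults, not the project's actual settings:

```python
def early_stop(
    prev_score: float,
    curr_score: float,
    prev_labels: tuple[str, ...],
    curr_labels: tuple[str, ...],
    document_confidence: float,
    confidence_threshold: float = 0.85,  # stand-in for AI_LENS_CONFIDENCE_THRESHOLD
    max_delta: float = 0.02,             # illustrative "small score delta"
) -> bool:
    """Stop only when decisions are stable AND confidence is high enough."""
    # Stable: document score barely moved and the per-paragraph label
    # signature is unchanged between rounds.
    stable = abs(curr_score - prev_score) <= max_delta and prev_labels == curr_labels
    # Confident: aggregated document confidence reached the threshold.
    confident = document_confidence >= confidence_threshold
    return stable and confident
```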
