AI-Lens

AI concentration audit service for articles, featuring:

  • multi-input ingestion (text, txt, docx, pdf)
  • multi-agent reflection chain (Detector, Challenger, Arbiter)
  • adaptive early-stop policy with configurable max rounds
  • multi-provider model routing via LiteLLM
  • single-page frontend (React + Vite + Tailwind + Zustand)

Preview

AI-Lens App Preview

1. Tech Stack

Backend:

  • Python 3.11+
  • FastAPI
  • LangGraph
  • LiteLLM
  • Pydantic / Pydantic Settings
  • python-docx / pypdf

Frontend:

  • React + TypeScript
  • Vite
  • Tailwind CSS
  • Zustand
  • Axios

Testing:

  • Pytest (backend unit tests)

2. Backend Setup

python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e .[dev]

Create .env from .env.example.

For real model calls:

  • set AI_LENS_MOCK_MODE=false
  • configure provider credentials in .env:
    • AI_LENS_OPENAI_API_BASE
    • AI_LENS_OPENAI_API_KEY
    • AI_LENS_DASHSCOPE_API_BASE
    • AI_LENS_DASHSCOPE_API_KEY

Example for an OpenAI-compatible proxy (such as cliproxyapi):

AI_LENS_MOCK_MODE=false
AI_LENS_PROVIDER_MODELS=openai/your-model-name
AI_LENS_OPENAI_API_BASE=https://your-proxy-base-url/v1
AI_LENS_OPENAI_API_KEY=your-sk

3. Backend Run

uvicorn app.main:app --reload

Key endpoints:

GET /api/v1/health
GET /api/v1/options
POST /api/v1/audit

The audit request is multipart/form-data with these fields:

  • text (optional)
  • file (optional, txt/docx/pdf)
  • max_rounds (optional)
  • provider (optional, must be in /api/v1/options)
  • genre (optional: auto|academic|general_media)

Example:

curl -X POST http://127.0.0.1:8000/api/v1/audit \
  -F "text=This is the article content to audit." \
  -F "max_rounds=1"
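The same request can be made from Python. This sketch assumes the third-party `requests` client is installed; it sends the text-only form (a file upload would instead go through the `files=` argument to produce multipart/form-data):

```python
def build_audit_fields(text=None, max_rounds=None, provider=None, genre=None):
    """Collect only the optional form fields that were actually set."""
    candidates = {"text": text, "max_rounds": max_rounds,
                  "provider": provider, "genre": genre}
    return {k: str(v) for k, v in candidates.items() if v is not None}

def run_audit(base_url="http://127.0.0.1:8000", **fields):
    """POST the form to /api/v1/audit and return the parsed JSON response."""
    import requests  # third-party HTTP client, assumed installed
    resp = requests.post(f"{base_url}/api/v1/audit",
                         data=build_audit_fields(**fields))
    resp.raise_for_status()
    return resp.json()
```

Usage mirrors the curl example: `run_audit(text="This is the article content to audit.", max_rounds=1)`.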

4. Frontend Setup

cd web
npm install
npm run dev

Frontend dev server: http://127.0.0.1:5173
Proxy target: http://127.0.0.1:8000

5. Architecture

  • app/services/extraction.py: file extraction and parse-error handling
  • app/services/segmentation.py: language detection and paragraph splitting
  • app/agents/graph.py: LangGraph loop and stop conditions
  • app/agents/executor.py: role execution (LiteLLM + mock mode)
  • app/services/audit_service.py: backend orchestration
  • web/src/store/useAuditStore.ts: frontend state and request flow
  • web/src/App.tsx: single-page layout and result presentation
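As an illustration of the segmentation stage, here is a naive stand-in for app/services/segmentation.py: split normalized text on blank lines and assign stable paragraph IDs. (Hypothetical sketch; the real module also performs language detection.)

```python
import re

def segment(text: str) -> list[dict]:
    """Split text into non-empty paragraphs and tag each with an ID."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return [{"id": f"p{i}", "text": p} for i, p in enumerate(paragraphs, start=1)]
```

These IDs are the kind of handle the agents and the frontend trace can use to refer back to specific paragraphs.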

6. How It Works

flowchart TD
    A[User Input: text or file] --> B[FastAPI /api/v1/audit]
    B --> C[Extraction: txt/docx/pdf to plain text]
    C --> D[Segmentation: language + paragraph IDs]
    D --> E[Genre Resolver: auto or manual]
    E --> F[LangGraph Coordinator]

    F --> G[Detector Agent]
    G --> H[Challenger Agent]
    H --> I[Arbiter Agent]
    I --> J{Early Stop?}
    J -- No --> G
    J -- Yes --> K[Aggregate document score + risk]

    K --> L[Response JSON: document + paragraphs + trace]
    L --> M[Frontend: summary cards + paragraph details]

Runtime steps:

  1. User submits pasted text or uploads a document.
  2. Backend extracts and normalizes text, then splits into paragraphs.
  3. System resolves genre (academic or general_media) and selects role-specific prompts.
  4. The Detector -> Challenger -> Arbiter chain runs in adaptive reflection rounds.
  5. Stop when early-stop policy is satisfied or max rounds reached.
  6. Return document-level score/risk and paragraph-level trace for UI rendering.
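Steps 4-5 above can be sketched as a simple loop. The role callables and the stop predicate here are placeholders, not the project's LangGraph node signatures:

```python
def run_reflection(paragraphs, detector, challenger, arbiter,
                   max_rounds, should_stop):
    """Run Detector -> Challenger -> Arbiter rounds until the stop
    predicate accepts the verdict history or max_rounds is reached."""
    history = []
    for _round in range(max_rounds):
        findings = detector(paragraphs, history)      # initial per-paragraph calls
        objections = challenger(findings, history)    # pushback on the findings
        verdict = arbiter(findings, objections)       # reconciled round verdict
        history.append(verdict)
        if should_stop(history):                      # early-stop policy (section 7)
            break
    return history[-1]
```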

7. Stop Policy

Early stop requires both:

  1. stable decisions (small score delta and unchanged label signature)
  2. confidence threshold reached (document_confidence >= AI_LENS_CONFIDENCE_THRESHOLD)

The graph also includes dispute stagnation protection to avoid useless loops.
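A minimal sketch of the two-part check, comparing the current round's verdict against the previous one (field names and the stability delta here are illustrative; only the confidence threshold maps directly to AI_LENS_CONFIDENCE_THRESHOLD):

```python
def early_stop(prev: dict, curr: dict, threshold: float, delta: float = 0.05) -> bool:
    """Stop only when decisions are stable AND confidence is high enough."""
    stable = (
        abs(curr["document_score"] - prev["document_score"]) <= delta
        and curr["label_signature"] == prev["label_signature"]
    )
    confident = curr["document_confidence"] >= threshold
    return stable and confident
```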

About

AI-Lens, an Article AI Concentration Auditor. Paste text or upload a document to run a multi-agent reflective audit. Results are summary-first and provide paragraph-level trajectories from the Detector, Challenger, and Arbiter.
