An intelligent learning companion that teaches Python concepts through Bloom's Taxonomy
Progressive pedagogy from memorization to mastery • Multi-agent AI system • CLI and API interfaces
Bloom Tutor is a multi-agent AI system that guides learners through Python concepts using Bloom's Taxonomy—a pedagogical framework that progresses from basic recall (Remember) through Understand, Apply, Analyze, Evaluate, to creative synthesis (Create).
Unlike traditional chatbots, Bloom Tutor adaptively assesses your understanding and adjusts its teaching strategy, ensuring you develop deep comprehension rather than surface-level knowledge.
- Adaptive Pedagogy: Progresses through Bloom's 6 levels based on demonstrated mastery
- Multi-Agent System: Specialized Judge, Tutor, and Feedback agents collaborate for effective learning
- Session Resumption: Pause and resume learning sessions without losing progress
- Interactive CLI: Rich terminal interface with streaming responses
- Resilient Architecture: Automatic LLM fallback chains across providers (OpenAI, Anthropic, Google)
- Cost Control: Token-based rate limiting with sliding window algorithm
- Observability: Per-agent tracing via LangFuse with detailed token/cost tracking
- RESTful API: FastAPI backend with streaming SSE support
Prerequisites: Python 3.12+, Redis, OpenAI API key
```bash
# Backend setup (5 minutes)
uv sync                       # Install dependencies
cp .env.example .env          # Configure (add your OPENAI_API_KEY)
docker-compose up -d redis    # Start Redis

# Run the CLI
python -m app.cli.main --topic "Python Lists"

# Or run the API server
uvicorn app.main:app --reload   # http://localhost:8000/docs

# Frontend (optional; requires Node.js)
cd frontend
npm install && npm run dev      # http://localhost:5173
```

📖 For detailed setup, troubleshooting, and API usage, see QUICKSTART.md
| Layer | Technologies |
|---|---|
| Backend | FastAPI, Uvicorn, Pydantic |
| LLM Orchestration | PydanticAI with multi-provider support |
| State & Caching | Redis (sessions, rate limiting) |
| Observability | LangFuse (tracing, token tracking) |
| CLI | Typer, Rich, Prompt-toolkit |
| Package Manager | uv |
Three specialized AI agents collaborate to create an adaptive learning experience:
```mermaid
graph TD
    A[User Input] --> B{Should Judge?}
    B -- Yes --> C[👨‍⚖️ Judge Agent<br/>Evaluates mastery]
    C --> D[🤔 Decision Policy<br/>ADVANCE/RETRY/HINT]
    C --> F[🗣️ Feedback Agent<br/>Student-friendly translation]
    B -- No --> D
    F --> E[🧑‍🏫 Tutor Agent<br/>Next question/explanation]
    D --> E
    E --> A
    G[🧠 Redis State<br/>Sessions & History] <-.-> C
    G <-.-> D
    G <-.-> E
    style C fill:#f9f,stroke:#333,stroke-width:2px
    style E fill:#9cf,stroke:#333,stroke-width:2px
    style F fill:#9f9,stroke:#333,stroke-width:2px
```
How it works:
- Judge Agent evaluates if the user has mastered the current Bloom level
- Feedback Agent translates technical assessments into encouraging, student-friendly feedback
- Decision Policy determines next action: advance to higher level, retry, provide hint, or step back
- Tutor Agent crafts the next question or explanation based on the decision
- Redis persists session state, enabling resumption and context retention
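The decision policy described above can be sketched as a small pure function. The thresholds, attempt cap, and `Action` names here are hypothetical illustrations; the real policy lives in the codebase:

```python
from enum import Enum

BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

class Action(Enum):
    ADVANCE = "advance"      # move to the next Bloom level
    HINT = "hint"            # close, but needs a nudge
    RETRY = "retry"          # ask again at the same level
    STEP_BACK = "step_back"  # drop to an easier level

def decide(mastery_score: float, attempts: int) -> Action:
    """Map the Judge's mastery score and attempt count to a next action.

    Thresholds are illustrative, not the project's actual values.
    """
    if mastery_score >= 0.8:
        return Action.ADVANCE
    if mastery_score >= 0.5:
        return Action.HINT
    if attempts < 3:
        return Action.RETRY
    return Action.STEP_BACK
```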
Token-Based Rate Limiting
- Sliding window algorithm prevents abuse while managing API costs
- Pre-flight budget checks avoid unnecessary LLM calls
- Configurable limits per hour/day with per-user tracking
LLM Resilience
- Automatic failover across multiple providers (OpenAI → Anthropic → Google)
- Configurable fallback chains per agent type
- Provider-agnostic design for maximum uptime
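At its core, the failover logic tries each provider in the configured order and returns the first success. A sketch, where `_flaky` and `_stable` are stand-ins for real provider clients:

```python
def call_with_fallback(prompt, providers):
    """Try each (name, callable) provider in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))  # record and fall through to the next
    raise RuntimeError(f"all providers failed: {errors}")

# Hypothetical demo: the first provider is down, the second answers.
def _flaky(prompt):
    raise TimeoutError("provider unavailable")

def _stable(prompt):
    return f"answer to {prompt}"

provider, reply = call_with_fallback("hi", [("openai", _flaky), ("anthropic", _stable)])
```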
Observability
- Per-agent LangFuse tracing with token/cost tracking
- Rich metadata tags (`user_id`, `session_id`, `bloom_level`)
- Fallback chain tracking for reliability insights
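The shape of this per-agent metadata can be sketched in memory. The real system exports it to LangFuse via its SDK; the class names and figures below are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AgentUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost_usd: float = 0.0

@dataclass
class SessionTrace:
    """Aggregates the kind of per-agent usage attached to a LangFuse trace."""
    tags: dict                                 # e.g. user_id, session_id, bloom_level
    per_agent: dict = field(default_factory=dict)

    def record(self, agent: str, prompt_tokens: int,
               completion_tokens: int, cost_usd: float) -> None:
        usage = self.per_agent.setdefault(agent, AgentUsage())
        usage.prompt_tokens += prompt_tokens
        usage.completion_tokens += completion_tokens
        usage.cost_usd += cost_usd

# Hypothetical session: Judge and Tutor each make one call.
trace = SessionTrace(tags={"user_id": "u1", "session_id": "s1", "bloom_level": "Apply"})
trace.record("judge", prompt_tokens=120, completion_tokens=40, cost_usd=0.0006)
trace.record("tutor", prompt_tokens=300, completion_tokens=150, cost_usd=0.0021)
```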
For implementation details, see docs/
Key architectural decisions are documented with rationale, alternatives, and tradeoffs:
```mermaid
graph TB
    subgraph "Core Infrastructure"
        ADR004[ADR-004: PydanticAI<br/>Type-safe LLM orchestration]
        ADR002[ADR-002: Redis<br/>State management]
        ADR001[ADR-001: Langfuse<br/>Observability]
    end
    subgraph "Reliability & Performance"
        ADR003[ADR-003: Multi-Provider<br/>Fallback chains]
    end
    subgraph "Learning System"
        ADR005[ADR-005: Bloom's Taxonomy<br/>Progression model]
    end
    ADR004 -->|orchestrates| ADR003
    ADR003 -->|calls| LLM[OpenAI/Gemini/DeepSeek]
    ADR004 -->|traces to| ADR001
    ADR005 -->|uses| ADR004
    ADR005 -->|stores state in| ADR002
    ADR001 -->|monitors| ADR003
    style ADR001 fill:#e1f5ff,stroke:#01579b
    style ADR002 fill:#fff3e0,stroke:#e65100
    style ADR003 fill:#f3e5f5,stroke:#4a148c
    style ADR004 fill:#e8f5e9,stroke:#1b5e20
    style ADR005 fill:#fce4ec,stroke:#880e4f
```
📋 Decision Index:
| ADR | Decision | Why? | Impact |
|---|---|---|---|
| 001 | Langfuse for Observability | 2-3 days vs weeks for custom | Cost tracking, A/B testing |
| 002 | Redis for State | 2-5ms latency, TTL support | Session resumption, 5K+ users |
| 003 | Multi-Provider Fallback | 99.8% uptime guarantee | $0.048/conversation |
| 004 | PydanticAI over LangChain | 100% type safety | 30min/agent vs 2+ hours |
| 005 | Bloom's Taxonomy | Pedagogically sound | 75% reach Apply level |
➡️ View all ADRs with detailed alternatives analysis, success metrics, and implementation plans.
- QUICKSTART.md - Setup, API usage, troubleshooting
- docs/adr/ - Architecture Decision Records (why we made key choices)
- docs/refactor-v2.md - Staff engineer code review and refactoring roadmap
- docs/ - Implementation details (observability, reliability, memory, etc.)
- prompts/ - System design, domain knowledge
- API Docs - `http://localhost:8000/docs` (when running)
Contributions welcome! Please:
- Fork and create a feature branch
- Run tests: `pytest tests/ -v`
- Follow conventional commits: `feat:`, `fix:`, `docs:`
- Submit a pull request
MIT License - see LICENSE file for details.
Status: ✅ Phase 1.5 Complete • Version: 0.1.0 • Last Updated: 2026-01-30