An AI-powered system that automatically grades exams, identifies learning gaps, and creates personalized study plans. Built with Google's Agent Development Kit (ADK).
Student submits exam → AI grades it → Identifies weaknesses → Creates personalized study plan
Key Features:
- ✅ Automated grading with detailed feedback
- ✅ Identifies recurring learning gaps across multiple exams
- ✅ Creates personalized study recommendations
- ✅ Role-based access (students see their data, teachers see all students)
- ✅ Processes both text and image-based exams
- ✅ Tracks learning progress over time
- Python 3.13+
- uv (Python package manager)
- Google API Key (get one at https://aistudio.google.com/apikey)
- Clone and setup:
```bash
cd ai_agent_capstone
uv sync
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
- Configure API Key:
```bash
# Create .env file in feedback_agent/ directory
cat > feedback_agent/.env << 'EOF'
GOOGLE_GENAI_USE_VERTEXAI=0
GOOGLE_GEMINI_BASE_URL="https://generativelanguage.googleapis.com"
GOOGLE_API_KEY=your_api_key_here
MODEL_NAME="gemini-1.5-flash"
EOF
```

**Important:** Replace `your_api_key_here` with your actual Google API key.
- Verify installation:
```bash
python -c "import google.adk; print('✅ ADK installed successfully')"
```

Try the interactive demo:

```bash
python demo.py
```

Or use the conversational interface:
```bash
cd feedback_agent
adk web
```

Then open http://localhost:8000 in your browser.
Demo Options - Select from 5 demos:
1. Basic Exam Processing - See how grading works
2. Role-Based Access Control - Student vs Teacher permissions
3. Image Processing - Upload exam photos
4. Metrics Tracking - Monitor system performance
5. Complete Workflow - End-to-end demo
```python
import asyncio

from feedback_agent.agent import FeedbackSystem


async def grade_exam():
    # Initialize system
    system = FeedbackSystem()

    # Register student
    student_id = system.register_student("John Doe")

    # Process exam
    result = await system.process_exam(
        student_id=student_id,
        exam_content="1. What is 5 + 3? Answer: 8",
        answer_key="1. 8",
        subject="Mathematics",
        user_id="teacher_001",
    )

    # View results
    print(f"Score: {result['total_score']}/{result['max_score']}")
    print(f"Weaknesses: {result['weaknesses']}")
    print(f"Recommendations: {result['recommendations']}")


asyncio.run(grade_exam())
```

```python
# Continuing with the `system` and `student_id` from above

# Get recurring weaknesses across multiple exams
weaknesses = system.get_student_recurring_weaknesses(student_id)

# Calculate learning velocity
velocity = system.get_student_learning_velocity(student_id)
print(f"Improvement rate: {velocity['improvement_rate']:.2f}% per exam")

# Get personalized recommendations
recommendations = system.get_student_review_recommendations(student_id)
```

```python
import asyncio

from feedback_agent.agents.image_processing_agent import ImageProcessingAgentRefactored


async def extract_exam():
    agent = ImageProcessingAgentRefactored()
    result = await agent.process_image(image_path="path/to/exam.png")

    # Extract data from image
    print(f"Subject: {result['subject']}")
    print(f"Questions: {result['exam_content']}")
    print(f"Answers: {result['answer_key']}")


asyncio.run(extract_exam())
```

Run all tests to verify everything works:
```bash
# Run all tests
python -m pytest tests/ -v

# Or test specific components
python tests/test_refactored_agent.py   # Basic agent pipeline
python tests/test_authorization.py      # Role-based access
python tests/test_image_processing.py   # Image processing
python tests/test_memory.py             # Cross-session tracking
python tests/test_observability.py      # Metrics tracking
```

Choose the right model for your use case in `feedback_agent/.env`:
| Model | Rate Limit | Best For |
|---|---|---|
| `gemini-1.5-flash` | 15 req/min | Recommended - Fast, good quality |
| `gemini-1.5-pro` | 2 req/min | Higher quality, slower |
| `gemini-2.0-flash-exp` | 15 req/min | Experimental features |
| `gemini-2.5-pro` | 2 req/min | Best quality, very restricted |
For batch operations (evaluations, multiple exams), use `gemini-1.5-flash`.
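Even with the faster model, batch runs can still hit the per-minute quota. A simple client-side exponential backoff keeps a batch job alive; this is an illustrative sketch (the helper function and the way the rate-limit error surfaces are assumptions, not part of the project):

```python
import random
import time


def call_with_backoff(fn, max_retries=4, base_delay=4.0):
    """Retry fn() with exponential backoff on rate-limit errors.

    Hypothetical helper: adapt the exception type and message check
    to whatever error your Gemini client actually raises.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError as exc:  # e.g. RESOURCE_EXHAUSTED surfaced as an error
            if "RESOURCE_EXHAUSTED" not in str(exc) or attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter: 4s, 8s, 16s, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

At 15 req/min, a base delay of 4 seconds keeps a sequential batch safely under the quota.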
```python
system = FeedbackSystem(
    enable_metrics=True,
    metrics_file="exam_metrics.jsonl",
)

# View metrics
system.print_metrics_summary()
```

```
ai_agent_capstone/
├── feedback_agent/
│   ├── agents/                    # Agent implementations
│   │   ├── grading_agent.py
│   │   ├── analysis_agent.py
│   │   ├── recommendation_agent.py
│   │   └── image_processing_agent.py
│   ├── agent.py                   # Main system (FeedbackSystem, pipeline, root_agent)
│   ├── conversational_agent.py    # ADK Web UI conversational agent
│   ├── conversational_tools.py    # 8 tools for conversational interface
│   ├── auth.py                    # Authentication
│   ├── authorization.py           # Access control
│   ├── database.py                # Data persistence
│   ├── memory.py                  # Cross-session tracking
│   ├── plugins.py                 # Metrics & logging
│   └── custom_llm.py              # Custom Gemini wrapper
├── tests/                         # Test suites
├── evals/                         # Evaluation framework
├── docs/                          # Documentation
│   ├── ARCHITECTURE.md
│   ├── CAPSTONE_REPORT.md
│   └── HOW-TO-USE.md
├── .adr/                          # Architecture Decision Records
├── demo.py                        # Interactive demo
└── README.md                      # Project overview
```
- Compares student answers against answer keys
- Provides detailed feedback on mistakes
- Supports partial credit
- Identifies specific topics where student struggles
- Tracks severity (low, medium, high)
- Detects recurring patterns across exams
- Creates targeted study plans
- Prioritizes based on weakness severity and frequency
- Includes time estimates for each learning objective
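The prioritization described above, ranking by both severity and frequency, might look like the following sketch (the field names `topic`, `severity`, and `frequency` are illustrative assumptions, not the project's actual schema):

```python
# Hypothetical weights; the real recommendation agent may score differently.
SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}


def prioritize_weaknesses(weaknesses):
    """Rank weaknesses by severity weight x frequency, highest first.

    `weaknesses` is a list of dicts with assumed fields
    'topic', 'severity', and 'frequency' (occurrences across exams).
    """
    return sorted(
        weaknesses,
        key=lambda w: SEVERITY_WEIGHT[w["severity"]] * w["frequency"],
        reverse=True,
    )


ranked = prioritize_weaknesses([
    {"topic": "Factorization", "severity": "low", "frequency": 4},
    {"topic": "Quadratic Equations", "severity": "high", "frequency": 2},
])
# "Quadratic Equations" (3 * 2 = 6) outranks "Factorization" (1 * 4 = 4)
```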
- Tracks student progress over time
- Calculates learning velocity and improvement rates
- Identifies subjects mastered vs struggling
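One plausible way to compute an improvement rate like the one reported above is the average score delta per exam; this is a sketch under that assumption, not necessarily how the project's MemoryService computes it:

```python
def improvement_rate(scores):
    """Average percentage-point change per exam over a chronological score history.

    Returns 0.0 when fewer than two exams exist (no trend yet).
    """
    if len(scores) < 2:
        return 0.0
    deltas = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return sum(deltas) / len(deltas)


# Three exams: 60% -> 70% -> 85% averages +12.50 points per exam
print(f"Improvement rate: {improvement_rate([60, 70, 85]):.2f}% per exam")
```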
- Students: View only their own data
- Teachers: View all students, class statistics
- Admins: Full system access
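The access rules above boil down to a small permission check. A minimal sketch (the project's `authorization.py` may implement this differently):

```python
ROLES = {"student", "teacher", "admin"}


def can_view_student(user_id: str, role: str, target_student_id: str) -> bool:
    """Students may only view their own data; teachers and admins see everyone."""
    if role not in ROLES:
        raise ValueError(f"unknown role: {role}")
    if role == "student":
        return user_id == target_student_id
    return True  # teacher or admin


assert can_view_student("s1", "student", "s1")        # own data: allowed
assert not can_view_student("s1", "student", "s2")    # another student: denied
assert can_view_student("t1", "teacher", "s2")        # teacher: allowed
```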
- Processes text-based exams
- Extracts content from images (PNG, JPEG, GIF, WebP)
- Auto-detects exam structure
- Custom metrics plugin for exam processing
- Token usage tracking
- Performance monitoring
- Structured logging
- Natural language interaction via `adk web`
- Role-based authentication (teacher/student)
- 8 tools for complete exam processing workflow
- Image upload support (drag & drop or URLs)
The conversational interface provides a user-friendly way to interact with LearnPath Agent through natural language.
```bash
cd feedback_agent
adk web
```

Open http://localhost:8000 in your browser.
At the start of each conversation, identify yourself:
```
User: Hi, I'm Ms. Johnson, a teacher

Agent: Welcome, Ms. Johnson! As a teacher, you can:
- Grade exams from images or text
- View any student's results
- See class-wide analytics

What would you like to do?
```
| Action | How to Request |
|---|---|
| Grade image exam | Upload image + "Grade this for [student name]" |
| Grade text exam | Provide questions, answers, and answer key |
| View student results | "Show me [student name]'s results" |
| Class analytics | "Show me class statistics" |
| List students | "List all students" |
| Action | How to Request |
|---|---|
| View own results | "Show me my results" |
| Get recommendations | "What should I study?" |
You can upload exam images in several ways:
- Drag & Drop: Drag an image directly into the chat
- Paste URL: Provide a public URL to an image
- File Path: When running locally, provide a local file path
Example:
```
User: [uploads exam.png] Grade this math exam for Alice

Agent: I've processed the exam. Alice scored 8/10 (80%).

Areas for improvement:
- Quadratic Equations (medium)
- Factorization (low)
```
- ARCHITECTURE.md - System design and architecture
- CAPSTONE_REPORT.md - Project report
- .adr/ - Architecture Decision Records
The system uses a hybrid multi-agent pipeline:
```
Input → GradingAgent → LoopAgent(Analysis + Validation) → ParallelAgent(3 Recommenders) → SynthesisAgent → Output
```
Pipeline Stages:
- GradingAgent: Compares answers against answer key, calculates scores
- LoopAgent (Quality Assurance):
- AnalysisAgent: Identifies conceptual weaknesses (not question text)
- ValidationAgent: Ensures analysis quality, retries if needed (up to 5x)
- ParallelAgent (Recommendations): Runs 3 agents simultaneously:
- Study Materials Agent: Curates learning resources
- Practice Problems Agent: Generates targeted exercises
- Learning Strategy Agent: Develops study techniques
- SynthesisAgent: Combines parallel outputs into unified learning plan
- MemoryService: Tracks patterns across multiple exams
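The staged control flow above can be sketched in plain Python. This mock stands in for the ADK agents purely to show the sequential → loop → parallel → synthesis ordering; it is not the ADK API, and the agent bodies are placeholders:

```python
import asyncio


async def grading_agent(exam):
    return {"score": 8, "max": 10}                  # stand-in for GradingAgent


async def analysis_agent(graded):
    return {"weaknesses": ["Quadratic Equations"]}  # stand-in for AnalysisAgent


def validation_agent(analysis):
    return bool(analysis["weaknesses"])             # quality gate


async def recommender(name, analysis):
    return f"{name} plan for {analysis['weaknesses'][0]}"


async def pipeline(exam):
    graded = await grading_agent(exam)              # 1. sequential: grade first
    for _ in range(5):                              # 2. loop: retry analysis up to 5x
        analysis = await analysis_agent(graded)
        if validation_agent(analysis):
            break
    recs = await asyncio.gather(                    # 3. parallel: 3 recommenders at once
        recommender("materials", analysis),
        recommender("practice", analysis),
        recommender("strategy", analysis),
    )
    return {"graded": graded, "plan": list(recs)}   # 4. synthesis (simplified)


result = asyncio.run(pipeline("1. What is 5 + 3? Answer: 8"))
```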
Built using Google ADK best practices:
- Runner pattern for session management
- State flow with output_key pattern
- LoopAgent for quality assurance with validation
- ParallelAgent for efficient recommendation generation
- Custom plugins for observability
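The output_key pattern can be illustrated with a shared session-state dict: each step writes its result under its output_key, and downstream steps read those keys. This is a simplified mock of the idea, not the ADK session API:

```python
state = {}  # stand-in for ADK session state


def run_step(output_key, fn, **inputs):
    """Run one pipeline step, resolving inputs from state and storing the result.

    An input value that names an existing state key is replaced by that
    key's value, mimicking how a downstream agent reads an upstream
    agent's output_key.
    """
    resolved = {k: state.get(v, v) if isinstance(v, str) else v
                for k, v in inputs.items()}
    state[output_key] = fn(**resolved)
    return state[output_key]


# "GradingAgent" writes 'grading_result'; "AnalysisAgent" reads it.
run_step("grading_result", lambda exam: {"score": 8}, exam="raw exam text")
run_step("analysis", lambda graded: {"weak": graded["score"] < 10},
         graded="grading_result")
```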
Run the evaluation suite:
```bash
python evals/run_evaluation.py
```

Current Performance:
- Grading Accuracy: 55%
- Analysis Quality: 23% (needs improvement with better models)
- Recommendation Relevance: 89%
See EVALUATION_SUMMARY.md for detailed results.
**Problem:** `RESOURCE_EXHAUSTED` errors

**Solution:** Switch to `gemini-1.5-flash` in your `.env` file:

```bash
MODEL_NAME="gemini-1.5-flash"
```

**Problem:** `ModuleNotFoundError: No module named 'feedback_agent'`

**Solution:** Activate the virtual environment and reinstall:

```bash
source .venv/bin/activate
uv sync
```

**Problem:** `PERMISSION_DENIED` or `INVALID_ARGUMENT`

**Solution:**
- Get an API key from https://aistudio.google.com/apikey
- Update `feedback_agent/.env` with the correct key
- Ensure there are no extra quotes or spaces
This is a capstone project for educational purposes. For detailed technical information:
- See ARCHITECTURE.md for system design
- Review .adr/ for architectural decisions
- Check CAPSTONE_REPORT.md for project overview
Demonstrates best practices from Kaggle AI Agents course:
| Concept | Implementation |
|---|---|
| Agent Basics | Hybrid pipeline: Sequential + Loop + Parallel agents |
| Tools | 8 conversational tools with ToolContext |
| Runner & Sessions | SessionService for state management |
| State Management | output_key pattern for agent communication |
| LoopAgent | Quality assurance with ValidationAgent (up to 5 retries) |
| ParallelAgent | 3 specialized recommendation agents running concurrently |
| Multimodal | Gemini vision for image processing (ADK Web UI upload) |
| Observability | Custom ExamMetricsPlugin + structured logging |
Educational project for Kaggle AI Agents course capstone.
- Kaggle AI Agents Course instructors
- Google Agent Development Kit (ADK) team
- Google Gemini API