# ModalX-AI Challenge: Presentation Assessment System

**Team Name:** NL Circuits
**Team Members:** Muntasir Islam, Nazmus Sakib
**Student IDs:** 0242220005131010, 0242220005131017
**Event:** ModalX-AI Challenge (Daffodil International University)

---
## Project Overview

ModalX is an automated grading system for student presentations. It replaces subjective human grading with data-driven metrics. The system analyzes three key areas:

1. **Audio:** Speaking pace, volume, and pitch variation.
2. **Visual:** Eye contact, body posture, and slide detection (analyzing both slide content and visuals).
3. **Emotion:** Vocal tone and confidence levels.

This repository is organized into three folders corresponding to the competition phases.

---
## Repository Structure

### 1. Phase_1_Speech_Analysis
This folder contains the logic for processing audio.
* **Focus:** Speech Clarity, Confidence, and Delivery.
* **Key Metrics:**
    * **Words Per Minute (WPM):** measures speaking speed.
    * **Pitch Variation:** detects whether the voice is monotone.
    * **Pause Ratio:** measures hesitation.
    * **Emotion Analysis:** detects emotions and estimates a confidence score.
* **Tech Stack:** OpenAI Whisper, Librosa, TensorFlow.
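
As a rough illustration of how such delivery metrics can be computed, the sketch below derives WPM and a pause ratio with plain NumPy on a synthetic signal. The function names, frame size, and the -40 dB silence threshold are illustrative assumptions; the actual pipeline runs Whisper and Librosa on real recordings.

```python
# Illustrative Phase 1-style delivery metrics (assumed names, not the
# competition code): WPM from a transcript, pause ratio from frame energy.
import numpy as np

def words_per_minute(transcript: str, duration_s: float) -> float:
    """Speaking speed: word count normalized to one minute."""
    return len(transcript.split()) / (duration_s / 60.0)

def pause_ratio(signal: np.ndarray, sr: int, frame: int = 2048,
                silence_db: float = -40.0) -> float:
    """Fraction of frames whose RMS energy falls below a silence threshold."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # Energy in dB relative to the loudest frame.
    db = 20 * np.log10(np.maximum(rms, 1e-10) / (np.max(rms) + 1e-10))
    return float(np.mean(db < silence_db))

# Toy example: 1 s of a 220 Hz tone followed by 1 s of silence at 16 kHz.
sr = 16000
tone = 0.5 * np.sin(2 * np.pi * 220 * np.linspace(0, 1, sr, endpoint=False))
signal = np.concatenate([tone, np.zeros(sr)])

wpm = words_per_minute("good morning everyone welcome to our talk", 2.0)
ratio = pause_ratio(signal, sr)
print(round(wpm), round(ratio, 2))
```

In the real system the transcript would come from Whisper and the signal from the uploaded recording; only the thresholding idea carries over.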
### 2. Phase_2_Visual_Analysis
This folder contains the computer vision logic (face and body movement analysis, plus slide content/visual analysis).
* **Focus:** Body Language, Engagement, and Slide Content.
* **Key Metrics:**
    * **Eye Contact:** tracks whether the speaker is looking at the camera.
    * **Posture Stability:** checks for slouching, movement, hand movements, and gestures.
    * **Slide Detection:** differentiates between a person speaking and screen-shared slides; measures slide content density and visual content ratios.
* **Tech Stack:** MediaPipe Holistic, OpenCV.
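
A minimal sketch of the eye-contact idea, using normalized landmark x-coordinates of the kind MediaPipe Holistic returns per frame. The symmetry heuristic and tolerance value are assumptions for illustration, not the competition implementation.

```python
# Illustrative gaze-toward-camera check on normalized face landmarks
# (x in [0, 1]). Heuristic and threshold are assumed for this sketch.

def facing_camera(left_eye_x: float, right_eye_x: float, nose_x: float,
                  tolerance: float = 0.15) -> bool:
    """Treat the head as frontal when the nose sits near the midpoint
    of the two eyes, relative to the eye-to-eye distance."""
    eye_span = abs(right_eye_x - left_eye_x)
    if eye_span == 0:
        return False
    midpoint = (left_eye_x + right_eye_x) / 2
    return abs(nose_x - midpoint) / eye_span < tolerance

def eye_contact_ratio(frames) -> float:
    """Fraction of frames in which the speaker faces the camera."""
    return sum(facing_camera(*f) for f in frames) / len(frames)

frames = [
    (0.40, 0.60, 0.50),  # frontal: nose centered between the eyes
    (0.40, 0.60, 0.51),  # near-frontal: small offset, within tolerance
    (0.40, 0.60, 0.58),  # head turned: nose shifted toward one eye
]
print(eye_contact_ratio(frames))
```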
### 3. Phase_3_Full_System
This is the final integrated application. It combines Phase 1 and Phase 2, adds the Emotion Engine, and generates the final report.
* **Focus:** System Integration and Reporting.
* **Features:**
    * **Emotion Engine:** A CNN model that detects happy, nervous, or neutral tones.
    * **Content Analysis:** Scans the transcript for professional vocabulary.
    * **PDF Report:** Automatically generates a scorecard with a final grade.
* **Main File:** `app.py` (Streamlit Dashboard).
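
To illustrate how a final grade could be assembled from the three analysis areas, the sketch below blends sub-scores into a letter grade. The weights and grade boundaries here are placeholders, not the rubric used by `app.py`.

```python
# Hypothetical score aggregation for the final scorecard. Weights and
# cutoffs are placeholders chosen for this sketch.

def final_score(audio: float, visual: float, emotion: float,
                weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted blend of the three sub-scores, each on a 0-100 scale."""
    wa, wv, we = weights
    return audio * wa + visual * wv + emotion * we

def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade using placeholder cutoffs."""
    for cutoff, grade in [(80, "A"), (70, "B"), (60, "C"), (50, "D")]:
        if score >= cutoff:
            return grade
    return "F"

score = final_score(audio=82, visual=75, emotion=70)
print(score, letter_grade(score))
```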

---
## Installation Guide

Follow these steps to set up the project on your local machine.
**1. Clone the Repository**
```bash
git clone https://github.com/muntasir-islam/ModalX-AI-Challenge.git
cd ModalX-AI-Challenge
```