3 files changed, +160 −0

# Deploying ModalX v2 on Hugging Face Spaces (FREE GPU)

## Quick Deploy Steps

### 1. Create a Space

1. Go to [huggingface.co/spaces](https://huggingface.co/spaces)
2. Click **"Create new Space"**
3. Settings:
   - **Name:** `modalx-v2`
   - **SDK:** `Streamlit`
   - **Hardware:** `T4 small` (FREE GPU!)
   - **Visibility:** Public or Private

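The same settings can be applied programmatically. A minimal sketch using `huggingface_hub` — an assumption that the package is installed and you are logged in via `huggingface-cli login`; `space_repo_id` and `create_space` are hypothetical helper names:

```python
def space_repo_id(username: str, name: str = "modalx-v2") -> str:
    """Build the "user/space" identifier used by the Hub APIs."""
    return f"{username}/{name}"

def create_space(username: str, name: str = "modalx-v2"):
    # Lazy import: huggingface_hub is an optional dependency here.
    from huggingface_hub import create_repo
    return create_repo(
        repo_id=space_repo_id(username, name),
        repo_type="space",
        space_sdk="streamlit",  # must match the SDK chosen in the UI
    )
```

Hardware and visibility can then be adjusted from the Space's settings page as described above.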
### 2. Upload Files

Upload these files from `modalx_v2/` to your Space:

```
app.py
backend.py
requirements.txt (use spaces/requirements.txt)
README.md (use spaces/README.md for HF format)
models/
├── __init__.py
├── transformer_emotion.py
├── action_unit_detector.py
├── gesture_stgcn.py
├── prosody_analyzer.py
├── content_bert.py
└── slide_vit.py
weights/
└── (your trained .pt files)
```

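A quick stdlib preflight check that the tree above is complete can save a failed build. A sketch, with the file list taken from the tree above (`missing_files` is a hypothetical helper; `weights/` contents vary per training run, so they are not checked):

```python
from pathlib import Path

# Files the Space needs, per the tree above.
REQUIRED = [
    "app.py",
    "backend.py",
    "requirements.txt",
    "README.md",
    "models/__init__.py",
    "models/transformer_emotion.py",
    "models/action_unit_detector.py",
    "models/gesture_stgcn.py",
    "models/prosody_analyzer.py",
    "models/content_bert.py",
    "models/slide_vit.py",
]

def missing_files(root: str = ".") -> list[str]:
    """Return the required files not present under root."""
    base = Path(root)
    return [f for f in REQUIRED if not (base / f).is_file()]
```

Run it from the directory you are about to upload; an empty list means the layout matches.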
### 3. Using Git

```bash
# Clone your Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/modalx-v2
cd modalx-v2

# Copy files
cp -r /path/to/modalx_v2/* .
cp spaces/README.md .
cp spaces/requirements.txt .

# Push
git add .
git commit -m "Initial deploy"
git push
```

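As an alternative to the git workflow above, `huggingface_hub` can push a prepared folder directly. A sketch under the same assumptions as before (package installed, token configured; `push_to_space` is a hypothetical name):

```python
def push_to_space(local_dir: str, username: str, name: str = "modalx-v2"):
    # Lazy import: optional dependency, requires network access and auth.
    from huggingface_hub import upload_folder
    return upload_folder(
        folder_path=local_dir,
        repo_id=f"{username}/{name}",
        repo_type="space",
    )
```

This uploads everything under `local_dir`, so assemble the files (including `spaces/README.md` and `spaces/requirements.txt` renamed into place) before calling it.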
### 4. Wait for Build

The Space builds automatically; expect it to take ~5-10 minutes.

---

## Free GPU Limits

| Feature | Limit |
|---------|-------|
| GPU | T4 (16GB VRAM) |
| RAM | 16GB |
| Storage | 50GB |
| Timeout | 15 min idle |
| Cost | **FREE** |

---

## Troubleshooting

### Out of Memory
Use `whisper.load_model("tiny")` instead of `"base"`.

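The fallback above can be automated by picking the checkpoint from available VRAM. A minimal sketch — the 12GB threshold and the `pick_whisper_size` name are assumptions, not part of the app:

```python
def pick_whisper_size(min_vram_gb: float = 12.0) -> str:
    """Choose a Whisper checkpoint size; fall back to "tiny" when VRAM is scarce."""
    try:
        import torch  # optional here; the CPU-only path below handles its absence
        if torch.cuda.is_available():
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
            return "base" if vram_gb >= min_vram_gb else "tiny"
    except ImportError:
        pass
    return "tiny"

# Usage idea: model = whisper.load_model(pick_whisper_size())
```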
### Slow Cold Start
The first load takes ~2 minutes to download models.

### Import Errors
Check that `protobuf>=3.20.0,<5.0.0` is in requirements.txt.

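That pin can be sanity-checked at runtime with a small stdlib helper. A sketch, assuming naive numeric version parsing is sufficient for pins like these (it ignores pre-release tags):

```python
def version_tuple(v: str) -> tuple:
    """Naive numeric parse: "3.20.0" -> (3, 20, 0)."""
    parts = []
    for p in v.split(".")[:3]:
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def satisfies_pin(installed: str, lower: str, upper: str) -> bool:
    """True when lower <= installed < upper, compared numerically."""
    return version_tuple(lower) <= version_tuple(installed) < version_tuple(upper)

# Usage idea:
#   satisfies_pin(importlib.metadata.version("protobuf"), "3.20.0", "5.0.0")
```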
---

## Your Space URL

After deployment:
```
https://huggingface.co/spaces/YOUR_USERNAME/modalx-v2
```

Share this link for the competition demo!
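The URL pattern above can be captured in a one-line helper when generating links (for reports, announcements, etc.) — a sketch; `space_url` is a hypothetical name:

```python
def space_url(username: str, name: str = "modalx-v2") -> str:
    """Public page for a Space, following the pattern above."""
    return f"https://huggingface.co/spaces/{username}/{name}"
```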

---
title: ModalX v2.0
emoji: 🧠
colorFrom: purple
colorTo: blue
sdk: streamlit
sdk_version: 1.28.0
app_file: app.py
pinned: false
license: mit
hardware: t4-small
---

# ModalX v2.0 - Deep Learning Presentation Grader

AI-powered presentation analysis using 6 deep learning models:

- 🎭 **Emotion**: Transformer + Multi-Head Attention
- 👤 **Facial AU**: ResNet-50 + LSTM
- 🙌 **Gesture**: ST-GCN Network
- 🎙️ **Prosody**: CNN-BiLSTM
- 📝 **Content**: DistilBERT
- 🖼️ **Slides**: Vision Transformer

## Usage

1. Enter student name and ID
2. Upload a video or paste a YouTube/Drive URL
3. Click "Analyze"
4. Download the PDF report

## Competition

ModalX-AI Challenge @ Daffodil International University

**Team:** NL Circuits

# Hugging Face Spaces Requirements
# GPU-enabled deployment

streamlit>=1.28.0
torch>=2.1.0
torchvision>=0.16.0
torchaudio>=2.1.0
transformers>=4.35.0
tokenizers>=0.14.0
sentencepiece>=0.1.99
librosa>=0.10.1
openai-whisper>=20231117
soundfile>=0.12.1
scipy>=1.11.0
opencv-python-headless>=4.8.0
mediapipe>=0.10.7
Pillow>=10.0.0
timm>=0.9.12
einops>=0.7.0
moviepy>=1.0.3
yt-dlp>=2023.11.16
gdown>=4.7.1
pytesseract>=0.3.10
fpdf2>=2.7.6
numpy>=1.24.0
pandas>=2.1.0
scikit-learn>=1.3.0
plotly>=5.18.0
matplotlib>=3.8.0
tqdm>=4.66.0
requests>=2.31.0
protobuf>=3.20.0,<5.0.0
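A quick local check that everything in this file is actually installed (e.g. before debugging import errors) can be sketched with the stdlib; `check_requirements` is a hypothetical helper, and the naive name extraction assumes simple `name>=version` pins like those above:

```python
import re
from importlib import metadata

def check_requirements(path: str = "requirements.txt") -> list[str]:
    """Return requirement names with no installed distribution."""
    missing = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments and blanks
            if not line:
                continue
            # Take the distribution name before any version specifier.
            name = re.split(r"[<>=!~\[ ]", line, maxsplit=1)[0].strip()
            try:
                metadata.version(name)
            except metadata.PackageNotFoundError:
                missing.append(name)
    return missing
```

An empty list means every pinned package resolves; anything returned is a candidate for the Import Errors section above.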