A comprehensive real-time face recognition system that detects emotions, identifies people, tracks mood scores, and visualizes emotional states using a Valence-Arousal graph.
- Real-time face detection and recognition
- Emotion analysis with 7 basic emotions (happy, sad, angry, fear, surprise, disgust, neutral)
- Identity recognition using a local database of face images
- Mood score tracking (-100 to +100 scale)
- Valence-Arousal visualization showing emotional positions in 2D space
- Live mood graphing with historical data
- Easy face database management with GUI controls
Real-time emotion detection with mood tracking and valence-arousal visualization
- Python 3.8 or higher
- Webcam/Camera
- Windows/macOS/Linux
You can set up the project in multiple ways:
```bash
# Windows
setup.bat

# macOS/Linux
chmod +x setup.sh
./setup.sh
```

Or run the setup script directly:

```bash
python setup.py
```

Or set everything up manually:

```bash
git clone https://github.com/BITtech05/Emotion-Face-Detection.git
cd emotion-recognition-system
```

Windows:

```bash
# Create virtual environment
python -m venv emotion_env

# Activate virtual environment
emotion_env\Scripts\activate

# Verify activation (should show path to your venv)
where python
```

macOS/Linux:

```bash
# Create virtual environment
python3 -m venv emotion_env

# Activate virtual environment
source emotion_env/bin/activate

# Verify activation (should show path to your venv)
which python
```

```bash
# Upgrade pip first
python -m pip install --upgrade pip

# Install all requirements
pip install -r requirements.txt
```

```bash
# Create the local images directory (if it doesn't exist)
mkdir local_images

# Add face images to the local_images folder
# Name them like: john_doe.jpg, jane_smith.png
```

```bash
python emotion_recognition_system.py
```

A virtual environment isolates your project dependencies from your system Python installation.
- Dependency Isolation: Prevents conflicts between project dependencies
- Version Control: Maintain specific package versions for your project
- Clean Development: Easy to reset or recreate environment
- Deployment: Ensures consistent environments across different systems
- Navigate to your project directory:
  ```bash
  cd path/to/your/emotion-recognition-system
  ```
- Create the virtual environment:
  ```bash
  # Windows
  python -m venv emotion_env

  # macOS/Linux
  python3 -m venv emotion_env
  ```
- Activate the virtual environment:
  ```bash
  # Windows Command Prompt
  emotion_env\Scripts\activate

  # Windows PowerShell
  emotion_env\Scripts\Activate.ps1

  # macOS/Linux
  source emotion_env/bin/activate
  ```
- Verify activation: your command prompt should now show `(emotion_env)` at the beginning:
  ```
  (emotion_env) C:\your\project\path>
  ```
- Upgrade pip:
  ```bash
  python -m pip install --upgrade pip
  ```
- Install project dependencies:
  ```bash
  pip install -r requirements.txt
  ```
When you're done working on the project:

```bash
deactivate
```

If something goes wrong with your environment:

```bash
# Remove the old environment
rm -rf emotion_env      # macOS/Linux
rmdir /s emotion_env    # Windows

# Create and set up new environment
python -m venv emotion_env
source emotion_env/bin/activate   # or emotion_env\Scripts\activate on Windows
pip install -r requirements.txt
```
- Open the images folder:
  - Click "Open Images Folder" in the application, or
  - Navigate to the `local_images` directory in your project folder
- Add face photos:
  - Use clear, well-lit photos
  - One face per image
  - A frontal view works best
  - Supported formats: .jpg, .jpeg, .png, .bmp
- Name your files correctly (see the naming sketch after this list):
  ```
  john_doe.jpg
  jane_smith.png
  alex_johnson.jpeg
  mary_williams.jpg
  ```
  - Use underscores instead of spaces
  - The filename becomes the person's display name
- Refresh the database:
  - Click "Refresh Database" in the application
  - The system will load all faces and show the count in the interface
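As a rough illustration of the naming convention above, here is a minimal sketch of how filenames could be turned into display names and collected into a database. The helper names (`display_name_from_file`, `load_face_database`) are hypothetical and not taken from the project code:

```python
from pathlib import Path

def display_name_from_file(filename: str) -> str:
    """Turn 'john_doe.jpg' into 'John Doe'."""
    return Path(filename).stem.replace("_", " ").title()

def load_face_database(folder: str = "local_images") -> dict:
    """Map display names to image paths for every supported image in the folder."""
    supported = {".jpg", ".jpeg", ".png", ".bmp"}
    images = Path(folder).glob("*") if Path(folder).exists() else []
    return {display_name_from_file(p.name): str(p)
            for p in images if p.suffix.lower() in supported}

# Example output: {'John Doe': 'local_images/john_doe.jpg', ...}
print(load_face_database())
```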
- Start the camera
- Position the person in front of the camera
- Click "📸 Save New Face" button
- Enter the person's name when prompted
- If multiple faces are detected, select which one to save
- The face will be automatically added to your database
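Conceptually, that save step amounts to cropping the detected face region and writing it into `local_images` under the entered name. A hypothetical sketch (the box format and helper name are assumptions, not the project's actual code):

```python
import cv2
from pathlib import Path

def save_face_crop(frame, box, name, folder="local_images"):
    """Crop a detected face region (x, y, w, h) from a BGR frame and save it as <name>.jpg."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    Path(folder).mkdir(exist_ok=True)
    out_path = Path(folder) / (name.lower().replace(" ", "_") + ".jpg")
    cv2.imwrite(str(out_path), face)
    return out_path
```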
- Launch the application:
  ```bash
  python emotion_recognition_system.py
  ```
- Start the camera feed:
  - Click the "Start Camera" button
  - Grant camera permissions if prompted
- Monitor results:
  - View the live video feed with face detection boxes
  - Check emotion analysis in the Detection panel
  - Monitor mood scores in the real-time graph
  - Observe emotional positions on the Valence-Arousal plot
- Video feed:
  - Live camera feed with face detection rectangles
  - Shows person name, dominant emotion, and mood score overlay
- Detection panel:
  - Database status and loaded faces
  - Current detections with a detailed emotion breakdown
  - Mood scores and age estimates
- Valence-Arousal plot:
  - 2D emotional space visualization
  - X-axis: Valence (negative ← → positive)
  - Y-axis: Arousal (low ← → high)
  - Shows current emotional positions for detected faces
- Mood graph:
  - Real-time mood tracking over time
  - Scale: -100 (worst) to +100 (best)
  - Historical data for up to 60 seconds
  - Color-coded lines for different people
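To make the mood graph concrete, here is a small standalone matplotlib sketch (not the project's actual plotting code) that draws one color-coded line per person over a 60-second window, using made-up sample data:

```python
import matplotlib.pyplot as plt

# Hypothetical per-person mood history: (seconds_ago, score) samples.
history = {
    "john doe":   [(-60, 10), (-45, 25), (-30, 40), (-15, 35), (0, 50)],
    "jane smith": [(-60, -5), (-45, -20), (-30, -10), (-15, 0), (0, 15)],
}

fig, ax = plt.subplots()
for person, samples in history.items():
    times, scores = zip(*samples)
    ax.plot(times, scores, label=person)   # one color-coded line per person

ax.set_xlabel("Time (s)")
ax.set_ylabel("Mood score")
ax.set_ylim(-100, 100)                     # -100 (worst) to +100 (best)
ax.legend()
plt.show()
```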
- Happy: Joy, contentment, amusement
- Sad: Sorrow, melancholy, disappointment
- Angry: Frustration, irritation, rage
- Fear: Anxiety, worry, apprehension
- Surprise: Shock, amazement, unexpected reaction
- Disgust: Revulsion, distaste, aversion
- Neutral: Calm, composed, no strong emotion
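These seven emotions come from DeepFace's emotion model (credited below). A minimal standalone example of reading the per-emotion scores for a saved frame, assuming an image file named `frame.jpg`; note that the return format differs slightly between DeepFace versions:

```python
from deepface import DeepFace

# Restrict the analysis to emotion only; enforce_detection=False avoids an
# exception when no face is found in the image.
result = DeepFace.analyze(img_path="frame.jpg", actions=["emotion"],
                          enforce_detection=False)

# Newer DeepFace versions return a list with one dict per detected face;
# older versions may return a single dict.
faces = result if isinstance(result, list) else [result]
for face in faces:
    print(face["dominant_emotion"])   # e.g. 'happy'
    print(face["emotion"])            # per-emotion scores for all 7 emotions
```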
- +100 to +50: Very happy/positive state
- +50 to +20: Happy/good mood
- +20 to -20: Neutral mood
- -20 to -50: Sad/negative mood
- -50 to -100: Very sad/depressed state
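A tiny sketch of how such a score could be mapped to the bands above (a hypothetical helper, not the project's own scoring code):

```python
def mood_label(score: float) -> str:
    """Map a -100..+100 mood score to the bands listed above."""
    if score > 50:
        return "Very happy/positive state"
    if score > 20:
        return "Happy/good mood"
    if score >= -20:
        return "Neutral mood"
    if score >= -50:
        return "Sad/negative mood"
    return "Very sad/depressed state"

print(mood_label(35))   # Happy/good mood
print(mood_label(-60))  # Very sad/depressed state
```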
- Top Right: High arousal, positive emotions (excitement, joy)
- Top Left: High arousal, negative emotions (anger, fear)
- Bottom Right: Low arousal, positive emotions (calm happiness)
- Bottom Left: Low arousal, negative emotions (sadness, depression)
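As an illustration of this mapping (the coordinates below are rough assumptions; the project's `calculate_valence_arousal()` may use different values), each emotion can be given a fixed (valence, arousal) point and the per-face position computed as a probability-weighted average:

```python
# Illustrative valence/arousal coordinates in [-1, 1].
EMOTION_VA = {
    "happy":    ( 0.8,  0.5),   # top right: positive, fairly high arousal
    "surprise": ( 0.2,  0.8),   # high arousal, mildly positive
    "angry":    (-0.7,  0.7),   # top left: negative, high arousal
    "fear":     (-0.6,  0.6),
    "disgust":  (-0.6,  0.3),
    "sad":      (-0.7, -0.4),   # bottom left: negative, low arousal
    "neutral":  ( 0.0,  0.0),
}

def valence_arousal(emotion_scores: dict) -> tuple:
    """Weight each emotion's coordinates by its probability (0-100 scores)."""
    total = sum(emotion_scores.values()) or 1.0
    valence = sum(EMOTION_VA[e][0] * s for e, s in emotion_scores.items() if e in EMOTION_VA) / total
    arousal = sum(EMOTION_VA[e][1] * s for e, s in emotion_scores.items() if e in EMOTION_VA) / total
    return valence, arousal

print(valence_arousal({"happy": 70, "neutral": 20, "sad": 10}))  # roughly (0.49, 0.31)
```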
```bash
# Reinstall DeepFace
pip uninstall deepface
pip install deepface==0.0.79
```

- Check if another application is using the camera
- Try different camera indices in the code (change `cv2.VideoCapture(0)` to `cv2.VideoCapture(1)`)
- Ensure camera permissions are granted
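If you are unsure which index works on your machine, a quick standalone probe (a throwaway script, not part of the project) can check the first few indices:

```python
import cv2

def find_working_camera(max_index: int = 4) -> int:
    """Return the first camera index that opens and returns a frame, or -1 if none do."""
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        ok, _ = cap.read()
        cap.release()
        if ok:
            return index
    return -1

print("Working camera index:", find_working_camera())
```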
- Use high-quality, well-lit photos for the database
- Ensure faces are clearly visible and frontal
- Add multiple photos of the same person from different angles
- Clean the camera lens
- Close other applications using the camera
- Reduce the analysis frequency in the code
- Use a more powerful computer for better performance
```bash
# For Windows with compatible GPU
pip install tensorflow-gpu==2.13.0

# For CPU-only systems
pip install tensorflow-cpu==2.13.0
```

```bash
# Reinstall OpenCV
pip uninstall opencv-python
pip install opencv-python==4.8.1.78
```

```bash
# Create fresh environment
deactivate
rm -rf emotion_env
python -m venv emotion_env
source emotion_env/bin/activate   # Windows: emotion_env\Scripts\activate
pip install -r requirements.txt
```

The system uses a multi-threaded architecture:
- Video Thread: Captures frames and updates GUI display
- Analysis Thread: Performs face detection and emotion analysis
- Main GUI Thread: Handles user interactions and updates
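A stripped-down sketch of this producer/consumer split (illustrative only; the real application coordinates these threads through its GUI and its own methods):

```python
import threading
import time
import cv2

latest_frame = None
frame_lock = threading.Lock()
stop = threading.Event()

def video_loop():
    """Capture frames continuously and keep only the newest one."""
    global latest_frame
    cap = cv2.VideoCapture(0)
    while not stop.is_set():
        ok, frame = cap.read()
        if ok:
            with frame_lock:
                latest_frame = frame
    cap.release()

def analysis_loop():
    """Run the slow emotion analysis on the newest frame every 1.5 seconds."""
    while not stop.is_set():
        with frame_lock:
            frame = None if latest_frame is None else latest_frame.copy()
        if frame is not None:
            pass  # DeepFace analysis of `frame` would go here
        time.sleep(1.5)

threading.Thread(target=video_loop, daemon=True).start()
threading.Thread(target=analysis_loop, daemon=True).start()
time.sleep(10)   # in the real app, the GUI main loop runs here instead
stop.set()
```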
Main application class that coordinates all functionality.
Key methods:
- `analyze_frame()`: Performs face detection and emotion analysis
- `calculate_mood_score()`: Converts emotion probabilities to a mood score
- `calculate_valence_arousal()`: Maps emotions to 2D emotional space
- `identify_person_from_region()`: Matches faces against the local database

Key data structures:
- `emotion_history`: Historical emotion data for graphing
- `mood_history`: Historical mood scores for tracking
- `local_face_data`: Database of known faces
- Frame analysis runs every 1.5 seconds to balance accuracy and performance
- Video display updates at ~60 FPS for smooth user experience
- Multiple detection backends (OpenCV, MTCNN, RetinaFace) for better accuracy
- Efficient memory management with deque for historical data
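For the history buffers, a `deque` with a `maxlen` keeps memory bounded by discarding the oldest samples automatically. A small sketch (the 40-sample capacity is an assumption derived from 60 s of history at one analysis every 1.5 s):

```python
from collections import deque
import time

# Roughly 60 seconds of history at one analysis every 1.5 s -> 40 samples.
# maxlen makes the deque drop the oldest entry automatically.
mood_history = deque(maxlen=40)

def record_mood(person: str, score: float) -> None:
    mood_history.append((time.time(), person, score))

for s in (10, 25, 40):
    record_mood("john doe", s)
print(list(mood_history))
```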
In the `analysis_loop()` method, change:

```python
time.sleep(1.5)  # Analyze every 1.5 seconds
```

In the `__init__()` method, adjust:

```python
self.positive_emotions = {'happy': 1.0, 'surprise': 0.3}
self.negative_emotions = {'sad': -1.0, 'angry': -0.9, ...}
```

In the `update_video_display()` method, modify:

```python
max_width, max_height = 500, 400  # Display resolution
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/new-feature`)
- Commit your changes (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature/new-feature`)
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- DeepFace for face analysis
- OpenCV for computer vision
- Matplotlib for data visualization
If you encounter any issues or have questions:
- Check the Issues page
- Create a new issue with a detailed description
- Include error messages and system information
- Initial release with emotion detection
- Identity recognition system
- Mood score tracking
- Valence-Arousal visualization
- Real-time graphing capabilities