FAI.CE Team Submission #6

Open · wants to merge 20 commits into base: main
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
venv
16 changes: 15 additions & 1 deletion README.md
@@ -1,7 +1,21 @@
# FER May Hackathon
# Team FAI.CE: FER May Hackathon

Facial Emotion Detection Hackathon Project: create a model and test it using 5 to 10 second videos to detect emotions.

Team Members:
- Mohamed Ratiq
- Bhavika Kaliya
- Alora Tabuco

## Run Locally

`streamlit run app.py --server.enableXsrfProtection false`

(Disabling XSRF protection is only needed to allow file uploads when running locally.)

## View Deployment Link
https://fer-may-hackathon-faice.streamlit.app/

Please note that due to limitations with Streamlit Cloud, performance is relatively slow on the deployed app.
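One common mitigation, offered here only as a sketch (the submitted app loads the model at import time and does not do this), is to cache the model load with Streamlit's `st.cache_resource` so script reruns reuse the already-loaded model; `get_emotion_model` is a hypothetical helper name:

```python
import streamlit as st
from tensorflow.keras.models import load_model

@st.cache_resource  # Streamlit keeps the return value alive across reruns
def get_emotion_model(path="mobilenet.h5"):
    # Load the Keras model once per session instead of on every rerun
    return load_model(path)

emotion_model = get_emotion_model()
```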

# Facial Emotion Recognition

<div id="top"></div>
90 changes: 90 additions & 0 deletions app.py
@@ -0,0 +1,90 @@
import streamlit as st
import cv2
import tempfile
import os
import numpy as np
from tensorflow.keras.models import load_model

# This can be changed between cnnModel.h5, emotion_recognition_model.h5, and mobilenet.h5
model_path = 'mobilenet.h5'
emotion_model = load_model(model_path)

# Emotion labels
emotion_labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
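# NOTE: the ordering above must match the label indices the model was trained
# with; np.argmax in process_video maps prediction indices back into this list.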

def main():
    # Title of the app
    st.title("Video Input App with Face Detection")

    with st.expander("Demo Video"):
        st.video('assets/demo.webm')

    # File uploader for video input
    video_file = st.file_uploader("Upload a video file", type=["mp4", "mov", "avi"])

    if video_file is not None:
        # Create a temporary file to save the uploaded video
        tfile = tempfile.NamedTemporaryFile(delete=False)
        tfile.write(video_file.read())
        tfile.close()  # flush to disk so OpenCV can read the file

        # Play the video and perform face detection
        st.write("Processing video for face detection...")
        process_video(tfile.name)

        # Clean up: remove the temporary file
        os.unlink(tfile.name)

def process_video(video_path):
    # Load OpenCV's pre-trained Haar Cascade face detector
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    video_capture = cv2.VideoCapture(video_path)

    stframe = st.empty()

    while video_capture.isOpened():
        ret, frame = video_capture.read()
        if not ret:
            break

        # Convert the frame to grayscale
        gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Detect faces
        faces = face_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

        for (x, y, w, h) in faces:
            # Extract the face ROI
            face = gray_frame[y:y+h, x:x+w]
            # Resize the face to 224x224 pixels to match the model's input size
            face_resized = cv2.resize(face, (224, 224))
            # Normalize the pixel values
            face_normalized = face_resized / 255.0
            # Stack the grayscale channel three times to get an RGB-shaped input
            face_rgb = np.stack((face_normalized,) * 3, axis=-1)
            # Expand dimensions to match model input shape (1, 224, 224, 3)
            face_input = np.expand_dims(face_rgb, axis=0)

            # Predict the emotion
            emotion_prediction = emotion_model.predict(face_input)
            emotion_label = emotion_labels[np.argmax(emotion_prediction)]

            # Draw a rectangle around the face and put the emotion label
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, emotion_label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

        # Convert the frame back to RGB for display
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Display the frame
        stframe.image(rgb_frame, channels="RGB")

    video_capture.release()

if __name__ == "__main__":
    main()
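Since the project targets 5 to 10 second clips, per-frame predictions can flicker between labels. A minimal sketch of one way to report a single clip-level emotion, not part of the submitted app (`aggregate_emotions` and `collected_labels` are hypothetical names), is to majority-vote the per-frame labels:

```python
from collections import Counter

def aggregate_emotions(frame_labels):
    """Return the most frequent per-frame emotion label, or None if empty.

    frame_labels: list of strings such as ['Happy', 'Happy', 'Neutral'].
    """
    if not frame_labels:
        return None
    return Counter(frame_labels).most_common(1)[0][0]

# Hypothetical usage: append each emotion_label inside process_video's loop
# to collected_labels, then report one result for the whole video:
# st.write(f"Dominant emotion: {aggregate_emotions(collected_labels)}")
```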
Binary file added assets/demo.webm
Binary file not shown.
Binary file added cnnModel.h5
Binary file not shown.
Binary file added emotion_recognition_model.h5
Binary file not shown.
1 change: 1 addition & 0 deletions fer-mobilenet.ipynb

Large diffs are not rendered by default.
