A powerful web application leveraging the DeepFace library to perform detailed facial analysis. This tool provides a simple interface to upload an image and gain insights such as age, gender, emotion, and race.
- Facial Attribute Analysis: Predict age, gender, dominant emotion, and race from detected faces.
- Face Detection: Automatically identify and locate faces within an uploaded image.
- Simple Web Interface: An intuitive and easy-to-use UI built with Streamlit for image uploads and results visualization.
- RESTful API: A robust backend built with FastAPI provides endpoints for programmatic access to the analysis engine.
- Containerized: Fully containerized with Docker for easy setup, deployment, and scalability.
- Backend: Python, FastAPI
- Frontend: Streamlit
- Core ML Library: DeepFace
- Containerization: Docker
- Process Management: Supervisor
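The analysis logic in backend/app/services/ presumably wraps DeepFace's attribute analysis. A minimal sketch of that call (the `analyze_face` wrapper name is hypothetical; `DeepFace.analyze` and its `actions` parameter are the library's real API):

```python
ACTIONS = ["age", "gender", "emotion", "race"]

def analyze_face(image_path: str) -> list[dict]:
    """Run DeepFace facial-attribute analysis on a single image.

    Returns one result dict per detected face, including keys such as
    'age', 'dominant_gender', 'dominant_emotion', and 'dominant_race'.
    """
    # Imported lazily: DeepFace pulls in heavy ML dependencies at import time.
    from deepface import DeepFace

    return DeepFace.analyze(img_path=image_path, actions=ACTIONS)
```

Note that with `enforce_detection=True` (DeepFace's default), the call raises an error when no face is found, which the API layer can translate into a 4xx response.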
Here is an overview of the project's directory structure:
```
FaceInsight/
├── README.md            # This file
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker configuration
├── .dockerignore        # Docker ignore patterns
├── supervisord.conf     # Process manager config for Docker
├── backend/             # FastAPI backend
│   ├── app/
│   │   ├── main.py      # FastAPI application entrypoint
│   │   ├── routers/     # API route definitions
│   │   └── services/    # Business logic for face analysis
├── frontend/            # Streamlit frontend
│   └── app.py           # The web interface application
├── uploads/             # Default directory for uploaded images
├── results/             # Directory to store processing results
└── .venv/               # Python virtual environment (ignored by git)
```
You can run this project either locally with a Python environment or using Docker.
- Python 3.8+
- Docker (for containerized setup)
Follow these steps to run the application on your local machine.
1. Clone the repository:

   ```bash
   git clone https://github.com/LAFFI01/Face_Insight_DeepFace.git
   cd Face_Insight_DeepFace
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows, use: .venv\Scripts\activate
   ```

3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Run the backend server:

   ```bash
   uvicorn backend.app.main:app --host 0.0.0.0 --port 8000 --reload
   ```

   Once the FastAPI backend starts, the interactive API documentation is available at http://localhost:8000/docs.

5. Run the frontend application. In a new terminal, run the Streamlit app:

   ```bash
   streamlit run frontend/app.py
   ```

   The web interface will be available at http://localhost:8501.
You can either pull the pre-built image from Docker Hub or build it from the source code.
This is the easiest way to run the application without needing to build it yourself.
1. Pull the image:

   ```bash
   docker pull laffi01/faceinsight-app:latest
   ```

2. Run the container. This command starts the application and makes it accessible on your local machine:

   ```bash
   docker run --rm -p 8501:8501 -p 8000:8000 \
     -v ./uploads:/app/uploads \
     -v ./results:/app/results \
     --name faceinsight-app \
     laffi01/faceinsight-app:latest
   ```
Follow these steps if you want to build the image from the source code.
1. Build the Docker image:

   ```bash
   docker build -t faceinsight .
   ```

2. Run the container:

   ```bash
   docker run --rm -p 8501:8501 -p 8000:8000 \
     -v ./uploads:/app/uploads \
     -v ./results:/app/results \
     --name faceinsight-app \
     faceinsight:latest
   ```
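The port mappings and volume mounts above can also be expressed as a Docker Compose file. This is a sketch only; the repository does not ship a compose file, so the service name and file are assumptions:

```yaml
# docker-compose.yml (hypothetical -- mirrors the docker run flags above)
services:
  faceinsight:
    image: laffi01/faceinsight-app:latest   # or use `build: .` to build locally
    container_name: faceinsight-app
    ports:
      - "8501:8501"   # Streamlit frontend
      - "8000:8000"   # FastAPI backend
    volumes:
      - ./uploads:/app/uploads
      - ./results:/app/results
```

With this file in place, `docker compose up` replaces the long `docker run` command.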
Once the container is running, you can access the services:
- Frontend (Streamlit): http://localhost:8501
- Backend API Docs (FastAPI): http://localhost:8000/docs
- Navigate to the Streamlit web interface in your browser.
- Click the "Browse files" button to upload an image containing a face.
- The application will process the image and display the detected facial attributes, including the predicted age, gender, emotion, and race.
- The analyzed image with annotations will be shown on the page.
You can also interact with the backend API directly. This is useful for integrating the face analysis service into other applications.
Here is an example of how to send an image to the /face/analyze endpoint using curl:

```bash
curl -X POST -F "file=@/path/to/your/image.jpg" http://localhost:8000/face/analyze
```

- Replace /path/to/your/image.jpg with the actual path to your image file.
- The API will return a JSON object containing the analysis results.
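The same request can be made from Python with the `requests` library. A sketch under stated assumptions: the per-face field names below (`age`, `dominant_gender`, ...) follow DeepFace's typical output, but the actual response schema should be checked on the live /docs page, and the `summarize` helper is purely illustrative:

```python
API_URL = "http://localhost:8000/face/analyze"

def analyze_image(path: str) -> dict:
    """POST an image file to a running FaceInsight backend and return its JSON."""
    import requests  # third-party; pip install requests

    with open(path, "rb") as fh:
        resp = requests.post(API_URL, files={"file": fh})
    resp.raise_for_status()
    return resp.json()

def summarize(result: dict) -> str:
    """Turn one per-face result dict into a one-line summary.

    Assumes DeepFace-style keys; missing fields fall back to '?'.
    """
    return (f"{result.get('dominant_gender', '?')}, "
            f"~{result.get('age', '?')} yrs, "
            f"{result.get('dominant_emotion', '?')}, "
            f"{result.get('dominant_race', '?')}")

# Example with a hypothetical response payload:
sample = {"age": 31, "dominant_gender": "Woman",
          "dominant_emotion": "happy", "dominant_race": "asian"}
print(summarize(sample))  # Woman, ~31 yrs, happy, asian
```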
Contributions are welcome! If you have suggestions for improvements or want to add new features, please feel free to create a pull request or open an issue.
This project is licensed under the MIT License. See the LICENSE file for more details.