A full-stack web application that translates sign language to text using computer vision and machine learning.
## Features

- Real-time webcam capture
- Sign language recognition using MediaPipe
- Modern React frontend with Tailwind CSS
- Flask backend for processing
## Setup

### Backend

1. Navigate to the backend directory:

   ```bash
   cd backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   venv\Scripts\activate     # On Windows
   source venv/bin/activate  # On Unix/macOS
   ```

3. Install dependencies:

   ```bash
   pip install flask flask-cors mediapipe opencv-python numpy
   ```

4. Run the Flask server:

   ```bash
   python app.py
   ```

### Frontend

1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run the development server:

   ```bash
   npm run dev
   ```

## Usage

- Open your browser and navigate to http://localhost:5173
- Allow camera access when prompted
- Click "Start Capturing" to begin sign language recognition
- Perform sign language gestures in front of the camera
- The translation will appear below the video feed
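As a rough sketch of the request/response cycle described above, the Flask backend could expose an endpoint that accepts a captured frame and returns the recognized text. The route name (`/translate`), the JSON shape, and the `recognize_sign` helper below are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of a frame-translation endpoint (assumed route and payload).
from flask import Flask, jsonify, request

app = Flask(__name__)

def recognize_sign(frame_b64: str) -> str:
    # Hypothetical placeholder: a real implementation would decode the
    # base64-encoded frame with OpenCV and run MediaPipe hand-landmark
    # detection before classifying the gesture.
    return "HELLO"

@app.route("/translate", methods=["POST"])
def translate():
    data = request.get_json(silent=True) or {}
    if "frame" not in data:
        return jsonify({"error": "missing frame"}), 400
    return jsonify({"translation": recognize_sign(data["frame"])})
```

On the frontend side, Axios would POST each captured frame to this endpoint and render the `translation` field below the video feed.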
## Tech Stack

- Frontend: React, Vite, Tailwind CSS
- Backend: Python, Flask
- Computer Vision: MediaPipe, OpenCV
- API Communication: Axios
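To illustrate how MediaPipe's output might feed the recognition step, here is a sketch of turning its 21 hand landmarks into a translation- and scale-invariant feature vector. The normalization scheme (wrist-relative, scaled by the farthest landmark) is an assumption for illustration, not the project's actual preprocessing:

```python
# Sketch: normalize MediaPipe's 21 (x, y) hand landmarks for a classifier.
import numpy as np

def landmarks_to_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) array of (x, y) points in MediaPipe order
    (index 0 is the wrist). Returns a 42-dim feature vector."""
    centered = landmarks - landmarks[0]            # wrist-relative coordinates
    scale = np.max(np.linalg.norm(centered, axis=1))
    if scale == 0:                                 # degenerate input
        return centered.flatten()
    return (centered / scale).flatten()            # invariant to position/size
```

Because the features are invariant to where the hand sits in the frame and how close it is to the camera, the same gesture produces (approximately) the same vector across users and sessions.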