kawacukennedy/sign_language_detector

ASL Real-Time Recognition on macOS

This app uses your MacBook's camera to detect American Sign Language (ASL) letters in real time using OpenCV, MediaPipe, and a pre-trained Keras model. It overlays the predicted letter on the video feed and can optionally convert recognized text to speech.

Features

  • Real-time webcam video with hand detection
  • ASL letter prediction (A–Z) using a trained model
  • Overlay of predicted letter and sentence on video
  • Optional data collection mode for new signs
  • Support for both hands (if detected)
  • Text-to-speech output (pyttsx3)

Requirements

  • macOS (tested on Apple Silicon and Intel MacBooks)
  • Python 3.8+

Installation

  1. Clone this repository (or copy the code files):

    git clone <your-repo-url>
    cd sign_language
  2. Install dependencies:

    pip install opencv-python mediapipe tensorflow pyttsx3 numpy
    • If you have issues with pyttsx3 on macOS, try:
      pip install pyobjc
  3. Prepare your model:

    • Place your trained Keras model as asl_model.h5 in the project directory.
    • The model should take a 63-element input (21 hand landmarks × 3 coordinates) and output 26 classes (A–Z).
    • You can collect data using the app's data collection mode and train your own model with Keras.
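As a sketch of the model's input/output contract described above (not taken from the app's code — the function names and the class-0 = 'A' ordering are assumptions), flattening the 21 MediaPipe landmarks into the 63-element vector and mapping a 26-way prediction back to a letter might look like:

```python
# Flatten 21 hand landmarks (x, y, z each) into the 63-element
# feature vector the model expects, then map a prediction to a letter.

def landmarks_to_vector(landmarks):
    """landmarks: list of 21 (x, y, z) tuples -> flat list of 63 floats."""
    assert len(landmarks) == 21, "MediaPipe Hands yields 21 landmarks"
    return [coord for point in landmarks for coord in point]

def prediction_to_letter(probs):
    """probs: 26 class scores (assumed ordered A-Z); returns the top letter."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return chr(ord("A") + best)

# Example with dummy data:
vec = landmarks_to_vector([(0.1, 0.2, 0.0)] * 21)
print(len(vec))                      # 63
probs = [0.0] * 26
probs[2] = 1.0
print(prediction_to_letter(probs))   # C
```

When training your own model, keep the same ordering (all 63 values per frame, classes A through Z) so the app and the model agree.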

Running the App

python asl_realtime.py
  • Press q to quit.
  • Press c to toggle data collection mode (saves hand landmark data to collected_data.csv).
  • The label for data collection is hardcoded as 'A' (edit in the code as needed).
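A minimal sketch of what data collection mode could write per frame, assuming a row layout of 63 landmark values followed by the label (the helper name and column order are illustrative, not read from the app's code):

```python
import csv

def append_sample(path, landmarks_63, label):
    """Append one training sample (63 landmark values plus its label)
    to the CSV file and return the row that was written."""
    row = list(landmarks_63) + [label]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

# Example: save one dummy frame labelled 'A' (the hardcoded label).
append_sample("collected_data.csv", [0.0] * 63, "A")
```

Each run of collection mode appends rows, so one file can accumulate samples for several labels once you change the hardcoded letter.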

Notes

  • The app uses the default webcam (index 0). If you have multiple cameras, you may need to change the index in the code.
  • For best results, use a well-lit environment and keep your hand within the camera frame.
  • The app overlays the predicted letter and the running sentence on the video feed.
  • Text-to-speech will read out the sentence every few seconds.
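The "every few seconds" speech behaviour can be approximated with a simple time-based throttle; this sketch assumes a 3-second interval and is independent of pyttsx3 itself:

```python
import time

class SpeechThrottle:
    """Decide whether enough time has passed to speak the sentence again."""

    def __init__(self, interval=3.0):  # assumed 3-second gap between speeches
        self.interval = interval
        self.last_spoken = 0.0

    def should_speak(self, now=None):
        """Return True (and reset the timer) if the interval has elapsed."""
        now = time.monotonic() if now is None else now
        if now - self.last_spoken >= self.interval:
            self.last_spoken = now
            return True
        return False

throttle = SpeechThrottle()
print(throttle.should_speak(now=10.0))  # True: first call
print(throttle.should_speak(now=11.0))  # False: only 1 s elapsed
print(throttle.should_speak(now=13.5))  # True: 3.5 s elapsed
```

In the main loop you would call `should_speak()` each frame and only hand the sentence to the text-to-speech engine when it returns True.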

Troubleshooting

  • If you get errors related to the webcam, ensure camera permissions are enabled for Terminal/Python in System Preferences > Security & Privacy > Privacy > Camera.
  • If you get errors with pyttsx3, try installing pyobjc as above.
  • If you see asl_model.h5 not found, ensure your model is in the correct location.
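To turn the missing-model error into an actionable message, a small pre-flight check along these lines can help (the helper is illustrative; only the asl_model.h5 filename comes from this README):

```python
from pathlib import Path

MODEL_PATH = Path("asl_model.h5")

def check_model(path=MODEL_PATH):
    """Return a helpful message instead of letting the app fail later
    with a bare file-not-found error when loading the model."""
    if not path.exists():
        return (f"{path} not found - place your trained Keras model "
                "in the project directory before starting the app.")
    return f"{path} found."

print(check_model())
```

Running this before `cv2.VideoCapture` and model loading makes the most common setup mistake obvious at startup.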

License

MIT
