# Ishara: Real-Time Sign Language to Text Translator


Ishara (also called SignSpeak) is a web-based application that translates hand gestures into text in real time. Unlike traditional sign language datasets that are limited to alphabets or numbers, it is designed for real-world communication. It recognizes full words such as "hello", "thank you", and "please", and dynamically threads them into meaningful sentences, enabling smoother, more natural interaction for people who rely on sign language.

🔊 This tool is developed with accessibility in mind, especially for the Deaf and Hard-of-Hearing community, helping bridge communication gaps in everyday conversations.

πŸ› οΈ Features βœ… Real-time hand gesture recognition via webcam

βœ… Word-level sign detection (not just letters or digits)

βœ… Sentence stitching from recognized words

βœ… Web-based: No installation required, runs directly in the browser

βœ… MediaPipe + TensorFlow.js integration for efficient and fast landmark detection and classification

βœ… Robust against background variations and performs in natural settings

## 🧠 Tech Stack

- **Frontend:** HTML, CSS, JavaScript
- **ML Framework:** TensorFlow.js
- **Hand Tracking:** MediaPipe Hands
- **Model Training:** Teachable Machine + custom post-processing

## 🚀 How It Works

1. **Hand Detection:** MediaPipe identifies 21 hand landmarks in real time.
2. **Gesture Classification:** The landmarks are fed into a Teachable Machine model trained on a curated dataset of meaningful hand signs (see the first sketch below).
3. **Prediction Smoothing:** Noisy predictions are filtered out by tracking class confidence and stability across frames (see the second sketch below).
4. **Sentence Building:** Predicted words are accumulated and threaded into readable sentences (e.g., "Please help me").
5. **Display:** The recognized sentence is shown on screen in real time for clear communication.
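
A minimal sketch of the detection and classification steps is shown below. It assumes the MediaPipe Hands and Camera utilities are loaded from their CDN bundles and that the Teachable Machine export at `model/model.json` accepts a flattened 63-value landmark vector (21 landmarks, x/y/z each); the label list, element IDs, and thresholds are illustrative, not the project's actual values:

```js
// Sketch only: wiring MediaPipe Hands output into a TF.js classifier.
// Assumes @mediapipe/hands and @mediapipe/camera_utils are loaded via <script> tags.
const LABELS = ['hello', 'thank you', 'please', 'help', 'me']; // illustrative labels

let model;
tf.loadLayersModel('model/model.json').then((m) => { model = m; }); // assumed export path

const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});
hands.setOptions({
  maxNumHands: 1,
  minDetectionConfidence: 0.7,
  minTrackingConfidence: 0.7,
});

hands.onResults((results) => {
  if (!model || !results.multiHandLandmarks || results.multiHandLandmarks.length === 0) return;
  // 21 landmarks, each with normalized x/y/z -> 63-value feature vector
  const features = results.multiHandLandmarks[0].flatMap((p) => [p.x, p.y, p.z]);
  const probs = tf.tidy(() => model.predict(tf.tensor2d([features])).dataSync());
  const best = probs.indexOf(Math.max(...probs));
  handlePrediction(LABELS[best], probs[best]); // smoothing step, sketched below
});

// Stream webcam frames into the hand tracker (video element ID is assumed).
const video = document.getElementById('webcam');
new Camera(video, {
  onFrame: async () => { await hands.send({ image: video }); },
  width: 640,
  height: 480,
}).start();
```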
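
The smoothing and sentence-building steps could look roughly like this; the confidence threshold, stable-frame count, and `output` element ID are assumptions, not values taken from the repository:

```js
// Sketch only: commit a word after it has been the top prediction with high
// confidence for several consecutive frames, then append it to the sentence.
const CONFIDENCE_THRESHOLD = 0.85; // assumed value
const STABLE_FRAMES = 12;          // assumed value (~0.5 s at 24 fps)

let candidate = null;
let stableCount = 0;
let lastCommitted = null;
const sentence = [];

function handlePrediction(label, confidence) {
  if (confidence < CONFIDENCE_THRESHOLD) {
    // Low-confidence frame: reset the running candidate.
    candidate = null;
    stableCount = 0;
    return;
  }
  if (label === candidate) {
    stableCount += 1;
  } else {
    candidate = label;
    stableCount = 1;
  }
  // Accept the word once it is stable and not an immediate repeat.
  if (stableCount >= STABLE_FRAMES && label !== lastCommitted) {
    sentence.push(label);
    lastCommitted = label;
    stableCount = 0;
    document.getElementById('output').textContent = sentence.join(' ');
  }
}
```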

## 📸 Screenshots

- Detection in Action
- Sentence Output

πŸ” Project Motivation Most open-source datasets and models focus on character-based sign language, which is slow and unnatural for actual communication. This project addresses that by focusing on word-level gestures and contextual sentence formation.

Our aim was to build something closer to how real signers communicate β€” fluid, quick, and expressive β€” not letter-by-letter spelling.

## 📂 Folder Structure

```
├── index.html
├── style.css
├── script.js
├── model/      # Exported Teachable Machine model
├── media/      # Screenshots and GIFs
└── README.md
```

## 🧪 Try It Out

1. Clone this repo:

   ```bash
   git clone https://github.com/yourusername/signspeak.git
   ```

2. Open `index.html` in a browser.

3. Allow webcam permissions.

4. Start signing!

## 📚 Learnings & Takeaways

- Gained an understanding of computer vision workflows for gesture recognition
- Integrated MediaPipe with TF.js effectively for performance gains
- Improved UX by smoothing predictions and building contextual output
- Worked under real-world constraints such as background interference and webcam quality
- Bridged theoretical ML into usable, accessible tech for a real-world audience

## 💡 Future Enhancements

- Add more signs (verbs, emotions, commands)
- Multilingual gesture datasets
- Voice output for recognized sentences via speech synthesis (see the sketch after this list)
- Mobile support and PWA deployment
- Option to export conversations as text logs
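
For the voice-output idea, a sketch using the browser's built-in Web Speech API might look like this; `sentence` is carried over from the smoothing sketch above, and the trigger point is an assumption:

```js
// Sketch only: speak the current sentence with the browser's speech synthesis.
function speakSentence(text) {
  if (!('speechSynthesis' in window)) return; // unsupported browser
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = 'en-US';
  utterance.rate = 0.9;                 // slightly slower for clarity
  window.speechSynthesis.cancel();      // interrupt any ongoing speech
  window.speechSynthesis.speak(utterance);
}

// e.g. wire to a "Speak" button, or call after each committed word:
// speakSentence(sentence.join(' '));
```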

## 🤝 Contributors

- Clark (Developer, ML Model Trainer, Integration Engineer)
- [Your team members if any]
