
# 🦖 DyNaGO – Dynamic Natural Gesture Operations

DyNaGO is a real-time, AI-powered human–computer interface built on gesture recognition. It uses computer vision and machine learning to let users control their machines with natural, dynamic hand gestures, with no special hardware required.

Whether for accessibility, low-interaction environments, or futuristic UI prototyping, DyNaGO delivers a lightweight, modular, and efficient solution for gesture-based computing.


## ✨ Features

- 🔧 SVM + MediaPipe–based gesture classification
- ⚡ Dynamic velocity-vector analysis for real-time gesture detection
- 🎮 System command mapping: volume control, tab switching, app launch, and more
- 🖥️ Fully functional on standard webcams
- 🧱 Modular architecture – easily extended with new gestures or models
- 🧪 Trained on 4,200+ gesture samples across 6 static classes

## 🧠 Dataset & Training Summary

- Total samples: 4,291
- Gestures: fist, two_fingers, three_fingers (2 types), pinch, point
- Normalization: wrist-centered + scaled to the unit sphere
- Overall accuracy: 92.3%
- Best class: point (99.4%)
- Weakest class: pinch (72.3%)
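The wrist-centered, unit-sphere normalization described above can be sketched as follows. This is a minimal illustration assuming MediaPipe Hands' 21-point landmark layout (landmark 0 is the wrist); the function name is hypothetical, not DyNaGO's actual code.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Wrist-center 21 hand landmarks and scale them to the unit sphere.

    `landmarks` is a (21, 3) array of (x, y, z) coordinates, e.g. from
    MediaPipe Hands, where landmark 0 is the wrist.
    """
    pts = np.asarray(landmarks, dtype=np.float64)
    pts = pts - pts[0]                       # translate: wrist becomes the origin
    scale = np.linalg.norm(pts, axis=1).max()
    if scale > 0:
        pts = pts / scale                    # farthest landmark lands on the unit sphere
    return pts
```

Normalizing this way makes the SVM features invariant to where the hand sits in the frame and how close it is to the camera.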

Confusion Matrix Preview:


๐Ÿ— System Architecture

  1. Initialization โ€“ Load webcam, environment, set base gesture
  2. Static Gesture Detection โ€“ Classify using MediaPipe landmarks + SVM
  3. Motion Vector Analysis โ€“ Track gesture trajectory using velocity between frames
  4. Action Mapping โ€“ Trigger system functions via OS hooks / APIs
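Steps 3 and 4 above can be sketched as follows. The speed threshold, direction labels, and the gesture-to-action table are illustrative assumptions, not DyNaGO's actual values or bindings.

```python
import numpy as np

def wrist_velocity(prev_wrist, curr_wrist, dt):
    """Step 3: per-frame velocity vector of a tracked landmark (e.g. the wrist)."""
    return (np.asarray(curr_wrist, dtype=float) - np.asarray(prev_wrist, dtype=float)) / dt

def classify_motion(v, speed_threshold=0.5):
    """Reduce a 2-D velocity vector to a coarse motion label.

    Combined with the static class from step 2, the motion label
    identifies one dynamic gesture.
    """
    vx, vy = v
    if np.hypot(vx, vy) < speed_threshold:
        return "static"
    if abs(vx) >= abs(vy):
        return "swipe_right" if vx > 0 else "swipe_left"
    return "swipe_down" if vy > 0 else "swipe_up"  # image y grows downward

# Step 4: hypothetical (static gesture, motion) -> action table.
ACTIONS = {
    ("two_fingers", "swipe_left"): "previous_tab",
    ("two_fingers", "swipe_right"): "next_tab",
    ("fist", "static"): "pause_media",
}

def map_action(static_gesture, motion):
    """Look up the system action for a recognized dynamic gesture, if any."""
    return ACTIONS.get((static_gesture, motion))
```

In the real system the action strings would be replaced by OS hooks, e.g. keyboard shortcuts or media-key events fired through an automation library.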

## 🛠 Usage

### Installation

```shell
git clone https://github.com/KreativeThinker/DyNaGO
cd DyNaGO
python -m venv .venv
source .venv/bin/activate
pip install poetry
poetry install
```

### Commands

| Command | Task |
|---------|------|
| `poetry run capture` | Capture training samples with label |
| `poetry run normalize` | Normalize and prepare dataset for training |
| `poetry run train_static` | Train SVM model |
| `poetry run dev` | Launch dynamic gesture predictor |

> See all commands: `pyproject.toml`


## 📈 Experiment Highlights

| Gesture | Accuracy | AUC | Confusions |
|---------|----------|-----|------------|
| point | 99.4% | 1.00 | minor confusion with fist |
| pinch | 72.3% | 0.95 | major confusion with palm and point |
| three_fingers | 87.3% | 1.00 | some confusion with two_fingers |

📊 See full report: Experiment Analysis


## 🎥 Demo


## 🌱 Future Work

- Better configuration file
- Hybrid dynamic gesture detection with lightweight SVM + velocity-vector analysis
- Complete cursor control
- Real-time inference optimization (GPU support)
- Multi-gesture chaining (command macros)
- Browser-based version via TensorFlow.js
- Integrated audio agent with custom function execution (branch `voice`)

## 👨‍💻 Author

Built by Anumeya Sehgal

- ✉ Email: anumeyasehgal@proton.me
- 🌐 LinkedIn: anumeya-sehgal


## 📜 License

MIT License – free for use, distribution, and enhancement.