The project was completed in 24 hours as part of final HackUNT19, the University of North Texas's annual hackathon.

## General info
The theme at HackUNT 19 was to use technology to improve accessibility by finding a creative solution to benefit the lives of those with disabilities.

We wanted to make it easy for 70 million deaf people across the world to be independent of translators for their daily communication needs, so we designed the app to work as a personal translator, 24/7, for deaf people.
## Screenshots
## Technologies and Tools
* Python
* TensorFlow
* Keras
* OpenCV
## Setup
* Use the command prompt to set up the environment with `requirements_cpu.txt` or `requirements_gpu.txt` (CPU-only or GPU setup, respectively).
`python -m pip install -r requirements_cpu.txt`
This will install all the libraries required for the project.
## Process
* Run `set_hand_hist.py` to set the hand histogram for creating gestures (a sketch of the histogram idea follows this list).
* Once you get a good histogram, save it in the code folder, or you can use the histogram created by us that can be found [here]().
* Add gestures and label them using OpenCV, which uses the webcam feed, by running `create_gestures.py`; the captured gestures are stored in a database. Alternatively, you can use the gestures created by us [here]().
* Add different variations to the captured gestures by flipping all the images using `flip_images.py`.
* Run `load_images.py` to split all the captured gestures into training, validation, and test sets (a sketch of such a split also follows this list).
* To view all the gestures, run `display_all_gestures.py`
* Train the model using Keras by running `cnn_keras.py`
* Run `fun_util.py`. This will open up the gesture recognition window, which will use your webcam to interpret the trained American Sign Language gestures.
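The hand histogram lets the scripts pick out skin-coloured pixels so the hand can be separated from the background before a gesture is captured or recognized. Below is a minimal sketch of that idea using OpenCV histogram back-projection; the file names, threshold value, and the way the histogram is persisted are illustrative assumptions, not the exact code in `set_hand_hist.py`.

````
# A minimal sketch of building a hand histogram and using it to segment the
# hand via back-projection. File names, the threshold value, and the pickled
# output name are assumptions, not the exact set_hand_hist.py code.
import pickle

import cv2

def build_hand_histogram(hand_patch_bgr):
    # 2-D hue/saturation histogram of a skin-coloured patch; the value channel
    # is ignored to reduce sensitivity to lighting.
    hsv = cv2.cvtColor(hand_patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment_hand(frame_bgr, hist):
    # Back-project the histogram onto the frame, smooth it, and threshold it
    # to get a binary mask of the hand.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    back_proj = cv2.filter2D(back_proj, -1, disc)
    _, mask = cv2.threshold(back_proj, 150, 255, cv2.THRESH_BINARY)
    return mask

if __name__ == "__main__":
    patch = cv2.imread("hand_patch.png")    # hypothetical skin-colour sample
    frame = cv2.imread("webcam_frame.png")  # hypothetical webcam frame
    hist = build_hand_histogram(patch)
    with open("hist", "wb") as f:           # persist for the other scripts (name assumed)
        pickle.dump(hist, f)
    cv2.imwrite("hand_mask.png", segment_hand(frame, hist))
````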
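And a minimal sketch of splitting the captured gesture images into training, validation, and test sets; the folder layout, file pattern, and split ratios are assumptions, not necessarily what `load_images.py` does.

````
# A minimal sketch of a train/validation/test split over per-gesture folders.
# Folder names, the *.jpg pattern, and the 80/10/10 ratios are assumptions.
import os
import random
from glob import glob
from shutil import copy2

def split_gestures(src_dir="gestures", dst_dir="dataset",
                   val_frac=0.1, test_frac=0.1, seed=42):
    random.seed(seed)
    for label in sorted(os.listdir(src_dir)):
        images = glob(os.path.join(src_dir, label, "*.jpg"))
        random.shuffle(images)
        n_val = int(len(images) * val_frac)
        n_test = int(len(images) * test_frac)
        splits = {
            "val": images[:n_val],
            "test": images[n_val:n_val + n_test],
            "train": images[n_val + n_test:],
        }
        for split_name, files in splits.items():
            out_dir = os.path.join(dst_dir, split_name, label)
            os.makedirs(out_dir, exist_ok=True)
            for path in files:
                copy2(path, out_dir)  # copy the image into its split folder

if __name__ == "__main__":
    split_gestures()
````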
## Code Examples
````
# Model Training using CNN

import numpy as np
import pickle
import cv2, os
from glob import glob
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
````

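The snippet above stops after the imports. Below is a minimal sketch of the kind of small CNN those imports support; the input size (50x50 grayscale), layer sizes, and class count (44, matching the result reported below) are illustrative assumptions, not the exact architecture in `cnn_keras.py`.

````
# A minimal sketch of a gesture-classification CNN in Keras. Input size,
# layer sizes, and the number of classes are assumptions for illustration.
from keras import optimizers
from keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.models import Sequential

IMAGE_X, IMAGE_Y = 50, 50   # assumed gesture image size (grayscale)
NUM_CLASSES = 44            # number of ASL characters the model predicts

def build_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation="relu",
                     input_shape=(IMAGE_X, IMAGE_Y, 1)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), activation="relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation="relu"))
    model.add(Dropout(0.5))
    model.add(Dense(NUM_CLASSES, activation="softmax"))
    model.compile(loss="categorical_crossentropy",
                  optimizer=optimizers.Adam(),
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    model.summary()
    # model.fit(x_train, y_train, validation_data=(x_val, y_val),
    #           epochs=20, batch_size=500)  # data loading omitted in this sketch
````
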
## Features

List of features ready and TODOs for future development:
Our model was able to predict 44 ASL characters with a prediction accuracy >95%.
To-do list:
* Deploy the project on the cloud and create an API for using it.
* Increase the vocabulary of our model
* Incorporate feedback mechanism to make the model more robust
* Add more sign languages
## Status
Project is: _finished_. Our team was the winner of the UNT Hackathon 2019. You can find our final submission post on [devpost](http://bit.ly/2WWllwg).
Created by me and my teammates [Siddharth Oza](https://github.com/siddharthoza), [Ashish Sharma](https://github.com/ashish1993utd), and [Manish Shukla](https://github.com/Manishms18).