#!/usr/bin/env python
# coding: utf-8
# # Gesture Recognition
# Ia this group project, you are going to build a 3D Conv model that will be able to predict the 5 gestures correctly. Please import the following libraries to get started.
# In[ ]:
import numpy as np
import os
import datetime
import random
import cv2
from scipy.misc import imread, imresize
# We set the random seed so that the results don't vary drastically.
# In[ ]:
np.random.seed(30)
import random as rn
rn.seed(30)
from keras import backend as K
import tensorflow as tf
tf.set_random_seed(30)
# In this block, you read the folder names for training and validation. You also set the `batch_size` here. Note that you set the batch size in such a way that you are able to use the GPU in full capacity. You keep increasing the batch size until the machine throws an error.
# In[ ]:
train_doc = np.random.permutation(open('/home/bkumar/upgrad/Project_data/train.csv').readlines())
val_doc = np.random.permutation(open('/home/bkumar/upgrad/Project_data/val.csv').readlines())
batch_size = 50
#experiment with the batch size
# ## Generator
# This is one of the most important part of the code. The overall structure of the generator has been given. In the generator, you are going to preprocess the images as you have images of 2 different dimensions as well as create a batch of video frames. You have to experiment with `img_idx`, `y`,`z` and normalization such that you get high accuracy.
# In[ ]:
# `folder_list` is the contents of the .csv that is passed in: `train_doc` for
# training and `val_doc` for validation. `source_path` is the root directory
# where the data is available. `img_idx` is the list of frame indices we want
# to use: each video has 30 frames, but we might not want all 30 in training.
# Sampling, say, 15 of the 30 frames still gives good information about the
# sequence, while using all 30 costs a lot of memory and training time, so we
# sample indices rather than using every image.
#
# `np.random.permutation(folder_list)` is a random permutation of the csv
# rows. In the csv, all sequences belonging to class 0 are at the top, all
# sequences belonging to class 1 come next, and so on. Feeding the data in
# that order biases the network during training, so we shuffle the training
# data so that batches contain a random mix of classes.
#
# If the batch size is 10 sequences (note that one sequence is one video),
# then num_batches = number of sequences / batch size; with 100 sequences and
# batch size 10, grabbing 10 batches means grabbing all 100 sequences. If we
# have 58 videos and batch size 5, 11 full batches cover only 55 videos and 3
# sequences are always left over, so num_batches must effectively be 12: we
# need to write code that completes one full pass over the training data.
#
# batch_data = np.zeros((batch_size, x, y, z, 3)) creates an empty container:
# x is the number of images used per video (if `img_idx` has length 10, x is
# 10), (y, z) is the final height and width of the input images, and 3 is the
# number of RGB channels.
#
# batch_labels = np.zeros((batch_size, 5)) is the one-hot representation: if
# a video sequence belongs to class 1, index 1 of its one-hot vector is 1.
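The batch arithmetic and one-hot layout described above can be checked with a tiny sketch; the numbers 58 and 5 are the ones from the comment, and the variable names here are illustrative, not part of the starter code:

```python
import numpy as np

num_sequences, batch_size = 58, 5
num_full_batches = num_sequences // batch_size   # 11 full batches cover 55 videos
leftover = num_sequences % batch_size            # 3 sequences remain

# one-hot labels for a batch: a video of class 1 gets a 1 at index 1
batch_labels = np.zeros((batch_size, 5))
batch_labels[0, 1] = 1  # first video in this batch belongs to class 1

print(num_full_batches, leftover)  # 11 3
print(batch_labels[0])             # [0. 1. 0. 0. 0.]
```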
def generator(source_path, folder_list, batch_size):
    print('Source path = ', source_path, '; batch size =', batch_size)
    # create a list of image numbers you want to use for a particular video
    img_idx = random.sample(range(0, 30), 15)
    print(img_idx)
    while True:
        t = np.random.permutation(folder_list)
        num_batches = len(folder_list) // batch_size  # calculate the number of full batches
        for batch in range(num_batches):  # we iterate over the number of batches
            x = len(img_idx)  # x is the number of images you use for each video
            # (100, 100) is the final size of the input images and 3 is the number of RGB channels
            batch_data = np.zeros((batch_size, x, 100, 100, 3))
            batch_labels = np.zeros((batch_size, 5))  # batch_labels is the one hot representation of the output
            for folder in range(batch_size):  # iterate over the batch_size
                # read all the images in the folder
                imgs = os.listdir(source_path + '/' + t[folder + (batch * batch_size)].split(';')[0])
                for idx, item in enumerate(img_idx):  # iterate over the frames/images of a folder to read them in
                    image = imread(source_path + '/' + t[folder + (batch * batch_size)].strip().split(';')[0]
                                   + '/' + imgs[item]).astype(np.float32)
                    # resize the images: they come in 2 different shapes, and Conv3D
                    # will throw an error if the inputs in a batch have different shapes
                    image = cv2.resize(image, (100, 100))
                    # normalise each channel and feed in the image
                    batch_data[folder, idx, :, :, 0] = image[:, :, 0] / 255
                    batch_data[folder, idx, :, :, 1] = image[:, :, 1] / 255
                    batch_data[folder, idx, :, :, 2] = image[:, :, 2] / 255
                batch_labels[folder, int(t[folder + (batch * batch_size)].strip().split(';')[2])] = 1
            yield batch_data, batch_labels  # yield the batch_data and the batch_labels, remember what yield does
# write the code for the remaining data points which are left after full batches
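One way to complete the pass over the data, as the comment above asks, is a final partial batch built from the leftover sequences. A minimal sketch of the index bookkeeping only (the numbers are illustrative; the actual filling of the partial batch would reuse the same loop body as the generator):

```python
num_sequences = 58
batch_size = 5
num_batches = num_sequences // batch_size          # 11 full batches
remaining = num_sequences - num_batches * batch_size  # 3 leftover sequences

leftover_indices = []
if remaining > 0:
    # these rows of the shuffled csv were never seen by the full batches;
    # a partial batch would allocate np.zeros((remaining, x, 100, 100, 3))
    # and fill it exactly like the loop above, then yield it
    leftover_indices = list(range(num_batches * batch_size, num_sequences))

print(leftover_indices)  # [55, 56, 57]
```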
# Note here that a video is represented above in the generator as (number of images, height, width, number of channels). Take this into consideration while creating the model architecture.
# In[ ]:
curr_dt_time = datetime.datetime.now()
train_path = '/home/bkumar/upgrad/Project_data/train'
val_path = '/home/bkumar/upgrad/Project_data/val'
num_train_sequences = len(train_doc)
print('# training sequences =', num_train_sequences)
num_val_sequences = len(val_doc)
print('# validation sequences =', num_val_sequences)
num_epochs = 5
# choose the number of epochs
print ('# epochs =', num_epochs)
for x, y in generator(train_path, train_doc, batch_size):
    print(x)
    print(y)
    break  # the generator loops forever, so stop after one sanity-check batch
# ## Model
# Here you make the model using different functionalities that Keras provides. Remember to use `Conv3D` and `MaxPooling3D` and not `Conv2D` and `Maxpooling2D` for a 3D convolution model. You would want to use `TimeDistributed` while building a Conv2D + RNN model. Also remember that the last layer is the softmax. Design the network in such a way that the model is able to give good accuracy on the least number of parameters so that it can fit in the memory of the webcam.
# In[ ]:
from keras.models import Sequential, Model
from keras.layers import Dense, GRU, Flatten, TimeDistributed, BatchNormalization, Activation
from keras.layers.convolutional import Conv3D, MaxPooling3D
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
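A minimal Conv3D sketch of the kind of architecture described above, assuming 15 frames of 100x100 RGB input as produced by the generator; the filter counts and layer sizes here are illustrative choices, not the required model:

```python
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense, BatchNormalization, Activation

model = Sequential()
# input: (frames, height, width, channels) = (15, 100, 100, 3)
model.add(Conv3D(16, (3, 3, 3), padding='same', input_shape=(15, 100, 100, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(32, (3, 3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(5, activation='softmax'))  # last layer is the softmax over the 5 gestures
```

Keeping the filter counts small, as here, helps satisfy the parameter-budget constraint mentioned above.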
# In[ ]:
optimiser = 'SGD'
#write your optimizer
#model.compile(optimizer=optimiser, loss='categorical_crossentropy', metrics=['categorical_accuracy'])
#print (model.summary())
# Let us create the `train_generator` and the `val_generator` which will be used in `.fit_generator`.
# In[ ]:
#next(generator(train_path, train_doc, batch_size))
#next(generator(val_path, val_doc, batch_size))
train_generator = generator(train_path, train_doc, batch_size)
print("training generator done")
val_generator = generator(val_path, val_doc, batch_size)
print("validation generator done")
# In[ ]:
model_name = 'model_init' + '_' + str(curr_dt_time).replace(' ','').replace(':','_') + '/'
if not os.path.exists(model_name):
    os.mkdir(model_name)
filepath = model_name + 'model-{epoch:05d}-{loss:.5f}-{categorical_accuracy:.5f}-{val_loss:.5f}-{val_categorical_accuracy:.5f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=False, save_weights_only=False, mode='auto', period=1)
# ReduceLROnPlateau callback: lower the learning rate when val_loss stops
# improving (factor and patience here are starting points to tune)
LR = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=2, verbose=1)
callbacks_list = [checkpoint, LR]
# The `steps_per_epoch` and `validation_steps` are used by `fit_generator` to decide the number of next() calls it need to make.
# In[ ]:
if (num_train_sequences % batch_size) == 0:
    steps_per_epoch = int(num_train_sequences/batch_size)
else:
    steps_per_epoch = (num_train_sequences//batch_size) + 1

if (num_val_sequences % batch_size) == 0:
    validation_steps = int(num_val_sequences/batch_size)
else:
    validation_steps = (num_val_sequences//batch_size) + 1
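The branching above is just ceiling division, so each step count can be checked against `math.ceil` (the numbers below are illustrative):

```python
import math

def steps_for(num_sequences, batch_size):
    # equivalent to the if/else above: one extra step for any partial batch
    return math.ceil(num_sequences / batch_size)

print(steps_for(100, 50))  # 2  (divides evenly)
print(steps_for(58, 5))    # 12 (11 full batches + 1 partial)
```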
# Let us now fit the model. This will start training the model and with the help of the checkpoints, you'll be able to save the model at the end of each epoch.
# In[ ]:
#model.fit_generator(train_generator, steps_per_epoch=steps_per_epoch, epochs=num_epochs, verbose=1,
# callbacks=callbacks_list, validation_data=val_generator,
# validation_steps=validation_steps, class_weight=None, workers=1, initial_epoch=0)
# In[ ]: