MemoryError: Unable to allocate 18.4 MiB for an array with shape (32, 224, 224, 3) and data type float32 #12063
elios-dimo started this conversation in Help: Best practices
Hello, I have this program that usually stops at the 2nd or 5th epoch. I have 16 GB of RAM, a GTX 1050 Ti GPU, and a Ryzen 1600. I use PyCharm, and I followed a tutorial that uses Visual Studio, but with a different dataset. My dataset is a folder dogcats containing sample, test, train, and validate folders. I don't actually use the sample folder. The test folder has 12,500 images of dogs and cats mixed together, the train folder has two subfolders, dogs and cats, with 11,500 images each, and the validate folder again has a dog and a cat subfolder with 1,000 images each. That's all the info I can provide. I'm new to AI and any help is appreciated. I searched everywhere but couldn't find a solution that works. Below you can see my error message.
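To make the folder layout concrete, here is a small sanity check that counts the images in each class subfolder (the paths match my setup above; only train and validate have class subfolders, so I skip test, and you may need to adjust the file extension):

from pathlib import Path

# root of my dataset (adjust to your own path)
ROOT = Path(r'C:\Users\Desktop\dogscats')

# flow_from_directory expects one subfolder per class,
# so count the images inside each class subfolder per split
for split in ('Train', 'Validate'):
    for class_dir in sorted((ROOT / split).iterdir()):
        if class_dir.is_dir():
            count = sum(1 for _ in class_dir.glob('*.jpg'))
            print(split, class_dir.name, count)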
Unable to allocate 18.4 MiB for an array with shape (32, 224, 224, 3) and data type float32
	 [[{{node PyFunc}}]]
	 [[IteratorGetNext]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1308]
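If my math is right, a (32, 224, 224, 3) float32 array is exactly the 18.4 MiB in the message (32 x 224 x 224 x 3 x 4 bytes), i.e. one batch of images, so it looks like the input pipeline can no longer allocate even a single batch after a couple of epochs. I read that telling TensorFlow to claim GPU memory on demand can help with out-of-memory errors; I haven't confirmed it fixes this particular one, but would something like this (based on the tf.config docs) be the right idea?

import tensorflow as tf

# allocate GPU memory on demand instead of reserving it all up front;
# this must run before any op touches the GPU
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

My full script is below.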
# libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# dataset paths
TRAIN = r'C:\Users\Desktop\dogscats\Train'
TEST = r'C:\Users\Desktop\dogscats\Test'
VAL = r'C:\Users\Desktop\dogscats\Validate'
# data augmentation and preparation
train_data = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
train_set = train_data.flow_from_directory(TRAIN, target_size=(224, 224), batch_size=32, class_mode='categorical')
val_data = ImageDataGenerator(rescale=1. / 255)
# validation images should come from VAL, not TRAIN
val_set = val_data.flow_from_directory(VAL, target_size=(224, 224), batch_size=32, class_mode='categorical')
# build the model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(5, 5), padding='same', activation='relu', input_shape=[224, 224, 3]))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(5, 5), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(5, 5), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(tf.keras.layers.Dropout(0.5))
# flatten before the dense layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=500, activation='relu'))
model.add(tf.keras.layers.Dense(units=2, activation='softmax'))
model.summary()  # summary() prints the architecture itself; wrapping it in print() just prints "None"
# compile the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# the generators already batch, so batch_size must not be passed to fit()
history = model.fit(x=train_set, validation_data=val_set, epochs=20)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
print(acc)
print(val_acc)
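The only other idea I have is to shrink the batches so each allocation is smaller: halving batch_size to 16 would cut the failing array from 18.4 MiB to about 9.2 MiB. Would changing just these two lines be a reasonable workaround?

# smaller batches halve the per-batch allocation (16 x 224 x 224 x 3 float32 is about 9.2 MiB)
train_set = train_data.flow_from_directory(TRAIN, target_size=(224, 224), batch_size=16, class_mode='categorical')
val_set = val_data.flow_from_directory(VAL, target_size=(224, 224), batch_size=16, class_mode='categorical')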