I get a MemoryError when trying to train a large model. Is there any way to train using all available GPUs?
I am using a Linux machine (Google Cloud) with 8 GPUs.
```
Traceback (most recent call last):
  File "main.py", line 445, in <module>
    train_model(parameters, args.dataset)
  File "main.py", line 73, in train_model
    dataset = build_dataset(params)
  File "/mnt/sdc200/nmt-keras/data_engine/prepare_data.py", line 229, in build_dataset
    saveDataset(ds, params['DATASET_STORE_PATH'])
  File "/mnt/sdc200/nmt-keras/src/keras-wrapper/keras_wrapper/dataset.py", line 52, in saveDataset
    pk.dump(dataset, open(store_path, 'wb'), protocol=-1)
MemoryError