MemoryError #65

@dinosaxon

Description

I get a MemoryError when trying to train a big model. Is there any way to train using all available GPUs?

I am using a Linux machine (gcloud) with 8 GPUs.

Traceback (most recent call last):
  File "main.py", line 445, in <module>
    train_model(parameters, args.dataset)
  File "main.py", line 73, in train_model
    dataset = build_dataset(params)
  File "/mnt/sdc200/nmt-keras/data_engine/prepare_data.py", line 229, in build_dataset
    saveDataset(ds, params['DATASET_STORE_PATH'])
  File "/mnt/sdc200/nmt-keras/src/keras-wrapper/keras_wrapper/dataset.py", line 52, in saveDataset
    pk.dump(dataset, open(store_path, 'wb'), protocol=-1)
MemoryError
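For reference, the failing call in saveDataset already requests the most compact pickle format: protocol=-1 is shorthand for pickle.HIGHEST_PROTOCOL, so the error points to the dataset object itself being too large to serialize in memory, not to a poor protocol choice. A minimal sketch of the same save/load pattern, using a stand-in dictionary instead of the real Dataset object and a context manager so the file handle is closed even on error:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for the Dataset object saved by saveDataset.
dataset = {"src": ["hello"] * 1000, "trg": ["hola"] * 1000}

# protocol=-1 (as used in saveDataset) is equivalent to
# pickle.HIGHEST_PROTOCOL; the context manager ensures the file
# is flushed and closed, unlike the bare open() in the traceback.
store_path = os.path.join(tempfile.mkdtemp(), "dataset.pkl")
with open(store_path, "wb") as f:
    pickle.dump(dataset, f, protocol=pickle.HIGHEST_PROTOCOL)

# Loading the pickled dataset back.
with open(store_path, "rb") as f:
    restored = pickle.load(f)
```

If pickling the full object exhausts RAM, saving its large components separately (or on a machine with more memory) is the usual workaround; note also that multi-GPU training would not help here, since the failure happens during dataset serialization, before training starts.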
