Not enough memory error #40

@Wutislife0022

Description

I hit this error right after setting up the repo (full traceback below).

I can't figure out what is causing it. I tried setting my worker count and batch size as low as possible, and it still fails. I have noticed that while the code runs, my machine's RAM is completely consumed by the python process, which is presumably why the error occurs. Any help is appreciated.

ASUS TUF GeForce RTX 3060
16GB DDR4 RAM 3600MHz
AMD Ryzen 7 5800X CPU
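For scale: the allocation that actually fails in the traceback below is tiny relative to 16GB, which suggests the address space is already exhausted by the time the CLIP weights load, rather than this one allocation being too big. A quick back-of-envelope check in plain Python:

```python
# Sanity check on the failed allocation reported by DefaultCPUAllocator.
failed_alloc_bytes = 151_781_376

mb = failed_alloc_bytes / 1024**2       # size of the failed request in MiB
fp32_params = failed_alloc_bytes // 4   # float32 values that would fit in it

print(f"{mb:.2f} MiB, ~{fp32_params / 1e6:.1f}M fp32 values")
# -> 144.75 MiB, ~37.9M fp32 values
```

So the failing request is only ~145 MiB; the earlier `torch.load` of the main checkpoint has already filled most of physical RAM before this point.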

│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\transformers\modeling_utils.py:392 in          │
│ load_state_dict                                                                                  │
│                                                                                                  │
│    389 │   Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.   │
│    390 │   """                                                                                   │
│    391 │   try:                                                                                  │
│ ❱  392 │   │   return torch.load(checkpoint_file, map_location="cpu")                            │
│    393 │   except Exception as e:                                                                │
│    394 │   │   try:                                                                              │
│    395 │   │   │   with open(checkpoint_file) as f:                                              │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\torch\serialization.py:712 in load             │
│                                                                                                  │
│    709 │   │   │   │   │   │   │   │     " silence this warning)", UserWarning)                  │
│    710 │   │   │   │   │   opened_file.seek(orig_position)                                       │
│    711 │   │   │   │   │   return torch.jit.load(opened_file)                                    │
│ ❱  712 │   │   │   │   return _load(opened_zipfile, map_location, pickle_module, **pickle_load_  │
│    713 │   │   return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args  │
│    714                                                                                           │
│    715                                                                                           │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\torch\serialization.py:1049 in _load           │
│                                                                                                  │
│   1046 │                                                                                         │
│   1047 │   unpickler = UnpicklerWrapper(data_file, **pickle_load_args)                           │
│   1048 │   unpickler.persistent_load = persistent_load                                           │
│ ❱ 1049 │   result = unpickler.load()                                                             │
│   1050 │                                                                                         │
│   1051 │   torch._utils._validate_loaded_sparse_tensors()                                        │
│   1052                                                                                           │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\torch\serialization.py:1019 in persistent_load │
│                                                                                                  │
│   1016 │   │                                                                                     │
│   1017 │   │   if key not in loaded_storages:                                                    │
│   1018 │   │   │   nbytes = numel * torch._utils._element_size(dtype)                            │
│ ❱ 1019 │   │   │   load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))                │
│   1020 │   │                                                                                     │
│   1021 │   │   return loaded_storages[key]                                                       │
│   1022                                                                                           │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\torch\serialization.py:997 in load_tensor      │
│                                                                                                  │
│    994 │   def load_tensor(dtype, numel, key, location):                                         │
│    995 │   │   name = f'data/{key}'                                                              │
│    996 │   │                                                                                     │
│ ❱  997 │   │   storage = zip_file.get_storage_from_record(name, numel, torch._UntypedStorage).s  │
│    998 │   │   # TODO: Once we decide to break serialization FC, we can                          │
│    999 │   │   # stop wrapping with _TypedStorage                                                │
│   1000 │   │   loaded_storages[key] = torch.storage._TypedStorage(                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: [enforce fail at
C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:81] data.
DefaultCPUAllocator: not enough memory: you tried to allocate 151781376 bytes.

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\ai\Stable-textual-inversion_win\main.py:620 in <module>                                       │
│                                                                                                  │
│   617 │   │   │   config.model.params.personalization_config.params.initializer_words[0] = opt   │
│   618 │   │                                                                                      │
│   619 │   │   if opt.actual_resume:                                                              │
│ ❱ 620 │   │   │   model = load_model_from_config(config, opt.actual_resume)                      │
│   621 │   │   else:                                                                              │
│   622 │   │   │   model = instantiate_from_config(config.model)                                  │
│   623                                                                                            │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\main.py:39 in load_model_from_config                          │
│                                                                                                  │
│    36 │   pl_sd = torch.load(ckpt, map_location="cuda")                                          │
│    37 │   sd = pl_sd["state_dict"]                                                               │
│    38 │   config.model.params.ckpt_path = ckpt                                                   │
│ ❱  39 │   model = instantiate_from_config(config.model)                                          │
│    40 │   m, u = model.load_state_dict(sd, strict=False)                                         │
│    41 │   if len(m) > 0 and verbose:                                                             │
│    42 │   │   print("missing keys:")                                                             │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\ldm\util.py:85 in instantiate_from_config                     │
│                                                                                                  │
│    82 │   │   elif config == "__is_unconditional__":                                             │
│    83 │   │   │   return None                                                                    │
│    84 │   │   raise KeyError("Expected key `target` to instantiate.")                            │
│ ❱  85 │   return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)    │
│    86                                                                                            │
│    87                                                                                            │
│    88 def get_obj_from_str(string, reload=False):                                                │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\ldm\models\diffusion\ddpm.py:472 in __init__                  │
│                                                                                                  │
│    469 │   │   else:                                                                             │
│    470 │   │   │   self.register_buffer('scale_factor', torch.tensor(scale_factor))              │
│    471 │   │   self.instantiate_first_stage(first_stage_config)                                  │
│ ❱  472 │   │   self.instantiate_cond_stage(cond_stage_config)                                    │
│    473 │   │                                                                                     │
│    474 │   │   self.cond_stage_forward = cond_stage_forward                                      │
│    475 │   │   self.clip_denoised = False                                                        │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\ldm\models\diffusion\ddpm.py:569 in instantiate_cond_stage    │
│                                                                                                  │
│    566 │   │   else:                                                                             │
│    567 │   │   │   assert config != '__is_first_stage__'                                         │
│    568 │   │   │   assert config != '__is_unconditional__'                                       │
│ ❱  569 │   │   │   model = instantiate_from_config(config)                                       │
│    570 │   │   │   self.cond_stage_model = model                                                 │
│    571                                                                                           │
│    572                                                                                           │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\ldm\util.py:85 in instantiate_from_config                     │
│                                                                                                  │
│    82 │   │   elif config == "__is_unconditional__":                                             │
│    83 │   │   │   return None                                                                    │
│    84 │   │   raise KeyError("Expected key `target` to instantiate.")                            │
│ ❱  85 │   return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)    │
│    86                                                                                            │
│    87                                                                                            │
│    88 def get_obj_from_str(string, reload=False):                                                │
│                                                                                                  │
│ D:\ai\Stable-textual-inversion_win\ldm\modules\encoders\modules.py:163 in __init__               │
│                                                                                                  │
│   160 │   def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_lengt   │
│   161 │   │   super().__init__()                                                                 │
│   162 │   │   self.tokenizer = CLIPTokenizer.from_pretrained(version)                            │
│ ❱ 163 │   │   self.transformer = CLIPTextModel.from_pretrained(version)                          │
│   164 │   │   self.device = device                                                               │
│   165 │   │   self.max_length = max_length                                                       │
│   166 │   │   #self.freeze()                                                                     │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\transformers\modeling_utils.py:1978 in         │
│ from_pretrained                                                                                  │
│                                                                                                  │
│   1975 │   │   if from_pt:                                                                       │
│   1976 │   │   │   if not is_sharded and state_dict is None:                                     │
│   1977 │   │   │   │   # Time to load the checkpoint                                             │
│ ❱ 1978 │   │   │   │   state_dict = load_state_dict(resolved_archive_file)                       │
│   1979 │   │   │                                                                                 │
│   1980 │   │   │   # set dtype to instantiate the model under:                                   │
│   1981 │   │   │   # 1. If torch_dtype is not None, we use that dtype                            │
│                                                                                                  │
│ C:\Users\Leo\anaconda3\envs\ldm\lib\site-packages\transformers\modeling_utils.py:396 in          │
│ load_state_dict                                                                                  │
│                                                                                                  │
│    393 │   except Exception as e:                                                                │
│    394 │   │   try:                                                                              │
│    395 │   │   │   with open(checkpoint_file) as f:                                              │
│ ❱  396 │   │   │   │   if f.read().startswith("version"):                                        │
│    397 │   │   │   │   │   raise OSError(                                                        │
│    398 │   │   │   │   │   │   "You seem to have cloned a repository without having git-lfs ins  │
│    399 │   │   │   │   │   │   "git-lfs and run `git lfs install` followed by `git lfs pull` in  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
MemoryError
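Side note: the secondary `OSError` path in `modeling_utils.py` above is transformers checking whether the checkpoint is a git-lfs pointer stub (a small text file beginning with "version") rather than real weights. A quick stdlib check to rule that out for a downloaded file; `looks_like_lfs_pointer` is just a hypothetical helper name:

```python
from pathlib import Path


def looks_like_lfs_pointer(path: str) -> bool:
    """Heuristic from the transformers traceback above: a git-lfs pointer
    stub is a ~130-byte text file starting with "version", whereas a real
    PyTorch checkpoint is a multi-hundred-MB zip archive."""
    p = Path(path)
    if p.stat().st_size < 1024:
        return p.read_bytes().startswith(b"version")
    return False
```

If this returns `True` for the cached CLIP weights, the fix is `git lfs install` followed by `git lfs pull`, as the error message suggests; if it returns `False`, the file is genuine and the failure really is memory exhaustion.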
