load a trained model #1070
Unanswered
mostafafwefw asked this question in Q&A
Replies: 1 comment
-
check what GPU you got
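One way to check from inside the notebook is a quick PyTorch probe (a minimal sketch, assuming a Colab/Jupyter session with PyTorch already installed, as the paths in the traceback below suggest):

```python
import torch

# Report whether the current session actually has a CUDA GPU attached.
# If this prints False, torch.load(..., map_location="cuda") will raise the
# RuntimeError shown in the traceback below.
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
if torch.cuda.is_available():
    # e.g. "Tesla T4" on a standard Colab GPU runtime
    print("GPU name:", torch.cuda.get_device_name(0))
```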
-
Any fix?
Session found, loading the trained model ...
Converting to Diffusers ...
Traceback (most recent call last):
File "/content/convertodiff.py", line 1115, in
convert(args)
File "/content/convertodiff.py", line 1066, in convert
text_encoder, vae, unet = load_models_from_stable_diffusion_checkpoint(v2_model, args.model_to_load)
File "/content/convertodiff.py", line 835, in load_models_from_stable_diffusion_checkpoint
checkpoint = load_checkpoint_with_text_encoder_conversion(ckpt_path)
File "/content/convertodiff.py", line 816, in load_checkpoint_with_text_encoder_conversion
checkpoint = torch.load(ckpt_path, map_location="cuda")
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1131, in _load
result = unpickler.load()
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1101, in persistent_load
load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1083, in load_tensor
wrap_storage=restore_location(storage, location),
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1052, in restore_location
return default_restore_location(storage, map_location)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 215, in default_restore_location
result = fn(storage, location)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Conversion error, if the error persists, remove the CKPT file from the current session folder
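The RuntimeError itself points at the cause: the session has no visible CUDA device, yet the failing call in convertodiff.py (line 816 in the traceback) hard-codes map_location="cuda". A minimal sketch of the workaround the error message suggests, with ckpt_path as a placeholder for your own checkpoint file:

```python
import torch

# Pick map_location from the runtime instead of hard-coding "cuda", as the
# RuntimeError above recommends; on a CPU-only session the checkpoint is
# mapped into CPU memory and the load no longer fails.
ckpt_path = "/content/model.ckpt"  # placeholder, point this at your CKPT file
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = torch.load(ckpt_path, map_location=device)
```

In a Colab session the simpler fix is usually to make sure the runtime type is set to GPU (Runtime > Change runtime type) and rerun the notebook, since the conversion step evidently expects a GPU to be present.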