Anyone had any luck getting the new x4-upscaler SD model to run at half precision? #5266
Replies: 2 comments
-
Hi, any luck with this issue?
-
Nah, no luck yet unfortunately; I haven't been back to experiment with this since I posted the query.
-
I got it working via the instructions at https://github.com/Stability-AI/stablediffusion. It's a really impressive upscaler but incredibly VRAM hungry; the largest image I can upscale on my 10GB 3080 is 256x256.
It's running in full precision, though. Presumably if I could get it going at half precision I could bump that up to 512x512, which would be quite acceptable.
I'm having trouble getting half precision to work, though. I tried adding "model = model.half()" after line 27 of scripts/superresolution.py. That does cast the model to fp16, but when running inference I get the following error:
File "C:\Users\username\.conda\envs\sd2upscale\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
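As far as I can tell the error itself makes sense: model.half() only converts the weights to fp16, while the tensors the sampling code feeds in are still fp32, and conv2d insists both sides match. A tiny standalone snippet reproduces the same thing (this is just a toy conv, nothing to do with the upscaler's actual code):

    import torch
    import torch.nn as nn

    # Toy reproduction of the mismatch: fp16 weights, fp32 input.
    conv = nn.Conv2d(3, 8, kernel_size=3).cuda().half()
    x = torch.randn(1, 3, 64, 64, device="cuda")   # still fp32

    try:
        conv(x)            # raises the same "Input type ... and weight type ..." RuntimeError
    except RuntimeError as err:
        print(err)

    out = conv(x.half())   # casting the input as well lets the op run
    print(out.dtype)       # torch.float16

So presumably the proper fix is to cast the inputs where they're built in the script, rather than poking at conv.py itself.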
I tried resolving this by changing line 453 of conv.py to refer to input.half() instead of input. This made inference complete without error, but it output a black image. So clearly that wasn't the right way to fix the problem.
Does anyone know the correct way to get this upscaler running at half precision? It's probably already obvious that I'm a complete dilettante here and just winging it.
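One thing I haven't actually tried yet is wrapping the sampling call in torch.autocast instead of calling model.half() at all. From what I understand, autocast runs convs and matmuls in fp16 on the fly while keeping numerically sensitive ops in fp32, which is supposedly why forcing everything to fp16 can overflow and give you black images. Purely as a sketch, with a toy module standing in for the real UNet and sampling code from superresolution.py:

    import torch
    import torch.nn as nn

    # Toy stand-in for the upscaler's UNet -- not the real model, just something convolutional.
    net = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1),
        nn.SiLU(),
        nn.Conv2d(8, 3, 3, padding=1),
    ).cuda()                                          # weights stay fp32, no .half() needed

    x = torch.randn(1, 3, 256, 256, device="cuda")    # fp32 input, like the script produces

    # autocast casts each conv/matmul to fp16 per op and leaves the rest in fp32,
    # so there's no dtype-mismatch error to patch around.
    with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
        out = net(x)

    print(out.dtype)   # torch.float16 inside the autocast region

If that works in the real script, the weights themselves stay fp32 so it wouldn't halve the model's footprint, but the activations should shrink, which might be enough headroom for 512x512. No idea whether it actually is, though.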