Hi @StarShang, since I'm just starting out with the library as well, I'm not quite sure whether this is applicable to EfficientAD, but there is a class called `Tiler` in the `data.utils` module. It will split your large image into multiple tiles, which may also help in this case. Regarding the implementation you gave: I think the student's weights should be updated, and therefore its gradients computed, so I'm not quite sure why you run it with `torch.no_grad()`.
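A minimal sketch of how `Tiler` might be used here, assuming a `Tiler(tile_size, stride)` constructor with `tile()`/`untile()` methods under the import path mentioned above (details may differ by anomalib version); the sizes are illustrative:

```python
import torch
from anomalib.data.utils import Tiler  # path as mentioned above; may vary by version

# Split one large image into overlapping 1024x1024 tiles, so the model
# only ever sees tile-sized inputs, then stitch the outputs back together.
tiler = Tiler(tile_size=1024, stride=512)   # illustrative values

batch = torch.rand(1, 3, 4000, 5000)        # stand-in for a 5000x4000 image
tiles = tiler.tile(batch)                   # (n_tiles, 3, 1024, 1024)

# model(tiles) would go here, tile by tile or in small batches,
# keeping peak GPU memory bounded by the tile size rather than the image.
full = tiler.untile(tiles)                  # back to full resolution
```

This trades one big forward pass for several small ones, so peak memory depends on the tile size instead of the full image size.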
Thanks @NilsB98 @blaz-r. I have revised the model again: all three `self.students` still run in each forward pass, but in each epoch gradients are computed, and descent performed, for only one of them. After training I found that GPU memory usage decreased by about 20%, while the final accuracy only dropped from 98% to 97.7%.
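The revised code was not posted; the following is a hypothetical sketch of the idea as described (the class name and `make_student` factory are invented here): all three students run forward, but only the active one keeps an autograd graph, and the active index rotates each epoch.

```python
import torch
import torch.nn as nn

class MultiStudentSketch(nn.Module):
    """Three students; only one accumulates gradients at a time."""

    def __init__(self, make_student):
        super().__init__()
        self.students = nn.ModuleList([make_student() for _ in range(3)])
        self.active = 0  # index of the student currently being trained

    def set_active(self, epoch: int) -> None:
        # Called once per epoch by the training loop.
        self.active = epoch % len(self.students)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for i, student in enumerate(self.students):
            if i == self.active:
                outputs.append(student(x))      # graph kept: this one trains
            else:
                with torch.no_grad():           # activations freed immediately
                    outputs.append(student(x))
        return outputs
```

Not storing activations for two of the three students is what saves the memory; because the active student rotates, all three still train over the full run, which would explain why the accuracy penalty stays small.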
When I train on high-resolution images (around 5000×4000) with EfficientAD at an input size of 1024×1024, GPU memory usage is very high.
This is the config file:
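A fragment of the kind of config involved (key names are assumptions following anomalib's YAML layout; only the input size is taken from the question above):

```yaml
# Illustrative fragment, not the original config.
dataset:
  image_size: 1024        # network input size: 1024x1024
model:
  name: efficient_ad
```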
So I changed `torch_model.py` in EfficientAD like this:
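A sketch of what that change amounts to, per the discussion above (not the exact diff; `student` here is a stand-in module): the student forward pass is wrapped in `torch.no_grad()`.

```python
import torch
import torch.nn as nn

student = nn.Conv2d(3, 384, kernel_size=4)   # stand-in for the student network
batch = torch.rand(1, 3, 1024, 1024)

# Forwarding the student without a graph frees activations right away,
# which is where the ~20% memory saving comes from...
with torch.no_grad():
    student_output = student(batch)

# ...but the output carries no grad history, so a loss built on it cannot
# backpropagate into the student: the student silently stops training,
# which would explain the pixel AUROC dropping from 0.98 to 0.93.
print(student_output.requires_grad)   # False
```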
After I changed the code used for the `self.student` forward pass, GPU memory usage dropped by about 20%, but the pixel AUROC dropped from 0.98 to 0.93. Did I make a mistake? Also, may I ask if there is a good way to perform anomaly detection on high-resolution images?