ELU/ReLU and MaxPool2D #157
-
Hello, happy new year! I've been studying your code, particularly files like UNet_parts.py. I'm sorry for the long message, but I would love to get a clear understanding of the code. I felt like the code does something that is not fully described in the paper. Thank you in advance! :)

Feature 1 / Feature 2
I was thinking that you had a separate piece of code for the very first 3 convolutional layers after the image input. Hence, I suspect that it goes like this: is this what you were doing?

Feature 3
Additionally, I was wondering if the use of PyTorch Lightning would take over the duty of the weight generator? For example, I know the adoption of PyTorch Lightning means we don't need to call backward() and zero_grad() explicitly.

Feature 4
Then I see "loss_epoch.mean" being returned as part of training_step(). I guess we return this value, but we don't call backward() on this value, only on loss_val.mean()?

Feature 5
After sigmoid, do you then pass the sigmoid tensor into post_processing.py to calculate the Bernoulli probability map?
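For concreteness, here is a minimal sketch of what "a separate piece of code for the very first 3 convolutional layers" could look like in PyTorch. All names, channel counts, and kernel sizes here are assumptions for illustration, not decode's actual implementation:

```python
import torch
import torch.nn as nn


class InputBlock(nn.Module):
    """Hypothetical input head: three conv layers after the image input.

    Channel counts (1 -> 48) and kernel size 3 are illustrative guesses,
    not values taken from UNet_parts.py.
    """

    def __init__(self, in_ch: int = 1, out_ch: int = 48,
                 activation=nn.ELU):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            activation(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            activation(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            activation(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# padding=1 with kernel_size=3 keeps the spatial size unchanged
x = torch.randn(2, 1, 64, 64)
y = InputBlock()(x)
```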
Replies: 1 comment
-
Hey @tsuijenk
Thanks for your great interest in decode. Answers below:

The activation function is configurable and comes from the config file. If it is not set individually by the user, ELU is used because of the reference parameter.
DECODE/decode/utils/reference_files/reference.yaml, line 31 (commit e57a1cc)
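As a hedged sketch of this mechanism: a config-driven lookup that falls back to ELU when the user sets nothing. The key name "activation" is an assumption for illustration, not necessarily the field used in reference.yaml:

```python
import torch.nn as nn


def get_activation(cfg: dict) -> nn.Module:
    """Return the activation named in the config, defaulting to ELU.

    The "activation" key is a hypothetical name; decode's actual config
    schema may differ.
    """
    name = cfg.get("activation") or "ELU"  # reference-file default
    return getattr(nn, name)()  # resolve by name, e.g. nn.ELU, nn.ReLU


act_default = get_activation({})                   # nothing set -> ELU
act_user = get_activation({"activation": "ReLU"})  # user override
```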
Sure, in principle we are very open to any kind of improvement. Sometimes it can be a bit tricky because decode is not a "typical" deep learning project, but I have used Lightning in other projects and see no objections so far.
loss_val is a vector; I will have to look at the implementation myself again, and we'll get back to you.
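To illustrate the general point (this is not decode's actual code): a per-element loss is a tensor, and the scalar that backpropagation needs is obtained by reducing it, e.g. with .mean(). In PyTorch Lightning, returning that scalar from training_step is enough for Lightning to call backward() itself. The names below are illustrative:

```python
import torch
import torch.nn.functional as F


def training_step(batch: dict) -> torch.Tensor:
    """Hypothetical training_step showing vector -> scalar loss reduction."""
    # reduction="none" keeps a per-element loss vector (like loss_val)
    loss_val = F.mse_loss(batch["pred"], batch["target"], reduction="none")
    # Lightning backpropagates the returned scalar automatically
    return loss_val.mean()


batch = {"pred": torch.zeros(4), "target": torch.ones(4)}
loss = training_step(batch)  # scalar tensor
```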
I think we did tha…