UMambaEnc is a little weird. The training didn't work at all for 3/5 folds.
Suspicions:
- An unbalanced train -> train-val split, since I let nnUNet's random splitting framework handle it for the 5-fold cross-validation.
- AA: I don't think this is the case, because the same splitting is used for UMambaBot, and all the trainings worked fine there. If random splitting were the issue, I would have seen it in at least one of the UMambaBot training folds.
- Next step: run inference on the labeled test set and see what's going on (then report and discuss)
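To rule out the unbalanced-split suspicion more directly, one could inspect the split nnUNet actually used. nnUNet saves its cross-validation split as `splits_final.json` in the preprocessed dataset folder; the exact path below is an assumption for this dataset. A minimal sketch that checks fold sizes and train/val overlap:

```python
import json
from pathlib import Path


def summarize_splits(splits):
    """Summarize each fold: case counts and any train/val overlap (should be empty)."""
    summary = []
    for fold, split in enumerate(splits):
        train, val = set(split["train"]), set(split["val"])
        summary.append({
            "fold": fold,
            "n_train": len(train),
            "n_val": len(val),
            "overlap": sorted(train & val),  # non-empty would indicate data leakage
        })
    return summary


if __name__ == "__main__":
    # Hypothetical path: replace DatasetXXX with the actual dataset ID.
    splits_path = Path("nnUNet_preprocessed/DatasetXXX/splits_final.json")
    if splits_path.exists():
        splits = json.loads(splits_path.read_text())
    else:
        # Dummy data so the sketch runs standalone.
        splits = [{"train": ["case_0", "case_1"], "val": ["case_2"]}]
    for row in summarize_splits(splits):
        print(row)
```

This only checks case counts per fold, not class balance; comparing the label distribution between train and val per fold would need the segmentation files as well.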
PS: UMambaEnc - Mamba implementation in the entire encoder; UMambaBot - Mamba bottleneck layer between encoder and decoder