
Hi @nicktasios, thanks for the question.
From the loss curve, the model is not learning/converging.
I looked at the code; some parameters differ substantially from the usual CT-image training setup, so this may be a data-processing issue.

Some suggestions to check:

  1. The initial LR is set to 1e-3, which is a bit high for typical segmentation tasks. You could try 1e-4:
     `parser.add_argument("--optim_lr", default=1e-3, type=float, help="optimization learning rate")`
  2. The output channel count is 1, but the default DiceCELoss includes the background class, which can cause errors, and the model will not optimize DiceCELoss correctly. It should probably be 2 (background + foreground).
  3. The intensity scale range and voxel spacing are very different from typical CT training settings; I'm not sure whether this causes a problem, but it is worth checking.

tangy5 (Collaborator) · Oct 31, 2022


Answer selected by wyli