Hi, @KumoLiu
Thanks again for your solution. It enlightened me a lot, but it is equivalent to logging only the loss of the last batch of each epoch. So I followed your approach and tried to use monai.handlers.MetricLogger and monai.handlers.TensorBoardStatsHandler to log the mean loss of each epoch. The idea is to have MetricLogger record the loss at every iteration and let TensorBoardStatsHandler write the per-epoch mean to TensorBoard.
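For context, the recording half looks roughly like this (a minimal sketch; `trainer` stands for an existing MONAI/ignite training engine such as monai.engines.SupervisedTrainer and is not constructed here):

```python
from monai.handlers import MetricLogger

# MetricLogger appends one (iteration, loss) pair to metric_logger.loss
# on every ITERATION_COMPLETED event of the engine it is attached to.
metric_logger = MetricLogger()
metric_logger.attach(trainer)  # `trainer` is the training Engine (assumed to exist)
```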

Here is my solution. It may not be robust enough, so please use it carefully, but it does handle the issue I was facing.

  1. Subclass TensorBoardStatsHandler: override its _default_epoch_writer method and add a bind_handler method that attaches the MetricLogger (see the sketch below). The method to override is
  def _default_epoch_writer(self, engine: Engine, writer) -> None:
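A sketch of what that subclass can look like. To be clear about assumptions: the class name MeanLossTensorBoardStatsHandler, the bind_handler method and the "mean_loss" tag are my own choices, not part of MONAI's API; only TensorBoardStatsHandler._default_epoch_writer and MetricLogger.loss come from MONAI.

```python
from ignite.engine import Engine
from monai.handlers import MetricLogger, TensorBoardStatsHandler


class MeanLossTensorBoardStatsHandler(TensorBoardStatsHandler):
    """TensorBoardStatsHandler that also writes each epoch's mean loss,
    taken from a bound MetricLogger."""

    def bind_handler(self, metric_logger: MetricLogger) -> None:
        # keep a reference to the MetricLogger that records (iteration, loss) pairs
        self.metric_logger = metric_logger

    def _default_epoch_writer(self, engine: Engine, writer) -> None:
        # write the usual epoch-level stats (engine.state.metrics, state attributes)
        super()._default_epoch_writer(engine, writer)
        # average the losses recorded during the epoch that just finished;
        # entries may be Python floats or 0-dim tensors depending on the trainer output
        ipe = engine.state.epoch_length or 1  # iterations per epoch
        losses = [float(loss) for _, loss in self.metric_logger.loss[-ipe:]]
        if losses:
            writer.add_scalar("mean_loss", sum(losses) / len(losses), engine.state.epoch)
        writer.flush()


# hypothetical wiring (`trainer` and `metric_logger` are from the sketch above):
# tb_handler = MeanLossTensorBoardStatsHandler(log_dir="./runs")
# tb_handler.bind_handler(metric_logger)
# tb_handler.attach(trainer)
```

Since no custom epoch_event_writer is passed to the constructor, the handler falls back to _default_epoch_writer at the end of each epoch, so the override above is picked up automatically.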
