Hey @tsuijenk,

I think that's standard practice in many DL pipelines. Often the post-processing step cannot be backpropagated through in the first place, and the other point is that training should produce predictions that are already good enough to need as little post-processing as possible. If you did have a differentiable post-processing step, you could easily end up in a situation where the raw model predictions get worse, because the model "knows" that its predictions will be refined afterwards.
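As a minimal sketch of what this usually looks like (assuming a PyTorch-style setup; the `model` and the `post_process` step here are hypothetical stand-ins, not anything from this thread): the loss is computed on the raw predictions, and the non-differentiable post-processing is applied only at inference time, outside the autograd graph.

```python
import torch
import torch.nn as nn

# Toy stand-ins, purely for illustration.
model = nn.Linear(16, 4)
criterion = nn.CrossEntropyLoss()

def post_process(logits):
    # A typical non-differentiable refinement step (argmax/thresholding):
    # there is no gradient to propagate through it anyway.
    return logits.argmax(dim=-1)

def training_step(x, y):
    # The loss is computed on the raw predictions, so the model is pushed
    # to produce outputs that need as little post-processing as possible.
    logits = model(x)
    loss = criterion(logits, y)
    loss.backward()
    return loss

@torch.no_grad()  # post-processing happens only at inference, outside the graph
def predict(x):
    return post_process(model(x))

# Usage
x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))
loss = training_step(x, y)
preds = predict(x)
```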

Answer selected by tsuijenk