Backpropagation, also known as loss.backward():

I guess you already understand the basics: linear (x = y) and non-linear (1 if x > 0 else 0).
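
As a minimal sketch in PyTorch (the model, input, and target value here are made up for illustration), loss.backward() fills each parameter's .grad with the derivative of the loss with respect to that parameter:

```python
import torch

x = torch.tensor([2.0])
w = torch.tensor([3.0], requires_grad=True)

y = w * x              # simple linear model: y = w * x
loss = (y - 1.0) ** 2  # squared error against a target of 1.0

loss.backward()        # the backpropagation step
print(w.grad)          # dloss/dw = 2 * (w*x - 1) * x = 2 * 5 * 2 = 20
```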

  • Linear: the derivative of a linear function is a constant, 1 for all values. This means that gradients propagate unchanged and no non-linearity is introduced in the backpropagation step, which limits the network's ability to capture non-linear relations in the data and makes it harder for the model to learn.

  • ReLU: the derivative is 1 for x > 0 and 0 otherwise. As you can see, this is the first difference. It means that gradients propagate unchanged for positive values, while for negative values the gradient is zeroed out (see the sketch after this list).

  • So,…
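
Here is a rough sketch of those two bullets (the input values are just illustrative): compare the gradients that reach the input through an identity (linear) activation versus a ReLU.

```python
import torch

x = torch.tensor([-2.0, 3.0], requires_grad=True)

linear_out = x                    # identity: derivative is 1 everywhere
linear_out.sum().backward()
print(x.grad)                     # tensor([1., 1.]) -> gradient passes unchanged

x.grad = None                     # reset before the second pass
relu_out = torch.relu(x)          # derivative is 1 for x > 0, 0 otherwise
relu_out.sum().backward()
print(x.grad)                     # tensor([0., 1.]) -> negative input's gradient is zeroed
```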
