Tried to add batch normalization in network.py; failed at the dependency line #1234
Unanswered
Franklalalala asked this question in Q&A
Replies: 0 comments
I used the high-level API:

```python
hidden = tf.layers.batch_normalization(hidden, training=True, trainable=True)
```

The documentation says you need to resolve the update dependency like this:

```python
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)
```

otherwise the moving mean and variance used at inference time won't be updated.
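For context on what those update ops actually do: they apply an exponential moving average to the batch statistics. Here is a minimal NumPy sketch of that update rule (the function name and values are illustrative, not the TF implementation). If the ops never run, `moving_mean` and `moving_var` stay at their initial values (0 and 1), so inference-mode normalization is wrong even though gamma and beta themselves are trained by the optimizer as ordinary variables:

```python
import numpy as np

def update_moving_stats(moving_mean, moving_var, batch, decay=0.99):
    """Exponential moving average of batch statistics.
    This is the kind of update TF collects in UPDATE_OPS; it only
    happens if the ops are actually executed each training step."""
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = decay * moving_mean + (1.0 - decay) * batch_mean
    new_var = decay * moving_var + (1.0 - decay) * batch_var
    return new_mean, new_var

# The moving stats drift from their init (0, 1) toward the data stats.
rng = np.random.default_rng(0)
mean, var = np.zeros(3), np.ones(3)
for _ in range(500):
    batch = rng.normal(loc=5.0, scale=2.0, size=(32, 3))
    mean, var = update_moving_stats(mean, var, batch)
```

After many batches of data with mean 5 and variance 4, the moving statistics converge close to those values.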
I tried to add these lines in trainer.py, but it failed with errors.
I see there are already lines there about updating variables:
deepmd-kit/deepmd/train/trainer.py
Lines 337 to 354 in 159e45d
Does that mean I don't need to handle the dependency myself anymore?
I'm new to TF and couldn't understand it.
I also tried a low-level API wrapper, but it always fails on these lines:

```python
with tf.control_dependencies([assign_moving_average(pop_mean, batch_mean, decay),
                              assign_moving_average(pop_var, batch_var, decay)]):
    return tf.identity(batch_mean), tf.identity(batch_var)
```
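For reference, here is what a batch-norm layer computes in training vs. inference mode, as a pure-NumPy sketch (not the deepmd-kit or TF code; names are illustrative). It shows why the moving statistics matter: training mode normalizes with the current batch, while inference mode relies entirely on the accumulated `moving_mean`/`moving_var`:

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var,
               training, eps=1e-3):
    """Normalize x, then scale by gamma and shift by beta.
    Training mode uses the batch statistics; inference mode uses
    the moving statistics accumulated during training."""
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

In training mode the output is (per feature) approximately zero-mean and unit-variance before gamma/beta are applied; in inference mode the result is only correct if the moving statistics were actually updated during training.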
I don't know how to solve this problem, or whether I can just leave it. (It works without the dependency lines when using the high-level API.)
Help!