Why not use batch norm? #1383
Unanswered
AlexanderOnly asked this question in Q&A
Hello, I was wondering: what is the "timestep" here, and why is batch normalization not used?
deepmd/utils/network.py
```python
if activation_fn != None and use_timestep :
    idt_initializer = tf.random_normal_initializer(
        stddev=0.001,
        mean=0.1,
        seed=seed if (seed is None or uniform_seed) else seed + 2)
    if initial_variables is not None:
        idt_initializer = tf.constant_initializer(initial_variables[name + '/idt'])
    idt = tf.get_variable('idt',
                          [outputs_size],
                          precision,
                          idt_initializer,
                          trainable = trainable)
    variable_summaries(idt, 'idt')
if activation_fn != None:
    if useBN:
        None
        # hidden_bn = self._batch_norm(hidden, name=name+'_normalization', reuse=reuse)
        # return activation_fn(hidden_bn)
    else:
        if use_timestep :
            return tf.reshape(activation_fn(hidden), [-1, outputs_size]) * idt
        else :
            return tf.reshape(activation_fn(hidden), [-1, outputs_size])
else:
    if useBN:
        None
        # return self._batch_norm(hidden, name=name+'_normalization', reuse=reuse)
    else:
        return hidden
```
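For context, the part I am asking about multiplies the activated layer output element-wise by the trainable vector `idt`, which is initialized near 0.1 with a very small spread (mean=0.1, stddev=0.001). Below is a minimal, self-contained sketch of just that scaling pattern, written against the TensorFlow 2 Keras API rather than the TF1 code above; the class and variable names are illustrative only and are not part of deepmd-kit.

```python
# Minimal sketch (not deepmd-kit code): a dense layer whose activated output is
# scaled element-wise by a small trainable per-output vector, analogous to 'idt'.
import tensorflow as tf


class ScaledDenseLayer(tf.keras.layers.Layer):
    """Dense layer whose activated output is scaled by a trainable vector."""

    def __init__(self, outputs_size, activation=tf.tanh):
        super().__init__()
        self.dense = tf.keras.layers.Dense(outputs_size, activation=None)
        self.activation = activation
        self.outputs_size = outputs_size

    def build(self, input_shape):
        # Analogous to the 'idt' variable in the excerpt: one trainable scale per
        # output channel, initialized close to 0.1 with a small spread.
        self.idt = self.add_weight(
            name="idt",
            shape=(self.outputs_size,),
            initializer=tf.random_normal_initializer(mean=0.1, stddev=0.001),
            trainable=True,
        )
        super().build(input_shape)

    def call(self, inputs):
        hidden = self.dense(inputs)
        # Element-wise scaling of the activated output, as in the quoted code path
        # where use_timestep is True and useBN is False.
        return self.activation(hidden) * self.idt


# Usage: the scaled output has the same shape as a plain dense layer's output.
x = tf.random.normal([4, 8])
layer = ScaledDenseLayer(outputs_size=16)
print(layer(x).shape)  # (4, 16)
```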