Commit e12f137

Bugfix in concatenating gradient batches
tf.stack only accounted for the last batch of gradients instead of collecting the gradients from all batches, so all but the last batch were discarded. The correct implementation is to concatenate the gradient batches along the first axis, which accounts for all of the gradients. Tested the changes by running the notebook in Google Colab.
1 parent c25295f commit e12f137
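
For context, a minimal sketch of the difference between the two calls. The shapes and batch count here are hypothetical, chosen only to make the bug visible:

import tensorflow as tf

# Hypothetical setup: three batches of gradients, 10 interpolation steps each,
# over a 4x4 RGB image. Only the shapes matter.
gradient_batches = [tf.random.normal((10, 4, 4, 3)) for _ in range(3)]
gradient_batch = gradient_batches[-1]  # the loop variable after the final iteration

# Buggy: tf.stack on the loop variable returns only the final batch.
buggy = tf.stack(gradient_batch)
# Fixed: tf.concat joins every batch along the existing step axis.
fixed = tf.concat(gradient_batches, axis=0)

print(buggy.shape)  # (10, 4, 4, 3) -- 20 of the 30 gradients are lost
print(fixed.shape)  # (30, 4, 4, 3) -- all gradients retained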

File tree

1 file changed: 2 additions and 2 deletions


site/en/tutorials/interpretability/integrated_gradients.ipynb

Lines changed: 2 additions & 2 deletions
@@ -904,8 +904,8 @@
 " gradient_batch = one_batch(baseline, image, alpha_batch, target_class_idx)\n",
 " gradient_batches.append(gradient_batch)\n",
 " \n",
-" # Stack path gradients together row-wise into single tensor.\n",
-" total_gradients = tf.stack(gradient_batch)\n",
+" # Concatenate path gradients together row-wise into single tensor.\n",
+" total_gradients = tf.concat(gradient_batches, axis=0)\n",
 "\n",
 " # Integral approximation through averaging gradients.\n",
 " avg_gradients = integral_approximation(gradients=total_gradients)\n",
