1 parent a54962d commit 936dfcb
doc/fluid/design/dist_train/async_update.md
@@ -31,8 +31,8 @@ them while they are all calculated.
instances and then send them.
1. PServer would run an `Optimize Block` using a specified optimize algorithm to update
the specified parameter.
-1. The trainer will fetch the parameter before running forward Op which depends on the specified
-parameter.
+1. The trainer will fetch latest parameter from PServer before running forward Op which depends
+on the specified parameter.
1. Broadcast the received variable into multiple GPU cards and continue to run the next
mini-batch.
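
To make the flow described in this doc change concrete, here is a minimal, self-contained sketch of the async-update loop: the trainer pushes merged gradients to the PServer, the PServer applies its `Optimize Block` as gradients arrive, and the trainer fetches the latest parameter and broadcasts it to the GPU cards before the next mini-batch. All names here (`ParameterServer`, `Trainer`, `push_grad`, `pull_param`) are illustrative placeholders, not real PaddlePaddle Fluid APIs, and plain SGD stands in for whichever optimize algorithm is configured.

```python
"""Illustrative sketch of the asynchronous update flow; not PaddlePaddle code."""
import numpy as np


class ParameterServer:
    """Holds one parameter and applies an 'Optimize Block' (plain SGD here)."""

    def __init__(self, param, lr=0.1):
        self.param = param
        self.lr = lr

    def push_grad(self, grad):
        # Optimize Block: update the parameter as soon as a gradient arrives,
        # without waiting for gradients from other trainers.
        self.param -= self.lr * grad

    def pull_param(self):
        # Return the latest value of the parameter.
        return self.param.copy()


class Trainer:
    def __init__(self, pserver, num_gpu_cards):
        self.pserver = pserver
        self.num_gpu_cards = num_gpu_cards

    def run_minibatch(self, minibatch):
        # Split the mini-batch across GPU cards and compute per-card
        # gradients (a fake gradient here: the mean of each split).
        splits = np.array_split(minibatch, self.num_gpu_cards)
        grads = [split.mean(axis=0) for split in splits]

        # Merge the per-card gradients and send them to the PServer.
        self.pserver.push_grad(np.mean(grads, axis=0))

        # Before the next forward Op that depends on this parameter,
        # fetch the latest parameter from the PServer ...
        latest = self.pserver.pull_param()

        # ... and broadcast it to every GPU card, then continue with
        # the next mini-batch.
        return [latest.copy() for _ in range(self.num_gpu_cards)]


if __name__ == "__main__":
    pserver = ParameterServer(param=np.zeros(4))
    trainer = Trainer(pserver, num_gpu_cards=2)
    for _ in range(3):
        trainer.run_minibatch(np.random.rand(16, 4))
    print(pserver.pull_param())
```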