I want to train on an HDFS cluster with distributed TensorFlow. I currently start the same code on the ps, the master, and each worker, using run_config to assign their roles, together with the estimator API and tf.contrib.learn.Experiment.
My question: should I split the whole training set across the workers, so that each worker reads a different subset of the data,
or should I point every worker at the same path (the whole training set)?
If I give all workers the same path, won't each worker load all of the data into memory? I think that would cause problems.
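To make the question concrete, here is a minimal sketch of the per-worker sharding I have in mind, assuming TFRecord files on HDFS and the tf.data API. The make_input_fn helper, the HDFS path, and parse_fn are my own placeholder names, not anything from a real codebase:

```python
import tensorflow as tf

def make_input_fn(file_pattern, num_workers, worker_index,
                  parse_fn, batch_size=128):
    """Returns an Estimator input_fn that streams only this worker's shard.

    file_pattern and parse_fn are placeholders for the actual HDFS
    path and tf.Example parsing logic.
    """
    def input_fn():
        # Glob the file list; sorting makes the order identical on every
        # worker, so the shards taken below are disjoint.
        filenames = sorted(tf.gfile.Glob(file_pattern))
        files = tf.data.Dataset.from_tensor_slices(filenames)
        # Worker i keeps every num_workers-th file: the data is split
        # across workers without any worker reading the whole set.
        files = files.shard(num_workers, worker_index)
        dataset = files.flat_map(tf.data.TFRecordDataset)
        dataset = (dataset.map(parse_fn)
                          .shuffle(buffer_size=10000)
                          .repeat()
                          .batch(batch_size))
        # Records are streamed from HDFS; only the shuffle buffer and
        # the current batch are held in memory, not the full dataset.
        return dataset.make_one_shot_iterator().get_next()
    return input_fn

# Example: worker 2 of 4 would read roughly a quarter of the files.
train_input_fn = make_input_fn(
    "hdfs://namenode:8020/data/train-*",  # hypothetical path
    num_workers=4, worker_index=2, parse_fn=my_parse_fn)
```

Is this roughly the right approach, or is there a standard way to do it with Experiment?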
Forgive my poor English.
Thanks in advance!