@@ -36,13 +36,14 @@ git clone https://github.com/sql-machine-learning/elasticdl.git
### Start Kubernetes Cluster

-We start minikube with a command-line option `--mount-string`,
-which mounts the host directory `$DATA_PATH` to the `/data` path in all minikube containers.
+We start minikube with a command-line option `--mount-string`, which mounts the
+directory `{elasticdl_repo_root}/data` on the local host to the `/data` path in all
+minikube containers.
```bash
-export DATA_PATH={a_folder_path_to_store_training_data}
-minikube start --vm-driver=hyperkit --cpus 2 --memory 6144 --disk-size=50gb --mount=true --mount-string="$DATA_PATH:/data"
cd elasticdl
+mkdir data
+minikube start --vm-driver=hyperkit --cpus 2 --memory 6144 --disk-size=50gb --mount=true --mount-string="./data:/data"

kubectl apply -f elasticdl/manifests/elasticdl-rbac.yaml

eval $(minikube docker-env)
```
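As a quick sanity check (our own addition, not part of the original steps), you can confirm the mount is visible inside the minikube VM. `minikube ssh` is the standard minikube subcommand for running a command in the VM; the `check_mount` helper name is hypothetical:

```bash
# Hypothetical helper: list /data inside the minikube VM to confirm
# the host directory was mounted. Falls back gracefully when minikube
# is not on PATH (e.g. when trying these steps on another machine).
check_mount() {
  if command -v minikube >/dev/null 2>&1; then
    minikube ssh -- ls /data
  else
    echo "minikube not found on PATH; skipping mount check"
  fi
}

check_mount
```

If the mount succeeded, the listing should show the contents of the host `data` directory.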
@@ -64,18 +65,16 @@ We generate MNIST training and evaluation data in RecordIO format. We provide a
script in elasticdl repo.
```bash
+# Change directory to the root of elasticdl repo
+cd ../
docker pull elasticdl/elasticdl:dev
-cd {elasticdl_repo_root}
docker run --rm -it \
 -v $HOME/.keras/datasets:/root/.keras/datasets \
 -v $PWD:/work \
 -w /work elasticdl/elasticdl:dev \
 bash -c "scripts/gen_dataset.sh data"
-cp -r data/* $DATA_PATH
```
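Before submitting a job, it can be worth verifying that the script actually populated `./data`. This check is our own addition (the `check_data_dir` helper is hypothetical, not part of the elasticdl scripts) and uses only POSIX shell:

```bash
# Hypothetical helper: succeed only if the given directory exists
# and contains at least one entry.
check_data_dir() {
  [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]
}

if check_data_dir data; then
  echo "data directory is populated"
else
  echo "data directory is missing or empty"
fi
```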
-We generate datasets and copy them to `$DATA_PATH`.
-
### Submit a training job

We use the following command to submit a training job: