You can install the recommended environment as follows:

```console
conda env create -f environment.yml -n studiogan
```

With Docker, you can use:
```console
docker pull mgkang/studiogan:latest
```

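If you go the Docker route, you will typically start a container with GPU access and the repository mounted inside it. The command below is only an illustrative sketch: `--gpus`, `-v`, and `-w` are standard Docker options (GPU passthrough assumes the NVIDIA Container Toolkit is installed), and the host path and working directory are placeholders rather than anything StudioGAN prescribes.
```console
# Hypothetical invocation: expose all GPUs and mount a local clone of
# PyTorch-StudioGAN at /workspace inside the container (paths are placeholders).
docker run -it --gpus all \
  -v /path/to/PyTorch-StudioGAN:/workspace \
  -w /workspace \
  mgkang/studiogan:latest /bin/bash
```
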
* Train (`-t`) and evaluate (`-e`) the model defined in `CONFIG_PATH` using GPU `0`
```console
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -e -c CONFIG_PATH
```

* Train (`-t`) and evaluate (`-e`) the model defined in `CONFIG_PATH` using GPUs `(0, 1, 2, 3)` and `DataParallel`
```console
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -e -c CONFIG_PATH
```

* Train (`-t`) and evaluate (`-e`) the model defined in `CONFIG_PATH` using GPUs `(0, 1, 2, 3)` and `DistributedDataParallel`
```console
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -e -DDP -n 1 -nr 0 -c CONFIG_PATH
```

Try `python3 src/main.py` to see the available options.


Via Tensorboard, you can monitor trends of `IS, FID, F_beta, Authenticity Accuracies, and the largest singular values`:
```console
~ PyTorch-StudioGAN/logs/RUN_NAME>>> tensorboard --logdir=./ --port PORT
```
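If you want to post-process the logged metrics outside the Tensorboard UI, the event files under `./logs/RUN_NAME` can also be read programmatically. Below is a minimal sketch using TensorBoard's `EventAccumulator`; the log directory follows the command above, and since the exact scalar tag names StudioGAN writes are not assumed here, the snippet simply prints whatever tags it finds.
```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point this at a run directory under ./logs (the path is an example).
acc = EventAccumulator("logs/RUN_NAME")
acc.Reload()                                   # parse the event files on disk

for tag in acc.Tags()["scalars"]:              # list whatever scalar tags were logged
    first_events = acc.Scalars(tag)[:3]
    print(tag, [(e.step, e.value) for e in first_events])
```
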
## Supported Training Techniques

* DistributedDataParallel
  ```console
  # export NCCL_DEBUG=INFO
  export NCCL_SOCKET_IFNAME=^docker0,lo
  export MASTER_ADDR=MASTER_IP
  CUDA_VISIBLE_DEVICES=0,1,...,N python3 src/main.py -t -DDP -n TOTAL_NODES -nr CURRENT_NODE -c CONFIG_PATH
  ```
* Mixed Precision Training ([Narang et al.](https://arxiv.org/abs/1710.03740)); see the sketch after this list for how mixed precision is typically wired up in PyTorch
  ```console
  CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -t -mpc -c CONFIG_PATH
  ```
* Standing Statistics ([Brock et al.](https://arxiv.org/abs/1809.11096))
  ```console
  CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -e -std_stat --standing_step STANDING_STEP -c CONFIG_PATH
  ```
* Synchronized BatchNorm
  ```console
  CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -t -sync_bn -c CONFIG_PATH
  ```
* Load All Data in Main Memory
  ```console
  CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -t -l -c CONFIG_PATH
  ```
* LARS
  ```console
  CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -t -l -c CONFIG_PATH -LARS
  ```

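To make the Mixed Precision Training item above more concrete, here is a minimal, generic sketch of how automatic mixed precision is usually wired into a PyTorch training step with `torch.cuda.amp`. It is not StudioGAN's actual code path (which may use a different backend), and the model, optimizer, and loss are placeholders.
```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Placeholder model and optimizer; in a GAN these would be the generator or
# discriminator and its optimizer.
model = torch.nn.Linear(128, 1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scaler = GradScaler()

def train_step(x, y):
    optimizer.zero_grad()
    with autocast():                     # forward pass runs in mixed precision
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)               # unscale gradients, then take the optimizer step
    scaler.update()                      # adapt the loss scale for the next iteration
    return loss.item()
```
The `-mpc` flag turns on StudioGAN's mixed-precision path; the scaler-plus-autocast pattern above is shown only to explain the technique it refers to.
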
StudioGAN supports `Image visualization, K-nearest neighbor analysis, Linear interpolation, and Frequency analysis`. All results will be saved in `./figures/RUN_NAME/*.png`.

* Image Visualization
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -iv -std_stat --standing_step STANDING_STEP -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --log_output_path LOG_OUTPUT_PATH
```
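For intuition about what the image-visualization mode produces, here is a rough, generic sketch: sample latent vectors, run them through a trained generator, and save the results as a grid with `torchvision`. An unconditional generator callable as `generator(z)` and a latent size of 128 are illustrative assumptions; StudioGAN's `-iv` mode reads these details from the config and checkpoint folder instead.
```python
import torch
from torchvision.utils import save_image

@torch.no_grad()
def visualize_samples(generator, z_dim=128, n_images=64, out_path="generated_grid.png"):
    generator.eval()
    device = next(generator.parameters()).device
    z = torch.randn(n_images, z_dim, device=device)   # latent codes
    fake = generator(z)                               # generated images, (N, C, H, W)
    save_image(fake, out_path, nrow=8, normalize=True)
```
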

* K-Nearest Neighbor Analysis (K is fixed to 7; the images in the first column are generated images)
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -knn -std_stat --standing_step STANDING_STEP -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --log_output_path LOG_OUTPUT_PATH
```
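Conceptually, this analysis retrieves, for each generated image, the K most similar real images so you can check whether the generator is simply memorizing the training set. The sketch below is a generic version that measures similarity by Euclidean distance between flattened images; whether StudioGAN compares raw pixels or deep features is not asserted here.
```python
import torch

def k_nearest_real_images(fake_images, real_images, k=7):
    """Return, for each generated image, the indices of the k closest real images."""
    fake_flat = fake_images.flatten(start_dim=1)        # (N_fake, C*H*W)
    real_flat = real_images.flatten(start_dim=1)        # (N_real, C*H*W)
    dists = torch.cdist(fake_flat, real_flat)           # pairwise Euclidean distances
    return dists.topk(k, dim=1, largest=False).indices  # smallest distances = nearest neighbors
```
Laying out each generated image followed by its seven neighbors reproduces the row format described above (generated image in the first column).
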

* Linear Interpolation (applicable only to conditional Big ResNet models)
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -itp -std_stat --standing_step STANDING_STEP -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --log_output_path LOG_OUTPUT_PATH
```
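The idea behind this mode is to walk a straight line between two latent codes and decode each intermediate point, so you can see how smoothly the generator morphs one sample into another. The sketch below assumes a conditional generator callable as `generator(z, labels)` with a 128-dimensional latent space; both are illustrative assumptions rather than StudioGAN's exact interface.
```python
import torch

@torch.no_grad()
def linear_interpolation(generator, z_dim=128, n_steps=8, class_id=0, device="cuda"):
    z0, z1 = torch.randn(2, z_dim, device=device)           # two endpoint latent codes
    alphas = torch.linspace(0.0, 1.0, n_steps, device=device).unsqueeze(1)
    z = (1 - alphas) * z0 + alphas * z1                      # linear interpolation in z-space
    labels = torch.full((n_steps,), class_id, dtype=torch.long, device=device)
    return generator(z, labels)                              # images morphing from z0 to z1
```
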

* Frequency Analysis
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -fa -std_stat --standing_step STANDING_STEP -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --log_output_path LOG_OUTPUT_PATH
```
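Frequency analysis usually means comparing the spectral content of real and generated images, since GAN outputs often show characteristic high-frequency artifacts. A common recipe, sketched below on a single grayscale image, is the azimuthally averaged log power spectrum of the 2D FFT; this is a generic illustration, not necessarily the exact procedure StudioGAN implements.
```python
import numpy as np

def radial_power_spectrum(image):
    """Azimuthally averaged log power spectrum of one grayscale image of shape (H, W)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))           # centered 2D spectrum
    power = np.log(np.abs(spectrum) ** 2 + 1e-8)             # log power per frequency bin
    h, w = image.shape
    yy, xx = np.indices((h, w))
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2).astype(int)
    radial_sum = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    return radial_sum / np.maximum(counts, 1)                # mean power at each radius
```
Plotting this curve for a batch of real images against a batch of generated ones makes spectral mismatches easy to spot.
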
Inception Score (IS) is a metric that measures how well a GAN generates high-fidelity and diverse images. Calculating IS requires a pre-trained Inception-V3 network, and recent approaches utilize [OpenAI's TensorFlow implementation](https://github.com/openai/improved-gan).

To compute the official IS, you first have to create a `samples.npz` file using the command below:
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -s -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --log_output_path LOG_OUTPUT_PATH
```

This will automatically create the samples.npz file at `./samples/RUN_NAME/fake/npz/samples.npz`.
After that, run the official TensorFlow IS implementation. Note that we do not split the dataset into ten folds to calculate IS ten times; we compute IS once over the entire dataset, which is the evaluation strategy used in the [CompareGAN](https://github.com/google/compare_gan) repository.
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/inception_tf13.py --run_name RUN_NAME --type "fake"
```
Keep in mind that you need TensorFlow 1.3 or an earlier version installed!
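
For reference, IS is defined as exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) is the Inception-V3 class distribution for a generated image x and p(y) is the marginal of those distributions over all generated images. The sketch below computes the score from a precomputed probability matrix in a single split (matching the no-ten-fold note above); it is a didactic illustration, not the official TensorFlow implementation.
```python
import numpy as np

def inception_score(probs, eps=1e-16):
    """probs: (N, 1000) softmax outputs of Inception-V3 on N generated images."""
    p_y = probs.mean(axis=0, keepdims=True)                   # marginal class distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))    # per-image, per-class KL terms
    return float(np.exp(kl.sum(axis=1).mean()))               # exp of the mean KL divergence

# Example with random probabilities; real usage would feed Inception-V3 outputs.
fake_probs = np.random.dirichlet(np.ones(1000), size=5000)
print(inception_score(fake_probs))
```
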

※ IS, FID, and F_beta values are computed using 10K test and 10K generated images.

※ When evaluating, the statistics of the batch normalization layers are calculated on the fly (statistics of the current batch).
```console
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -e -l -stat_otf -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --eval_type "test"
```

※ IS, FID, and F_beta values are computed using 50K validation and 50K generated images.

※ When evaluating, the statistics of the batch normalization layers are calculated on the fly (statistics of the current batch).
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -e -l -stat_otf -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --eval_type "valid"
```

※ IS, FID, and F_beta values are computed using 50K validation and 50K generated images.

※ When evaluating, the statistics of the batch normalization layers are calculated in advance (moving average of the previous statistics).
```console
CUDA_VISIBLE_DEVICES=0,1,... python3 src/main.py -e -l -sync_bn -c CONFIG_PATH --checkpoint_folder CHECKPOINT_FOLDER --eval_type "valid"
```