Commit 0ea7c7a

Own data train (#41)
* own dataset readme
* my dataset full roadmap
* more details
1 parent f0ff7a8 commit 0ea7c7a

File tree

1 file changed: +112 −5 lines


README.md

Lines changed: 112 additions & 5 deletions
@@ -240,14 +240,121 @@ On the host machine:
Docker: TODO

## Create your data

If you get stuck at one of the following steps, please check the bash scripts for data preparation and mask generation in the CelebaHQ section.
On the host machine:

```bash
# Make sure you are in the lama folder
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=.

# You need to prepare the following image folders:
$ ls my_dataset
train
val_source # 2000 or more images
visual_test_source # 100 or more images
eval_source # 2000 or more images
```
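The folder skeleton above can be created in one command; a minimal sketch using the `my_dataset` name and subfolder names from the listing (copy or symlink your own images into the folders afterwards):

```shell
# Create the expected my_dataset skeleton; populate the folders
# with your own images afterwards.
mkdir -p my_dataset/train \
         my_dataset/val_source \
         my_dataset/visual_test_source \
         my_dataset/eval_source
```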
```bash
# LaMa generates random masks for the train data on the fly,
# but needs fixed masks for val and visual_test for consistency of evaluation.

# Suppose we want to evaluate and pick the best models
# on a 512x512 val dataset with thick/thin/medium masks,
# and your images have the .jpg extension.
# Run each command below once per mask style,
# replacing <size> with thick, thin or medium:

python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/val_source/ \
    my_dataset/val/random_<size>_512/ \
    --ext jpg

# The mask generator will:
# 1. resize and crop the val images and save them as .png
# 2. generate masks

ls my_dataset/val/random_medium_512/
image1_crop000_mask000.png
image1_crop000.png
image2_crop000_mask000.png
image2_crop000.png
...

# Generate thick, thin and medium masks for the visual_test folder:

python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/visual_test_source/ \
    my_dataset/visual_test/random_<size>_512/ \
    --ext jpg

ls my_dataset/visual_test/random_thick_512/
image1_crop000_mask000.png
image1_crop000.png
image2_crop000_mask000.png
image2_crop000.png
...

# Same process for the eval_source image folder:

python3 bin/gen_mask_dataset.py \
    $(pwd)/configs/data_gen/random_<size>_512.yaml \
    my_dataset/eval_source/ \
    my_dataset/eval/random_<size>_512/ \
    --ext jpg
```
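The three mask-generation commands, each repeated for the thick, thin and medium configs, can be scripted in one loop. A dry-run sketch that only prints the commands, assuming the paths used above (remove the `echo` to execute them from the lama repo root):

```shell
# Dry-run: print each gen_mask_dataset.py invocation instead of running it.
# Remove the leading `echo` to execute for real from the lama repo root.
for split in val visual_test eval; do
  for size in thick thin medium; do
    echo python3 bin/gen_mask_dataset.py \
      "configs/data_gen/random_${size}_512.yaml" \
      "my_dataset/${split}_source/" \
      "my_dataset/${split}/random_${size}_512/" \
      --ext jpg
  done
done
```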
```bash
# Generate a location config file which points to these folders:

touch my_dataset.yaml
echo "data_root_dir: $(pwd)/my_dataset/" >> my_dataset.yaml
echo "out_root_dir: $(pwd)/experiments/" >> my_dataset.yaml
echo "tb_dir: $(pwd)/tb_logs/" >> my_dataset.yaml
mv my_dataset.yaml ${PWD}/configs/training/location/
```
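Equivalently, the same file can be written in one shot with a heredoc instead of appending line by line; a sketch with the same three keys as the echo commands above:

```shell
# Write the location config in a single command; $(pwd) expands
# to your current directory, so run this from the lama folder.
cat > my_dataset.yaml <<EOF
data_root_dir: $(pwd)/my_dataset/
out_root_dir: $(pwd)/experiments/
tb_dir: $(pwd)/tb_logs/
EOF
```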
```bash
# Check the data config for consistency with the my_dataset folder structure:
$ cat ${PWD}/configs/training/data/abl-04-256-mh-dist
...
train:
  indir: ${location.data_root_dir}/train
  ...
val:
  indir: ${location.data_root_dir}/val
  img_suffix: .png
visual_test:
  indir: ${location.data_root_dir}/visual_test
  img_suffix: .png

# Run training
python bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10
```
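For context: the `location=my_dataset` override selects the location config created earlier (assuming it was moved to `configs/training/location/my_dataset.yaml` as above), whose keys fill the `${location.*}` interpolations in the data config. Its content, with placeholder paths standing in for your own checkout:

```yaml
# configs/training/location/my_dataset.yaml
# (content written by the echo commands above; /path/to/lama is a
# placeholder for the absolute path of your checkout)
data_root_dir: /path/to/lama/my_dataset/
out_root_dir: /path/to/lama/experiments/
tb_dir: /path/to/lama/tb_logs/
```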
```bash
# Evaluation: the LaMa training procedure picks the best few models
# according to scores on my_dataset/val/

# To evaluate one of your best models (e.g. at epoch=32)
# on the previously unseen my_dataset/eval, do the following
# for thin, thick and medium:

# infer:
python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
    indir=$(pwd)/my_dataset/eval/random_<size>_512/ \
    outdir=$(pwd)/inference/my_dataset/random_<size>_512 \
    model.checkpoint=epoch32.ckpt

# metrics calculation:
python3 bin/evaluate_predicts.py \
    $(pwd)/configs/eval_2gpu.yaml \
    $(pwd)/my_dataset/eval/random_<size>_512/ \
    $(pwd)/inference/my_dataset/random_<size>_512 \
    $(pwd)/inference/my_dataset/random_<size>_512_metrics.csv
```
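As with mask generation, the inference and metrics steps can be looped over the three mask styles. A dry-run sketch that only prints the commands; `EXP_DIR` is a placeholder for your own `experiments/<user>_<date:time>_lama-fourier_` folder:

```shell
# Print the predict + evaluate commands for each mask style;
# remove the `echo`s to run them for real from the lama repo root.
# EXP_DIR is a hypothetical placeholder for your experiment folder.
EXP_DIR="experiments/my_experiment_lama-fourier_"
for size in thick thin medium; do
  echo python3 bin/predict.py \
    "model.path=$(pwd)/${EXP_DIR}/" \
    "indir=$(pwd)/my_dataset/eval/random_${size}_512/" \
    "outdir=$(pwd)/inference/my_dataset/random_${size}_512" \
    model.checkpoint=epoch32.ckpt
  echo python3 bin/evaluate_predicts.py \
    "$(pwd)/configs/eval_2gpu.yaml" \
    "$(pwd)/my_dataset/eval/random_${size}_512/" \
    "$(pwd)/inference/my_dataset/random_${size}_512" \
    "$(pwd)/inference/my_dataset/random_${size}_512_metrics.csv"
done
```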
**OR** in the docker:
