We use PyTorch and TorchVision, together with OpenCV, PIL, and the albumentations library for extensive data augmentation.
conda install -c pytorch torchvision captum
conda install -c conda-forge imgaug
conda install -c albumentations albumentations

We use PyTorch's TensorBoard integration to monitor metrics and sample segmentation results during training:
tensorboard --logdir runs/

General configuration parameters (training, validation splits, image input size) are located in config.py.
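As a minimal sketch of how metrics and segmentation results can be written to the runs/ directory with PyTorch's TensorBoard integration (the tag names and values here are placeholders, not the project's actual logging code):

```python
# Sketch: logging a scalar and an image grid to TensorBoard.
# The log directory and tag names are assumptions for illustration.
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/example")
for step in range(3):
    loss = 1.0 / (step + 1)               # placeholder loss value
    writer.add_scalar("train/loss", loss, step)

# Log a fake batch of predicted masks as an image grid (NCHW, values in [0, 1]).
masks = torch.rand(4, 1, 64, 64)
writer.add_images("val/predictions", masks, global_step=0)
writer.close()
```

Pointing tensorboard --logdir runs/ at this directory then shows the logged scalars and images.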
To launch the training script:
python train.py --model attunet --loss combined --lr 0.001 -E 80

To run inference with trained weights:
python inference.py --model attunet --weights path/to/weights

DRIVE data is located in
data/drive
- training
- test
Its per-channel mean is [0.5078, 0.2682, 0.1613] and its standard deviation is [0.3378, 0.1753, 0.0978].
We also use the STARE dataset with manual vessel annotations: http://cecas.clemson.edu/~ahoover/stare/probing/index.html.
wget http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar
wget http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar

The layout of the data is
data/stare/
- images
- labels/
  - labels_ah
  - labels_vk
- results_hoover
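The downloaded archives can be unpacked into this layout with the standard library; a sketch (the destination paths are assumptions matching the layout above):

```python
# Sketch: extracting the downloaded STARE tar archives into the data layout.
import tarfile
from pathlib import Path

def extract(archive: str, dest: str) -> None:
    """Extract a tar archive into dest, creating the directory if needed."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)

# Example usage (paths assumed to match the wget commands above):
# extract("stare-images.tar", "data/stare/images")
# extract("labels-ah.tar", "data/stare/labels/labels_ah")
```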
CHASE data is located in
data/chase
The first manual annotation set uses the *_1stHO.png suffix; the second uses *_2ndHO.png.
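Images can be paired with the chosen observer's annotations by filename; a sketch assuming .jpg images sit next to the annotation PNGs in data/chase (the exact layout is an assumption):

```python
# Sketch: pairing CHASE images with first- or second-observer annotations
# using the *_1stHO.png / *_2ndHO.png suffixes described above.
from pathlib import Path

def chase_pairs(root="data/chase", observer=1):
    """Yield (image_path, label_path) pairs for the chosen observer."""
    root = Path(root)
    suffix = "_1stHO.png" if observer == 1 else "_2ndHO.png"
    for img in sorted(root.glob("*.jpg")):
        label = root / (img.stem + suffix)
        if label.exists():
            yield img, label
```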
ARIA data is located in
data/aria
- images
- annotation 1
- annotation 2
- (markupdiscfovea)
The database contains healthy subjects (61 images, "c" suffix), diabetic patients (59 images, "d" suffix), and patients with age-related macular degeneration (23 images, "a" suffix). We created a CSV file recording the
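The c/d/a suffix convention can be turned into a diagnosis label when building such a file; a sketch (the exact ARIA filename pattern is an assumption, only the suffix meanings come from above):

```python
# Sketch: mapping an ARIA filename suffix to its diagnosis group,
# following the c (control) / d (diabetic) / a (AMD) convention above.
GROUPS = {"c": "control", "d": "diabetic", "a": "amd"}

def aria_group(stem: str) -> str:
    """Map a filename stem like 'aria_d_5' (pattern assumed) to its group."""
    for key, name in GROUPS.items():
        if f"_{key}_" in stem or stem.endswith(f"_{key}"):
            return name
    raise ValueError(f"no ARIA group suffix found in {stem!r}")
```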
The data augmentation can be visualized in a notebook.
The Attention U-Net model has attention maps that can be directly interpreted using the following script:
python visualize_attention.py --img-path path/to/img --weights path/to/model

Check python visualize_attention.py -h for options.
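The general idea of capturing such attention maps can be sketched with a forward hook; the gate module below is a toy stand-in, not the repository's actual Attention U-Net code:

```python
# Sketch: capturing an intermediate attention map with a forward hook.
# The gate module here is a toy stand-in producing a map in [0, 1].
import torch
import torch.nn as nn

captured = {}

def save_hook(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

gate = nn.Sequential(nn.Conv2d(3, 1, kernel_size=1), nn.Sigmoid())
gate.register_forward_hook(save_hook("gate"))

_ = gate(torch.rand(1, 3, 32, 32))
attn = captured["gate"]  # attention map, shape (1, 1, 32, 32), values in [0, 1]
```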
To visualize intermediate activations, see visualize_activations.py.

