I'm building a custom dataset for a DeepLab V3 model (following the DeepLab implementation under `projects/`).
My data is binary (label 1 for the object, 0 for background), and the code I use to register the dataset is below.
```python
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_sem_seg


def registry_dataset_semantic_segmentation():
    name = "custom_dataset_train"
    # 1. Register a function which returns dataset dicts.
    DatasetCatalog.register(
        name,
        lambda: load_sem_seg('/mnt/d/train/PS-RGB_tiled_mask', '/mnt/d/train/PS-RGB_tiled',
                             gt_ext='png', image_ext='png'),
    )
    # 2. Optionally, add metadata about this dataset.
    MetadataCatalog.get(name).set(thing_classes=["aircraft"])

    name_test = "custom_dataset_test"
    # 1. Register a function which returns dataset dicts.
    DatasetCatalog.register(
        name_test,
        lambda: load_sem_seg('/mnt/d/test/trongan_PS-RGB_tiled_mask', '/mnt/d/test/PS-RGB_tiled',
                             gt_ext='png', image_ext='png'),
    )
    # 2. Optionally, add metadata about this dataset.
    MetadataCatalog.get(name_test).set(thing_classes=["aircraft"], evaluator_type=["sem_seg"],
                                       stuff_classes=[""], ignore_label=[0])
```
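For comparison, the metadata keys that detectron2's semantic-segmentation path reads are `stuff_classes` (one name per label id), a single integer `ignore_label`, and `evaluator_type` as a plain string. The sketch below shows metadata shaped that way for a binary dataset; the class names and the choice of `255` as the ignore label are my assumptions, not something from the original post:

```python
# Hedged sketch: metadata kwargs following detectron2's sem_seg conventions.
# Key names (stuff_classes, ignore_label, evaluator_type) are the ones the
# evaluator and visualizer read; the values below are assumptions for a
# binary aircraft/background dataset.
metadata = {
    # One entry per label id: 0 -> background, 1 -> aircraft.
    "stuff_classes": ["background", "aircraft"],
    # A single int for pixels to skip during evaluation (255 is the common
    # convention). Note that a list like [0] is not a valid ignore_label,
    # and ignoring label 0 would discard every background pixel of a
    # binary mask.
    "ignore_label": 255,
    # A plain string, not a list.
    "evaluator_type": "sem_seg",
}

# This dict would be applied as MetadataCatalog.get(name).set(**metadata).
print(type(metadata["ignore_label"]).__name__)   # int
print(len(metadata["stuff_classes"]))            # 2 classes for a binary task
```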
Training runs to completion, but the accuracy and loss values look wrong, and I then get errors in the evaluation/visualization step:
```
[05/07 07:00:01 d2.evaluation.evaluator]: Start inference on 245 batches
[05/07 07:00:21 d2.evaluation.evaluator]: Inference done 11/245. 0.0425 s / iter. ETA=0:00:13
[05/07 07:00:26 d2.evaluation.evaluator]: Inference done 96/245. 0.0440 s / iter. ETA=0:00:08
[05/07 07:00:31 d2.evaluation.evaluator]: Inference done 182/245. 0.0440 s / iter. ETA=0:00:03
[05/07 07:00:35 d2.evaluation.evaluator]: Total inference time: 0:00:14.732784 (0.061387 s / iter per device, on 7 devices)
[05/07 07:00:35 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:10 (0.044370 s / iter per device, on 7 devices)
trongan93 - test - acc: [nan] , iou: [nan]
/home/eeaiserver/viplab_projects/detectron2/detectron2/evaluation/sem_seg_evaluation.py:135: RuntimeWarning: invalid value encountered in true_divide
  class_weights = pos_gt / np.sum(pos_gt)
/home/eeaiserver/viplab_projects/detectron2/detectron2/evaluation/sem_seg_evaluation.py:142: RuntimeWarning: invalid value encountered in double_scalars
  macc = np.sum(acc[acc_valid]) / np.sum(acc_valid)
/home/eeaiserver/viplab_projects/detectron2/detectron2/evaluation/sem_seg_evaluation.py:143: RuntimeWarning: invalid value encountered in double_scalars
  miou = np.sum(iou[acc_valid]) / np.sum(iou_valid)
/home/eeaiserver/viplab_projects/detectron2/detectron2/evaluation/sem_seg_evaluation.py:145: RuntimeWarning: invalid value encountered in double_scalars
  pacc = np.sum(tp) / np.sum(pos_gt)
[05/07 07:00:35 d2.evaluation.sem_seg_evaluation]: OrderedDict([('sem_seg', {'mIoU': nan, 'fwIoU': 0.0, 'IoU-': nan, 'mACC': nan, 'pACC': nan, 'ACC-': nan})])
[05/07 07:00:35 d2.engine.defaults]: Evaluation results for custom_dataset_test in csv format:
[05/07 07:00:35 d2.evaluation.testing]: copypaste: Task: sem_seg
[05/07 07:00:35 d2.evaluation.testing]: copypaste: mIoU,fwIoU,mACC,pACC
[05/07 07:00:35 d2.evaluation.testing]: copypaste: nan,0.0000,nan,nan
```
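The NaNs in the log can be reproduced in isolation. Below is a minimal NumPy sketch of the metric arithmetic quoted in the warnings (paraphrased from `sem_seg_evaluation.py`, not the exact source): if no ground-truth pixel ever lands in a valid class, say because every pixel is treated as ignored, the confusion matrix stays all-zero and each quoted division becomes 0 / 0:

```python
import numpy as np

# Sketch of the evaluator arithmetic from the warnings above.
# Assumption: the metadata leaves one class (stuff_classes=[""]) and no
# ground-truth pixel is ever accumulated, so the confusion matrix is zero.
num_classes = 1
conf_matrix = np.zeros((num_classes + 1, num_classes + 1))  # last row/col = ignored

tp = conf_matrix.diagonal()[:-1].astype(np.float64)          # true positives per class
pos_gt = np.sum(conf_matrix[:-1, :-1], axis=0).astype(np.float64)

acc = np.full(num_classes, np.nan)
iou = np.full(num_classes, np.nan)
acc_valid = pos_gt > 0                                       # all False: no GT pixels

with np.errstate(invalid="ignore"):  # the real code emits the RuntimeWarnings here
    class_weights = pos_gt / np.sum(pos_gt)                  # 0 / 0 -> nan
    macc = np.sum(acc[acc_valid]) / np.sum(acc_valid)        # 0.0 / 0 -> nan (mACC)
    pacc = np.sum(tp) / np.sum(pos_gt)                       # 0.0 / 0 -> nan (pACC)
fw_iou = np.sum(iou[acc_valid] * class_weights[acc_valid])   # empty sum -> 0.0 (fwIoU)

print(macc, pacc, fw_iou)  # nan nan 0.0
```

This matches the reported `mACC: nan`, `pACC: nan`, `fwIoU: 0.0` exactly, which points at the evaluator seeing no valid ground-truth pixels rather than at the model itself.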
Has anyone else worked with a custom dataset for semantic segmentation? I suspect my dataset registration has some errors, but I'm not sure how to fix them.
Thank you for your attention.