Hi!
I have a dataset of resumes (split into train and validation sets). I want to fine-tune a Faster R-CNN R-50-FPN-3x model for object detection on resumes (in fact, for document layout analysis).
I ran my code on AWS and hit a problem at the validation step.
Here's the config file:
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# Start from the model-zoo Faster R-CNN R50-FPN 3x baseline configuration
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
# Pass the train and validation sets
cfg.DATASETS.TRAIN = (nameTrainDS,)
cfg.DATASETS.TEST = (nameValDS,)
cfg.OUTPUT_DIR = os.environ["SM_MODEL_DIR"]
# Number of data-loading workers
cfg.DATALOADER.NUM_WORKERS = 1
# Initialize training from the model-zoo checkpoint
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
# Number of images per batch across all machines
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.0005  # learning rate
cfg.SOLVER.MAX_ITER = 80  # number of iterations
cfg.SOLVER.CHECKPOINT_PERIOD = 10  # save a checkpoint every 10 iterations
cfg.SOLVER.STEPS = []  # no learning-rate decay
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.MODEL.DEVICE = "cuda"
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # classes: [Block, Subblock]
cfg.TEST.EVAL_PERIOD = 80  # evaluate the validation set every 80 iterations
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
The training phase ran without any error. But, as you can see in my configuration, an evaluation is scheduled at the end of training, and I got the following error:

WARNING [07/20 13:26:09 d2.engine.defaults]: No evaluator found. Use `DefaultTrainer.test(evaluators=)`, or implement its `build_evaluator` method.
I looked at the `DefaultTrainer` class of Detectron2, and its default evaluator is set to None.
Based on the Detectron2 documentation, since I'm using a custom dataset I can choose between `COCOEvaluator` and `SemSegEvaluator`.
But I don't know how to hook the evaluator up through the config. I plan to lower the evaluation period to 10 iterations, which is why I really want the evaluator to run during training (driven by the config) and not only after training finishes.
Does anyone have an idea how to fix this issue?
(Sorry for my English, by the way.)