Hi,
I am trying to do transfer learning with one of the new baseline models, which uses the new Python (lazy) config.
I have successfully modified some of the lazy config values; however, I am stuck when trying to add custom augmentation.
Previously (when using the YAML config), I used `build_detection_train_loader` together with a custom mapper function, roughly as sketched below.
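Here is a minimal sketch of that old setup (`custom_mapper` is a simplified stand-in for my real mapper, and `cfg` is the old-style YAML `CfgNode`):

```python
import copy

import torch

from detectron2.data import build_detection_train_loader
from detectron2.data import detection_utils as utils
from detectron2.data import transforms as T


def custom_mapper(dataset_dict):
    # Simplified stand-in for my real mapper: read the image, apply
    # custom augmentations, and convert the record to model format.
    dataset_dict = copy.deepcopy(dataset_dict)
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    augs = T.AugmentationList([
        T.RandomBrightness(0.9, 1.1),
        T.RandomFlip(horizontal=True),
    ])
    aug_input = T.AugInput(image)
    transforms = augs(aug_input)
    image = aug_input.image
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).copy())
    annos = [
        utils.transform_instance_annotations(a, transforms, image.shape[:2])
        for a in dataset_dict.pop("annotations", [])
    ]
    dataset_dict["instances"] = utils.annotations_to_instances(annos, image.shape[:2])
    return dataset_dict


# cfg here is the old-style YAML CfgNode
train_loader = build_detection_train_loader(cfg, mapper=custom_mapper)
```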
In the lazy config training sample (https://github.com/facebookresearch/detectron2/blob/main/tools/lazyconfig_train_net.py) I see that:
```python
train_loader = instantiate(cfg.dataloader.train)
```
in which `cfg.dataloader.train._target_` points at the loader-building function, with the mapper passed as one of its arguments.
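For reference, the common configs (e.g. `configs/common/data/coco.py`) appear to define the training dataloader roughly like this (paraphrased from the repo, so details may differ by version):

```python
from omegaconf import OmegaConf

from detectron2.config import LazyCall as L
from detectron2.data import (
    DatasetMapper,
    build_detection_train_loader,
    get_detection_dataset_dicts,
)
from detectron2.data import transforms as T

dataloader = OmegaConf.create()

# The mapper is an ordinary argument of build_detection_train_loader,
# and its augmentations are a plain list of LazyCall nodes.
dataloader.train = L(build_detection_train_loader)(
    dataset=L(get_detection_dataset_dicts)(names="coco_2017_train"),
    mapper=L(DatasetMapper)(
        is_train=True,
        augmentations=[
            L(T.ResizeShortestEdge)(
                short_edge_length=(640, 672, 704, 736, 768, 800),
                sample_style="choice",
                max_size=1333,
            ),
            L(T.RandomFlip)(horizontal=True),
        ],
        image_format="BGR",
        use_instance_mask=True,
    ),
    total_batch_size=16,
    num_workers=4,
)
```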
I tried to do:
```python
cfg.dataloader.train._target_ = build_detection_train_loader(cfg, mapper=custom_mapper)
```
However, since `cfg` is in the new format, this can't work: `build_detection_train_loader` called this way relies on the old config format (and, if I understand correctly, `_target_` is supposed to hold a callable rather than the result of calling one).
Any advice on how to add custom augmentations with this new format?
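For concreteness, this is the kind of thing I was hoping would work, based purely on my guess from the structure above (`"path/to/my_config.py"` is a placeholder for my actual config file, and `T.RandomBrightness` is just an example augmentation):

```python
from detectron2.config import LazyCall as L, LazyConfig, instantiate
from detectron2.data import transforms as T

# Placeholder path for my actual lazy config file
cfg = LazyConfig.load("path/to/my_config.py")

# Since the augmentations are just a list of config nodes, maybe I can
# edit that list in place before instantiating the dataloader?
cfg.dataloader.train.mapper.augmentations.append(
    L(T.RandomBrightness)(intensity_min=0.9, intensity_max=1.1)
)

train_loader = instantiate(cfg.dataloader.train)
```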