Hi,
Thank you for producing this wonderful software. This is more of a methods question (not an issue with the code), and I hope this is the appropriate place to post it.
I'm trying to use mmaction2 to find a small, rare type of activity in videos, and I'm wondering if anyone has suggestions for a model type or pipeline settings that would help. There are only two classes of activity I'm trying to find, and I have video clips where this action occurs (2-100 frames), as well as a number of training clips where the activity is not occurring. I know this isn't the typical use of mmaction2, so I thought folks might have ideas for ways to improve my methods.
I've tried a few different recognition models, but I haven't managed to get precision and recall above 0.7. Does anyone have suggestions for model types or pipeline settings that might improve my results? I've pasted an example of a config file I'm using below, but I'm not tied to any of these settings.
Thank you for any suggestions.
model = dict(
    type='Recognizer2D',
    backbone=dict(
        type='ResNet',
        pretrained='torchvision://resnet50',
        depth=50,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=True),
    cls_head=dict(
        type='TSNHead',
        num_classes=3,
        in_channels=2048,
        spatial_type='avg',
        consensus=dict(type='AvgConsensus', dim=1),
        dropout_ratio=0.5,
        init_std=0.001))
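One knob that may help with the rarity problem is weighting the loss toward the rare classes. If your mmaction2 version's `CrossEntropyLoss` supports a `class_weight` argument (worth verifying against your installed version), the head could be configured like this; the weight values here are placeholders, not tuned:

```python
# hypothetical variant of the cls_head above with a class-weighted loss;
# class_weight order follows label indices (here: background, class 1, class 2)
cls_head = dict(
    type='TSNHead',
    num_classes=3,
    in_channels=2048,
    spatial_type='avg',
    consensus=dict(type='AvgConsensus', dim=1),
    dropout_ratio=0.5,
    init_std=0.001,
    loss_cls=dict(type='CrossEntropyLoss', class_weight=[0.2, 1.0, 1.0]))
```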
# model training and testing settings
train_cfg = None
test_cfg = dict(average_clips=None)
# pipeline
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
    dict(type='SampleFrames', clip_len=2, frame_interval=1, num_clips=8),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(
        type='MultiScaleCrop',
        input_size=224,
        scales=(1, 0.875, 0.75, 0.66),
        random_crop=False,
        max_wh_scale_gap=1,
        num_fixed_crops=13),
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
    dict(type='Flip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs', 'label'])
]
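Since the positive classes are so rare relative to the background clips, class imbalance is likely part of the problem, and loss weights or oversampling ratios can be derived from the clip counts by plain inverse-frequency weighting. A minimal sketch, assuming the labels come from the rawframe annotation file (`inverse_frequency_weights` is a hypothetical helper, not an mmaction2 API):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count), so rare
    classes contribute more to the loss (or get sampled more often)."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# illustrative counts: 90 background clips, 6 of class 1, 4 of class 2
labels = [0] * 90 + [1] * 6 + [2] * 4
weights = inverse_frequency_weights(labels)
# weights[0] ≈ 0.37, weights[1] ≈ 5.56, weights[2] ≈ 8.33
```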
val_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=2,
        frame_interval=1,
        num_clips=8,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),  # maybe change to multiscale crop
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=2,
        frame_interval=1,
        num_clips=8,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]
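If a single center crop at test time turns out to be limiting, mmaction2 also ships multi-crop test-time augmentation transforms such as `TenCrop` (verify the name against your version, and consider `test_cfg = dict(average_clips='prob')` so the crop scores are averaged). A sketch of an alternative test pipeline; `img_norm_cfg` is repeated so the snippet stands alone:

```python
# hypothetical test pipeline using ten-crop test-time augmentation
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
test_pipeline = [
    dict(type='SampleFrames', clip_len=2, frame_interval=1, num_clips=8,
         test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='TenCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]
```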
# dataset settings
dataset_type = 'RawframeDataset'
data_root = '/home/usr/projects/clientbattransects_2020/data/ava_format/client/videos_jpg_7frames/'
data_root_val = '/home/usr/projects/clientbattransects_2020/data/ava_format/client/videos_jpg_7frames/'
ann_file_train = '/home/usr/projects/clientbattransects_2020/data/ava_format/client/annotations/annotations_7frame_trainShorter.txt'
ann_file_val = '/home/usr/projects/clientbattransects_2020/data/ava_format/client/annotations/annotations_7frame_val.txt'
ann_file_test = '/home/usr/projects/clientbattransects_2020/data/ava_format/client/annotations/annotations_7frame_val.txt'
data = dict(
    videos_per_gpu=16,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        ann_file=ann_file_train,
        data_prefix=data_root,
        filename_tmpl='frame_{:06}.jpg',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=ann_file_val,
        data_prefix=data_root_val,
        filename_tmpl='frame_{:06}.jpg',
        pipeline=val_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=ann_file_test,
        data_prefix=data_root_val,
        filename_tmpl='frame_{:06}.jpg',
        pipeline=test_pipeline))
dataset_A_train = dict(
    type='MyDataset',
    ann_file=ann_file_train)
# optimizer
optimizer = dict(
    # type='Adam', lr=0.001, weight_decay=0.0001)
    type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optimizer_config = dict(grad_clip=dict(max_norm=20, norm_type=2))
# learning policy
lr_config = dict(policy='step', step=[20, 40])  # note: with total_epochs=20, neither milestone is reached, so the lr never decays
total_epochs = 20
checkpoint_config = dict(interval=1)
evaluation = dict(
    interval=2, metrics=['top_k_accuracy', 'mean_class_accuracy'])
log_config = dict(
    interval=20,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook'),
    ])
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = '/home/usr/repos/mmaction2/work_dirs/tsn/'
load_from = None
resume_from = None
workflow = [('train', 1)]
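One last thought on the metrics: with a dominant background class, `top_k_accuracy` can look fine while the rare classes are mostly missed, so per-class precision and recall are more informative than the metrics in the `evaluation` dict above. A small standalone helper (hypothetical, not part of mmaction2) that computes them from predicted labels:

```python
def per_class_precision_recall(y_true, y_pred, num_classes):
    """Return a (precision, recall) tuple per class, computed from
    ground-truth and predicted label lists."""
    stats = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        stats.append((precision, recall))
    return stats

# toy example: one class-0 clip misclassified as class 1
stats = per_class_precision_recall([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], 3)
```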