How to load annotations in test_pipeline or inference process? #8926
-
Hello, I am implementing an idea for correcting noisy annotations. The idea comes from a kind of Teacher-Student learning.
Now, I want to use the first part of this process (running inference with the trained model over the annotated data) for annotation correction.
Since this process only involves inference, it should be implemented using a test_pipeline. So, is it possible to load annotations in a test_pipeline? If yes, how can I do that?
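
To make the intent concrete, here is a minimal, self-contained sketch of the kind of correction step I have in mind. The `box_iou` and `correct_annotations` helpers, the (x1, y1, x2, y2) box format, and the IoU/score thresholds are all illustrative assumptions of mine, not MMDetection APIs:

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between boxes a (N, 4) and b (M, 4) in (x1, y1, x2, y2)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def correct_annotations(gt_boxes, pred_boxes, pred_scores,
                        iou_thr=0.5, score_thr=0.8):
    """Replace each noisy GT box with the best-matching confident prediction."""
    corrected = gt_boxes.copy()
    preds = pred_boxes[pred_scores >= score_thr]      # keep confident predictions
    if len(preds) == 0 or len(gt_boxes) == 0:
        return corrected
    ious = box_iou(gt_boxes, preds)
    best = ious.argmax(axis=1)                        # best prediction per GT box
    matched = ious[np.arange(len(gt_boxes)), best] >= iou_thr
    corrected[matched] = preds[best[matched]]         # overwrite matched GT boxes
    return corrected
```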
Replies: 1 comment 1 reply
-
You can add `LoadAnnotations` to a `test_pipeline`, but you'll need to prevent `Collect` from discarding this information and manually control what is passed into the model's inference call. This is the code snippet I use to run inference manually for this kind of use case, where `data = dataset[0]` for, e.g., a `CocoDataset`:

```python
import torch
from mmcv.parallel import scatter

# Split the ground-truth keys (gt_bboxes, gt_labels, ...) from the
# inputs the model actually consumes.
other_keys = []
for k in data:
    if k not in ["img", "img_metas"]:
        other_keys.append(k)
gt_data = {k: data[k] for k in other_keys}
data = {k: data[k] for k in ["img", "img_metas"]}

# Unwrap the DataContainers produced by DefaultFormatBundle/Collect.
data["img_metas"] = data["img_metas"].data
data["img"] = data["img"].data

# Move the inputs to the model's device if it sits on a GPU.
if next(model.parameters()).is_cuda:
    device = next(model.parameters()).device
    data = scatter(data, [device])[0]

with torch.inference_mode():
    res = model.simple_test(data["img"].unsqueeze(0), [data["img_metas"]])
```

This kind of code works with pipelines like this:

```python
train_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="LoadAnnotations", with_mask=True),
    dict(type="DefaultFormatBundle"),
    dict(type="Collect", keys=["img", "gt_bboxes", "gt_labels", "gt_masks"]),
]
```

Depending on the model, you may need to collect additional meta data as well.
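
For context, here is a hedged sketch of how the snippet and pipeline above might be wired together end to end. It assumes MMDetection 2.x (`Config`, `build_dataset`, and `init_detector` are real APIs there); the config and checkpoint paths are placeholders:

```python
# Sketch only: assumes MMDetection 2.x; file paths are placeholders.
import torch
from mmcv import Config
from mmcv.parallel import scatter
from mmdet.apis import init_detector
from mmdet.datasets import build_dataset

cfg = Config.fromfile("my_config.py")  # placeholder config path
# Swap the test-split pipeline for the annotation-loading one shown above.
cfg.data.test.pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="LoadAnnotations", with_mask=True),
    dict(type="DefaultFormatBundle"),
    dict(type="Collect", keys=["img", "gt_bboxes", "gt_labels", "gt_masks"]),
]

model = init_detector(cfg, "my_checkpoint.pth", device="cuda:0")  # placeholder
# Note: test_mode must stay False (the default) so the dataset feeds
# annotation info into the pipeline.
dataset = build_dataset(cfg.data.test)

for i in range(len(dataset)):
    data = dataset[i]
    gt_data = {k: v for k, v in data.items() if k not in ("img", "img_metas")}
    inputs = {"img": data["img"].data, "img_metas": data["img_metas"].data}
    if next(model.parameters()).is_cuda:
        inputs = scatter(inputs, [next(model.parameters()).device])[0]
    with torch.inference_mode():
        res = model.simple_test(inputs["img"].unsqueeze(0),
                                [inputs["img_metas"]])
    # Compare `res` against `gt_data` here to flag or correct noisy boxes.
```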