Is it possible to speed up inference and compare features? #13

Description

@MyraBaba

Hi,

I have an RTX 2080 Ti, and the inference below takes ~0.019 seconds after warmup, i.e. roughly 40-50 inferences per second. That looks slow.
How can I make it faster? Is it due to the model size?

Also

@torch.no_grad()
def get_feature(img, model, device, normalize=False):
    # Preprocess a single image and add a batch dimension.
    input = val_transforms(img).unsqueeze(0)
    input = input.to(device)
    output, _ = model(input)
    if normalize:
        # L2-normalize the embedding so comparisons reduce to dot products.
        output = F.normalize(output)
    return output
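
For the feature-comparison half of the question: with normalize=True the embeddings are L2-normalized, so cosine similarity reduces to a dot product. A minimal sketch, assuming feature1 and feature2 come from get_feature above (the 0.5 threshold is purely hypothetical and would need tuning on a validation set):

feature1 = get_feature(img1, model, device, normalize=True)
feature2 = get_feature(img2, model, device, normalize=True)

# Dot product of L2-normalized vectors equals cosine similarity.
# Equivalent: F.cosine_similarity(feature1, feature2).item()
similarity = (feature1 * feature2).sum(dim=1).item()

# Hypothetical threshold; tune it for the actual model.
same_identity = similarity > 0.5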

if device:
    if torch.cuda.device_count() > 1:
        print('Using {} GPUs for inference'.format(torch.cuda.device_count()))
        # Wrap in DataParallel only when more than one GPU is available.
        model = nn.DataParallel(model)
    model.to(device)

model.eval()
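
One likely speedup on a 2080 Ti is mixed-precision inference, since Turing tensor cores accelerate FP16 matmuls. A sketch using torch.cuda.amp.autocast (this variant is not in the original code; val_transforms and model are assumed to be the same as above):

@torch.no_grad()
def get_feature_amp(img, model, device, normalize=False):
    input = val_transforms(img).unsqueeze(0).to(device)
    # Run eligible ops in FP16 on the GPU; results are cast back automatically.
    with torch.cuda.amp.autocast():
        output, _ = model(input)
    if normalize:
        output = F.normalize(output.float())
    return output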

elapsed_time = next(timer_gen)
feature1 = get_feature(img1, model, device, normalize=True)
elapsed_time = next(timer_gen) - elapsed_time
print(elapsed_time)
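
Two things are worth checking before concluding the model itself is slow: CUDA kernels launch asynchronously, so a wall-clock timer around a single call can mis-measure, and per-image calls leave the GPU underutilized, so batching usually raises throughput well past 40-50 images per second. A sketch of synchronized, batched timing (imgs is an assumed list of input images; everything except val_transforms and model is hypothetical):

import time
import torch

batch = torch.stack([val_transforms(im) for im in imgs]).to(device)

torch.cuda.synchronize()           # flush pending GPU work before timing
start = time.perf_counter()
with torch.no_grad():
    features, _ = model(batch)     # one forward pass for the whole batch
torch.cuda.synchronize()           # wait for the forward pass to finish
elapsed = time.perf_counter() - start
print('{:.4f} s for {} images'.format(elapsed, len(imgs)))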
