
Improving speed of Pointpillars #3

@xavidzo

Hello @AutoVision-cloud, thanks for releasing your nice work.

I have a question regarding the speed of Pointpillars. Since you report that Pointpillars-FSA (or -DSA) runs with nearly half the G-FLOPs, I assume your model should have a faster inference speed than the baseline.
If that assumption is correct, do you know how much faster it can run?
I tested Pointpillars-FSA on custom data and compared its speed with the base Pointpillars, but I didn't see any difference.
It's also not clear to me how Pointpillars-FSA can perform fewer G-FLOPs when extra layers are added on top of the baseline; could you please explain? Or, for faster inference, should I pass only the context features to the BEVEncoder and ignore the pillar features, thus avoiding the concatenation of pillar and context features?
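
For reference, this is roughly how I time both models (a minimal sketch, assuming the detectors are PyTorch modules running on a GPU; `model` and `example_batch` are placeholders for either network and one preprocessed input sample):

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, example_batch, warmup=20, iters=100):
    """Average forward-pass latency in milliseconds."""
    model.eval()
    # Warm-up runs so lazy CUDA initialization / cuDNN autotuning don't skew the numbers.
    for _ in range(warmup):
        model(example_batch)
    torch.cuda.synchronize()  # wait for all queued kernels before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        model(example_batch)
    torch.cuda.synchronize()  # GPU work is asynchronous; sync again before reading the clock
    return (time.perf_counter() - start) / iters * 1000.0
```

I average over many iterations after warm-up and synchronize before reading the clock, since CUDA kernels launch asynchronously and a single un-synchronized measurement can be misleading.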
