High overhead of power measurements for CPU inference measurements #236

@psyhtest

Description

During the MLPerf Inference v1.0 round, I noticed that the power workflow occasionally seemed to incur rather high overhead (~10%) when used with CPU inference, for example:

Here, ArmNN is faster than TFLite but takes a big hit under the power workflow, whereas TFLite is not affected.
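The ~10% figure can be quantified as the relative increase in per-query latency when the power workflow is enabled versus a plain run. A minimal sketch (the latency values below are hypothetical placeholders, not taken from the actual MLPerf runs):

```python
def power_workflow_overhead(latency_plain_ms: float, latency_power_ms: float) -> float:
    """Relative slowdown (%) of a run under the power workflow
    compared to the same benchmark run without it."""
    return (latency_power_ms - latency_plain_ms) / latency_plain_ms * 100.0

# Hypothetical example: a 100 ms/query plain run that slows to 110 ms/query
# under the power workflow corresponds to 10% overhead.
print(f"{power_workflow_overhead(100.0, 110.0):.1f}%")
```

Overhead around this level for one backend (ArmNN) but not another (TFLite) suggests the slowdown is not a fixed cost of the power workflow itself, which is what makes it worth investigating.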
