Reduce allocations #309

@cmeeren

This library showed up as a hotspot in Azure Code Optimizations for my API.

I believe this library has room for performance improvements, at least in terms of allocations (memory usage).

Below are benchmark results for a quick experiment I did, where I cached a function that parsed, validated, and formatted input phone numbers. I used 100,000 phone numbers in total. 10x means each number was reused 10 times (so 10,000 unique phone numbers), and 100x means 1,000 unique numbers reused 100 times each.

As you can see, even at just 10x the cached implementation vastly outperforms the uncached implementation, not only in CPU time (which is to be expected) but also in allocations/memory. The latter is more surprising, since caching normally trades higher memory usage for lower CPU usage, and this is what leads me to conclude that there is room for improvement in this library with regard to allocations.

| Method        | Mean      | Error    | StdDev   | Ratio | RatioSD | Gen0      | Gen1      | Allocated | Alloc Ratio |
|---------------|----------:|---------:|---------:|------:|--------:|----------:|----------:|----------:|------------:|
| Uncached_1x   | 199.25 ms | 3.293 ms | 3.080 ms |  1.02 |    0.02 | 4000.0000 |         - |  67.14 MB |        1.00 |
| Cached_1x     | 284.59 ms | 5.493 ms | 7.333 ms |  1.45 |    0.04 | 6000.0000 | 3000.0000 | 100.93 MB |        1.50 |
| Uncached_10x  | 195.84 ms | 3.186 ms | 2.980 ms |  1.00 |    0.02 | 4000.0000 |         - |  67.14 MB |        1.00 |
| Cached_10x    |  34.98 ms | 0.980 ms | 2.781 ms |  0.18 |    0.01 | 1000.0000 |         - |  22.37 MB |        0.33 |
| Uncached_100x | 199.36 ms | 2.797 ms | 2.479 ms |  1.02 |    0.02 | 4000.0000 |         - |  67.14 MB |        1.00 |
| Cached_100x   |  11.72 ms | 1.095 ms | 3.212 ms |  0.06 |    0.02 |         - |         - |  13.89 MB |        0.21 |
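
The caching in the Cached_* benchmarks is conceptually just memoizing the parse/validate/format call per input string. Here is a minimal sketch of that idea (Java used purely for illustration, and `parseValidateFormat` is a hypothetical stand-in for the library call; this is not the actual benchmark code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Minimal memoization wrapper: each distinct input string is parsed,
// validated, and formatted once; repeated inputs are served from the map.
public final class CachedFormatter {

    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> parseValidateFormat;

    public CachedFormatter(Function<String, String> parseValidateFormat) {
        this.parseValidateFormat = parseValidateFormat;
    }

    public String format(String rawNumber) {
        // computeIfAbsent runs the expensive work only on a cache miss.
        return cache.computeIfAbsent(rawNumber, parseValidateFormat);
    }

    // Example usage with a dummy function standing in for the real
    // parse/validate/format work.
    public static void main(String[] args) {
        CachedFormatter formatter =
                new CachedFormatter(raw -> "+47" + raw.replaceAll("\\D", ""));
        System.out.println(formatter.format("22 34 56 78")); // computed
        System.out.println(formatter.format("22 34 56 78")); // served from cache
    }
}
```

The point is simply that repeated inputs skip the parse/validate/format work entirely, which is why both CPU time and allocations drop sharply once numbers start repeating.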
