See LuxDL/LuxLib.jl#136 for some background context. The main motivation for me is to avoid code duplication between the CPU and GPU versions. However, if you look at the benchmark comment on that PR (for batchnorm and groupnorm), you will see somewhere between a 10x and 40x slowdown for KA relative to the equivalent optimized loop version (note that the loop version simply uses `@simd` or `@simd ivdep`, nothing like LoopVectorization).
I think there are a couple of reasons for the slowdown:
- `@simd` annotations are missing (removing them causes a slowdown even in the plain loop version)
- threading has overhead for some of the smaller problems
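To make the first point concrete, here is a minimal sketch (with a hypothetical element-wise kernel, not code from LuxLib) of the kind of loop the optimized CPU path uses; the `@simd ivdep` annotation is what tells the compiler it may vectorize freely, and dropping it is what produces the slowdown mentioned above:

```julia
# Hypothetical scale-and-shift kernel, in the style of the hand-written
# CPU loops benchmarked on the PR. `@inbounds` removes bounds checks and
# `@simd ivdep` asserts iterations are independent so LLVM can vectorize.
function scale_shift!(y, x, γ, β)
    @inbounds @simd ivdep for i in eachindex(y, x)
        y[i] = x[i] * γ + β
    end
    return y
end
```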
Potential solutions:
- Allow users to control threading ([FR] Add nthreads argument to CPU backend #507). For smaller problems, I want to opt out of threading manually.
- `@simd` annotations (Make CPU loops simd & ivdep #436 seems to do this; not sure what its current status is)
- Alternate threading: KA is being used inside "core" operations. As such, it is unlikely (if not impossible) that we call other operations that themselves use threading. Hence, having the option to use "cheaper" threads (Polyester.jl) would be a great addition.
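For the last point, a rough sketch of what "cheaper threads" looks like, again using a hypothetical kernel rather than LuxLib code: Polyester's `@batch` reuses a lightweight task pool instead of spawning `Threads.@threads` tasks, which keeps the per-call overhead low enough to pay off even for mid-sized problems:

```julia
using Polyester: @batch

# Same hypothetical scale-and-shift kernel, threaded with Polyester's
# low-overhead @batch loop instead of Threads.@threads.
function scale_shift_batched!(y, x, γ, β)
    @batch for i in eachindex(y, x)
        @inbounds y[i] = x[i] * γ + β
    end
    return y
end
```

Since `@batch` falls back to a serial loop when only one thread is available, it is also a reasonable default for the small-problem case above.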