[WIP] extract common code for EP API adapter #26879
base: main
Conversation
6f22ae7 to 549f0b9
cfa6c99 to c1e98e9
Pull request overview
This PR refactors common base classes used by ONNX Runtime operators to use templates instead of concrete types, enabling code reuse across different execution provider implementations. The refactoring primarily converts OpKernelInfo and OpKernelContext parameters to template parameters like KernelInfoType and KernelContextType, allowing these base classes to work with different EP API implementations (particularly for WebGPU and CUDA EPs).
Key changes include:
- Converting constructors and methods in operator base classes from using the concrete `OpKernelInfo` to a templated `KernelInfoType`
- Moving implementation code from .cc files to .h files as template methods (e.g., `ComputePadsImpl`, `PrepareForComputeImpl`)
- Adding the `template` keyword before template member function calls on dependent types (e.g., `info.template GetAttr<T>`)
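The pattern is the same across the touched base classes. Below is a minimal, self-contained sketch of the conversion; `ExampleBase` is a hypothetical stand-in for classes like `SplitBase` or `PadBase`, not the actual ORT code, and the attribute accessors are only assumed to match the shape used in this PR.

```cpp
// Sketch of the refactoring pattern (hypothetical class, not ORT source).
#include <cstdint>

class ExampleBase {
 protected:
  // Before: explicit ExampleBase(const OpKernelInfo& info);
  // After: the constructor is templated on the kernel-info type, so any type
  // exposing a compatible GetAttrOrDefault interface can reuse this base.
  template <typename KernelInfoType>
  explicit ExampleBase(const KernelInfoType& info) {
    // 'template' is required before GetAttrOrDefault because it is a member
    // template invoked on a dependent type (KernelInfoType).
    axis_ = info.template GetAttrOrDefault<int64_t>("axis", int64_t{0});
  }

  int64_t axis_{0};
};
```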
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| onnxruntime/core/providers/cpu/tensor/upsamplebase.h | Templated UpsampleBase constructor with KernelInfoType, added template keywords for GetAttr/GetAttrOrDefault calls |
| onnxruntime/core/providers/cpu/tensor/unsqueeze.h | Templated UnsqueezeBase constructor with KernelInfoType |
| onnxruntime/core/providers/cpu/tensor/transpose.h | Templated TransposeBase constructor with KernelInfoType, added template keyword for GetAttrs call |
| onnxruntime/core/providers/cpu/tensor/squeeze.h | Templated SqueezeBase constructor with KernelInfoType |
| onnxruntime/core/providers/cpu/tensor/split.h | Templated SplitBase constructor with KernelInfoType, added template keywords for GetAttrOrDefault calls |
| onnxruntime/core/providers/cpu/tensor/padbase.h | Added templated ComputePadsImpl method, templated PadBase constructor, moved ComputePadWithAxes to be static, added conditional includes for SHARED_PROVIDER |
| onnxruntime/core/providers/cpu/tensor/pad.cc | Refactored ComputePads to call templated ComputePadsImpl, removed ComputePadWithAxes (moved to header) |
| onnxruntime/core/providers/cpu/tensor/gatherbase.h | Added templated PrepareForComputeImpl method, templated GatherBase constructor, added conditional includes |
| onnxruntime/core/providers/cpu/tensor/gather.cc | Refactored PrepareForCompute to call templated PrepareForComputeImpl |
| onnxruntime/core/providers/cpu/tensor/concatbase.h | Added large templated PrepareForComputeImpl method, templated ConcatBase constructor, added template keywords |
| onnxruntime/core/providers/cpu/tensor/concat.cc | Refactored PrepareForCompute to call templated PrepareForComputeImpl |
| onnxruntime/core/providers/cpu/reduction/reduction_kernel_base.h | Templated ReduceKernelBase constructor with KernelInfoType, added template keywords |
| onnxruntime/core/providers/cpu/nn/pool_base.h | Templated PoolBase constructor, changed API from GetKernelDef().OpName() to node().OpType() |
| onnxruntime/core/providers/cpu/nn/pool_attributes.h | Templated PoolAttributes constructor, added template keywords for GetAttr calls |
| onnxruntime/core/providers/cpu/nn/conv_transpose_attributes.h | Templated ConvTransposeAttributes constructor |
| onnxruntime/core/providers/cpu/nn/conv_attributes.h | Templated ConvAttributes constructor, changed from GetAttrsAsSpan to GetAttrs with vector intermediary, added template keywords |
| onnxruntime/contrib_ops/cpu/bert/attention_base.h | Templated AttentionBase constructor with KernelInfoType, added template keywords for GetAttr/GetAttrs/GetAttrOrDefault calls |
In onnxruntime/core/providers/cpu/nn/pool_base.h:

```diff
-  : op_name_(info.GetKernelDef().OpName().rfind("QLinear", 0) != 0 ? info.GetKernelDef().OpName() : info.GetKernelDef().OpName().substr(7)),
+  template <typename KernelInfoType>
+  PoolBase(const KernelInfoType& info)
+      : op_name_(info.node().OpType().rfind("QLinear", 0) != 0 ? info.node().OpType() : info.node().OpType().substr(7)),
```
Copilot AI (Jan 6, 2026):
The change from 'GetKernelDef().OpName()' to 'node().OpType()' may introduce an API incompatibility. Ensure that 'node().OpType()' returns the same value as 'GetKernelDef().OpName()' and that 'node()' is available for all KernelInfoType template parameter types. This change could cause issues if different KernelInfoType implementations don't support the 'node()' method or if OpType() has different semantics than OpName().
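Regardless of which accessor is used, the initializer's prefix handling itself is unchanged: it strips a leading "QLinear" (7 characters) from the operator name so that, for example, "QLinearAveragePool" resolves to the same base name as "AveragePool". The helper below is a hypothetical restatement of that ternary, written only to make the logic explicit:

```cpp
// Hypothetical helper restating the initializer's prefix-stripping logic.
#include <cassert>
#include <string>

std::string StripQLinearPrefix(const std::string& op_type) {
  // rfind(prefix, 0) == 0 is an idiomatic "starts with" check.
  return op_type.rfind("QLinear", 0) != 0 ? op_type : op_type.substr(7);
}

int main() {
  assert(StripQLinearPrefix("QLinearAveragePool") == "AveragePool");
  assert(StripQLinearPrefix("MaxPool") == "MaxPool");
}
```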
Description
WIP.
This PR generalizes the use of the classes `OpKernelInfo` and `OpKernelContext` with templates so that the shared code can work with a different type, which helps future changes that support migrating the WebGPU EP and CUDA EP to the EP API implementation. Currently this only applies to base classes that are reused by the WebGPU EP.
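To illustrate the intent, here is a self-contained sketch of how one templated base class can be shared between the existing kernel-info type and a future EP API adapter. All names below (`MockClassicInfo`, `MockEpApiInfo`, `SharedBase`) are hypothetical mocks, not the real ORT or EP API types; only the duck-typed `GetAttrOrDefault` surface is assumed.

```cpp
// Sketch: one templated base compiles against two different info-like types.
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Mock stand-in for the classic OpKernelInfo attribute surface.
struct MockClassicInfo {
  template <typename T>
  T GetAttrOrDefault(const std::string&, T default_value) const {
    return default_value;
  }
};

// Mock stand-in for an adapter a WebGPU/CUDA EP API implementation might provide.
struct MockEpApiInfo {
  template <typename T>
  T GetAttrOrDefault(const std::string& name, T default_value) const {
    auto it = attrs.find(name);
    return it != attrs.end() ? static_cast<T>(it->second) : default_value;
  }
  std::unordered_map<std::string, int64_t> attrs;
};

// Shared base: the templated constructor accepts any info-like type above.
class SharedBase {
 public:
  template <typename KernelInfoType>
  explicit SharedBase(const KernelInfoType& info)
      : axis_(info.template GetAttrOrDefault<int64_t>("axis", int64_t{0})) {}
  int64_t axis() const { return axis_; }

 private:
  int64_t axis_;
};

int main() {
  SharedBase a{MockClassicInfo{}};
  SharedBase b{MockEpApiInfo{{{"axis", 1}}}};
  std::cout << a.axis() << " " << b.axis() << "\n";  // prints "0 1"
}
```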