This issue collects tasks that block porting `rir/rir.cpp` and `rir/ray_tracing.cpp` to the torch stable ABI.
- implement `mutable_data_ptr<T>()` and `const_data_ptr<T>()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like `tensor.data_ptr<scalar_t>()`. Currently, one needs to rewrite this as `reinterpret_cast<scalar_t*>(tensor.data_ptr())` where tensor is a `torch::stable::Tensor`. Not really a blocker but would be nice to have (see the first sketch after this list).
  Fix available: [STABLE ABI] Add mutable_data_ptr() and const_data_ptr() methods to torch::stable::Tensor. pytorch#161891
- import `arange` as a stable/ops.h factory function
- implement `torch::fft::fftshift` and `torch::fft::irfft` as stable/ops.h operations.
  Resolution: delete rir/ray_tracing.cpp as unused
- implement `index` as a `torch::stable::Tensor` method. Can we use `torch::indexing::Slice()` in torch stable ABI code?
- expose `AT_DISPATCH_FLOATING_TYPES_AND_HALF` and `AT_DISPATCH_FLOATING_TYPES` to the stable ABI. Not really a blocker but would be nice to have (see the second sketch after this list).
  For a workaround, see [STABLE ABI] Porting forced_align #4078
- implement `zeros` and `full` as stable/ops.h factory functions. Currently, one can use `new_empty` and `fill_` to mimic these functions (see the third sketch after this list). Not really a blocker but would be nice to have.
- implement `tensor` as a stable/ops.h factory function. Currently, one can use `new_empty`, but it is really clumsy to mimic `tensor`, especially for CUDA tensors.
- implement `dot`, `norm`, and `max` as `torch::stable::Tensor` methods or stable/ops.h operations
- implement `item<T>()` as a `torch::stable::Tensor` template method (see the fourth sketch after this list).
  For a workaround, see [STABLE ABI] Porting forced_align #4078
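
For the `data_ptr` item, here is a minimal sketch of today's workaround next to what the proposed methods would allow. The helper name `typed_data` is hypothetical; the include path simply mirrors the header named in this issue.

```cpp
// Sketch of today's workaround for typed data access on a
// torch::stable::Tensor; typed_data is a hypothetical helper name.
#include <torch/csrc/stable/tensor_struct.h>

template <typename scalar_t>
scalar_t* typed_data(torch::stable::Tensor& tensor) {
  // Today: data_ptr() is untyped, so the cast is written by hand.
  return reinterpret_cast<scalar_t*>(tensor.data_ptr());
  // With pytorch#161891 this could become:
  //   return tensor.mutable_data_ptr<scalar_t>();
}
```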
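
For the dispatch-macro item, a hedged sketch of a local stand-in for `AT_DISPATCH_FLOATING_TYPES` (float/double only). This is not necessarily the workaround used in #4078, `dispatch_floating_types` is a hypothetical name, and a truly stable-ABI version would need the dtype enum from a stable header rather than `c10/core/ScalarType.h`, which is part of why exposing the macros would help.

```cpp
// Hedged sketch: a switch-based stand-in for AT_DISPATCH_FLOATING_TYPES.
// dispatch_floating_types is a hypothetical helper; pulling ScalarType from
// c10/core/ScalarType.h is itself not stable-ABI-clean.
#include <stdexcept>
#include <c10/core/ScalarType.h>

template <typename Fn>
void dispatch_floating_types(c10::ScalarType dtype, Fn&& fn) {
  switch (dtype) {
    case c10::ScalarType::Float:
      fn(float{});
      break;
    case c10::ScalarType::Double:
      fn(double{});
      break;
    default:
      throw std::runtime_error("dispatch_floating_types: unsupported dtype");
  }
}

// Usage: dispatch_floating_types(dtype, [&](auto zero) {
//   using scalar_t = decltype(zero);
//   /* kernel body parameterized on scalar_t */
// });
```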
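
For the `zeros`/`full` item, a hedged sketch of the `new_empty` + `fill_` mimic mentioned above. The `torch::stable::` qualification and the `(self, sizes)` / `(self, value)` signatures are assumptions to check against torch/csrc/stable/ops.h, and `make_full` is a hypothetical helper.

```cpp
// Hedged sketch: mimic full() (and zeros(), with value 0.0) using the
// stable ops mentioned above. Namespace and signatures are assumptions;
// make_full is a hypothetical helper name.
#include <cstdint>
#include <vector>
#include <torch/csrc/stable/ops.h>
#include <torch/csrc/stable/tensor_struct.h>

inline torch::stable::Tensor make_full(
    const torch::stable::Tensor& like,
    std::vector<int64_t> sizes,
    double value) {
  // new_empty keeps the dtype/device of `like` (assumed default behavior).
  torch::stable::Tensor out = torch::stable::new_empty(like, sizes);
  torch::stable::fill_(out, value);  // in-place fill
  return out;
}
```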
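
For the `item<T>()` item, a hedged sketch of reading a single element through the untyped pointer. It assumes a one-element CPU tensor, `item_cpu` is a hypothetical name, and it is not necessarily the workaround used in #4078 (a CUDA tensor would first have to be copied to host memory).

```cpp
// Hedged sketch of an item<T>()-style read for a one-element CPU tensor;
// item_cpu is a hypothetical helper, not necessarily the #4078 workaround.
#include <torch/csrc/stable/tensor_struct.h>

template <typename T>
T item_cpu(torch::stable::Tensor& t) {
  // Same reinterpret_cast pattern as the data_ptr workaround above;
  // assumes t holds exactly one element of type T in host memory.
  return *reinterpret_cast<T*>(t.data_ptr());
}
```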