
Commit e0bcd58

DoubleBiao authored and pytorchmergebot committed
[MTIA] Add MTIA dispatch for kernel foreach_maximum(Add D80022242 back) (pytorch#161571)
Summary: Dispatch MTIA to function foreach_tensor_maximum_scalar_kernel_mtia_

Test Plan: CI

Rollback Plan:

Differential Revision: D81086607

Pull Request resolved: pytorch#161571
Approved by: https://github.com/malfet
1 parent 1708120 commit e0bcd58

File tree

1 file changed: +1 −0 lines changed


aten/src/ATen/native/native_functions.yaml

Lines changed: 1 addition & 0 deletions
@@ -10848,6 +10848,7 @@
   dispatch:
     CompositeExplicitAutograd: foreach_tensor_clamp_min_scalar_kernel_slow_
     CUDA: foreach_tensor_clamp_min_scalar_kernel_cuda_
+    MTIA: foreach_tensor_maximum_scalar_kernel_mtia_
   autogen: _foreach_maximum.Scalar_out
 
 # foreach_minimum/maximum dispatches to clamp_max/min
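For context (not part of the commit), a minimal usage sketch, assuming an MTIA-enabled PyTorch build with an available "mtia" device: the in-place foreach op below computes max(tensor, scalar) for each tensor in the list, and with the new dispatch entry such a call would route to foreach_tensor_maximum_scalar_kernel_mtia_ instead of the CompositeExplicitAutograd slow path.

import torch

# Illustrative sketch only: the "mtia" device string and tensor shapes are
# assumptions, not taken from the commit. torch._foreach_maximum_ applies
# max(tensor, scalar) in place to every tensor in the list; on MTIA tensors
# this now dispatches to foreach_tensor_maximum_scalar_kernel_mtia_.
tensors = [torch.randn(8, device="mtia") for _ in range(3)]
torch._foreach_maximum_(tensors, 0.5)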
