Commit c66f533 (parent: e6be3fe)

Update
[ghstack-poisoned]

File tree: 1 file changed (+1, −2 lines)

kernels/portable/cpu/op_argmin.cpp

Lines changed: 1 addition & 2 deletions

```diff
@@ -53,8 +53,7 @@ Tensor& argmin_out(
   // that dimension is contiguous. Is there any particular reason we
   // shouldn't just always use this strategy since we aren't
   // otherwise capable of parallelizing reductions?
-  const auto reduction_size =
-      dim.has_value() ? in.sizes().at(dim.value()) : in.numel();
+  const int64_t reduction_size = get_reduced_dim_product(in, dim);
   const auto grain_size = std::max(
       static_cast<int64_t>(1),
       executorch::extension::internal::GRAIN_SIZE / reduction_size);
```
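For context, the replaced ternary computed the reduced dim's size when a `dim` is given, and `in.numel()` otherwise; the commit swaps this for a shared `get_reduced_dim_product` helper. The sketch below is a hypothetical stand-in (the real helper takes an executorch `Tensor`, not a size vector) showing the behavior the old inline expression had, which the helper is presumably equivalent to for the single-optional-dim case:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <optional>
#include <vector>

// Hypothetical stand-in for the helper used by this commit. With a dim,
// return that dimension's extent; with no dim (full reduction), return
// the total element count, i.e. the product of all sizes. This mirrors
// the ternary the diff removes:
//   dim.has_value() ? in.sizes().at(dim.value()) : in.numel()
int64_t get_reduced_dim_product(
    const std::vector<int64_t>& sizes,
    std::optional<int64_t> dim) {
  if (dim.has_value()) {
    return sizes.at(static_cast<size_t>(dim.value()));
  }
  return std::accumulate(
      sizes.begin(), sizes.end(), int64_t{1}, std::multiplies<int64_t>());
}
```

The surrounding code divides `GRAIN_SIZE` by this value (clamped to at least 1) to pick a parallelization grain: the larger each per-output reduction is, the fewer outputs each worker gets.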
