Merged
11 changes: 4 additions & 7 deletions kernels/portable/cpu/util/dtype_util.cpp
@@ -28,17 +28,14 @@ bool check_tensor_dtype(
case SupportedTensorDtypes::INTB:
return executorch::runtime::tensor_is_integral_type(t, true);
case SupportedTensorDtypes::BOOL_OR_BYTE:
return (
executorch::runtime::tensor_is_type(t, ScalarType::Bool) ||
executorch::runtime::tensor_is_type(t, ScalarType::Byte));
return (executorch::runtime::tensor_is_type(
t, {ScalarType::Bool, ScalarType::Byte}));
case SupportedTensorDtypes::SAME_AS_COMPUTE:
return executorch::runtime::tensor_is_type(t, compute_type);
case SupportedTensorDtypes::SAME_AS_COMMON: {
if (compute_type == ScalarType::Float) {
return (
executorch::runtime::tensor_is_type(t, ScalarType::Float) ||
executorch::runtime::tensor_is_type(t, ScalarType::Half) ||
executorch::runtime::tensor_is_type(t, ScalarType::BFloat16));
return (executorch::runtime::tensor_is_type(
t, {ScalarType::Float, ScalarType::Half, ScalarType::BFloat16}));
} else {
return executorch::runtime::tensor_is_type(t, compute_type);
}
26 changes: 26 additions & 0 deletions runtime/core/exec_aten/util/tensor_util.h
@@ -14,6 +14,8 @@
#include <cmath>
#include <cstddef> // size_t
#include <limits>
#include <sstream>
#include <vector>

#include <executorch/runtime/core/array_ref.h>
#include <executorch/runtime/core/error.h>
@@ -484,6 +486,30 @@ inline bool tensor_is_type(
return true;
}

inline bool tensor_is_type(
executorch::aten::Tensor t,
const std::vector<executorch::aten::ScalarType>& dtypes) {
Contributor

The runtime must not use streams, vectors or any other type that allocates memory dynamically: see https://pytorch.org/executorch/main/portable-cpp-programming.html for some commentary on this.

In this case you may be able to use a Span<ScalarType> since the container size is known at construction time.

Contributor Author

Noted, I will use the Span container.

if (std::find(dtypes.begin(), dtypes.end(), t.scalar_type()) !=
dtypes.end()) {
return true;
}

std::stringstream dtype_ss;
Contributor

It's not safe to assemble a dynamic string here. If you wanted to print all of the types, printing individual log lines would be the safest and easiest. If you wanted to assemble a string, you could create a fixed-size buffer on the stack and strncat strings onto it, but that can be error-prone.

Contributor Author

Yes, I was thinking about using a C-style buffer for that; I will update.

Contributor Author

I decided to go with a simple approach that provides explicit overloads without any loops.

Using strncat would trigger a GCC warning (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83404), which I think is not desired.

@dbort, hope you find that version suitable.

Contributor Author

@dbort, surfacing the thread to the top.

for (size_t i = 0; i < dtypes.size(); i++) {
if (i != 0) {
dtype_ss << ", ";
}
dtype_ss << torch::executor::toString(dtypes[i]);
}

ET_LOG_MSG_AND_RETURN_IF_FALSE(
false,
"Expected to find one of %s types, but tensor has type %s",
dtype_ss.str().c_str(),
torch::executor::toString(t.scalar_type()));
return false;
}

inline bool tensor_is_integral_type(
executorch::aten::Tensor t,
bool includeBool = false) {