Use safe_numerics util from PyTorch #11537
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11537
Note: Links to docs will display an error until the docs builds have been completed.
❌ 17 New Failures as of commit 73aacee with merge base 18e9149. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "release notes: runtime"
namespace executor {

#ifdef __clang__
#pragma clang diagnostic ignore "-Wvla-extension"
Why's this needed?
I ran into this error when compiling (`./install_executorch.sh`) with Clang on macOS.
@aaron-ang can you paste the error?
@jathu My bad. I hit this error not when installing ExecuTorch, but when running the tests locally with `sh test/build_size_test.sh`, as mentioned in https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing.
executorch/kernels/portable/cpu/util/normalization_ops_util.cpp:110:37: error: variable length arrays in C++ are a Clang extension [-Werror,-Wvla-cxx-extension]
110 | executorch::aten::SizesType shape[ndim];
| ^~~~
executorch/kernels/portable/cpu/util/normalization_ops_util.cpp:110:37: note: read of non-const variable 'ndim' is not allowed in a constant expression
executorch/kernels/portable/cpu/util/normalization_ops_util.cpp:88:10: note: declared here
88 | size_t ndim = normalized_shape.size();
| ^
1 error generated.
gmake[2]: *** [kernels/portable/cpu/util/CMakeFiles/kernels_util_all_deps.dir/build.make:219: kernels/portable/cpu/util/CMakeFiles/kernels_util_all_deps.dir/normalization_ops_util.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
gmake[1]: *** [CMakeFiles/Makefile2:750: kernels/portable/cpu/util/CMakeFiles/kernels_util_all_deps.dir/all] Error 2
gmake: *** [Makefile:136: all] Error 2

My syntax is also wrong: it should be `ignored` instead of `ignore`, but surprisingly it still compiles fine.
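For reference, here is a minimal sketch of what the corrected guard could look like (the surrounding function, variable names, and the wrapped VLA are placeholders for illustration, not the actual ExecuTorch code):

```cpp
#include <cstddef>

// Illustrative only: shows the push/ignored/pop pattern for locally silencing
// Clang's VLA-extension warning; `example` and `shape` are hypothetical names.
void example(std::size_t ndim) {
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wvla-extension"
#endif
  // Variable-length arrays in C++ are a Clang/GCC extension, which is what
  // -Werror,-Wvla-cxx-extension promotes to a hard error.
  int shape[ndim];
  (void)shape;
#ifdef __clang__
#pragma clang diagnostic pop
#endif
}
```

The push/pop pair keeps the suppression scoped to the offending declaration instead of changing diagnostics for the rest of the translation unit.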
Oh no
We track that anything under c10 mirrors PyTorch exactly (this is a temporary arrangement, IIUC, until we can introduce a hard dependency on pytorch/pytorch directly). If you want, you could try submitting this to pytorch/pytorch first and then mirror the file from there, or you can just move it to any other util spot in ET that looks reasonable to dodge this issue, and we will deal with upstreaming it later. Moving it will probably also help with the c10 namespace issues you were having.
This pull request has been merged in 3dda80e.
Hi @huydhn, why is this issue closed? I need to update the code from PyTorch upstream; it is different from this branch.
What! This makes no sense. I have no intention of closing this PR, as I didn't even know about this before you pointed it out. Something doesn't feel right.
While I'm checking, maybe it's easier to recreate this in a different PR |
huydhn/pytorch@3dda80e claims to fix this issue, therefore GitHub closed it (issues and pulls are the same entities in GitHub parlance).

Fixes #11370
Summary
We import `safe_numerics.h` from PyTorch and extend `mul_overflows` for `size_t`. Then, we apply it in `calculate_nbytes` to perform faster overflow checks.
Test plan
Use existing tests since we are doing a drop-in replacement for overflow checks.
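As a rough sketch of the idea (assumed shapes only; the actual `mul_overflows` overload and `calculate_nbytes` signature in the PR may differ):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of an overflow-checked multiply in the spirit of safe_numerics.h.
// Returns true if a * b overflows size_t; otherwise writes the product to *out.
inline bool mul_overflows(std::size_t a, std::size_t b, std::size_t* out) {
#if defined(__GNUC__) || defined(__clang__)
  return __builtin_mul_overflow(a, b, out);
#else
  // Portable fallback: a * b overflows iff b != 0 and a > SIZE_MAX / b.
  if (b != 0 && a > SIZE_MAX / b) {
    return true;
  }
  *out = a * b;
  return false;
#endif
}

// Hypothetical calculate_nbytes-style helper: multiplies every dimension and
// the element size, reporting overflow instead of silently wrapping.
bool calculate_nbytes(
    const std::size_t* sizes,
    std::size_t ndim,
    std::size_t elem_size,
    std::size_t* nbytes) {
  std::size_t count = 1;
  for (std::size_t i = 0; i < ndim; ++i) {
    if (mul_overflows(count, sizes[i], &count)) {
      return false; // overflow while accumulating the element count
    }
  }
  return !mul_overflows(count, elem_size, nbytes);
}
```

On GCC/Clang the builtin compiles down to a hardware overflow check rather than a divide-and-compare per dimension, which is presumably where the faster overflow checks mentioned above come from.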