Check for null outputs in method #12559
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12559.
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit d2635e5 with merge base 1333b36.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D78438716

Summary: When non-memory-planned outputs are unset, method execution crashes when writing to the output tensor. This manifests as a native crash with a deep stack trace that is both unrecoverable and unclear to the user. This PR adds a check in method.execute to detect null output tensor data pointers and, if one is found, log a message and return an error code.
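A minimal sketch of the failure mode and the check the summary describes. The types and names here (`FakeMethod`, `output_data_ptrs`, the `Error` enum) are illustrative stand-ins, not ExecuTorch's real `Method` API:

```cpp
#include <cassert>
#include <cstdio>
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for Method internals; NOT ExecuTorch's actual API.
enum class Error { Ok, InvalidState };

struct FakeMethod {
  std::vector<void*> output_data_ptrs;  // one data pointer per output tensor

  Error execute() {
    // Validate that every output has backing memory before running kernels,
    // so an unset non-memory-planned output fails with a clear error code
    // instead of a native crash deep inside a kernel.
    for (std::size_t i = 0; i < output_data_ptrs.size(); ++i) {
      if (output_data_ptrs[i] == nullptr) {
        std::fprintf(stderr, "Output %zu has no data pointer set.\n", i);
        return Error::InvalidState;
      }
    }
    // ... run the actual program here ...
    return Error::Ok;
  }
};
```

With this check, a caller that forgot to set a non-memory-planned output gets a logged message and an error return from `execute()` rather than a segfault.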
Force-pushed ed3a36f to 87ec05d.
Force-pushed 87ec05d to 2d5303b.
Force-pushed 2d5303b to d2635e5.
```cpp
// Validate that outputs are set.
for (size_t i = 0; i < method_meta().num_outputs(); i++) {
  auto& output = mutable_value(get_output_index(i));
```
nit: don't need a mutable reference here
```cpp
ET_LOG(Debug, "Executing method: %s.", method_meta().name());

// Validate that outputs are set.
for (size_t i = 0; i < method_meta().num_outputs(); i++) {
```
Can we just cache that we've checked this, so we don't do this every time?
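One way the caching the reviewer suggests could look. This is a sketch; the member name `outputs_validated_` and the surrounding struct are hypothetical, not ExecuTorch's actual fields:

```cpp
#include <cassert>
#include <vector>

// Sketch of caching a successful output validation so the per-output loop
// only runs once per Method. Hypothetical names, not ExecuTorch's real API.
struct CachedOutputCheck {
  std::vector<void*> output_data_ptrs;
  bool outputs_validated_ = false;

  bool validate_outputs() {
    if (outputs_validated_) {
      return true;  // already checked on a previous execute(); skip the loop
    }
    for (void* p : output_data_ptrs) {
      if (p == nullptr) {
        return false;  // leave the flag unset so we re-check next time
      }
    }
    outputs_validated_ = true;  // cache the successful validation
    return true;
  }
};
```

One caveat with this design: anything that replaces an output data pointer later (e.g. a `set_output_data_ptr`-style call) would need to clear the cached flag, or a newly nulled output would slip past the check.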
JacobSzwejbka left a comment:

just the couple nits
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as