
Conversation

@GregoryComer
Member

Summary: When non-memory-planned outputs are unset, method execution will crash when writing to the output tensor. This manifests as a native crash with a deep stack trace that is both unrecoverable and unclear to the user. This PR adds a check in method.execute to detect null output tensor data pointers, and, if found, log a message and return an error code.

Differential Revision: D78438716
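The guard described in the summary can be sketched in isolation. This is an illustrative reconstruction, not the actual ExecuTorch code: `Tensor`, `Error`, and `validate_outputs` below are hypothetical stand-ins for the runtime's EValue/Tensor types and error handling inside `Method::execute`.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative stand-ins for the runtime types; the real check lives in
// ExecuTorch's Method::execute and operates on the method's output EValues.
struct Tensor {
  void* data = nullptr;
};

enum class Error { Ok, InvalidState };

// Sketch of the guard: before running the program, walk the outputs and
// fail fast with a log message if any output tensor has no backing data
// pointer, instead of crashing deep inside a kernel when it is written.
Error validate_outputs(const std::vector<Tensor>& outputs) {
  for (size_t i = 0; i < outputs.size(); i++) {
    if (outputs[i].data == nullptr) {
      std::fprintf(stderr, "Output %zu has a null data pointer.\n", i);
      return Error::InvalidState;
    }
  }
  return Error::Ok;
}
```

The point of the check is to turn an unrecoverable native crash into a recoverable error code that the caller can act on.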

@pytorch-bot

pytorch-bot bot commented Jul 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12559

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit d2635e5 with merge base 1333b36:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jul 16, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D78438716

GregoryComer added a commit to GregoryComer/executorch that referenced this pull request Jul 16, 2025
@GregoryComer GregoryComer added the release notes: none Do not include this in the release notes label Jul 16, 2025

// Validate that outputs are set.
for (size_t i = 0; i < method_meta().num_outputs(); i++) {
auto& output = mutable_value(get_output_index(i));

nit: don't need a mutable reference here

ET_LOG(Debug, "Executing method: %s.", method_meta().name());

// Validate that outputs are set.
for (size_t i = 0; i < method_meta().num_outputs(); i++) {

Can we just cache that we've checked this so we don't do this every time?
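The caching the reviewer asks about might look like the following sketch. `MethodSketch`, `outputs_valid_`, and `output_data` are hypothetical names for illustration, not actual ExecuTorch fields.

```cpp
#include <cstddef>

// Hypothetical sketch of caching the output-pointer check so it does not
// run on every execute() call once it has passed.
struct MethodSketch {
  void** output_data = nullptr;  // one backing pointer per output
  size_t num_outputs = 0;
  bool outputs_valid_ = false;   // cached result of a successful check

  // Returns true if every output has a data pointer. After the first full
  // success the result is cached so later calls skip the scan.
  bool validate_outputs() {
    if (outputs_valid_) {
      return true;
    }
    for (size_t i = 0; i < num_outputs; i++) {
      if (output_data[i] == nullptr) {
        // Leave the cache unset so the check runs again next time.
        return false;
      }
    }
    outputs_valid_ = true;
    return true;
  }
};
```

One caveat with this approach: if a caller later resets an output pointer to null, the cached flag goes stale, so any API that changes output pointers would also need to clear it.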

@JacobSzwejbka JacobSzwejbka left a comment


just the couple nits

@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the stale PRs inactive for over 60 days label Sep 15, 2025


Labels

CLA Signed - This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
fb-exported
release notes: none - Do not include this in the release notes
stale - PRs inactive for over 60 days
