
Conversation

@oscarandersson8218
Collaborator

Tensors of rank < 4 always have a contiguous dim_order (e.g. (0, 1, 2)), while 4D tensors may have (0, 2, 3, 1) as dim_order. This patch inserts transposes between 4D tensors and lower-ranked tensors; more precisely, it targets squeeze and unsqueeze ops. The transposes act as bridges between the different dim_orders, ensuring that activations are passed forward in the correct format.
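As an illustrative sketch (not the pass's actual code), the bridging idea can be shown with NumPy: a 4D activation stored with dim_order (0, 2, 3, 1) (i.e. channels-last/NHWC) gets a transpose back to contiguous order before a rank-reducing op such as squeeze, so the lower-ranked consumer sees a contiguous layout. Shapes and names below are hypothetical.

```python
import numpy as np

# Hypothetical 4D activation, logical NCHW shape (N=1, C=2, H=3, W=4).
x_nchw = np.arange(24).reshape(1, 2, 3, 4)

# 4D tensors use dim_order (0, 2, 3, 1): physically stored as NHWC.
x_nhwc = x_nchw.transpose(0, 2, 3, 1)

# Bridge transpose inserted before the squeeze: NHWC back to NCHW,
# so the rank-3 result has the contiguous dim_order (0, 1, 2).
bridge = x_nhwc.transpose(0, 3, 1, 2)
y = bridge.squeeze(0)  # rank-3 tensor in contiguous order

# The bridged result matches squeezing the original contiguous tensor.
assert np.array_equal(y, x_nchw.squeeze(0))
```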

The Vela version is bumped to one that includes a transpose implementation; since transpose is now supported, the xfails in test_expand and test_repeat are disabled.

@pytorch-bot

pytorch-bot bot commented Oct 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6045

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 88ee974 with merge base b1c94ab (image):

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 9, 2024
@oscarandersson8218 oscarandersson8218 added partner: arm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Arm ciflow/trunk and removed CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. labels Oct 9, 2024
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 9, 2024
@oscarandersson8218 oscarandersson8218 force-pushed the transpose_insertion branch 2 times, most recently from 27be916 to 927f143 Compare October 16, 2024 11:03
@facebook-github-bot
Contributor

@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Tensors of rank < 4 always have a contiguous dim_order, while 4D tensors
may have (0, 2, 3, 1) as dim_order. This patch inserts transposes
between 4D tensors and other tensors; more precisely, it targets
squeeze and unsqueeze ops. The transposes act as bridges between
different dim_orders, ensuring that activations are passed forward in
the correct format.

The Vela version is bumped to one that includes a transpose
implementation; since transpose is now supported, the xfails in
test_expand and test_repeat are disabled.

Signed-off-by: Oscar Andersson <[email protected]>
Change-Id: If56344b2153441f113e4e87e95d41acad5df9220
@facebook-github-bot
Contributor

@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot facebook-github-bot merged commit c7a3c3f into pytorch:main Oct 21, 2024
108 of 109 checks passed