[ET-VK] Change weight packing in embedding #7063
Conversation
The existing weight tensor for aten.embedding is created with `tensor_like` from the output tensor, which defaults to channel packing. However, the weight is actually a 2D tensor of shape `(num_embedding, dim_of_embedding)`, so width or height packing is more space-efficient. This diff changes the implementation to use height packing.

Differential Revision: [D66421366](https://our.internmc.facebook.com/intern/diff/D66421366/)
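To make the space argument concrete, here is a rough back-of-the-envelope sketch. This is plain Python for illustration, not the ET-VK API; the layout convention below (an RGBA texture where the packed dimension holds 4 elements per texel and is padded up to a multiple of 4) is an assumption for the sake of the arithmetic. Because the embedding weight has only one channel, channel packing leaves three of the four slots in every texel empty, while height packing fills each texel with four consecutive rows:

```python
import math

# Illustrative sketch only -- not the actual ET-VK implementation. Assumes an
# RGBA texture layout where the packed dimension is grouped 4 elements per
# texel and padded up to a multiple of 4.
def texel_count(num_embedding, dim_of_embedding, packed_dim):
    # View the 2D weight as (C, H, W) = (1, num_embedding, dim_of_embedding).
    c, h, w = 1, num_embedding, dim_of_embedding
    if packed_dim == "channel":
        return w * h * math.ceil(c / 4)  # C=1 still occupies a full texel
    if packed_dim == "height":
        return w * math.ceil(h / 4) * c
    if packed_dim == "width":
        return math.ceil(w / 4) * h * c
    raise ValueError(packed_dim)

# Example: a (1000, 64) weight.
# Channel packed: 64 * 1000 * 1 = 64000 texels, each using 1 of 4 slots (~75% wasted).
# Height packed:  64 * 250  * 1 = 16000 texels, every slot occupied.
assert texel_count(1000, 64, "channel") == 64000
assert texel_count(1000, 64, "height") == 16000
```

In this toy calculation, width packing would give the same 4x saving; the diff opts for height packing.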
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7063

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f958cab with merge base fbcc9a1.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D66421366
Pull Request resolved: #7063
ghstack-source-id: 255439082
Differential Revision: [D66421366](https://our.internmc.facebook.com/intern/diff/D66421366/)
Co-authored-by: Justin Yip <[email protected]>
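For readers new to packed layouts, here is a hedged sketch of what the switch to height packing implies for the lookup itself. The names and coordinate convention are illustrative assumptions, not the actual ET-VK shader: with four consecutive rows sharing a texel along the height axis, weight element `(n, d)` lands at texel `(x = d, y = n // 4)` in component `n % 4`, so an embedding lookup for token `n` reads one texel per output element and selects a fixed component:

```python
# Illustrative assumption of height-packed addressing -- not the actual ET-VK
# shader code. Four consecutive rows (embeddings) share one texel along y.
def locate(n, d):
    """Return (texel_x, texel_y, component) for weight element (n, d)."""
    texel_x = d        # width axis is unpacked: one texel column per dim index
    texel_y = n // 4   # rows n, n+1, n+2, n+3 share the texel row n // 4
    component = n % 4  # which of the texel's 4 components holds row n
    return texel_x, texel_y, component

# Looking up embedding row 10 at dimension 3: texel (3, 2), component 2.
assert locate(10, 3) == (3, 2, 2)
```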