[ET-VK] Fix metadata UBO VVL warnings #7484
Conversation
…ructor

## Context

I discovered this bug when trying to execute the `vulkan_compute_api_test` binary on Windows. Almost all the tests were failing, with compute shaders producing incorrect results. After bisecting, it turns out the culprit is #7015. That diff introduced an alternative templated constructor for `ParamsBuffer` which initializes an empty UBO of a specified size instead of wrapping a pre-existing object. The issue is that the two constructors are ambiguous: both are templated and both accept a single argument. As a result, the original constructor was selected at call sites that intended to call the new constructor, so the UBO was created with an incorrect size and the tensor's metadata was passed incorrectly into the compute shader.

To fix this, I added a dummy argument to the new constructor for disambiguation. I also made it non-templated, since there is no reason for it to be templated.

Differential Revision: [D67770791](https://our.internmc.facebook.com/intern/diff/D67770791/)

ghstack-source-id: 260031108
Pull Request resolved: #7478
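For illustration, here is a minimal C++ sketch of the ambiguity and a tag-style disambiguation. `ParamsBufferSketch` and `AllocateTag` are made-up names for this example, not the actual ExecuTorch `ParamsBuffer` API.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch of the failure mode described above; not the real code.
class ParamsBufferSketch {
 public:
  // Original constructor: wraps a pre-existing struct and copies its bytes.
  template <typename T>
  explicit ParamsBufferSketch(const T& params) : data_(sizeof(T)) {
    std::memcpy(data_.data(), &params, sizeof(T));
  }

  // New constructor: allocate an empty UBO of a given size. Without a
  // disambiguating argument, a call like ParamsBufferSketch(64) binds the
  // templated constructor with T = int and produces a sizeof(int)-byte
  // buffer instead of a 64-byte one.
  struct AllocateTag {};
  ParamsBufferSketch(AllocateTag, size_t nbytes) : data_(nbytes) {}

  size_t nbytes() const { return data_.size(); }

 private:
  std::vector<char> data_;
};

int main() {
  ParamsBufferSketch wrapped(64);  // wraps the int 64: 4-byte buffer (the bug)
  ParamsBufferSketch sized(ParamsBufferSketch::AllocateTag{}, 64);  // 64 bytes
  return wrapped.nbytes() == sized.nbytes() ? 1 : 0;
}
```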
## Context

#7223 added the ability to use push constants in shaders. However, the diff missed specifying that the compute pipeline layout needs to include a push constant range upon creation. The Vulkan validation layers warn about this, and on certain GPUs, such as the integrated Intel GPU on my Windows laptop, compute shaders produce incorrect output.

This diff changes the compute pipeline layout to be created with a push constant block when necessary.

## Solution

Change the key of the pipeline layout cache to include an additional push constant size field. The push constant size is used to create the pipeline layout with a push constant block of the specified size.

Differential Revision: [D67770793](https://our.internmc.facebook.com/intern/diff/D67770793/)

ghstack-source-id: 260031109
Pull Request resolved: #7479
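As a rough illustration of what the layout creation needs to carry, here is a hedged Vulkan sketch; `create_pipeline_layout` and its parameters are hypothetical names, not the ExecuTorch cache code.

```cpp
#include <vulkan/vulkan.h>

// Sketch: create a compute pipeline layout, declaring a push constant block
// when the cache key carries a nonzero push constant size.
VkPipelineLayout create_pipeline_layout(
    VkDevice device,
    VkDescriptorSetLayout set_layout,
    uint32_t push_constant_size) {
  VkPushConstantRange range{};
  range.stageFlags = VK_SHADER_STAGE_COMPUTE_BIT;
  range.offset = 0;
  range.size = push_constant_size;

  VkPipelineLayoutCreateInfo info{};
  info.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
  info.setLayoutCount = 1;
  info.pSetLayouts = &set_layout;
  // Only declare the push constant block if the shader actually uses one;
  // pushing constants against a layout with no matching range is what the
  // validation layers warn about.
  info.pushConstantRangeCount = push_constant_size > 0 ? 1u : 0u;
  info.pPushConstantRanges = push_constant_size > 0 ? &range : nullptr;

  VkPipelineLayout layout = VK_NULL_HANDLE;
  vkCreatePipelineLayout(device, &info, nullptr, &layout);
  return layout;
}
```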
## Context

Recently, #7015 changed tensor metadata (e.g. sizes, strides) to be stored in a single UBO instead of in separate UBO objects. This helps with memory savings, presumably due to reduced fragmentation of memory allocations. However, once the change landed, I noticed two new warnings produced by the Vulkan Validation Layers. The first complains that the offset of a UBO descriptor is not a multiple of the `minUniformBufferOffsetAlignment` field reported by the physical device properties. The second complains that the range of a UBO descriptor exceeds the offset + range of the underlying UBO object.

## Solution

To address the first warning, instead of using `sizeof(utils::ivec4)` to determine the offset per metadata field, check the `minUniformBufferOffsetAlignment` field reported by the device and use that instead. The second warning arises because the logic in the constructor of `BufferBindInfo` had a mistake; instead of using the range of the underlying UBO object, it should use that range minus the user-specified offset.

Differential Revision: [D67770792](https://our.internmc.facebook.com/intern/diff/D67770792/)

ghstack-source-id: 260031110
Pull Request resolved: #7480
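A minimal sketch of the two fixes, assuming made-up helper names (`aligned_field_stride`, `bind_range`) rather than the actual ExecuTorch code:

```cpp
#include <cstdint>

// Fix 1: round the per-field stride up to minUniformBufferOffsetAlignment so
// each metadata field's descriptor offset satisfies the device requirement,
// instead of assuming sizeof(utils::ivec4) (16 bytes) is always sufficient.
inline uint32_t aligned_field_stride(
    uint32_t field_size, uint32_t min_ubo_offset_alignment) {
  return (field_size + min_ubo_offset_alignment - 1) /
      min_ubo_offset_alignment * min_ubo_offset_alignment;
}

// Fix 2: when binding a sub-range of the UBO starting at `offset`, shrink the
// bound range by that offset so offset + range stays within the UBO.
inline uint32_t bind_range(uint32_t buffer_range, uint32_t offset) {
  return buffer_range - offset;
}

// Example: with field_size = 16 and minUniformBufferOffsetAlignment = 64, the
// stride becomes 64; binding at offset 64 into a 256-byte UBO exposes a
// 192-byte range.
```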
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7484

Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #7480
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/161/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/161/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/160/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/161/orig
@diff-train-skip-merge