Update on "[ET-VK] Fully De-vectorise conv2d pw shader to improve perf."
This improves the performance of the conv2d pointwise (pw) shader by fully de-vectorizing it.
The optimization replaces the `ivec3 pos` array with a plain `int pos` array for storing position values: the `x` and `y` coordinates are now kept in separate elements of the array instead of being packed together in an `ivec3`. This change allows for more efficient memory access and computation.
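As a rough sketch of what this layout change looks like (the tile size, loop structure, and variable names below are assumptions for illustration, not the shader's actual code):

```glsl
#version 450
layout(local_size_x = 64) in;

// Hypothetical tile width; the real shader's constant may differ.
#define TILE_SIZE 4

void main() {
    int out_x = int(gl_GlobalInvocationID.x) * TILE_SIZE;
    int out_y = int(gl_GlobalInvocationID.y);

    // Before (vectorized): an array of ivec3s held the positions:
    //   ivec3 pos[TILE_SIZE];
    //   pos[i] = ivec3(out_x + i, out_y, 0);

    // After (de-vectorized): x and y occupy separate scalar slots in
    // a flat int array.
    int pos[TILE_SIZE * 2];
    for (int i = 0; i < TILE_SIZE; ++i) {
        pos[i * 2] = out_x + i;  // x coordinate
        pos[i * 2 + 1] = out_y;  // y coordinate
    }
    // ... the convolution inner loop would read pos[] here ...
}
```

Scalar arrays like this can map onto individual registers more flexibly than vector arrays, which is a common motivation for de-vectorizing hot shader code.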
Differential Revision: [D75335802](https://our.internmc.facebook.com/intern/diff/D75335802/)
[ghstack-poisoned]
- Transposes are needed for operators transforming the input to a different rank, as 4D tensors are assumed to be in NHWC format, whereas all others are in NCHW format.
+ Transposes are needed for operators transforming the input to a different rank, as 4D and 5D tensors are assumed to be in (N)NHWC format, whereas all others are in (N)NCHW format.
  This is relevant for the following cases:
- - view: <4D -> 4D
- - view: 4D -> <4D
- Additionally, a 4D->4D view operation acting on the channel dimension currently needs to be performed in NCHW format, leading to one extra input and output transpose for this case.
+ - view: <4D -> >=4D
+ - view: >=4D -> <4D
+ Additionally, a 4D/5D->4D/5D view operation acting on the channel dimension currently needs to be performed in (N)NCHW format, leading to one extra input and output transpose for this case.

  Transposes can be avoided for shapes where there is no difference in actual memory, e.g. for
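To make the memory-format argument in that diff concrete, here is a minimal sketch (the helper names are hypothetical, not backend code) of the flat buffer offsets implied by the two dim orders; whenever the two expressions coincide for every valid index, the transpose can be skipped:

```glsl
#version 450
layout(local_size_x = 1) in;

// Hypothetical helpers illustrating the two dim orders.
int nchwOffset(int n, int c, int h, int w, int C, int H, int W) {
    return ((n * C + c) * H + h) * W + w;  // channels-first
}

int nhwcOffset(int n, int c, int h, int w, int C, int H, int W) {
    return ((n * H + h) * W + w) * C + c;  // channels-last
}

void main() {
    // Example: with C == 1 both expressions reduce to (n*H + h)*W + w,
    // so the NCHW and NHWC buffers are byte-identical and no transpose
    // is needed for such shapes.
}
```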