Update on "[ET-VK] Tuning local workgroup size calculation for conv2d pw to improve performance."
This diff adjusts the local workgroup size (`local_wg_size`) based on the batch count (stored in `wg_size[1]`) to improve conv2d pw performance.
* If `wg_size[1]` is a multiple of 8, `local_wg_size_y` is set to 8.
* If `wg_size[1]` is a multiple of 4, `local_wg_size_y` is set to 4.
* If `wg_size[1]` is a multiple of 2, `local_wg_size_y` is set to 2.
* Otherwise, `local_wg_size_y` defaults to 1.
The local workgroup size is then calculated as `{64 / local_wg_size_y, local_wg_size_y, 1}`, so the total of 64 invocations per workgroup is preserved.
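For reference, a minimal C++ sketch of the selection logic described above (the function name and the use of `std::array` are illustrative only, not the actual ET-VK types or signature):

```cpp
#include <array>
#include <cstdint>

// Sketch: pick local_wg_size_y as the largest of {8, 4, 2} that evenly
// divides the batch count stored in wg_size[1], falling back to 1, then
// keep the product of the local workgroup size at 64 invocations.
std::array<uint32_t, 3> pick_local_wg_size(
    const std::array<uint32_t, 3>& wg_size) {
  uint32_t local_wg_size_y = 1;
  if (wg_size[1] % 8 == 0) {
    local_wg_size_y = 8;
  } else if (wg_size[1] % 4 == 0) {
    local_wg_size_y = 4;
  } else if (wg_size[1] % 2 == 0) {
    local_wg_size_y = 2;
  }
  // (64 / local_wg_size_y) * local_wg_size_y * 1 == 64 for all branches above.
  return {64u / local_wg_size_y, local_wg_size_y, 1u};
}
```

Since `local_wg_size_y` is only increased when it divides the batch count evenly, the larger y dimension presumably does not waste invocations on out-of-range batch indices, while the workgroup stays at 64 threads.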
Differential Revision: [D75420517](https://our.internmc.facebook.com/intern/diff/D75420517/)
[ghstack-poisoned]
-Transposes are needed for operators transforming the input to a different rank, as 4D tensors are assumed to be in NHWC format, whereas all others are in NCHW format.
+Transposes are needed for operators transforming the input to a different rank, as 4D and 5D tensors are assumed to be in (N)NHWC format, whereas all others are in (N)NCHW format.
 This is relevant for the following cases:
-- view: <4D -> 4D
-- view: 4D -> <4D
-Additionally, a 4D->4D view operation acting on the channel dimension currently needs to be performed in NCHW format, leading to one extra input and output transpose for this case.
+- view: <4D -> >=4D
+- view: >=4D -> <4D
+Additionally, a 4D/5D->4D/5D view operation acting on the channel dimension currently needs to be performed in (N)NCHW format, leading to one extra input and output transpose for this case.

 Transposes can be avoided for shapes where there is no difference in actual memory, e.g. for