Update on "[ET-VK] Tuning conv 2d pw op tile size to improve perf."
This diff tunes the tile size of the conv2d pointwise (pw) op to improve performance: in the `conv2d_pw.yaml` files, `TILE_SIZE_X` is changed from 2 to 1 and `TILE_SIZE_Y` from 2 to 4, and the image extents calculation in the `Convolution.cpp` files is updated to match the new tile shape.
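For context, here is a minimal sketch of how a per-invocation output tile determines the dispatch extents, assuming a simple divide-up scheme; the `div_up` helper, the variable names, and the exact formula are illustrative assumptions and not taken from `Convolution.cpp`.

```cpp
#include <cstdint>
#include <iostream>

// Illustrative values mirroring conv2d_pw.yaml after this diff: each shader
// invocation now computes a 1x4 tile of output positions instead of 2x2.
constexpr uint32_t TILE_SIZE_X = 1; // was 2
constexpr uint32_t TILE_SIZE_Y = 4; // was 2

// Integer ceiling division: how many tiles are needed to cover n elements.
constexpr uint32_t div_up(uint32_t n, uint32_t d) {
  return (n + d - 1) / d;
}

int main() {
  // Hypothetical output sizes for a pointwise conv: width, height, and
  // channel groups of 4 (texel-packed channels along Z).
  const uint32_t out_w = 64, out_h = 64, out_c4 = 16;

  // One invocation per output tile, so the X/Y extents shrink by the tile
  // size; the Z extent is unaffected by this diff.
  const uint32_t ext_x = div_up(out_w, TILE_SIZE_X);
  const uint32_t ext_y = div_up(out_h, TILE_SIZE_Y);
  const uint32_t ext_z = out_c4;

  std::cout << ext_x << " x " << ext_y << " x " << ext_z << "\n"; // 64 x 16 x 16
  return 0;
}
```

With a 1x4 tile, the same output volume is covered by fewer invocations along Y and one per pixel along X, which is the kind of extents change the `Convolution.cpp` edit accounts for.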
Differential Revision: [D75317820](https://our.internmc.facebook.com/intern/diff/D75317820/)
[ghstack-poisoned]
-Transposes are needed for operators transforming the input to a different rank, as 4D tensors are assumed to be in NHWC format, whereas all others are in NCHW format.
+Transposes are needed for operators transforming the input to a different rank, as 4D and 5D tensors are assumed to be in (N)NHWC format, whereas all others are in (N)NCHW format.
 This is relevant for the following cases:
-- view: <4D -> 4D
-- view: 4D -> <4D
-Additionally, a 4D->4D view operation acting on the channel dimension currently needs to be performed in NCHW format, leading to one extra input and output transpose for this case.
+- view: <4D -> >=4D
+- view: >=4D -> <4D
+Additionally, a 4D/5D->4D/5D view operation acting on the channel dimension currently needs to be performed in (N)NCHW format, leading to one extra input and output transpose for this case.

 Transposes can be avoided for shapes where there is no difference in actual memory, e.g for
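To make the rank-crossing rule in the diff above concrete, here is a minimal C++ sketch, assuming the rule is exactly "rank >= 4 implies (N)NHWC, lower rank implies (N)NCHW"; the predicate name `view_needs_transpose` and the constant are hypothetical, not part of the codebase.

```cpp
#include <cassert>

// Rank threshold at which tensors are assumed to be channels-last:
// 4D and 5D tensors use (N)NHWC, lower ranks stay (N)NCHW.
constexpr int kChannelsLastMinRank = 4;

// Hypothetical predicate: a view needs bracketing transposes exactly when
// it crosses the (N)NHWC / (N)NCHW boundary described above. The separate
// rank-preserving channel-dimension case noted in the diff is out of scope
// for this predicate.
bool view_needs_transpose(int in_rank, int out_rank) {
  const bool in_channels_last = in_rank >= kChannelsLastMinRank;
  const bool out_channels_last = out_rank >= kChannelsLastMinRank;
  return in_channels_last != out_channels_last;
}

int main() {
  assert(view_needs_transpose(3, 4));   // view: <4D -> >=4D
  assert(view_needs_transpose(5, 2));   // view: >=4D -> <4D
  assert(!view_needs_transpose(3, 3));  // both sides <4D: dim order matches
  return 0;
}
```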