[mlir][linalg] Update pack and unpack documentation #143903
@@ -93,17 +93,21 @@ def Linalg_PackOp : Linalg_RelayoutOp<"pack", [
tensor of rank `n + k` with a tiled and packed layout (maybe with padding)
and optionally transposes the tiled source tensor dimensions.

`inner_dims_pos` (mandatory) specifies `k` source tensor dimensions that are
being tiled, where `0 < k <= n`. The order of the dimensions matters:
- The tiled dimensions (of size `inner_tiles`) are added to the end of the result
  tensor in the order in which they appear in `inner_dims_pos`.
- `inner_dims_pos[i]` specifies the source tensor dimension tiled by
  `inner_tiles[i]`.

`inner_tiles` (mandatory) specifies `k` tile sizes. These tile sizes
correspond to the least significant ("inner") result tensor dimension sizes,
in the same order. Tile sizes can be static or dynamic.

`inner_dims_pos` (mandatory) specifies `k` source tensor dimensions that are
being tiled, where `0 <= k <= n`.
Contributor
I've just realised that this was incorrect. Please double-check :)
Suggested change
Contributor
Why is it incorrect? I think you can pack all the dimensions? E.g.,

llvm-project/mlir/test/Dialect/Linalg/named-ops.mlir
Lines 2697 to 2701 in 0a2b6f6
Contributor
Yes, since there are `n` dimensions, the indices should go from `0` to `n - 1`.

Hopefully not having an off-by-one moment here 😊
Contributor
If we read from the start, I think `0 <= k <= n` is correct. You are right that the indices should go from `0` to `n - 1`, i.e. all the values in `inner_dims_pos` are within `[0, n)`.
Contributor
Author
I've incorporated the last statement from @hanhanW into the docs.
- `inner_dims_pos[i]` specifies the source tensor dimension tiled by
  `inner_tiles[i]` where `0 <= i < k`. All the values in `inner_dims_pos` are
  within [0, n).
- The tiled dimensions (of size `inner_tiles`) are added to the end of the
  result tensor in the order in which they appear, i.e.
  `shape(result)[rank(result) - k + i] = inner_tiles[i]` for `0 <= i < k`.
- The following relationship for the tiled dimensions holds:
  `shape(result)[inner_dims_pos[i]] = shape(source)[inner_dims_pos[i]] / inner_tiles[i]`.

Example: If `inner_tiles = [16, 32]`, the result tensor has a shape of
`...x16x32`. If `inner_dims_pos = [0, 1]`, the 0th source dimension is tiled
by 16 and the 1st source dimension is tiled by 32. Other source dimensions
@@ -116,7 +120,19 @@ def Linalg_PackOp : Linalg_RelayoutOp<"pack", [
%0 = linalg.pack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8 x 8x32 xf32>
//                                            \  /   \  /
//                                       outer dims inner dims
//                                  Outer Dims: 16x8  Inner Dims: 8x32

// CHW to CHWhw
%0 = linalg.pack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x20x24xf32> -> tensor<3x10x6 x 4x2 xf32>
//                                       \    /    \ /
//                                  Outer Dims: 3x10x6  Inner Dims: 4x2

// HCW to HCWhw
%0 = linalg.pack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<18x3x32xf32> -> tensor<9x3x8 x 4x2 xf32>
//                                       \   /    \ /
//                                  Outer Dims: 9x3x8  Inner Dims: 4x2
```
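To cross-check the shape rules stated above against these examples, here is a small Python sketch of the packed-shape computation. It is illustrative only: neither the helper nor its name `pack_shape` comes from this PR or from MLIR.

```python
import math

def pack_shape(source_shape, inner_dims_pos, inner_tiles, outer_dims_perm=None):
    """Result shape of linalg.pack for static sizes (illustrative sketch)."""
    outer = list(source_shape)
    # Each tiled source dim shrinks by its tile size; ceil covers the case
    # where padding is needed because the dim is not divisible by the tile.
    for pos, tile in zip(inner_dims_pos, inner_tiles):
        outer[pos] = math.ceil(outer[pos] / tile)
    # The optional permutation reorders only the outer dims.
    if outer_dims_perm is not None:
        outer = [outer[p] for p in outer_dims_perm]
    # Tile sizes are appended, in inner_dims_pos order, as the innermost dims.
    return outer + list(inner_tiles)

# NC to NCnc: tensor<128x256xf32> -> tensor<16x8x8x32xf32>
assert pack_shape([128, 256], [0, 1], [8, 32]) == [16, 8, 8, 32]
# CHW example: tensor<3x20x24xf32> -> tensor<3x10x6x4x2xf32>
assert pack_shape([3, 20, 24], [2, 1], [4, 2]) == [3, 10, 6, 4, 2]
# HCW example: tensor<18x3x32xf32> -> tensor<9x3x8x4x2xf32>
assert pack_shape([18, 3, 32], [2, 0], [4, 2]) == [9, 3, 8, 4, 2]
```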
`outer_dims_perm` (optional) specifies a permutation for the outer
@@ -246,13 +262,6 @@ def Linalg_UnPackOp : Linalg_RelayoutOp<"unpack"> {
| The "unpack" operation converts a source tensor of rank `n` with a tiled and | ||||||||||||||||
| packed layout to a result tensor of rank `n - k`. | ||||||||||||||||
|
|
||||||||||||||||
| `inner_dims_pos` (mandatory) specifies `k` source tensor dimensions with | ||||||||||||||||
| which the last `k` source tensor dimensions are combined, where | ||||||||||||||||
| `0 < k <= n/2`. Each `inner_dims_pos` element must be `>= 0` and `< n - k`. | ||||||||||||||||
| The order of the dimensions in `inner_dims_pos` matters: dimension | ||||||||||||||||
| `inner_dims_pos[i]` is combined with dimension `n - k + i` (assuming that | ||||||||||||||||
| `outer_dims_perm` is not specified). | ||||||||||||||||
|
|
||||||||||||||||
| `inner_tiles` (mandatory) specifies `k` tile sizes. These tile sizes | ||||||||||||||||
| correspond to the least significant ("inner") source tensor dimension sizes. | ||||||||||||||||
|
||||||||||||||||
| The behavior of this op is undefined if: | ||||||||||||||||
|
|
@@ -262,21 +271,50 @@ def Linalg_UnPackOp : Linalg_RelayoutOp<"unpack"> {
`inner_dims_pos[i]` (assuming that `outer_dims_perm` is not specified)
evenly.

`inner_dims_pos` (mandatory) specifies `k` result tensor (i.e. unpacked
tensor) dimensions that were tiled with the `inner_tiles` to create the
packed source tensor. The source tensor (i.e. packed tensor) dimensions can
be unpacked given `inner_dims_pos` as follows.
- For `0 <= i < k` the following relationship holds:
  `shape(result)[inner_dims_pos[i]] <= shape(source)[n-k+i] * shape(source)[inner_dims_pos[i]]`.
- For `0 <= j < n-k` and `j` not in `inner_dims_pos` the following relationship holds:
  `shape(result)[j] = shape(source)[j]`.

`outer_dims_perm` (optional) specifies a permutation for the outer
dimensions. If specified, it must have `n - k` elements. If specified, this
permutation is applied before combining any dimensions.

Example:
Note, the unpack operation may drop any padding introduced by the pack
operation, and hence the following holds:
`NumElementsOf(source) >= NumElementsOf(result)`.

Examples:
```mlir
// NCnc to NC:
%0 = linalg.unpack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<16x8x8x32xf32> -> tensor<128x256xf32>
    into %dest : tensor<16x8 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//                 Outer Dims: 16x8  Inner Dims: 8x32

// CK to KCck:
%0 = linalg.unpack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32] into %dest
    : tensor<8x16x8x32xf32> -> tensor<128x256xf32>
    inner_tiles = [8, 32]
    into %dest : tensor<8x16 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//                 Outer Dims: 8x16  Inner Dims: 8x32

// CHW to CHWhw:
%0 = linalg.unpack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x10x6 x 4x2 xf32> -> tensor<3x20x24xf32>
//                      \    /    \ /
//                 Outer Dims: 3x10x6  Inner Dims: 4x2

// HCW to HCWhw
%0 = linalg.unpack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<9x3x8 x 4x2 xf32> -> tensor<18x3x32xf32>
//                      \   /    \ /
//                 Outer Dims: 9x3x8  Inner Dims: 4x2
```
}];
let arguments = (ins AnyRankedTensor:$source,
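Mirroring the sketch given for pack above, the following illustrative Python helper (the name `unpack_shape` is hypothetical, not from this PR or from MLIR) applies the unpack shape relationships from this hunk. Because unpack may drop padding, the tiled sizes it returns are upper bounds on the actual result shape.

```python
def unpack_shape(source_shape, inner_dims_pos, inner_tiles, outer_dims_perm=None):
    """Upper bound on the result shape of linalg.unpack (illustrative sketch)."""
    k = len(inner_tiles)
    n = len(source_shape) - k              # rank of the unpacked result
    outer = list(source_shape[:n])
    result = [0] * n
    # Undo the optional outer permutation: packed outer dim i corresponds to
    # unpacked dim outer_dims_perm[i].
    perm = outer_dims_perm if outer_dims_perm is not None else list(range(n))
    for i, p in enumerate(perm):
        result[p] = outer[i]
    # Re-expand each tiled dim. Equality with the true result shape holds
    # only when the original pack introduced no padding.
    for pos, tile in zip(inner_dims_pos, inner_tiles):
        result[pos] *= tile
    return result

# CK to KCck example: tensor<8x16x8x32xf32> -> tensor<128x256xf32>
assert unpack_shape([8, 16, 8, 32], [0, 1], [8, 32], outer_dims_perm=[1, 0]) == [128, 256]
# HCW example: tensor<9x3x8x4x2xf32> -> tensor<18x3x32xf32>
assert unpack_shape([9, 3, 8, 4, 2], [2, 0], [4, 2]) == [18, 3, 32]
```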
I find this particular sentence helpful. Would you mind keeping it? You could rewrite it using code:
`shape(result)[rank(result) - k + i] = inner_tiles[i]` for `0 <= i < k`.
Sure, the "order in which they appear in inner_dims_pos" is what threw me off. That is why I removed it. I will add the code example.