Make isExpensiveLoadOrStore consider blocked pointers load and stores
#2570
Merged
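As background for the PR title (a sketch for illustration, not code taken from the PR): isExpensiveLoadOrStore is a cost heuristic consulted when deciding how layouts may be propagated and where layout conversions can be removed; this change makes it also consider loads and stores that go through block pointers created by tt.make_tensor_ptr, as in the hypothetical fragment below (the SSA names and the #blocked layout are placeholders).

  // Illustrative only: a load through a block pointer. With this change,
  // isExpensiveLoadOrStore should report this access as expensive, so the
  // layout-propagation logic avoids freely duplicating or rewriting it.
  %ptr = tt.make_tensor_ptr %base, [%c4096_i64, %c4096_i64], [%c4096_i64, %c1_i64], [%pid_x, %pid_y] {order = array<i32: 1, 0>} : <tensor<256x32xbf16, #blocked>>
  %val = tt.load %ptr : !tt.ptr<tensor<256x32xbf16, #blocked>>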
Changes from all commits (36 commits)
All 36 commits are by etiotto:

c7fe682  Improve axis analysis to handle tt.make_tensor_ptr
ad3888f  Merge branch 'main' into etiotto/axis_analysis_make_tensor_ptr
a7a9b06  Merge branch 'main' into etiotto/axis_analysis_make_tensor_ptr
6bddd5f  Merge branch 'main' into etiotto/axis_analysis_make_tensor_ptr
4ad4f1a  Merge branch 'main' into etiotto/axis_analysis_make_tensor_ptr
4dc1cf1  WIP: Coalescing for block ptrs
fa53ced  Fix pre_commit
049ddb8  Merge branch 'main' into etiotto/coalesce_for_block_ptr
041e2da  Merge branch 'main' into etiotto/coalesce_for_block_ptr
5a6cf81  Fix functional problem and add lit test
2546665  Fix pre_commit
4d5dc49  Reenable rewrite tensor ptr
c3fdbba  Fix test_core regression
d9de8e7  Fix tutorial assertion
949256e  Refactor
754ec70  Cleanup
469407b  Cleanup
9f4f98d  Extend axis info analysis to more block ptrs
a40844b  Merge branch 'main' into etiotto/coalesce_for_block_ptr
bb9b4c3  Address code review comments
8d9a158  Remove unrelated change
6529f04  Remove unrelated change
0aa334b  Remove unrelated change
547d6fa  Fix pre_commit
6566f6c  Merge branch 'main' into etiotto/coalesce_for_block_ptr
2f97c1a  Address code review comments
95f5832  Fix pre_commit
0887245  Merge branch 'main' into etiottoremove_layout_conv
3636bef  Make isExpensiveLoadOrStore consider blocked pointers load and stores
db2193e  Make isExpensiveLoadOrStore consider blocked pointers load and stores
eeda8e9  Merge branch 'main' into etiottoremove_layout_conv
7c9a0f9  MaterializeBlockPointer fix for GEMM with 1st operand transposed
cbc630b  MaterializeBlockPointer fix for GEMM with 1st operand transposed
0215a16  Fix unit tests
ae3d625  Fix performance regression for gemm-preop-exp
22b7ec9  Reduce PR footprint
Changes to the modified FileCheck/lit test (the file name was not captured):

@@ -2324,31 +2324,29 @@ module attributes {"triton_gpu.num-ctas" = 1 : i32, "triton_gpu.num-warps" = 32
   %cst_1 = arith.constant dense<0.000000e+00> : tensor<256x256xf32, #blocked2>
   %0 = tt.get_program_id x : i32
   %1 = tt.get_program_id y : i32
-  // CHECK: %[[VAL_0:.*]] = tt.make_tensor_ptr {{.*}} : <tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #[[$DPAS]], kWidth = 2}>>>
-  // CHECK: %[[VAL_1:.*]] = tt.make_tensor_ptr {{.*}} : <tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #[[$DPAS]], kWidth = 2}>>>
+  // CHECK: %[[VAL_0:.*]] = tt.make_tensor_ptr {{.*}} : <tensor<256x32xbf16, {{.*}}>>
+  // CHECK: %[[VAL_1:.*]] = tt.make_tensor_ptr {{.*}} : <tensor<32x256xbf16, {{.*}}>>
   %12 = tt.make_tensor_ptr %arg0, [%c4096_i64, %c4096_i64], [%c4096_i64, %c1_i64], [%0, %1] {order = array<i32: 1, 0>} : <tensor<256x32xbf16, #blocked3>>
   %14 = tt.make_tensor_ptr %arg1, [%c4096_i64, %c4096_i64], [%c4096_i64, %c1_i64], [%0, %1] {order = array<i32: 1, 0>} : <tensor<32x256xbf16, #blocked2>>
-  // CHECK: %[[VAL_2:.*]]:3 = scf.for {{.*}} -> (tensor<256x256xf32, #[[$DPAS]]>, !tt.ptr<tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #[[$DPAS]], kWidth = 2}>>>, !tt.ptr<tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #[[$DPAS]], kWidth = 2}>>>) : i32 {
+  // CHECK: %[[VAL_2:.*]]:3 = scf.for {{.*}} -> (tensor<256x256xf32, #[[$DPAS]]>, !tt.ptr<tensor<256x32xbf16, {{.*}}>>, !tt.ptr<tensor<32x256xbf16, {{.*}}>>) : i32 {
   %15:3 = scf.for %arg3 = %c0_i32 to %c4096_i32 step %c128_i32 iter_args(%arg4 = %cst_1, %arg5 = %12, %arg6 = %14) -> (tensor<256x256xf32, #blocked2>, !tt.ptr<tensor<256x32xbf16, #blocked3>>, !tt.ptr<tensor<32x256xbf16, #blocked2>>) : i32 {
     %47 = tt.load %arg5 : !tt.ptr<tensor<256x32xbf16, #blocked3>>
     %48 = tt.load %arg6 : !tt.ptr<tensor<32x256xbf16, #blocked2>>
     // CHEKC-NOT: triton_gpu.convert_layout
     %49 = triton_gpu.convert_layout %arg4 : tensor<256x256xf32, #blocked2> -> tensor<256x256xf32, #mma>
     %50 = triton_gpu.convert_layout %47 : tensor<256x32xbf16, #blocked3> -> tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #mma, kWidth = 2}>>
     %51 = triton_gpu.convert_layout %48 : tensor<32x256xbf16, #blocked2> -> tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #mma, kWidth = 2}>>
     %52 = tt.dot %50, %51, %49, inputPrecision = tf32 : tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #mma, kWidth = 2}>> * tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #mma, kWidth = 2}>> -> tensor<256x256xf32, #mma>
     %53 = triton_gpu.convert_layout %52 : tensor<256x256xf32, #mma> -> tensor<256x256xf32, #blocked2>
-    // CHECK: %[[VAL_3:.*]] = tt.advance {{.*}} : <tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #[[$DPAS]], kWidth = 2}>>>
-    // CHECK: %[[VAL_4:.*]] = tt.advance {{.*}} : <tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #[[$DPAS]], kWidth = 2}>>>
-    // CHECK: scf.yield {{.*}} : tensor<256x256xf32, #[[$DPAS]]>, !tt.ptr<tensor<256x32xbf16, #triton_gpu.dot_op<{opIdx = 0, parent = #[[$DPAS]], kWidth = 2}>>>, !tt.ptr<tensor<32x256xbf16, #triton_gpu.dot_op<{opIdx = 1, parent = #[[$DPAS]], kWidth = 2}>>>
+    // CHECK: %[[VAL_3:.*]] = tt.advance {{.*}} : <tensor<256x32xbf16, {{.*}}>>
+    // CHECK: %[[VAL_4:.*]] = tt.advance {{.*}} : <tensor<32x256xbf16, {{.*}}>>
+    // CHECK: scf.yield {{.*}} : tensor<256x256xf32, #[[$DPAS]]>, !tt.ptr<tensor<256x32xbf16, {{.*}}>>, !tt.ptr<tensor<32x256xbf16, {{.*}}>>
     %54 = tt.advance %arg5, [%c0_i32, %c128_i32] : <tensor<256x32xbf16, #blocked3>>
     %55 = tt.advance %arg6, [%c128_i32, %c0_i32] : <tensor<32x256xbf16, #blocked2>>
     scf.yield %53, %54, %55 : tensor<256x256xf32, #blocked2>, !tt.ptr<tensor<256x32xbf16, #blocked3>>, !tt.ptr<tensor<32x256xbf16, #blocked2>>
   }
   %16 = tt.make_range {end = 256 : i32, start = 0 : i32} : tensor<256xi32, #blocked>
   %32 = tt.splat %arg2 : !tt.ptr<f32> -> tensor<256x256x!tt.ptr<f32>, #blocked2>
   %38 = arith.cmpi slt, %16, %cst : tensor<256xi32, #blocked>
   // CHEKC-NOT: triton_gpu.convert_layout
   %39 = triton_gpu.convert_layout %38 : tensor<256xi1, #blocked> -> tensor<256xi1, #triton_gpu.slice<{dim = 0, parent = #blocked4}>>
   %40 = tt.expand_dims %39 {axis = 0 : i32} : tensor<256xi1, #triton_gpu.slice<{dim = 0, parent = #blocked4}>> -> tensor<1x256xi1, #blocked4>
   %41 = triton_gpu.convert_layout %40 : tensor<1x256xi1, #blocked4> -> tensor<1x256xi1, #blocked2>

Review comment by etiotto (Contributor, Author) on the relaxed CHECK lines: "The actual layout is not important in these tests."
[The diff of a second changed file did not load in this capture.]
Review comment: Note: adding triton_intel_gpu.block_io is consistent with our optimization pipeline (in our pipeline this is done before the 2nd invocation of RemoveLayoutConversion).
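To make that note concrete (an illustrative fragment written for this summary, not taken from the PR; the attribute value shown is an assumption): the triton_intel_gpu.block_io attribute, presumably added by the MaterializeBlockPointer pass mentioned in the commit list, tags loads that go through block pointers, and adding it before the 2nd RemoveLayoutConversion run means the later pass can see which loads are block-IO candidates.

  // Illustrative only: a block-pointer load after it has been tagged with the
  // triton_intel_gpu.block_io attribute (the "row_major" value is assumed).
  %47 = tt.load %arg5 {triton_intel_gpu.block_io = "row_major"} : !tt.ptr<tensor<256x32xbf16, #blocked3>>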