
@j2kun
Contributor

@j2kun j2kun commented Jun 7, 2025

This PR makes dump-pass-pipeline pretty-print the dumped pipeline. For large pipelines, the current behavior produces a wall of text that is hard to navigate visually.

For the command

mlir-opt --pass-pipeline="builtin.module(flatten-memref, expand-strided-metadata,func.func(arith-expand,func.func(affine-scalrep)))" --dump-pass-pipeline

Before:

Pass Manager with 3 passes:
builtin.module(flatten-memref,expand-strided-metadata,func.func(arith-expand{include-bf16=false include-f8e8m0=false},func.func(affine-scalrep)))

After:

Pass Manager with 3 passes:
builtin.module(
  flatten-memref,
  expand-strided-metadata,
  func.func(
    arith-expand{include-bf16=false include-f8e8m0=false},
    func.func(
      affine-scalrep
    )
  )
)

Another nice feature is that the pretty-printed string can still be copied and pasted back into -pass-pipeline by quoting it:

$ bin/mlir-opt --dump-pass-pipeline test.mlir --pass-pipeline='
builtin.module(
  flatten-memref,
  expand-strided-metadata,
  func.func(
    arith-expand{include-bf16=false include-f8e8m0=false},
    func.func(
      affine-scalrep
    )
  )
)'
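
As a minimal illustration of that round-trip (a sketch for clarity, not code from this PR; it assumes a small standalone tool linked against MLIR with the passes registered via mlir::registerAllPasses, and the main function is illustrative), the pipeline parser accepts the added newlines and indentation, and dumping the parsed pipeline reproduces the indented form:

```cpp
#include "mlir/InitAllPasses.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  // Make the pass names below resolvable by the pipeline parser.
  mlir::registerAllPasses();

  // The pretty-printed form, newlines and indentation included.
  llvm::StringRef pretty = R"(builtin.module(
  expand-strided-metadata,
  func.func(
    affine-scalrep
  )
))";

  // Parse it back; on failure a diagnostic is printed to errs().
  auto pm = mlir::parsePassPipeline(pretty, llvm::errs());
  if (mlir::failed(pm))
    return 1;

  // Dump the parsed pipeline again; with this PR it comes out indented.
  pm->printAsTextualPipeline(llvm::outs());
  llvm::outs() << "\n";
  return 0;
}
```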

@j2kun j2kun requested a review from jpienaar June 7, 2025 01:33
@llvmbot llvmbot added mlir:core MLIR Core Infrastructure mlir labels Jun 7, 2025
@llvmbot
Member

llvmbot commented Jun 7, 2025

@llvm/pr-subscribers-mlir-core

@llvm/pr-subscribers-mlir

Author: Jeremy Kun (j2kun)


Full diff: https://github.com/llvm/llvm-project/pull/143223.diff

1 file affected:

  • (modified) mlir/lib/Pass/Pass.cpp (+15-4)
diff --git a/mlir/lib/Pass/Pass.cpp b/mlir/lib/Pass/Pass.cpp
index e0e9b5f54042a..6e8d3bbfdff81 100644
--- a/mlir/lib/Pass/Pass.cpp
+++ b/mlir/lib/Pass/Pass.cpp
@@ -18,6 +18,7 @@
 #include "mlir/IR/Threading.h"
 #include "mlir/IR/Verifier.h"
 #include "mlir/Support/FileUtilities.h"
+#include "mlir/Support/IndentedOstream.h"
 #include "llvm/ADT/Hashing.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/ScopeExit.h"
@@ -392,18 +393,28 @@ StringRef OpPassManager::getOpAnchorName() const {
 /// Prints out the passes of the pass manager as the textual representation
 /// of pipelines.
 void printAsTextualPipeline(
-    raw_ostream &os, StringRef anchorName,
+    raw_indented_ostream &os, StringRef anchorName,
     const llvm::iterator_range<OpPassManager::pass_iterator> &passes) {
-  os << anchorName << "(";
+  os << anchorName << "(\n";
+  os.indent();
   llvm::interleave(
       passes, [&](mlir::Pass &pass) { pass.printAsTextualPipeline(os); },
-      [&]() { os << ","; });
+      [&]() { os << ",\n"; });
+  os << "\n";
+  os.unindent();
   os << ")";
 }
+void printAsTextualPipeline(
+    raw_ostream &os, StringRef anchorName,
+    const llvm::iterator_range<OpPassManager::pass_iterator> &passes) {
+  raw_indented_ostream indentedOS(os);
+  printAsTextualPipeline(indentedOS, anchorName, passes);
+}
 void OpPassManager::printAsTextualPipeline(raw_ostream &os) const {
   StringRef anchorName = getOpAnchorName();
+  raw_indented_ostream indentedOS(os);
   ::printAsTextualPipeline(
-      os, anchorName,
+      indentedOS, anchorName,
       {MutableArrayRef<std::unique_ptr<Pass>>{impl->passes}.begin(),
        MutableArrayRef<std::unique_ptr<Pass>>{impl->passes}.end()});
 }
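
For context on the helper the patch pulls in: raw_indented_ostream wraps an existing raw_ostream and re-applies the current indentation level at the start of each new line, so the printer only needs to call indent()/unindent() around nested pass managers. A minimal standalone sketch of that behavior (illustrative only, not code from the patch; printExample and main are made-up names):

```cpp
#include "mlir/Support/IndentedOstream.h"
#include "llvm/Support/raw_ostream.h"

// Print a two-level nesting the same way the patched printer does:
// indentation is bumped before children and restored afterwards.
static void printExample(llvm::raw_ostream &base) {
  mlir::raw_indented_ostream os(base);
  os << "builtin.module(\n";
  os.indent();
  os << "flatten-memref,\n";
  os << "func.func(\n";
  os.indent();
  os << "affine-scalrep\n";
  os.unindent();
  os << ")\n";
  os.unindent();
  os << ")";
}

int main() {
  printExample(llvm::outs());
  llvm::outs() << "\n";
  return 0;
}
```

Run against llvm::outs(), this prints the same nested layout as the After example in the description.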

@github-actions

github-actions bot commented Jun 7, 2025

⚠️ We detected that you are using a GitHub private e-mail address to contribute to the repo.
Please turn off Keep my email addresses private setting in your account.
See LLVM Discourse for more information.

@j2kun
Contributor Author

j2kun commented Jun 7, 2025

I still have to fix some tests, but I wanted early feedback on whether this is a desirable change. I may add a "pretty" option that lets other users (like the reproducer) continue to use the single-line format.
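
For what a "pretty" option might look like (purely a hypothetical sketch; the function name, signature, and flat pass list are illustrative and not part of this PR), the printer could take a flag that falls back to the single-line separators:

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical sketch only: pretty=false keeps the legacy "a,b,c" form for
// callers such as reproducer generation; pretty=true prints one pass per line.
// Handles only a flat list with a fixed two-space indent, unlike the real printer.
static void printPipeline(llvm::raw_ostream &os, llvm::StringRef anchor,
                          llvm::ArrayRef<llvm::StringRef> passes, bool pretty) {
  os << anchor << (pretty ? "(\n  " : "(");
  llvm::interleave(
      passes, [&](llvm::StringRef pass) { os << pass; },
      [&]() { os << (pretty ? ",\n  " : ","); });
  os << (pretty ? "\n)" : ")");
}

int main() {
  printPipeline(llvm::outs(), "builtin.module",
                {"flatten-memref", "expand-strided-metadata"}, /*pretty=*/true);
  llvm::outs() << "\n";
  return 0;
}
```

With pretty=false this matches the original single-line format, which callers like reproducer generation could keep using; a real implementation would also have to thread the flag through nested pass managers.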

@github-actions

github-actions bot commented Jun 7, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@j2kun j2kun merged commit b1b84a6 into llvm:main Jun 8, 2025
5 of 7 checks passed
@llvm-ci
Collaborator

llvm-ci commented Jun 8, 2025

LLVM Buildbot has detected a new failure on builder mlir-nvidia running on mlir-nvidia while building mlir at step 7 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/138/builds/14261

Here is the relevant piece of the build log for reference:
Step 7 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Integration/GPU/CUDA/async.mlir' FAILED ********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -gpu-kernel-outlining  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm),nvvm-attach-target)'  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -gpu-async-region -gpu-to-llvm -reconcile-unrealized-casts -gpu-module-to-binary="format=fatbin"  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -async-to-async-runtime -async-runtime-ref-counting  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -convert-async-to-llvm -convert-func-to-llvm -convert-arith-to-llvm -convert-cf-to-llvm -reconcile-unrealized-casts  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-runner    --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_cuda_runtime.so    --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_async_runtime.so    --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_runner_utils.so    --entry-point-result=void -O0  | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -gpu-kernel-outlining
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt '-pass-pipeline=builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm),nvvm-attach-target)'
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -gpu-async-region -gpu-to-llvm -reconcile-unrealized-casts -gpu-module-to-binary=format=fatbin
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -async-to-async-runtime -async-runtime-ref-counting
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -convert-async-to-llvm -convert-func-to-llvm -convert-arith-to-llvm -convert-cf-to-llvm -reconcile-unrealized-casts
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_async_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void -O0
# .---command stderr------------
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventSynchronize(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# `-----------------------------
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# .---command stderr------------
# | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir:68:12: error: CHECK: expected string not found in input
# |  // CHECK: [84, 84]
# |            ^
# | <stdin>:1:1: note: scanning from here
# | Unranked Memref base@ = 0x59fb04b6df70 rank = 1 offset = 0 sizes = [2] strides = [1] data = 
# | ^
# | <stdin>:2:1: note: possible intended match here
# | [42, 84]
# | ^
# | 
# | Input file: <stdin>
# | Check file: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# | 
# | -dump-input=help explains the following input dump.
# | 
# | Input was:
# | <<<<<<
# |             1: Unranked Memref base@ = 0x59fb04b6df70 rank = 1 offset = 0 sizes = [2] strides = [1] data =  
# | check:68'0     X~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: no match found
# |             2: [42, 84] 
# | check:68'0     ~~~~~~~~~
# | check:68'1     ?         possible intended match
...

@jpienaar
Member

jpienaar commented Jun 9, 2025

Nice!

tomtor pushed a commit to tomtor/llvm-project that referenced this pull request Jun 14, 2025