152 changes: 152 additions & 0 deletions mlir/docs/Remarks.md
@@ -0,0 +1,152 @@
# Remark Infrastructure

[TOC]

Remarks are structured, human- and machine-readable notes emitted by passes to
explain what was optimized, what was missed, and why. The `RemarkEngine`
collects finalized remarks during compilation and forwards them to a pluggable
streamer. A default streamer integrates LLVM’s `llvm::remarks` so you can stream
while a pass runs and serialize to disk (YAML or LLVM bitstream) for tooling.

**Key points**

- **Opt-in**: Disabled by default; zero overhead unless enabled.
- **Per-context**: Configured on `MLIRContext`.
- **Formats**: Custom streamers, or LLVM’s Remark engine (YAML / Bitstream).
- **Kinds**: `Pass`, `Missed`, `Failure`, `Analysis`.
- **API**: Lightweight streaming interface with `<<` (similar to diagnostics).

## How it works

The remark infrastructure has two important classes:

- **`RemarkEngine`** (owned by `MLIRContext`): receives finalized
`InFlightRemark`s, optionally mirrors them to the `DiagnosticEngine`, then
dispatches to the installed streamer.
- **`MLIRRemarkStreamerBase`** (abstract): backend interface with a single hook
`streamOptimizationRemark(const Remark &)`.

**Default backend – `MLIRLLVMRemarkStreamer`**: adapts `mlir::Remark` to
`llvm::remarks::Remark` and writes YAML or bitstream via
`llvm::remarks::RemarkStreamer` to a `ToolOutputFile`.

**Ownership**: `MLIRContext` → `RemarkEngine` → `MLIRRemarkStreamerBase`.

## Enable Remarks via mlir::emitRemarks (No Streamer)

Enable once per `MLIRContext` (e.g., where you build your pass pipeline or in
your tool). If `printAsEmitRemarks` is true, each remark is also mirrored to the
context’s `DiagnosticEngine` under the provided category labels, which is handy
for interactive tools and tests.

```c++
mlir::MLIRContext::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                         /*missed=*/std::nullopt,
                                         /*analysis=*/std::nullopt,
                                         /*failed=*/categoryLoopunroll};

context.enableOptimizationRemarks(/*streamer=*/nullptr, cats,
                                  /*printAsEmitRemarks=*/true);
```

## Enable Remarks with LLVMRemarkStreamer (YAML/Bitstream)

To persist remarks to a file in YAML or bitstream format, use
`mlir::remark::LLVMRemarkStreamer` via the helper shown below. You can read
more about [LLVM's remarks here](https://llvm.org/docs/Remarks.html).

```c++
#include "mlir/Remark/RemarkStreamer.h"

mlir::MLIRContext::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                         /*missed=*/std::nullopt,
                                         /*analysis=*/std::nullopt,
                                         /*failed=*/categoryLoopunroll};

mlir::remark::enableOptimizationRemarksToFile(
    context, yamlFile, llvm::remarks::Format::YAML, cats);
```

## Emitting remarks from a pass

The `reportOptimization*` functions return an in-flight remark object (like MLIR
diagnostics). Append strings and key–value pairs with `<<`.

```c++
#include "mlir/IR/Remarks.h"

using namespace mlir;

LogicalResult MyPass::runOnOperation() {
  Operation *op = getOperation();
  Location loc = op->getLoc();

  // PASS: something succeeded.
  reportOptimizationPass(loc, /*category=*/"vectorizer", /*passName=*/"MyPass")
      << "vectorized loop."
      << Remark::RemarkKeyValue("tripCount", 128);

  // ANALYSIS: neutral insight.
  reportOptimizationAnalysis(loc, "unroll", "MyPass")
      << "estimated cost: " << Remark::RemarkKeyValue("cost", 42);

  // MISSED: explain why and suggest a fix.
  reportOptimizationMiss(
      loc, "unroll", "MyPass",
      /*suggestion=*/[&]() { return "increase unroll factor to >=4"; })
      << "not profitable at this size";

  // FAILURE: action attempted but failed.
  if (failed(doThing(op))) {
    reportOptimizationFail(loc, "pipeline", "MyPass")
        << "failed due to unsupported pattern";
    return failure();
  }
  return success();
}
```

### Output formats

#### YAML

Readable, easy to diff and grep.

```yaml
--- !Passed
pass: MyPass
name: vectorizer
function: myFunc
loc: myfile.mlir:12:3
args:
- key: tripCount
value: 128
message: "vectorized loop with tripCount=128"
```

#### Bitstream

Compact binary format supported by LLVM’s remark tooling. Prefer this for large
production runs or when existing infrastructure already consumes LLVM remarks.
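
Bitstream output can be enabled the same way as the YAML example above; only
the format argument changes. A minimal sketch, assuming the same `cats`
categories and a hypothetical `bitstreamFile` output path:

```cpp
#include "mlir/Remark/RemarkStreamer.h"

// Same helper as in the YAML example; only the format argument differs.
// `bitstreamFile` is a hypothetical output path variable.
mlir::remark::enableOptimizationRemarksToFile(
    context, bitstreamFile, llvm::remarks::Format::Bitstream, cats);
```

If you later need a readable view, LLVM’s `llvm-remarkutil` tool can convert a
bitstream remark file back to YAML (`bitstream2yaml`).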

## Enable Remarks with a Custom Streamer

`RemarkEngine` talks to `MLIRRemarkStreamerBase`. Implement your own streamer to
consume remarks in any format you like:

```c++
class MyStreamer : public MLIRRemarkStreamerBase {
public:
  void streamOptimizationRemark(const Remark &remark) override {
    // Convert the remark to your format and write it out.
  }
};

// ...
auto myStreamer = std::make_unique<MyStreamer>();
context.enableOptimizationRemarks(std::move(myStreamer), cats,
                                  /*printAsEmitRemarks=*/false);
```
26 changes: 26 additions & 0 deletions mlir/include/mlir/IR/MLIRContext.h
@@ -34,6 +34,10 @@ class MLIRContextImpl;
class RegisteredOperationName;
class StorageUniquer;
class IRUnit;
namespace remark {
class MLIRRemarkStreamerBase;
class RemarkEngine;
} // namespace remark

/// MLIRContext is the top-level object for a collection of MLIR operations. It
/// holds immortal uniqued objects like types, and the tables used to unique
@@ -60,6 +64,10 @@ class IRUnit;
class MLIRContext {
public:
enum class Threading { DISABLED, ENABLED };
struct RemarkCategories {
std::optional<std::string> passed, missed, analysis, failed;
};

/// Create a new Context.
explicit MLIRContext(Threading multithreading = Threading::ENABLED);
explicit MLIRContext(const DialectRegistry &registry,
@@ -212,6 +220,9 @@ class MLIRContext {
/// Returns the diagnostic engine for this context.
DiagnosticEngine &getDiagEngine();

/// Returns the remark engine for this context.
remark::RemarkEngine *getRemarkEngine();

/// Returns the storage uniquer used for creating affine constructs.
StorageUniquer &getAffineUniquer();

@@ -245,6 +256,18 @@ class MLIRContext {
/// (attributes, operations, types, etc.).
llvm::hash_code getRegistryHash();

/// Set up optimization remarks for the context. This enables the remark
/// engine and installs the streamer used for optimization remarks.
/// The remark categories filter which remarks the engine emits; remarks in
/// an unspecified category are not emitted.
/// If `printAsEmitRemarks` is true, remarks are also mirrored to the
/// diagnostic engine as mlir::emitRemarks.
LogicalResult enableOptimizationRemarks(
std::unique_ptr<remark::MLIRRemarkStreamerBase> streamer,
const RemarkCategories &cats, bool printAsEmitRemarks = false);

//===--------------------------------------------------------------------===//
// Action API
//===--------------------------------------------------------------------===//
@@ -281,6 +304,9 @@ }
}

private:
/// Set the remark engine for this context.
void setRemarkEngine(std::unique_ptr<remark::RemarkEngine> engine);

/// Return true if the given dialect is currently loading.
bool isDialectLoading(StringRef dialectNamespace);
