
Pass pipeline logging instrumentation#4182

Open
atgeller wants to merge 13 commits into NVIDIA:main from atgeller:pass-pipeline-testing

Conversation

@atgeller
Collaborator

Adds instrumentation to log pass pipelines when enabled. Set CUDAQ_PIPELINE_LOG=path/to/log/file to enable logging. Currently, only JIT passes are logged.

atgeller and others added 7 commits March 17, 2026 16:36
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

…y on source file

Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>

Collaborator

@sacpis sacpis left a comment


Left a few comments.

~PipelineRecorder() override { flushToLog(); }

void runBeforePass(mlir::Pass *pass, mlir::Operation *op) override {
  records.push_back(
Collaborator


A question: is this records vector thread-safe? It seems like we will need to ensure that disableMultithreading is set before calling pm.run().
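One way to make the callback safe without disabling multithreading would be to guard the vector with a mutex. A minimal sketch of that approach in plain C++ (RecorderSketch and this simplified PassRecord are hypothetical stand-ins, not the PR's actual types):

```cpp
#include <mutex>
#include <string>
#include <vector>

// Hypothetical simplified record; the real PipelineRecorder stores
// a pass argument and op name per pass execution.
struct PassRecord {
  std::string passArg;
  std::string opName;
};

class RecorderSketch {
public:
  // Guard the shared vector so concurrent runBeforePass-style
  // callbacks cannot race on push_back.
  void record(const std::string &passArg, const std::string &opName) {
    std::lock_guard<std::mutex> lock(mutex_);
    records_.push_back({passArg, opName});
  }

  std::size_t size() const {
    std::lock_guard<std::mutex> lock(mutex_);
    return records_.size();
  }

private:
  mutable std::mutex mutex_;
  std::vector<PassRecord> records_;
};
```

Alternatively, calling disableMultithreading() on the MLIRContext before pm.run() sidesteps the race at the cost of serializing the pipeline.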

Comment on lines 955 to +978
@@ -973,6 +975,7 @@ jitCode(ImplicitLocOpBuilder &builder, ExecutionEngine *jit,
pm.addNestedPass<func::FuncOp>(createCSEPass());
pm.addPass(cudaq::opt::createConvertToQIR());
pm.addPass(createCanonicalizerPass());
cudaq_internal::maybeLogPassPipeline(pm, "cudaq::builder JIT");
Collaborator


It seems like we should use distinct labels for these so they can be told apart in the log.

Comment on lines +18 to +20
cudaq.reset_target()
yield
cudaq.reset_target()
Collaborator


Why are we calling reset_target() both before and after the test? Should it be only after the test?

@pytest.mark.parametrize("target,target_kwargs,supports_run", TARGET_CONFIGS)
def test_pipeline_logging_decorator(tmp_path, monkeypatch, target,
                                    target_kwargs, supports_run):
    log_path = tmp_path / "pipeline.jsonl"
Collaborator


Should we move this hard-coded value out into a constant?



def test_pipeline_logging_builder_default(tmp_path, monkeypatch):
    log_path = tmp_path / "pipeline.jsonl"
Collaborator


Same here. Also wondering whether the same log file can be reused across these tests?

tm.setEnabled(cudaq::isTimingTagEnabled(cudaq::TIMING_JIT_PASSES));
auto timingScope = tm.getRootScope(); // starts the timer
pm.enableTiming(timingScope); // do this right before pm.run
cudaq_internal::maybeLogPassPipeline(pm, name + ":client");
Collaborator


Same here.

mlir::PassManager pm(ctx);
// For now, the server side expects full-QIR.
opt::addAOTPipelineConvertToQIR(pm);
cudaq_internal::maybeLogPassPipeline(pm, name + ":aot-qir");
Collaborator


Same here.

Comment on lines +25 to +51
std::string jsonEscape(llvm::StringRef s) {
  std::string out;
  out.reserve(s.size());
  for (char c : s) {
    switch (c) {
    case '"':
      out += "\\\"";
      break;
    case '\\':
      out += "\\\\";
      break;
    case '\n':
      out += "\\n";
      break;
    case '\r':
      out += "\\r";
      break;
    case '\t':
      out += "\\t";
      break;
    default:
      out += c;
      break;
    }
  }
  return out;
}
Collaborator


I think we can optimize this function: instead of appending with += character by character, we could collect runs of characters and flush them in bulk.
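A sketch of that idea in plain C++ (std::string in place of llvm::StringRef, and jsonEscapeBulk is a hypothetical name): unescaped runs are appended in one call rather than one character at a time, while escape sequences interrupt the run.

```cpp
#include <string>

// Escape JSON special characters, copying unescaped runs in bulk.
// Mirrors the escape set of the PR's jsonEscape.
std::string jsonEscapeBulk(const std::string &s) {
  std::string out;
  out.reserve(s.size());
  std::size_t start = 0;
  for (std::size_t i = 0; i < s.size(); ++i) {
    const char *rep = nullptr;
    switch (s[i]) {
    case '"':  rep = "\\\""; break;
    case '\\': rep = "\\\\"; break;
    case '\n': rep = "\\n";  break;
    case '\r': rep = "\\r";  break;
    case '\t': rep = "\\t";  break;
    default:
      continue; // part of an unescaped run; keep scanning
    }
    out.append(s, start, i - start); // flush the run before the escape
    out += rep;
    start = i + 1;
  }
  out.append(s, start, std::string::npos); // trailing unescaped run
  return out;
}
```

Whether this beats the per-character version in practice would need measuring; for strings with few escapes it avoids most of the single-byte appends.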

Comment on lines +104 to +116
std::string record = "{\"type\":\"executed\",\"label\":\"" +
                     jsonEscape(label) + "\",\"passes\":[";
for (size_t i = 0; i < records.size(); ++i) {
  if (i > 0)
    record += ",";
  record += "{\"pass\":\"" + jsonEscape(records[i].passArg) +
            "\",\"op\":\"" + jsonEscape(records[i].opName) + "\"";
  if (records[i].failed)
    record += ",\"failed\":true";
  record += "}";
}
record += "]}";
appendToLogFile(logPath, record);
Collaborator


Should we use llvm::raw_string_ostream here? That would avoid creating intermediate std::strings with +. We could also get rid of the escaped \" characters.
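A sketch of the streaming approach, using std::ostringstream as a stand-in (llvm::raw_string_ostream, which wraps a std::string buffer, would be the in-tree analog). buildExecutedRecord is a hypothetical name, jsonEscape calls are omitted for brevity, and raw string literals remove most of the escaped quotes:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Simplified stand-in for the PR's per-pass record (names hypothetical).
struct PassRecord {
  std::string passArg;
  std::string opName;
  bool failed = false;
};

// Build the "executed" JSON record through a stream instead of
// repeated std::string concatenation.
std::string buildExecutedRecord(const std::string &label,
                                const std::vector<PassRecord> &records) {
  std::ostringstream os;
  os << R"({"type":"executed","label":")" << label << R"(","passes":[)";
  for (std::size_t i = 0; i < records.size(); ++i) {
    if (i > 0)
      os << ',';
    os << R"({"pass":")" << records[i].passArg << R"(","op":")"
       << records[i].opName << '"';
    if (records[i].failed)
      os << R"(,"failed":true)";
    os << '}';
  }
  os << "]}";
  return os.str();
}
```

A production version would still need to escape label, passArg, and opName before streaming them.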

Comment on lines +130 to +132
std::string record = "{\"type\":\"configured\",\"label\":\"" +
                     jsonEscape(effectiveLabel) + "\",\"pipeline\":\"" +
                     jsonEscape(pipeline) + "\"}";
Collaborator


Same here.


Signed-off-by: Adam Geller <adgeller@nvidia.com>
