implement codeaction/resolve #2540 #2542

Open

asukaminato0721 wants to merge 5 commits into facebook:main from asukaminato0721:2540

Conversation

@asukaminato0721
Contributor

Summary

Fixes #2540

Added codeAction/resolve support and advertised resolveProvider; introduce-parameter actions now return unresolved code actions with data and resolve to edits on demand.

Made refactor computation respect context.only and skip refactors for unfiltered requests without a triggerKind, reducing automatic-request cost.

Added introduce_parameter_action_titles to avoid expensive callsite edits during listing.
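The context.only handling described above depends on hierarchical kind matching (a filter of `refactor` must also allow `refactor.extract`). A minimal sketch of that rule, using plain strings instead of the real lsp_types::CodeActionKind (the function name here is illustrative, not the PR's actual API):

```rust
/// True if `kind` passes the client's `context.only` filter.
/// LSP code-action kinds are hierarchical: a filter of "refactor"
/// also allows "refactor.extract". `None` means the client sent no
/// filter, so every kind is allowed.
fn kind_allowed(kind: &str, only: Option<&[&str]>) -> bool {
    match only {
        None => true,
        Some(filters) => filters
            .iter()
            .any(|f| kind == *f || kind.starts_with(&format!("{f}."))),
    }
}
```

Note the explicit `.` check: a naive prefix test would wrongly let a `refactor` filter match an unrelated kind like `refactorish`.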

Test Plan

Updated LSP tests to expect resolveProvider and set triggerKind for refactor requests.

@meta-cla meta-cla bot added the cla signed label Feb 25, 2026
@asukaminato0721 asukaminato0721 marked this pull request as ready for review February 25, 2026 06:28
Copilot AI review requested due to automatic review settings February 25, 2026 06:28
Copilot AI left a comment
Pull request overview

Implements codeAction/resolve support in the non-wasm LSP server to make expensive refactors (notably introduce_parameter) lazy/resolved-on-demand, and reduces automatic code-action request cost by tightening refactor computation based on context.only and triggerKind.

Changes:

  • Advertise codeActionProvider.resolveProvider and handle codeAction/resolve to resolve introduce_parameter edits on demand.
  • Adjust code action kind filtering to properly respect hierarchical kinds in context.only, and skip refactors for unfiltered automatic/unspecified-trigger requests.
  • Update LSP interaction tests to expect the new capability and set triggerKind where refactors are required.
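The list-then-resolve flow above can be sketched with toy types; the real server uses lsp_types, where `data` is a serde_json::Value and `edit` a WorkspaceEdit, but strings keep this self-contained (all names here are invented for illustration):

```rust
// Toy stand-in for an LSP CodeAction.
#[derive(Clone, Debug)]
struct Action {
    title: String,
    data: Option<String>, // opaque payload the client echoes back on resolve
    edit: Option<String>, // expensive part, deferred until resolve
}

// textDocument/codeAction: return cheap, unresolved actions (titles + data).
fn list_actions(titles: &[&str]) -> Vec<Action> {
    titles
        .iter()
        .map(|t| Action {
            title: (*t).to_string(),
            data: Some(format!("introduce_parameter:{t}")),
            edit: None,
        })
        .collect()
}

// codeAction/resolve: compute edits only for the action the user picked.
fn resolve_action(mut action: Action) -> Action {
    if let Some(data) = action.data.take() {
        action.edit = Some(format!("edits derived from {data}"));
    }
    action
}
```

The server must also advertise `codeActionProvider.resolveProvider: true`, since clients only send codeAction/resolve when that capability is present.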

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

Summary per file:

  • pyrefly/lib/lsp/non_wasm/server.rs: Adds resolve capability + request handling, introduces resolve data payload, refines kind filtering and automatic-request refactor skipping, and defers introduce_parameter edits to resolve.
  • pyrefly/lib/state/lsp/quick_fixes/introduce_parameter.rs: Adds a lightweight titles-only path for introduce_parameter and adjusts occurrence handling to avoid consuming ranges.
  • pyrefly/lib/state/lsp.rs: Exposes Transaction::introduce_parameter_action_titles for server use.
  • pyrefly/lib/test/lsp/lsp_interaction/basic.rs: Updates initialize-capabilities snapshot to include resolveProvider under codeActionProvider.
  • pyrefly/lib/test/lsp/lsp_interaction/convert_module_package.rs: Updates code action request contexts to include triggerKind: 1 so refactor actions are computed in tests.


Comment on lines +88 to +100
pub(crate) fn introduce_parameter_action_titles(
    transaction: &Transaction<'_>,
    handle: &Handle,
    selection: TextRange,
) -> Option<Vec<String>> {
    let module_info = transaction.get_module_info(handle)?;
    let ast = transaction.get_ast(handle)?;
    let selection_text = validate_non_empty_selection(selection, module_info.code_at(selection))?;
    let (_, expression_text, _, expression_range) = split_selection(selection_text, selection)?;
    if !is_exact_expression(ast.as_ref(), expression_range) {
        return None;
    }
    let function_ctx = find_function_context(ast.as_ref(), expression_range)?;
Copilot AI Feb 25, 2026
introduce_parameter_action_titles largely duplicates the validation / analysis logic from introduce_parameter_code_actions. This duplication makes it easy for the title computation and resolve computation to drift over time (e.g., producing titles that no longer match resolvable actions). Consider factoring out a shared helper that returns the computed context (function ctx, template, param_name, occurrence count, etc.) used by both paths.
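The shared helper suggested here could look roughly like this (all names are hypothetical, invented for illustration): both the titles-only path and the resolve path consume one analysis struct, so the titles offered can never drift from the actions that actually resolve.

```rust
// Hypothetical shared analysis result computed once, used by both paths.
struct IntroduceParamAnalysis {
    param_name: String,
    occurrence_count: usize,
}

// Titles path: derives the listing directly from the shared analysis.
fn action_titles(a: &IntroduceParamAnalysis) -> Vec<String> {
    let mut titles = vec![format!("Introduce parameter `{}`", a.param_name)];
    if a.occurrence_count > 1 {
        titles.push(format!(
            "Introduce parameter `{}` (replace all occurrences)",
            a.param_name
        ));
    }
    titles
}
```

The resolve path would take the same `IntroduceParamAnalysis` and produce the workspace edits, so validation and naming logic live in exactly one place.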

Contributor
i'd like to see if we can share more logic

Comment on lines +126 to +140
    let template =
        ExpressionTemplate::new(expression_text, expression_range, &name_refs, &param_names);
    let base_name = suggest_parameter_name(ast.as_ref(), expression_range, &param_names);
    let param_name = unique_name(&base_name, |name| param_names.contains(name));
    let occurrence_ranges = collect_matching_expression_ranges(
        module_info.contents(),
        &function_ctx.function_def.body,
        &template.text,
    );
    let mut titles = vec![format!("Introduce parameter `{param_name}`")];
    if occurrence_ranges.len() > 1 {
        titles.push(format!(
            "Introduce parameter `{param_name}` (replace all occurrences)"
        ));
    }
Copilot AI Feb 25, 2026
In introduce_parameter_action_titles, collect_matching_expression_ranges computes and allocates the full list of occurrence ranges, but the titles path only needs to know whether there is more than one occurrence. This can still be expensive for large function bodies / many matches. Consider adding a cheap “count up to 2” helper (or early-exit traversal) so listing only determines 0/1/many without materializing every range.
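The "count up to 2" idea can be sketched as an early-exit scan; `&str` candidates stand in here for the real expression ranges, and the function name is invented for illustration:

```rust
// Early-exit occurrence counting: stop scanning once `limit` matches have
// been seen, so the titles path can classify 0/1/many without allocating
// a Vec of every matching range.
fn count_matches_up_to(candidates: &[&str], needle: &str, limit: usize) -> usize {
    candidates
        .iter()
        .filter(|c| **c == needle)
        .take(limit)
        .count()
}
```

With `limit = 2` the traversal stops at the second match, so the cost of listing no longer grows with the total number of occurrences in a large function body.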

Contributor
i wonder if benchmarking will help us understand the implications of this split

Contributor

@kinto0 kinto0 left a comment
awesome! and quick turnaround, thank you

a few suggestions. and I'd love to see benchmarks on a really large codebase of the before / after. if you want to automate the benchmarks, they can be done in a similar way to pytorch_benchmark. otherwise, a regular timer or cli works just fine

    if is_automatic {
        return (!actions.is_empty()).then_some(actions);
    }
    if allow_refactor {
Contributor
introduce parameter was the slowest, but I am not convinced we can remove this gating just yet. let's keep the gating for now until we're confident it's fast

Contributor

@kinto0 kinto0 left a comment
still very interested in a benchmark to understand if this change is worthwhile. left some other comments as well

);
if allows_kind(&CodeActionKind::REFACTOR_EXTRACT) {
let start = Instant::now();
if let Some(titles) = transaction.introduce_parameter_action_titles(&handle, range)
Contributor
since I imagine we will move a lot more of these to resolve, is there a way this logic can be abstracted?

..Default::default()
}));
}
record_code_action_telemetry(
Contributor
why the new call instead of the macro here? can we abstract this away?


@asukaminato0721 asukaminato0721 force-pushed the 2540 branch 2 times, most recently from 0ddf8a9 to 372ac12 on February 27, 2026 17:18

Contributor

@kinto0 kinto0 left a comment
thanks for writing the benchmark! could you add details to the PR on where you tested it and what the results were?

//! CODE_ACTION_BENCH_RANGE=START_LINE:START_COL-END_LINE:END_COL \
//! CODE_ACTION_BENCH_ITERS=10 \
//! CODE_ACTION_BENCH_TITLE="Introduce parameter `param`" \
//! cargo test --release test_code_action_latency -- --ignored --nocapture
Contributor
what were the results of this benchmark before/after?

github-actions bot commented Mar 2, 2026

According to mypy_primer, this change doesn't affect type check results on a corpus of open source code. ✅

github-actions bot commented Mar 2, 2026

No diffs to classify.

@asukaminato0721
Contributor Author

RUSTC_WRAPPER= SCCACHE_DISABLE=1 CODE_ACTION_BENCH_PATH=/home/w/gitproject/pytorch CODE_ACTION_BENCH_FILE=torch/utils/_traceback.py CODE_ACTION_BENCH_RANGE=135:15-135:57 CODE_ACTION_BENCH_ITERS=15 CODE_ACTION_BENCH_TITLE='Introduce parameter `param`' cargo test code_action_benchmark -- --ignored --nocapture
  Branch (this PR)

  - list: count=15 mean=1.19573733s p50=6.539171ms p95=17.846324808s p99=17.846324808s
  - resolve: count=15 mean=142.463678ms p50=131.816033ms p95=299.781025ms p99=299.781025ms

  Main (worktree with benchmark harness added)

  - list: count=15 mean=1.34127552s p50=134.289285ms p95=18.233989033s p99=18.233989033s
  - resolve: no samples (main returns edits in list; no codeAction/resolve)

@kinto0
Contributor

kinto0 commented Mar 2, 2026

> RUSTC_WRAPPER= SCCACHE_DISABLE=1 CODE_ACTION_BENCH_PATH=/home/w/gitproject/pytorch CODE_ACTION_BENCH_FILE=torch/utils/_traceback.py CODE_ACTION_BENCH_RANGE=135:15-135:57 CODE_ACTION_BENCH_ITERS=15 CODE_ACTION_BENCH_TITLE='Introduce parameter `param`' cargo test code_action_benchmark -- --ignored --nocapture
>
> Branch (this PR)
>
> - list: count=15 mean=1.19573733s p50=6.539171ms p95=17.846324808s p99=17.846324808s
> - resolve: count=15 mean=142.463678ms p50=131.816033ms p95=299.781025ms p99=299.781025ms
>
> Main (worktree with benchmark harness added)
>
> - list: count=15 mean=1.34127552s p50=134.289285ms p95=18.233989033s p99=18.233989033s
> - resolve: no samples (main returns edits in list; no codeAction/resolve)

I would hope that list would be more performant than resolve. I'm not sure this difference makes the complexity worth it. those ~18 second lists are the problem. what do you think?

@asukaminato0721
Contributor Author

I noticed that it's under debug build, so...

will test again in release build

@asukaminato0721
Contributor Author

release build

branch ver

==== Code Action Benchmark Results ====
list: count=15 mean=124.718888ms p50=1.319557ms p95=1.850084524s p99=1.850084524s
resolve: count=15 mean=32.971791ms p50=28.746369ms p95=99.454146ms p99=99.454146ms
======================================

main

==== Code Action Benchmark Results ====
list: count=15 mean=177.039189ms p50=33.840974ms p95=2.19255534s p99=2.19255534s
resolve: no samples
======================================

@kinto0
Contributor

kinto0 commented Mar 4, 2026

> release build
>
> branch ver
>
> ==== Code Action Benchmark Results ====
> list: count=15 mean=124.718888ms p50=1.319557ms p95=1.850084524s p99=1.850084524s
> resolve: count=15 mean=32.971791ms p50=28.746369ms p95=99.454146ms p99=99.454146ms
> ======================================
>
> main
>
> ==== Code Action Benchmark Results ====
> list: count=15 mean=177.039189ms p50=33.840974ms p95=2.19255534s p99=2.19255534s
> resolve: no samples
> ======================================

I'm still not sure this performance difference is worth the complexity. on large codebases list still takes too long. what do you think? should we remove the github issue?


Development

Successfully merging this pull request may close these issues.

implement codeaction/resolve

4 participants