
Conversation

@mohammedahmed18 (Contributor) commented Oct 22, 2025

PR Type

Enhancement, Bug fix


Description

  • Introduce singleton LSP server instance

  • Add per-task execution context vars

  • Implement cancellation abort checkpoints

  • Simplify server feature handlers' signatures


Diagram Walkthrough

flowchart LR
  FEAT["LSP Feature Handlers"] -- "use" --> SING["CodeflashServerSingleton"]
  SING -- "exposes" --> SRV["CodeflashLanguageServer"]
  SRV -- "context var" --> CTX["execution_context_vars"]
  FEAT -- "run optimization" --> PERF["perform_optimization.sync_perform_optimization"]
  FEAT -- "on cancel" --> CANCEL["set cancel_event + return canceled"]
  PERF -- "poll" --> CE["cancel_event checkpoints"]
  SRV -- "message serialize" --> LSPMSG["LSPMessage includes task_id"]

File Walkthrough

Relevant files

Enhancement

beta.py (codeflash/lsp/beta.py): Feature handlers refactor with context and cancel (+99/-92)

  • Switch to CodeflashServerSingleton usage.
  • Add task_id param and execution context manager.
  • Refactor feature handlers to not accept server arg.
  • Add cancellation handling via contextvars and threading event.

perform_optimization.py (codeflash/lsp/features/perform_optimization.py): Add cancellable optimization checkpoints (+25/-2)

  • Add cancel response helper and abort checkpoints.
  • Use server singleton and cancel_event.
  • Insert abort checks throughout optimization flow.

lsp_message.py (codeflash/lsp/lsp_message.py): Propagate task_id in LSP messages (+4/-1)

  • Include task_id from execution context in messages.
  • Use server singleton to access context vars.

server.py (codeflash/lsp/server.py): Server singleton and context vars support (+24/-3)

  • Implement CodeflashServerSingleton.
  • Add execution_context_vars contextvar to server.
  • Adjust server constructor signature.

@github-actions

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Context Propagation

The new execution context uses a single dict ContextVar and is manually copied when offloading to a thread. Verify that all code paths needing task-scoped data run inside the execution_context manager and that contextvars.copy_context is always used when submitting work to executors; otherwise, task_id may be missing or stale in logs and messages.

@contextlib.contextmanager
def execution_context(**kwargs: str) -> None:
    """Temporarily set context values for the current async task."""
    # Create a fresh copy per use
    current = {**server.execution_context_vars.get(), **kwargs}
    token = server.execution_context_vars.set(current)
    try:
        yield
    finally:
        server.execution_context_vars.reset(token)


@server.feature("initializeFunctionOptimization")
def initialize_function_optimization(params: FunctionOptimizationInitParams) -> dict[str, str]:
    with execution_context(task_id=params.task_id):
        document_uri = params.textDocument.uri
        document = server.workspace.get_text_document(document_uri)

        server.show_message_log(
            f"Initializing optimization for function: {params.functionName} in {document_uri}", "Info"
        )

        if server.optimizer is None:
            _initialize_optimizer_if_api_key_is_valid()

        server.optimizer.worktree_mode()

        original_args, _ = server.optimizer.original_args_and_test_cfg

        server.optimizer.args.function = params.functionName
        original_relative_file_path = Path(document.path).relative_to(original_args.project_root)
        server.optimizer.args.file = server.optimizer.current_worktree / original_relative_file_path
        server.optimizer.args.previous_checkpoint_functions = False
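
For reference, the thread-offload path the guide asks to verify looks roughly like the sketch below; run_in_worker and sync_fn are illustrative names, not code from the PR.

import asyncio
import contextvars

async def run_in_worker(sync_fn, params):
    # A thread-pool worker does not inherit the caller's contextvars, so snapshot
    # the current context (including execution_context_vars and its task_id) and
    # run the sync function inside that snapshot.
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    return await loop.run_in_executor(None, ctx.run, sync_fn, params)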
Cancellation Robustness

Abort checkpoints raise RuntimeError("cancelled") but callers only handle asyncio.CancelledError. Ensure RuntimeError is caught and converted to a canceled response, or switch to a consistent exception type to avoid surfacing as generic errors.

def abort_if_cancelled(cancel_event: threading.Event) -> None:
    if cancel_event.is_set():
        raise RuntimeError("cancelled")


def sync_perform_optimization(params) -> dict[str, str]:  # noqa
    server = CodeflashServerSingleton.get()
    cancel_event = server.cancel_event
    server.show_message_log(f"Starting optimization for function: {params.functionName}", "Info")
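
A minimal sketch of how the async handler could convert the checkpoint's RuntimeError into a canceled response; the extra except RuntimeError branch and its message check are assumptions layered onto the handler shown in this PR, not code from it.

@server.feature("performFunctionOptimization")
async def perform_function_optimization(params: FunctionOptimizationParams) -> dict[str, str]:
    with execution_context(task_id=params.task_id):
        loop = asyncio.get_running_loop()
        server.cancel_event = threading.Event()
        try:
            ctx = contextvars.copy_context()
            return await loop.run_in_executor(None, ctx.run, sync_perform_optimization, params)
        except asyncio.CancelledError:
            server.cancel_event.set()
            return get_cancelled_reponse()
        except RuntimeError as exc:
            # Assumption: treat the checkpoint's RuntimeError("cancelled") as a
            # cancellation instead of letting it surface as a generic error.
            if str(exc) == "cancelled":
                return get_cancelled_reponse()
            raise
        finally:
            server.cleanup_the_optimizer()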
Singleton Lifecycle

The singleton creates a global LanguageServer but there is no explicit reset/shutdown path. Validate that tests or multiple CLI invocations won’t accidentally reuse shared state (optimizer, cancel_event, context vars) across sessions, and consider thread-safety for instance creation.

class CodeflashServerSingleton:
    _instance: CodeflashLanguageServer | None = None

    @classmethod
    def get(cls) -> CodeflashLanguageServer:
        if cls._instance is None:
            cls._instance = CodeflashLanguageServer(
                "codeflash-language-server", "v1.0", protocol_cls=CodeflashLanguageServerProtocol
            )
        return cls._instance

    def __init__(self) -> None:
        # This is a singleton class, so we don't want to initialize.
        ...
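
A lock-guarded variant with an explicit reset hook is one way to address both points; the _lock attribute and reset() method are assumptions, not part of the PR.

import threading

class CodeflashServerSingleton:
    _instance: CodeflashLanguageServer | None = None
    _lock = threading.Lock()

    @classmethod
    def get(cls) -> CodeflashLanguageServer:
        # Double-checked locking so concurrent first calls construct only one server.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = CodeflashLanguageServer(
                        "codeflash-language-server", "v1.0", protocol_cls=CodeflashLanguageServerProtocol
                    )
        return cls._instance

    @classmethod
    def reset(cls) -> None:
        # Teardown hook so tests or repeated CLI sessions do not reuse optimizer,
        # cancel_event, or context vars from a previous run.
        cls._instance = None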

@github-actions

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Possible issue
Make cancel flag task-local

Avoid assigning server.cancel_event per request on the shared singleton, which can
race across concurrent tasks. Instead, store the event in the context vars and have
sync code fetch it from the server’s context storage.

codeflash/lsp/beta.py [401-412]

 @server.feature("performFunctionOptimization")
 async def perform_function_optimization(params: FunctionOptimizationParams) -> dict[str, str]:
-    with execution_context(task_id=params.task_id):
+    cancel_event = threading.Event()
+    with execution_context(task_id=params.task_id, cancel_event=cancel_event):
         loop = asyncio.get_running_loop()
-        server.cancel_event = threading.Event()
-
         try:
             ctx = contextvars.copy_context()
             return await loop.run_in_executor(None, ctx.run, sync_perform_optimization, params)
         except asyncio.CancelledError:
-            server.cancel_event.set()
+            cancel_event.set()
             return get_cancelled_reponse()
         finally:
             server.cleanup_the_optimizer()
Suggestion importance[1-10]: 8


Why: This addresses a real concurrency bug risk: storing per-request cancellation state on a shared singleton. Using context to keep the event task-local aligns with the new execution_context pattern and reduces race conditions.

Impact: Medium
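
On the sync side, this suggestion implies reading the event out of the copied context rather than off the singleton; a rough sketch mirroring the existing snippet (the "cancel_event" key lookup is an assumption):

def sync_perform_optimization(params) -> dict[str, str]:  # noqa
    server = CodeflashServerSingleton.get()
    # This runs inside the context captured by contextvars.copy_context(), so the
    # task-local event stored via execution_context(...) is visible here.
    cancel_event = server.execution_context_vars.get().get("cancel_event")
    server.show_message_log(f"Starting optimization for function: {params.functionName}", "Info")
    ...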
Use dedicated cancellation exception

Raise a dedicated exception type rather than a generic RuntimeError so upstream code can map it to a canceled response reliably. Update call sites to catch it and return get_cancelled_reponse() instead of letting it bubble up as an unhandled error.

codeflash/lsp/features/perform_optimization.py [20-22]

+class OptimizationCancelled(Exception):
+    pass
+
 def abort_if_cancelled(cancel_event: threading.Event) -> None:
     if cancel_event.is_set():
-        raise RuntimeError("cancelled")
+        raise OptimizationCancelled("cancelled")
Suggestion importance[1-10]: 6


Why: Introducing a specific cancellation exception can improve clarity and handling, but the current code already maps asyncio.CancelledError to a canceled response and uses a cancel_event with checks. This is a minor robustness improvement.

Impact: Low
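
The matching call-site change is small; a fragment assuming the handler wraps the executor call as shown elsewhere in this review:

        try:
            ctx = contextvars.copy_context()
            return await loop.run_in_executor(None, ctx.run, sync_perform_optimization, params)
        except OptimizationCancelled:
            # The dedicated exception maps unambiguously to the canceled payload.
            return get_cancelled_reponse()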
Validate singleton initialization timing

Ensure the singleton server is created with the correct protocol class only once to
avoid multiple event loops or protocol mismatch. Verify that
CodeflashServerSingleton.get() is safe to call at import time in all execution
contexts (tests, workers).

codeflash/lsp/beta.py [90]

+# Defer singleton creation until first feature registration/runtime use if needed elsewhere
 server = CodeflashServerSingleton.get()
Suggestion importance[1-10]: 5


Why: The suggestion is conceptually valid (import-time singleton instantiation can be risky), but it asks to "ensure/verify" behavior without a concrete code change. Impact is moderate and context-aware, yet not critical.

Impact: Low
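
If deferral is preferred, one illustrative shape (the get_server helper is hypothetical and not part of the PR; pygls feature registration would still need an instance at decoration time):

def get_server() -> CodeflashLanguageServer:
    # Hypothetical lazy accessor: call sites use get_server() instead of a
    # module-level `server`, so the LanguageServer is only built on first use.
    return CodeflashServerSingleton.get()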

@codeflash-ai bot (Contributor) commented Oct 22, 2025

⚡️ Codeflash found optimizations for this PR

📄 101% (1.01x) speedup for check_api_key in codeflash/lsp/beta.py

⏱️ Runtime: 6.88 milliseconds → 3.42 milliseconds (best of 75 runs)

A dependent PR with the suggested changes has been created. Please review:

If you approve, it will be merged into this PR (branch lsp/task-execution-context).
