Conversation

@pan-x-c
Collaborator

@pan-x-c pan-x-c commented Nov 5, 2025

Description

Fix compile cache conflicts when multiple vLLM instances are started at the same time.

See vllm-project/vllm#24601 for details.

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist
Contributor

Summary of Changes

Hello @pan-x-c, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where multiple vLLM instances running simultaneously with torch.compile (i.e., enforce_eager=False) could experience cache directory conflicts. By dynamically setting a unique VLLM_CACHE_ROOT for each instance, the change ensures stable and independent operation, preventing potential data corruption or unexpected behavior arising from shared cache resources.

Highlights

  • Torch Compile Cache Conflict Resolution: Implemented a fix to prevent torch.compile cache directory conflicts when multiple vLLM instances are launched concurrently and enforce_eager mode is disabled.
  • Dynamic Cache Root Configuration: The VLLM_CACHE_ROOT environment variable is now dynamically set using config.bundle_indices to ensure unique cache directories for each vLLM instance, specifically when enforce_eager is False.
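The highlighted change can be sketched as follows. This is a minimal illustration, not the PR's actual code: the attribute names `enforce_eager` and `bundle_indices` follow the PR description, while the helper function name and the base path are assumptions.

```python
import os


def set_unique_vllm_cache_root(config, base="/tmp/vllm_cache"):
    """Give each vLLM instance its own torch.compile cache directory.

    Only needed when enforce_eager is False, i.e. when torch.compile
    is active and a shared cache directory could cause conflicts.
    """
    if not config.enforce_eager:
        # bundle_indices distinguishes instances launched concurrently,
        # so joining them yields a per-instance cache path.
        suffix = "_".join(str(i) for i in config.bundle_indices)
        os.environ["VLLM_CACHE_ROOT"] = os.path.join(base, suffix)
```

With this in place, two instances started with bundle indices `[0, 1]` and `[2, 3]` would write their compile caches to separate directories instead of racing on a shared one.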
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request aims to fix torch.compile cache directory conflicts when multiple vLLM instances are started with enforce_eager=False. It does this by setting the VLLM_CACHE_ROOT environment variable to a path that includes config.bundle_indices, making the cache path unique for different instances. My review identifies a potential issue where the fix might not be effective if config.bundle_indices is empty, as multiple instances would still share the same cache directory. I've suggested a more robust implementation using the process ID as a fallback to ensure a unique cache path in all relevant scenarios.
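The reviewer's suggested fallback can be sketched like this. It is an illustrative sketch only: the function name and base path are hypothetical, and only the idea of falling back to the process ID when `config.bundle_indices` is empty comes from the review above.

```python
import os


def unique_cache_root(config, base="/tmp/vllm_cache"):
    """Return a cache path that is unique per vLLM instance.

    Falls back to the process ID when bundle_indices is empty or
    missing, so concurrent instances never share a compile cache.
    """
    if getattr(config, "bundle_indices", None):
        suffix = "_".join(str(i) for i in config.bundle_indices)
    else:
        # No bundle indices to distinguish instances; the PID is
        # unique per process and serves as a safe fallback.
        suffix = str(os.getpid())
    return os.path.join(base, suffix)
```

Using the PID guarantees uniqueness across concurrently running processes, at the cost of losing cache reuse across restarts of the same instance.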

@pan-x-c
Collaborator Author

pan-x-c commented Nov 5, 2025

/unittest-module-explorer

@github-actions

github-actions bot commented Nov 5, 2025

Summary

Tests 📝: 36 | Passed ✅: 35 | Failed ❌: 0 | Skipped ⏭️: 1 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 11m 17s

Skipped

Tests Status
tests/explorer/workflow_test.py::TestAgentScopeWorkflowAdapter::test_adapter skipped ⏭️

Tests

Test Name Status Flaky Duration
tests/explorer/explorer_test.py::TestExplorerCountdownEval::test_explorer 1m 15s
tests/explorer/explorer_test.py::TestExplorerGSM8KRULERNoEval::test_explorer 1m 1s
tests/explorer/explorer_test.py::TestExplorerGSM8k::test_explorer 3m 38s
tests/explorer/explorer_test.py::ServeTest::test_serve 1m 22s
tests/explorer/scheduler_test.py::SchedulerTest::test_async_workflow 12.2s
tests/explorer/scheduler_test.py::SchedulerTest::test_concurrent_operations 12.3s
tests/explorer/scheduler_test.py::SchedulerTest::test_get_results 30.7s
tests/explorer/scheduler_test.py::SchedulerTest::test_multi_step_execution 12.5s
tests/explorer/scheduler_test.py::SchedulerTest::test_non_repeatable_workflow 12.5s
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_all_methods 22.2s
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_restart_after_stop 24.0s
tests/explorer/scheduler_test.py::SchedulerTest::test_split_tasks 15.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_stepwise_experience_eid 12.7s
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all 15.3s
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all_timeout_with_multi_batch 21.2s
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_0 2ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_1 603ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_0 3ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_1 1.0s
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_raise_error 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_stop_at_max_env_steps 1.0s
tests/explorer/workflow_test.py::WorkflowTest::test_gsm8k_workflow 41ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_boxed_workflow 27ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_complex_workflow 182ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_eval_workflow 4ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_fraction_workflow 13ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_workflow 8ms
tests/explorer/workflow_test.py::WorkflowTest::test_rm_gallery_workflow 99ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_1 101ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_1 201ms
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_0::test_multi_turn_workflow 15.7s
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_1::test_multi_turn_workflow 15.0s
tests/explorer/workflow_test.py::TestAgentScopeWorkflowAdapter::test_adapter ⏭️ 1ms
tests/explorer/workflow_test.py::TestWorkflowRunner::test_workflow_runner 293ms

Github Test Reporter by CTRF 💚

@chenyushuo chenyushuo merged commit 1eb169c into agentscope-ai:main Nov 6, 2025
2 checks passed