[0.34.0-dlc][python] Support adapters in async vLLM handler#2901

Merged
ethnzhng merged 20 commits into 0.34.0-dlc from 0.34.0-dlc-lora
Sep 29, 2025

Conversation

@ethnzhng
Contributor

@ethnzhng ethnzhng commented Sep 29, 2025

Description

  • Add support for adapter (LoRA) management in the async-mode vLLM handler, adapted from the existing huggingface.py handler
  • Add integration tests for async vLLM LoRA
    • Temporarily disable irrelevant integration tests to save time/compute and prevent blocking vLLM image release
  • Improve robustness of error handling in test client
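The adapter-management bookkeeping described above (registering and unregistering LoRA adapters at runtime) can be sketched as a small registry. This is an illustrative sketch only, not the handler's actual implementation; the `Adapter` fields and `AdapterRegistry` method names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Adapter:
    """A registered LoRA adapter (illustrative; field names are assumptions)."""
    name: str
    src: str  # path or URI the adapter weights were loaded from


class AdapterRegistry:
    """Minimal sketch of register/unregister bookkeeping for LoRA adapters."""

    def __init__(self):
        self._adapters = {}

    def register(self, name, src):
        # Reject duplicate names so two adapters cannot shadow each other.
        if name in self._adapters:
            raise ValueError(f"adapter '{name}' is already registered")
        self._adapters[name] = Adapter(name, src)
        return self._adapters[name]

    def unregister(self, name):
        if name not in self._adapters:
            raise KeyError(f"adapter '{name}' is not registered")
        del self._adapters[name]

    def get(self, name):
        # Returns None for unknown adapters rather than raising.
        return self._adapters.get(name)
```

In the real handler this bookkeeping would sit in front of the engine's own LoRA loading, so a lookup failure can be reported to the client before any inference is attempted.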

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • Please add the link to the Integration Tests Executor run with related tests.
  • Have you manually built the docker image and verified the change?
  • Have you run related tests? Check how to set up the test environment here; one example is pytest tests.py -k "TestCorrectnessLmiDist" -m "lmi_dist"
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

Feature/Issue validation/testing

Please describe the unit or integration tests you ran to verify your changes, and summarize the relevant results. Provide instructions so the tests can be reproduced.
Please also list any relevant details for your test configuration.

  • Test A
    Logs for Test A

  • Test B
    Logs for Test B

@ethnzhng ethnzhng requested review from a team and zachgk as code owners September 29, 2025 21:21
@Lokiiiiii
Member

@ethnzhng What happens to existing lora adapters if the engine/worker/container restarts ?

@ethnzhng
Contributor Author

ethnzhng commented Sep 29, 2025

@ethnzhng What happens to existing lora adapters if the engine/worker/container restarts ?

Upon engine restart, previously registered adapters which were loaded from a location other than /opt/ml/model/adapters will not automatically be restored and will need to be re-registered. This is consistent with the existing behavior, but we can note it as a TODO to address as a follow-up.

@xyang16
Contributor

xyang16 commented Sep 29, 2025

@ethnzhng What happens to existing lora adapters if the engine/worker/container restarts ?

Upon engine restart, previously registered adapters will not automatically be restored and will need to be re-registered. This is consistent with the existing behavior, but we can note it as a TODO to address as a follow-up.

Adapters saved in /opt/ml/model/adapters will be restored.

@ethnzhng ethnzhng merged commit 33b22de into 0.34.0-dlc Sep 29, 2025
27 of 30 checks passed
@ethnzhng ethnzhng deleted the 0.34.0-dlc-lora branch September 29, 2025 23:46