
Conversation

@dtrifiro (Contributor) commented Aug 6, 2025

  • bump minimum vLLM version to v0.10.0
  • drop PromptAdapterRequest and everything related
  • gha: tests: bump vllm tag to v0.10

fixes #268
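
As a rough sketch, the pyproject.toml side of this would look something like the snippet below; the surrounding entries and the `:main` callable are assumptions, and only the new ">=0.10.0" floor and the removed convert_pt_to_prompt entry point are stated in this PR:

```toml
[project]
dependencies = [
    "vllm>=0.10.0",  # minimum bumped by this PR
]

[project.scripts]
# Removed by this PR along with the module it pointed at:
# convert_pt_to_prompt = "vllm_tgis_adapter.tgis_utils.convert_pt_to_prompt:main"
```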

Summary by CodeRabbit

  • Chores

    • Increased the minimum required version of the "vllm" dependency to 0.10.0.
    • Updated test workflow to support CPU-only mode and extended job timeout.
    • Removed the script entry point for prompt adapter conversion.
  • Bug Fixes

    • Eliminated support for prompt adapters and related configuration files.
  • Tests

    • Removed all tests related to prompt adapters and the prompt adapter conversion tool.
  • Refactor

    • Updated type annotations and internal logic to remove prompt adapter handling.

coderabbitai bot commented Aug 6, 2025

Walkthrough

This update removes all support for prompt tuning adapters from the codebase. It eliminates the related code, tests, configuration files, and the CLI tool for converting PyTorch prompt tuning models. The minimum "vllm" version is raised to 0.10.0, and the script entry point for prompt adapter conversion is removed from the project configuration.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| Dependency and CLI Configuration<br>`pyproject.toml` | Updated the "vllm" dependency to ">=0.10.0"; removed the "convert_pt_to_prompt" script entry point. |
| Prompt Adapter Logic Removal<br>`src/vllm_tgis_adapter/grpc/adapters.py` | Removed support for "PROMPT_TUNING" adapters in validation logic and updated the function signature accordingly. |
| Prompt Adapter CLI Tool<br>`src/vllm_tgis_adapter/tgis_utils/convert_pt_to_prompt.py` | Deleted the CLI tool for converting PyTorch prompt tuning models to PEFT format. |
| Prompt Adapter Logging<br>`src/vllm_tgis_adapter/tgis_utils/logs.py` | Removed logic related to prompt_adapter_request from the logging function. |
| Prompt Adapter Test Fixtures<br>`tests/fixtures/bloom_sentiment_1/adapter_config.json` | Deleted the prompt tuning adapter configuration fixture. |
| Prompt Adapter Tests<br>`tests/test_adapters.py` | Removed all tests related to prompt adapters and their caching/handling. |
| Test Fixture Removal<br>`tests/conftest.py` | Removed the prompt_tune_path fixture and related imports. |
| CI Configuration<br>`.github/workflows/tests.yaml` | Updated the GitHub Actions matrix to test with vllm 0.10.0 instead of 0.7.2. |
| gRPC Server Typing Update<br>`src/vllm_tgis_adapter/grpc/grpc_server.py` | Removed the PromptAdapterRequest import and updated the _validate_adapters return type to exclude prompt adapters. |
| PEFT Converter Test<br>`tests/test_peft_converter.py` | Disabled the test for prompt tuning adapter conversion, as the feature is deprecated and removed. |
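
Based on the adapters.py and grpc_server.py rows above, the validation path presumably now rejects prompt tuning up front. A minimal sketch, assuming a function name, helper, and error message that are not in this summary (the real diff is authoritative):

```python
from typing import Optional

from vllm.lora.request import LoRARequest  # the PromptAdapterRequest import is gone


async def validate_adapters(request, adapter_store) -> Optional[LoRARequest]:
    """Resolve the requested adapter; prompt tuning adapters are rejected."""
    adapter_type = _read_adapter_type(request)  # hypothetical helper
    if adapter_type == "PROMPT_TUNING":
        # vLLM >= 0.10.0 no longer ships PromptAdapterRequest, so this
        # adapter type can only be refused.
        raise ValueError(f"Unsupported adapter type: {adapter_type}")
    ...  # LoRA resolution continues as before
```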

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI (removed)
    participant AdapterValidator
    participant AdapterStore
    participant ModelHandler

    User->>CLI: Run convert_pt_to_prompt (no longer available)
    Note over CLI: CLI tool for prompt adapter conversion is removed

    User->>AdapterValidator: Request with adapter type "PROMPT_TUNING"
    AdapterValidator->>AdapterValidator: Immediately raise unsupported adapter error
    Note over AdapterValidator: No prompt adapter validation or handling remains
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Upgrade to vllm version 0.10.0 without import errors | ✅ | The import of PromptAdapterRequest from vllm.prompt_adapter was removed, fixing the import error. |
| Remove prompt tuning adapter support to prevent failures | ✅ | All prompt tuning adapter related code, tests, CLI, and config files were removed accordingly. |
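
For context on the first row, the broken import was presumably along these lines (module path recalled from pre-0.10 vLLM releases, not taken from this diff):

```python
# Worked on vLLM < 0.10.0; the prompt_adapter module was removed upstream,
# so this import now raises ImportError and had to be dropped:
from vllm.prompt_adapter.request import PromptAdapterRequest
```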

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes detected.

Poem

A rabbit hopped through code so bright,
Prompt tuning vanished out of sight.
No more adapters, no more tune,
CLI and tests gone too soon.
With dependencies fresh and clean,
The code now runs, prompt-free and lean!
🐇✨

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2d6819 and 9dad652.

📒 Files selected for processing (2)
  • .github/workflows/tests.yaml (3 hunks)
  • tests/conftest.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/conftest.py
  • .github/workflows/tests.yaml


A reviewer (Contributor) commented on the removed test:

```python
@pytest.mark.asyncio
async def test_cache_handles_concurrent_loads(vllm_model_handler):
```

This test would probably be good to keep, the second half of it is all using lora adapters.
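
For reference, a concurrent-load test of that shape might look roughly like the following; the body and the load_adapter call are hypothetical, since only the test name and the vllm_model_handler fixture are visible above:

```python
import asyncio

import pytest


@pytest.mark.asyncio
async def test_cache_handles_concurrent_loads(vllm_model_handler):
    # Hypothetical reconstruction: fire several concurrent loads of the same
    # LoRA adapter and check that the cache hands back one shared entry.
    async def load():
        return await vllm_model_handler.load_adapter("lora-adapter")  # assumed API

    results = await asyncio.gather(*(load() for _ in range(8)))
    assert len({id(result) for result in results}) == 1
```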

A follow-up from the same thread:

(But also not a huge deal if we're deleting this repo soon 😉)

@maxdebayser (Contributor) left a review:

LGTM

@joerunde (Contributor) left a review:

generally lgtm, ship it when tests pass 🚀

@heyselbi commented Aug 6, 2025

@joerunde should this section be removed as well, since src/vllm_tgis_adapter/tgis_utils/convert_pt_to_prompt.py is deleted? https://github.com/opendatahub-io/vllm-tgis-adapter/pull/269/files#diff-044c034e586927eab0938dfbd552ed6e6cdc3a05168bae2e69fae45bd3ac54d7L191-L200

@dtrifiro (Contributor, Author) commented Aug 8, 2025

So, I investigated this a bit. It looks like CPU vllm is broken as of v0.10.0 (maybe from earlier versions as well, since I've seen our CI fail for quite a while before this).

I tested this branch on a GPU host and both endpoints (http and grpc) work when using `python -m vllm_tgis_adapter`.

I used inference.py from https://github.com/opendatahub-io/vllm-tgis-adapter/tree/main/examples to hit the grpc endpoint.

We don't have the capacity to get CPU vllm working again, so we'll just merge this as is for now.
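
For anyone reproducing that check, a minimal client in the spirit of examples/inference.py might look like the sketch below. The stub and message names are recalled from the TGIS generation.proto, the pb import path and port are assumptions, and the repo's example script is the authoritative version:

```python
import grpc

# Assumed import path for the generated protobuf stubs; verify against the repo.
from vllm_tgis_adapter.grpc.pb import generation_pb2, generation_pb2_grpc

# 8033 is the usual TGIS gRPC port; adjust to match how the server was started.
channel = grpc.insecure_channel("localhost:8033")
stub = generation_pb2_grpc.GenerationServiceStub(channel)

request = generation_pb2.BatchedGenerationRequest(
    requests=[generation_pb2.GenerationRequest(text="Hello, world!")],
)
response = stub.Generate(request)
print(response.responses[0].text)
```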

@dtrifiro merged commit 516f857 into main on Aug 8, 2025 (1 of 3 checks passed) and deleted the fixes-0.10.0 branch.
Development

Successfully merging this pull request may close: failures with vllm==0.10.0 (#268)