Conversation
Signed-off-by: Harry Mellor <[email protected]>
Code Review
This pull request aims to update the transformers library to version 5. The changes correctly update the version in requirements/test.in and requirements/nightly_torch_test.txt, and also add the --pre flag to uv pip install in the Dockerfile to allow installation of the release candidate. However, there is a critical oversight: requirements/common.txt still contains a constraint transformers < 5. This will lead to build failures for any configuration that relies on common.txt. This file must be updated to allow transformers v5 for this PR to be mergeable.
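The fix the review asks for could look like the following sketch (the exact existing bounds in `requirements/common.txt` are an assumption; only the relaxed upper bound is taken from the review):

```diff
 # requirements/common.txt — hypothetical sketch of the constraint change
-transformers < 5
+transformers >= 5.0.0
```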
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Documentation preview: https://vllm--30566.org.readthedocs.build/en/30566/
This pull request has merge conflicts that must be resolved before it can be merged.
Hi @hmellor, the pre-commit checks have failed. Please run:

    uv pip install pre-commit
    pre-commit install
    pre-commit run --all-files

Then, commit the changes and push to your branch.
Changes:
- Upgrade `transformers` to 5.0.0
- Upgrade `tokenizers` to 0.22.2 (as is required by Transformers 5.0.0)
- Upgrade `peft` to 0.18.1 so that huggingface/peft@41c07f0 is included (guards the import of `HybridCache` on the Transformers version)
- Upgrade `bitsandbytes` to 1.1.0 so that 4-bit bnb can work with Transformers v5
- Upgrade `mamba-ssm` to 2.3.0 so that state-spaces/mamba@35e927b is included (removes an import that was deleted in Transformers v5)
- Replace `HF_HUB_ENABLE_HF_TRANSFER` with `HF_XET_HIGH_PERFORMANCE`, as the HF Hub is all Xet now, so `hf_transfer` doesn't do anything anymore
- Add `HF_HUB_DOWNLOAD_TIMEOUT=60` to the CI environment to deal with the shortened timeout in `huggingface-hub==1`, since it switched to `httpx`
- … 4.57.5 installed

Architectures/models that will no longer work after the upgrade:
- `MiniCPMV` - Custom processing code on the Hub is incompatible with Transformers v5 (PR made but unmerged)
- `Molmo2ForConditionalGeneration` - `Molmo2Processor` uses the deprecated `optional_attributes` and passes arbitrary `kwargs` to `ProcessorMixin.__init__`, which is no longer supported in Transformers v5
- `OpenCUAForConditionalGeneration` - Custom code is not compatible with Transformers v5
- `OpenPanguVLForConditionalGeneration` - `OpenPanguVLVideoProcessorInitKwargs` does not specify `total=False`, making all kwargs required
- `HCXVisionForCausalLM` - Custom model code imports something deprecated that was deleted in Transformers v5

> [!CAUTION]
> 30d8b3d must be reverted before this can be merged
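The Hub-related environment changes in the list above can be sketched as follows (the variable names and values come from the changes list; modelling them as plain exported variables in a CI shell is an assumption):

```shell
# Replace HF_HUB_ENABLE_HF_TRANSFER with HF_XET_HIGH_PERFORMANCE:
# the HF Hub now serves content via Xet, so hf_transfer no longer helps.
unset HF_HUB_ENABLE_HF_TRANSFER
export HF_XET_HIGH_PERFORMANCE=1

# huggingface-hub==1 switched to httpx and shortened its default download
# timeout, so raise it explicitly to keep CI downloads from flaking.
export HF_HUB_DOWNLOAD_TIMEOUT=60
```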
Supplementary PRs:
- `pad_token_id` huggingface/transformers#43453
- `tied_weight_keys` in-place huggingface/transformers#43619
- `convert_rope_params_to_dict` so it uses `rope_theta` from the config huggingface/transformers#43766
- [Jamba] Fallback to slow path and warn instead of error out huggingface/transformers#43889
- `head_mask` from Ultravox and Swin #30764
- `HfHubHTTPError` in LoRA test #30768
- `position_embedding_type` will be present for BERT and RoBERTa models #30770
- `WeightRenaming` for Transformers modeling backend #31545
- `min_pixels`/`max_pixels` from Qwen2VL's processor #33208
- `tie_word_embeddings` for multimodal models in Transformers v5 #33359
- `return_dict` for `apply_chat_template` #33372
- `lm-eval` version for Transformers v5 compatibility #33994
- `mamba-ssm` version in CI for Transformers v5 compatibility #34233
- `modify_gen_kwargs` in `vllm_vlms.py` EleutherAI/lm-evaluation-harness#3573