Releases: vllm-project/vllm

v0.6.1.post1

13 Sep 04:40
acda0b3

Highlights

This release features important bug fixes and enhancements for

  • Pixtral models. (#8415, #8425, #8399, #8431)
    • Chunked prefill scheduling has been turned off for vision models. Please replace --max_num_batched_tokens 16384 with --max-model-len 16384 (see the sketch after this list).
  • Multistep scheduling. (#8417, #7928, #8427)
  • Tool use. (#8423, #8366)
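
A minimal offline sketch of the flag change noted above, assuming the Pixtral checkpoint from this release and sufficient GPU memory (the server-side equivalent is --max-model-len 16384):

```python
from vllm import LLM

# Chunked prefill is now disabled for vision models, so bound the context with
# max_model_len instead of max_num_batched_tokens.
llm = LLM(
    model="mistralai/Pixtral-12B-2409",  # example vision model from this release
    tokenizer_mode="mistral",
    max_model_len=16384,                 # was: --max_num_batched_tokens 16384
)
```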

Also

  • Support multiple images for Qwen-VL (#8247)
  • Remove engine_use_ray (#8126)
  • Add an engine option to return only deltas or the final output (#7381)
  • Add bitsandbytes support for Gemma2 (#8338)

What's Changed

New Contributors

Full Changelog: v0.6.1...v0.6.1.post1

v0.6.1

11 Sep 21:44
3fd2b0d

Highlights

Model Support

  • Added support for Pixtral (mistralai/Pixtral-12B-2409) (#8377, #8168); a usage sketch follows this list
  • Added support for Llava-Next-Video (#7559), Qwen-VL (#8029), Qwen2-VL (#7905)
  • Multi-input support for LLaVA (#8238), InternVL2 models (#8201)
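
A hedged sketch of multimodal inference with the newly supported Pixtral; the image URL is a placeholder and the message format follows the OpenAI-style schema that llm.chat accepts:

```python
from vllm import LLM

llm = LLM(model="mistralai/Pixtral-12B-2409", tokenizer_mode="mistral", max_model_len=16384)

messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image in one sentence."},
        {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},  # placeholder URL
    ],
}]
outputs = llm.chat(messages)
print(outputs[0].outputs[0].text)
```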

Performance Enhancements

  • Memory optimization for awq_gemm and awq_dequantize, 2x throughput (#8248)

Production Engine

  • Support loading and unloading LoRA adapters in the API server (#6566); see the sketch after this list
  • Add progress reporting to batch runner (#8060)
  • Add support for NVIDIA ModelOpt static scaling checkpoints. (#6112)
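
A sketch of runtime LoRA loading against the OpenAI-compatible server (#6566). The endpoint paths, payload fields, and the requirement to enable runtime LoRA updating are assumptions drawn from that PR; the adapter name and path are placeholders.

```python
import requests

BASE_URL = "http://localhost:8000"  # a `vllm serve ... --enable-lora` instance

# Load an adapter at runtime, use it, then unload it.
requests.post(
    f"{BASE_URL}/v1/load_lora_adapter",
    json={"lora_name": "my_adapter", "lora_path": "/path/to/my_adapter"},
)
# Completions can now target the adapter by passing "model": "my_adapter".
requests.post(
    f"{BASE_URL}/v1/unload_lora_adapter",
    json={"lora_name": "my_adapter"},
)
```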

Others

  • Update the docker image to use Python 3.12 for a small performance bump. (#8133)
  • Added CODE_OF_CONDUCT.md (#8161)

What's Changed

  • [Doc] [Misc] Create CODE_OF_CONDUCT.md by @mmcelaney in #8161
  • [bugfix] Upgrade minimum OpenAI version by @SolitaryThinker in #8169
  • [Misc] Clean up RoPE forward_native by @WoosukKwon in #8076
  • [ci] Mark LoRA test as soft-fail by @khluu in #8160
  • [Core/Bugfix] Add query dtype as per FlashInfer API requirements. by @elfiegg in #8173
  • [Doc] Add multi-image input example and update supported models by @DarkLight1337 in #8181
  • Inclusion of InternVLChatModel In PP_SUPPORTED_MODELS(Pipeline Parallelism) by @Manikandan-Thangaraj-ZS0321 in #7860
  • [MODEL] Qwen Multimodal Support (Qwen-VL / Qwen-VL-Chat) by @alex-jw-brooks in #8029
  • Move verify_marlin_supported to GPTQMarlinLinearMethod by @mgoin in #8165
  • [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM by @sroy745 in #7962
  • [Core] Support load and unload LoRA in api server by @Jeffwan in #6566
  • [BugFix] Fix Granite model configuration by @njhill in #8216
  • [Frontend] Add --logprobs argument to benchmark_serving.py by @afeldman-nm in #8191
  • [Misc] Use ray[adag] dependency instead of cuda by @ruisearch42 in #7938
  • [CI/Build] Increasing timeout for multiproc worker tests by @alexeykondrat in #8203
  • [Kernel] [Triton] Memory optimization for awq_gemm and awq_dequantize, 2x throughput by @rasmith in #8248
  • [Misc] Remove SqueezeLLM by @dsikka in #8220
  • [Model] Allow loading from original Mistral format by @patrickvonplaten in #8168
  • [misc] [doc] [frontend] LLM torch profiler support by @SolitaryThinker in #7943
  • [Bugfix] Fix Hermes tool call chat template bug by @K-Mistele in #8256
  • [Model] Multi-input support for LLaVA and fix embedding inputs for multi-image models by @DarkLight1337 in #8238
  • Enable Random Prefix Caching in Serving Profiling Tool (benchmark_serving.py) by @wschin in #8241
  • [tpu][misc] fix typo by @youkaichao in #8260
  • [Bugfix] Fix broken OpenAI tensorizer test by @DarkLight1337 in #8258
  • [Model][VLM] Support multi-images inputs for InternVL2 models by @Isotr0py in #8201
  • [Model][VLM] Decouple weight loading logic for Paligemma by @Isotr0py in #8269
  • ppc64le: Dockerfile fixed, and a script for buildkite by @sumitd2 in #8026
  • [CI/Build] Use python 3.12 in cuda image by @joerunde in #8133
  • [Bugfix] Fix async postprocessor in case of preemption by @alexm-neuralmagic in #8267
  • [Bugfix] Streamed tool calls now more strictly follow OpenAI's format; ensures Vercel AI SDK compatibility by @K-Mistele in #8272
  • [Frontend] Add progress reporting to run_batch.py by @alugowski in #8060
  • [Bugfix] Correct adapter usage for cohere and jamba by @vladislavkruglikov in #8292
  • [Misc] GPTQ Activation Ordering by @kylesayrs in #8135
  • [Misc] Fused MoE Marlin support for GPTQ by @dsikka in #8217
  • Add NVIDIA Meetup slides, announce AMD meetup, and add contact info by @simon-mo in #8319
  • [Bugfix] Fix missing post_layernorm in CLIP by @DarkLight1337 in #8155
  • [CI/Build] enable ccache/scccache for HIP builds by @dtrifiro in #8327
  • [Frontend] Clean up type annotations for mistral tokenizer by @DarkLight1337 in #8314
  • [CI/Build] Enabling kernels tests for AMD, ignoring some of them that fail by @alexeykondrat in #8130
  • Fix ppc64le buildkite job by @sumitd2 in #8309
  • [Spec Decode] Move ops.advance_step to flash attn advance_step by @kevin314 in #8224
  • [Misc] remove peft as dependency for prompt models by @prashantgupta24 in #8162
  • [MISC] Keep chunked prefill enabled by default with long context when prefix caching is enabled by @comaniac in #8342
  • [Bugfix] Ensure multistep lookahead allocation is compatible with cuda graph max capture by @alexm-neuralmagic in #8340
  • [Core/Bugfix] pass VLLM_ATTENTION_BACKEND to ray workers by @SolitaryThinker in #8172
  • [CI/Build][Kernel] Update CUTLASS to 3.5.1 tag by @tlrmchlsmth in #8043
  • [Misc] Skip loading extra bias for Qwen2-MOE GPTQ models by @jeejeelee in #8329
  • [Bugfix] Fix InternVL2 vision embeddings process with pipeline parallel by @Isotr0py in #8299
  • [Hardware][NV] Add support for ModelOpt static scaling checkpoints. by @pavanimajety in #6112
  • [model] Support for Llava-Next-Video model by @TKONIY in #7559
  • [Frontend] Create ErrorResponse instead of raising exceptions in run_batch by @pooyadavoodi in #8347
  • [Model][VLM] Add Qwen2-VL model support by @fyabc in #7905
  • [Hardware][Intel] Support compressed-tensor W8A8 for CPU backend by @bigPYJ1151 in #7257
  • [CI/Build] Excluding test_moe.py from AMD Kernels tests for investigation by @alexeykondrat in #8373
  • [Bugfix] Add missing attributes in mistral tokenizer by @DarkLight1337 in #8364
  • [Kernel][Misc] Add meta functions for ops to prevent graph breaks by @bnellnm in #6917
  • [Misc] Move device options to a single place by @akx in #8322
  • [Speculative Decoding] Test refactor by @LiuXiaoxuanPKU in #8317
  • Pixtral by @patrickvonplaten in #8377
  • Bump version to v0.6.1 by @simon-mo in #8379

New Contributors

Full Changelog: v0.6.0...v0.6.1

v0.6.0

04 Sep 23:35
32e7db2

Highlights

Performance Update

  • We are excited to announce a faster vLLM, delivering 2x more throughput compared to v0.5.3. The default parameters should already give a good speedup, but we also recommend trying multi-step scheduling by setting --num-scheduler-steps 8 in the engine arguments (a minimal sketch follows this list). Please note that it still has some limitations and is being actively hardened; see #7528 for known issues.
    • Multi-step scheduler now supports LLMEngine and log_probs (#7789, #7652)
    • Asynchronous output processor overlaps output data structure construction with GPU work, delivering a 12% throughput increase. (#7049, #7911, #7921, #8050)
    • FlashInfer backend for FP8 KV cache (#7798, #7985) and rejection sampling in speculative decoding (#7244)
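
For the offline path, a minimal sketch of turning on multi-step scheduling; the model name is illustrative, and --num-scheduler-steps 8 is the server-side equivalent:

```python
from vllm import LLM, SamplingParams

# Schedule multiple GPU steps per engine iteration to reduce CPU overhead.
llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct", num_scheduler_steps=8)
outputs = llm.generate("vLLM is", SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```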

Model Support

  • Support bitsandbytes 8-bit and FP4 quantized models (#7445)
  • New LLMs: Exaone (#7819), Granite (#7436), Phi-3.5-MoE (#7729)
  • A new tokenizer mode for Mistral models that uses the native mistral-common package (#7739); see the sketch after this list
  • Multi-modality:
    • multi-image input support for LLaVA-Next (#7230), Phi-3-vision models (#7783)
    • Ultravox support for multiple audio chunks (#7963)
    • TP support for ViTs (#7186)
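
A minimal sketch of the new Mistral tokenizer mode, assuming a Mistral-format checkpoint such as Mistral-Nemo (the model name is illustrative):

```python
from vllm import LLM

# tokenizer_mode="mistral" uses the native mistral-common tokenizer instead of
# the Hugging Face one.
llm = LLM(model="mistralai/Mistral-Nemo-Instruct-2407", tokenizer_mode="mistral")
```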

Hardware Support

  • NVIDIA GPU: extend cuda graph size for H200 (#7894)
  • AMD: Triton implementations awq_dequantize and awq_gemm to support AWQ (#7386)
  • Intel GPU: pipeline parallel support (#7810)
  • Neuron: context lengths and token generation buckets (#7885, #8062)
  • TPU: single and multi-host TPUs on GKE (#7613), Async output processing (#8011)

Production Features

  • OpenAI-Compatible Tools API + Streaming for Hermes & Mistral models! (#5649) A client-side sketch follows this list.
  • Add json_schema support from OpenAI protocol (#7654)
  • Enable chunked prefill and prefix caching together (#7753, #8120)
  • Multimodal support in offline chat (#8098), and multiple multi-modal items in the OpenAI frontend (#8049)
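
A hedged client-side sketch of the new tool calling, assuming a vLLM OpenAI-compatible server is already running with tool-call parsing enabled for a Hermes-style model; the model name and tool schema are illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",  # placeholder Hermes-style model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(resp.choices[0].message.tool_calls)
```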

Misc

  • Support benchmarking async engine in benchmark_throughput.py (#7964)
  • Progress in integration with torch.compile: avoid Dynamo guard evaluation overhead (#7898), skip compile for profiling (#7796)

What's Changed

  • [Core] Add multi-step support to LLMEngine by @alexm-neuralmagic in #7789
  • [Bugfix] Fix run_batch logger by @pooyadavoodi in #7640
  • [Frontend] Publish Prometheus metrics in run_batch API by @pooyadavoodi in #7641
  • [Frontend] add json_schema support from OpenAI protocol by @rockwotj in #7654
  • [misc][core] lazy import outlines by @youkaichao in #7831
  • [ci][test] exclude model download time in server start time by @youkaichao in #7834
  • [ci][test] fix RemoteOpenAIServer by @youkaichao in #7838
  • [Bugfix] Fix Phi-3v crash when input images are of certain sizes by @zifeitong in #7840
  • [Model][VLM] Support multi-images inputs for Phi-3-vision models by @Isotr0py in #7783
  • [Misc] Remove snapshot_download usage in InternVL2 test by @Isotr0py in #7835
  • [misc][cuda] improve pynvml warning by @youkaichao in #7852
  • [Spec Decoding] Streamline batch expansion tensor manipulation by @njhill in #7851
  • [Bugfix]: Use float32 for base64 embedding by @HollowMan6 in #7855
  • [CI/Build] Avoid downloading all HF files in RemoteOpenAIServer by @DarkLight1337 in #7836
  • [Performance][BlockManagerV2] Mark prefix cache block as computed after schedule by @comaniac in #7822
  • [Misc] Update qqq to use vLLMParameters by @dsikka in #7805
  • [Misc] Update gptq_marlin_24 to use vLLMParameters by @dsikka in #7762
  • [misc] fix custom allreduce p2p cache file generation by @youkaichao in #7853
  • [Bugfix] neuron: enable tensor parallelism by @omrishiv in #7562
  • [Misc] Update compressed tensors lifecycle to remove prefix from create_weights by @dsikka in #7825
  • [Core] Asynchronous Output Processor by @megha95 in #7049
  • [Tests] Disable retries and use context manager for openai client by @njhill in #7565
  • [core][torch.compile] not compile for profiling by @youkaichao in #7796
  • Revert #7509 by @comaniac in #7887
  • [Model] Add Mistral Tokenization to improve robustness and chat encoding by @patrickvonplaten in #7739
  • [CI/Build][VLM] Cleanup multiple images inputs model test by @Isotr0py in #7897
  • [Hardware][Intel GPU] Add intel GPU pipeline parallel support. by @jikunshang in #7810
  • [CI/Build][ROCm] Enabling tensorizer tests for ROCm by @alexeykondrat in #7237
  • [Bugfix] Fix phi3v incorrect image_idx when using async engine by @Isotr0py in #7916
  • [cuda][misc] error on empty CUDA_VISIBLE_DEVICES by @youkaichao in #7924
  • [Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel by @dsikka in #7766
  • [benchmark] Update TGI version by @philschmid in #7917
  • [Model] Add multi-image input support for LLaVA-Next offline inference by @zifeitong in #7230
  • [mypy] Enable mypy type checking for vllm/core by @jberkhahn in #7229
  • [Core][VLM] Stack multimodal tensors to represent multiple images within each prompt by @petersalas in #7902
  • [hardware][rocm] allow rocm to override default env var by @youkaichao in #7926
  • [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. by @bnellnm in #7886
  • [mypy][CI/Build] Fix mypy errors by @DarkLight1337 in #7929
  • [Core] Async_output_proc: Add virtual engine support (towards pipeline parallel) by @alexm-neuralmagic in #7911
  • [Performance] Enable chunked prefill and prefix caching together by @comaniac in #7753
  • [ci][test] fix pp test failure by @youkaichao in #7945
  • [Doc] fix the autoAWQ example by @stas00 in #7937
  • [Bugfix][VLM] Fix incompatibility between #7902 and #7230 by @DarkLight1337 in #7948
  • [Core][Kernels] Use FlashInfer backend for FP8 KV Cache when available. by @pavanimajety in #7798
  • [Kernel] [Triton] [AMD] Adding Triton implementations awq_dequantize and awq_gemm to support AWQ by @rasmith in #7386
  • [TPU] Upgrade PyTorch XLA nightly by @WoosukKwon in #7967
  • [Doc] fix 404 link by @stas00 in #7966
  • [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM by @mzusman in #7651
  • [Bugfix] Make torch registration of punica ops optional by @bnellnm in #7970
  • [torch.compile] avoid Dynamo guard evaluation overhead by @youkaichao in #7898
  • Remove faulty Meta-Llama-3-8B-Instruct-FP8.yaml lm-eval test by @mgoin in #7961
  • [Frontend] Minor optimizations to zmq decoupled front-end by @njhill in #7957
  • [torch.compile] remove reset by @youkaichao in #7975
  • [VLM][Core] Fix exceptions on ragged NestedTensors by @petersalas in #7974
  • Revert "[Core][Kernels] Use FlashInfer backend for FP8 KV Cache when available." by @youkaichao in #7982
  • [Bugfix] Unify rank computation across regular decoding and speculative decoding by @jmkuebler in #7899
  • [Core] Combine async postprocessor and multi-step by @alexm-neuralmagic in #7921
  • [Core][Kernels] Enable FP8 KV Cache with Flashinfer backend. + BugFix for kv_cache_dtype=auto by @pavanimajety in #7985
  • extend cuda graph size for H200 by @kushanam in #7894
  • [Bugfix] Fix incorrect vocal embedding shards for GGUF model in tensor parallelism by @Isotr0py in #7954
  • [misc] update tpu int8 to use new vLLM Parameters by @dsikka in #7973
  • [Neuron] Adding support for context-length, token-gen buckets. by @hbikki in #7885
  • support bitsandbytes 8-bit and FP4 quantized models by @chenqianfzh in #7445
  • Add more percentiles and latencies by @...

v0.5.5

23 Aug 18:37
09c7792

Highlights

Performance Update

  • We introduced a new mode that schedules multiple GPU steps in advance, reducing CPU overhead (#7000, #7387, #7452, #7703). Initial results show a 20% improvement in QPS for a single GPU running 8B and 30B models. You can set --num-scheduler-steps 8 as a parameter to the API server (via vllm serve) or AsyncLLMEngine (a minimal sketch follows this list). We are working on expanding coverage to the LLM class and aim to turn it on by default.
  • Various enhancements:
    • Use the FlashInfer sampling kernel when available, leading to a 7% decoding throughput speedup (#7137)
    • Reduce Python allocations, leading to a 24% throughput speedup (#7162, #7364)
    • Improvements to the ZeroMQ-based decoupled frontend (#7570, #7716, #7484)
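
A minimal sketch of the AsyncLLMEngine path mentioned above; it mirrors passing --num-scheduler-steps 8 to vllm serve, and the model name is illustrative:

```python
from vllm import AsyncEngineArgs, AsyncLLMEngine

engine_args = AsyncEngineArgs(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    num_scheduler_steps=8,  # schedule multiple GPU steps per engine iteration
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```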

Model Support

  • Support Jamba 1.5 (#7415, #7601, #6739)
  • Support for vLLM's first audio model, UltravoxModel (#7615, #7446)
  • Improvements to vision models:
    • Support image embeddings as input (#6613)
    • Support SigLIP encoder and alternative decoders for LLaVA models (#7153)
  • Support loading GGUF models (#5191) with tensor parallelism (#7520); see the sketch after this list
  • Progress in encoder decoder models: support for serving encoder/decoder models (#7258), and architecture for cross-attention (#4942)
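
A hedged sketch of loading a GGUF checkpoint with tensor parallelism; the local file path and base-model tokenizer are placeholders:

```python
from vllm import LLM

llm = LLM(
    model="/models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # local GGUF file (placeholder path)
    tokenizer="meta-llama/Meta-Llama-3-8B-Instruct",       # use the base model's HF tokenizer
    tensor_parallel_size=2,
)
```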

Hardware Support

  • AMD: Add fp8 Linear Layer for rocm (#7210)
  • Enhancements to TPU support: load-time W8A16 quantization (#7005), optimized RoPE (#7635), and multi-host inference support (#7457).
  • Intel: various refactoring for worker, executor, and model runner (#7686, #7712)

Others

  • Optimize prefix caching performance (#7193)
  • Speculative decoding
    • Use target model max length as default for draft model (#7706)
    • EAGLE Implementation with Top-1 proposer (#6830)
  • Entrypoints
    • A new chat method in the LLM class (#5049); see the sketch after this list
    • Support embeddings in the run_batch API (#7132)
    • Support prompt_logprobs in Chat Completion (#7453)
  • Quantizations
    • Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7527)
    • Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174)
  • torch.compile: register custom ops for kernels (#7591, #7594, #7536)
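
A minimal sketch of the new chat entrypoint on the LLM class (#5049); the model name is illustrative:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
outputs = llm.chat(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain prefix caching in one sentence."},
    ],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```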

What's Changed

v0.5.4

05 Aug 22:38
4db5176

Highlights

Model Support

  • Enhanced pipeline parallelism support for DeepSeek v2 (#6519), Qwen (#6974), Qwen2 (#6924), and Nemotron (#6863)
  • Enhanced vision language model support for InternVL2 (#6514, #7067), BLIP-2 (#5920), MiniCPM-V (#4087, #7122).
  • Added H2O Danube3-4b (#6451)
  • Added Nemotron models (Nemotron-3, Nemotron-4, Minitron) (#6611)

Hardware Support

  • TPU enhancements: collective communication, TP for async engine, faster compile time (#6891, #6933, #6856, #6813, #5871)
  • Intel CPU: enable multiprocessing and tensor parallelism (#6125)

Performance

We are progressing along our quest to quickly improve performance. Each of the following PRs contributed some improvements, and we anticipate more enhancements in the next release.

  • Separated the OpenAI server's HTTP request handling from the model inference loop with ZeroMQ. This brought a 20% speedup in time to first token and a 2x speedup in inter-token latency. (#6883)
  • Used Python's native array data structure to speed up padding. This brings a 15% throughput improvement in large-batch-size scenarios. (#6779)
  • Reduced unnecessary compute when logprobs=None. This cut the latency of getting log probs from ~30 ms to ~5 ms in large-batch-size scenarios. (#6532)
  • Optimized the get_seqs function, bringing a 2% throughput improvement. (#7051)

Production Features

  • Enhancements to speculative decoding: FlashInfer in DraftModelRunner (#6926), observability (#6963), and benchmarks (#6964)
  • Refactor the punica kernel based on Triton (#5036)
  • Support for guided decoding for offline LLM (#6878)

Quantization

  • Support W4A8 quantization for vllm (#5218)
  • Tuned FP8 and INT8 Kernels for Ada Lovelace and SM75 T4 (#6677, #6996, #6848)
  • Support reading bitsandbytes pre-quantized models (#5753); see the sketch below
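
A hedged sketch of loading a bitsandbytes pre-quantized checkpoint (#5753); the model name is a placeholder for any bnb-quantized checkpoint:

```python
from vllm import LLM

llm = LLM(
    model="unsloth/llama-3-8b-bnb-4bit",  # placeholder bnb-quantized checkpoint
    quantization="bitsandbytes",
    load_format="bitsandbytes",
)
```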

What's Changed

v0.5.3.post1

23 Jul 17:09
38c4b7e

Highlights

  • We fixed a configuration incompatibility between vLLM (which was tested against a pre-release version) and the published Meta Llama 3.1 weights (#6693)

What's Changed

Full Changelog: v0.5.3...v0.5.3.post1

v0.5.3

23 Jul 07:01
bb2fc08

Highlights

Model Support

  • vLLM now supports Meta Llama 3.1! Please check out our blog here for initial details on running the model.
    • Please check out this thread for any known issues related to the model.
    • The model runs on a single 8xH100 or 8xA100 node using FP8 quantization (#6606, #6547, #6487, #6593, #6511, #6515, #6552)
    • The BF16 version of the model should run on multiple nodes using pipeline parallelism (docs). If you have a fast network interconnect, you might want to consider full tensor parallelism as well. (#6599, #6598, #6529, #6569)
    • To support long context, a new RoPE extension method has been added, and chunked prefill has been turned on by default for the Meta Llama 3.1 series of models. (#6666, #6553, #6673)
  • Support Mistral-Nemo (#6548)
  • Support Chameleon (#6633, #5770)
  • Pipeline parallel support for Mixtral (#6516)

Hardware Support

Performance Enhancements

  • Add AWQ support to the Marlin kernel. This brings significant (1.5-2x) perf improvements to existing AWQ models! (#6612)
  • Progress towards refactoring for SPMD worker execution. (#6032)
  • Progress in improving prepare inputs procedure. (#6164, #6338, #6596)
  • Memory optimization for pipeline parallelism. (#6455)

Production Engine

  • Correctness testing for pipeline parallel and CPU offloading (#6410, #6549)
  • Support dynamically loading LoRA adapters from Hugging Face (#6234)
  • Pipeline Parallel using stdlib multiprocessing module (#6130)

Others

  • A CPU offloading implementation: you can now use --cpu-offload-gb to control how much CPU RAM to "extend" GPU memory with (see the sketch after this list). (#6496)
  • The new vllm CLI is now ready for testing. It comes with three commands: serve, complete, and chat. Feedback and improvements are greatly welcomed! (#6431)
  • The wheels now build on Ubuntu 20.04 instead of 22.04. (#6517)
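
A minimal sketch of CPU offloading; the cpu_offload_gb argument mirrors the --cpu-offload-gb flag, and the model name and offload size are illustrative:

```python
from vllm import LLM

# Borrow 10 GiB of CPU RAM to hold weights that do not fit in GPU memory.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct", cpu_offload_gb=10)
```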

What's Changed

v0.5.2

15 Jul 18:01
4cf256a

Major Changes

  • ❗ Planned breaking change ❗: we plan to remove beam search (see more in #6226) in the next few releases. This release comes with a warning when beam search is enabled for a request. Please voice your concerns in the RFC if you have a valid use case for beam search in vLLM.
  • The release has moved to a Python-version-agnostic wheel (#6394). A single wheel can be installed across all Python versions vLLM supports.

Highlights

Model Support

Hardware

  • AMD: unify CUDA_VISIBLE_DEVICES usage (#6352)

Performance

  • ZeroMQ fallback for broadcasting large objects (#6183)
  • Simplify code to support pipeline parallel (#6406)
  • Turn off CUTLASS scaled_mm for Ada Lovelace (#6384)
  • Use CUTLASS kernels for the FP8 layers with Bias (#6270)

Features

  • Enabling bonus tokens in speculative decoding for KV-cache-based models (#5765); a speculative-decoding sketch follows this list
  • Medusa Implementation with Top-1 proposer (#4978)
  • An experimental vLLM CLI for serving and querying OpenAI compatible server (#5090)
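
A hedged sketch of the draft-model speculative-decoding path that the bonus-token and Medusa work above builds on; the model pairing and the use_v2_block_manager flag follow the docs of this era and are illustrative:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-6.7b",
    speculative_model="facebook/opt-125m",  # small draft model
    num_speculative_tokens=5,
    use_v2_block_manager=True,              # required for speculative decoding at this point
)
print(llm.generate("The future of AI is", SamplingParams(max_tokens=32))[0].outputs[0].text)
```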

Others

  • Add support for multi-node on CI (#5955)
  • Benchmark: add H100 suite (#6047)
  • [CI/Build] Add nightly benchmarking for tgi, tensorrt-llm and lmdeploy (#5362)
  • Build some nightly wheels (#6380)

What's Changed

v0.5.1

05 Jul 19:47
79d406e

Highlights

  • vLLM now has pipeline parallelism! (#4412, #5408, #6115, #6120). You can now run the API server with --pipeline-parallel-size. This feature is at an early stage; please let us know your feedback. A minimal sketch follows.
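
A hedged sketch of the async-engine path behind --pipeline-parallel-size; the model name and parallel sizes are illustrative, a multi-GPU node is assumed, and at this release a Ray-backed executor may also be required:

```python
from vllm import AsyncEngineArgs, AsyncLLMEngine

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(
        model="meta-llama/Meta-Llama-3-70B-Instruct",
        pipeline_parallel_size=2,  # split layers across 2 pipeline stages
        tensor_parallel_size=4,    # shard each stage across 4 GPUs
    )
)
```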

Model Support

  • Support Gemma 2 (#5908, #6051). Please note that, for correctness, Gemma should run with the FlashInfer backend, which supports logits soft cap (see the sketch after this list). The wheels for FlashInfer can be downloaded here
  • Support Jamba (#4115). This is vLLM's first state space model!
  • Support Deepseek-V2 (#4650). Please note that MLA (Multi-head Latent Attention) is not implemented and we are looking for contributions!
  • Vision language models: added support for Phi3-Vision, dynamic image sizes, and a registry for processing model inputs (#4986, #5276, #5214)
    • Notably, this comes with a breaking change: all VLM-specific arguments are removed from the engine APIs, so you no longer need to set them globally via the CLI. You now only need to pass <image> into the prompt instead of using complicated prompt formatting. See more here
    • There is also a new guide on adding VLMs! We would love your contributions for new models!
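
A hedged sketch of selecting the FlashInfer backend for Gemma 2; VLLM_ATTENTION_BACKEND is the documented selector, the model name is illustrative, and the FlashInfer wheel is assumed to be installed separately:

```python
import os

from vllm import LLM

# Select FlashInfer so Gemma 2's logits soft cap is handled correctly.
os.environ["VLLM_ATTENTION_BACKEND"] = "FLASHINFER"

llm = LLM(model="google/gemma-2-9b-it")
```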

Hardware Support

Production Service

  • Support for sharded tensorized models (#4990)
  • Continuous streaming of OpenAI response token stats (#5742)

Performance

  • Enhancement in distributed communication via shared memory (#5399)
  • Latency enhancement in block manager (#5584)
  • Enhancements to compressed-tensors supporting Marlin, W4A16 (#5435, #5385)
  • Faster FP8 quantize kernel (#5396), FP8 on Ampere (#5975)
  • Option to use FlashInfer for prefill, decode, and CUDA Graph for decode (#4628)
  • Speculative Decoding
    • Draft Model Runner (#5799)

Development Productivity

  • Post merge benchmark is now available at perf.vllm.ai!
  • Addition of A100 in CI environment (#5658)
  • Step towards nightly wheel publication (#5610)

What's Changed

v0.5.0.post1

14 Jun 02:43
50eed24

Highlights

  • Add initial TPU integration (#5292)
  • Fix crashes when using FlashAttention backend (#5478)
  • Fix issues when using num_devices < num_available_devices (#5473)

What's Changed

New Contributors

Full Changelog: v0.5.0...v0.5.0.post1