
Pinned

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a short usage sketch follows this list)

    Python · 69.1k stars · 13k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.7k stars · 380 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 358 stars · 133 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 213 stars · 33 forks

  5. semantic-router (Public)

    System-level intelligent router for Mixture-of-Models across cloud, data center, and edge

    Go · 3k stars · 514 forks
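
The vllm engine pinned above is typically driven through its offline Python API. Below is a minimal sketch, assuming vLLM's public LLM and SamplingParams entry points; the model name is only an illustrative placeholder, not something referenced on this page:

    from vllm import LLM, SamplingParams

    # Load a model into the engine (placeholder model name).
    llm = LLM(model="facebook/opt-125m")

    # Sampling settings for generation.
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # Generate completions for a batch of prompts and print the text.
    outputs = llm.generate(["Hello, my name is"], params)
    for output in outputs:
        print(output.outputs[0].text)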

Repositories

Showing 10 of 31 repositories
  • tpu-inference (Public)

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 223 stars · Apache-2.0 license · 91 forks · 29 open issues (1 needs help) · 122 open PRs · Updated Jan 30, 2026
  • ci-infra (Public)

    This repo hosts code for vLLM's CI and performance benchmark infrastructure.

    HCL · 29 stars · Apache-2.0 license · 56 forks · 0 open issues · 27 open PRs · Updated Jan 30, 2026
  • aibrix (Public)

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,598 stars · Apache-2.0 license · 524 forks · 282 open issues (21 need help) · 30 open PRs · Updated Jan 30, 2026
  • llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2,656 stars · Apache-2.0 license · 380 forks · 82 open issues (22 need help) · 39 open PRs · Updated Jan 30, 2026
  • speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 213 stars · Apache-2.0 license · 33 forks · 13 open issues (2 need help) · 2 open PRs · Updated Jan 30, 2026
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a brief serving sketch follows this list)

    Python · 69,073 stars · Apache-2.0 license · 13,040 forks · 1,697 open issues (46 need help) · 1,501 open PRs · Updated Jan 30, 2026
  • vllm-spyre (Public)

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 39 stars · Apache-2.0 license · 31 forks · 27 open issues · 11 open PRs · Updated Jan 30, 2026
  • compressed-tensors (Public)

    A safetensors extension to efficiently store sparse quantized tensors on disk

    Python · 238 stars · Apache-2.0 license · 52 forks · 3 open issues (1 needs help) · 13 open PRs · Updated Jan 31, 2026
  • guidellm (Public)

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 828 stars · Apache-2.0 license · 119 forks · 47 open issues (4 need help) · 16 open PRs · Updated Jan 30, 2026
  • vllm-omni (Public)

    A framework for efficient model inference with omni-modality models

    Python · 2,472 stars · Apache-2.0 license · 358 forks · 198 open issues (36 need help) · 123 open PRs · Updated Jan 30, 2026
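
The vllm repository listed above also ships an OpenAI-compatible HTTP server. Below is a minimal sketch of querying such a server from Python, assuming it was started separately (for example via the vllm serve command) and is listening on the default local port; the model name is a placeholder that must match whatever model the server was launched with:

    # Assumes a vLLM OpenAI-compatible server is already running locally,
    # e.g. started in another shell with: vllm serve <model-name>
    from openai import OpenAI

    # vLLM accepts any API key unless one was configured; "EMPTY" is a common placeholder.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    resp = client.completions.create(
        model="<model-name>",  # placeholder; must match the served model
        prompt="Hello, my name is",
        max_tokens=64,
    )
    print(resp.choices[0].text)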