Releases: MetaX-MACA/vLLM-metax

v0.15.0

24 Mar 07:37


What's Changed

  • Optimize fused MoE LoRA kernel performance by @cwazai in #203
  • Support vllm 0.15.0 by @ILikeIneine in #209

Full Changelog: v0.14.0...v0.15.0

v0.14.0

23 Mar 05:46


What's Changed

Full Changelog: v0.13.0...v0.14.0

v0.13.0

06 Feb 10:32


Release Note

This release is a routine update tracking vLLM upstream v0.13.0. It includes upstream synchronization, feature enhancements, bug fixes, and build-system improvements.

What's Changed

Full Changelog: v0.12.0...v0.13.0

v0.12.0

29 Jan 06:47


Release Note

This release is a routine update tracking vLLM upstream v0.12.0. It includes upstream synchronization, feature enhancements, bug fixes, and build-system improvements.

What's Changed

Full Changelog: v0.11.2...v0.12.0

v0.11.2

23 Jan 10:46


Release Note

This release is a routine update focused on catching up with the vLLM upstream v0.11.2 release. It includes upstream synchronization, feature enhancements, bug fixes, and build-system improvements.

What's Changed


Full Changelog: https://github.com/MetaX-MACA/vLLM-metax/commits/v0.11.2

v0.10.2

27 Nov 09:52


🚀 vllm-metax 0.10.2 Release Notes

This release is a regular iteration of vllm-metax, bringing improved compatibility, performance gains, and important bug fixes.

✨ What's New

  • 🔄 Upgraded compatibility with vLLM v0.10.2, ensuring smooth integration and feature alignment.
  • Performance optimizations for:
    • LoRA fine-tuning workflows
    • Qwen3-Next-80B
    • GLM4.5
  • 🐞 Bug fixes addressing issues in:
    • deepseek_mtp
    • eagle speculative decoding
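
Several of the items above (the fused MoE LoRA kernel work and the LoRA fine-tuning optimizations) touch vLLM's LoRA serving path. As a rough, non-authoritative sketch, LoRA support in upstream vLLM is enabled at serve time with the flags below; the model name and adapter path are placeholders, and it is assumed these upstream flags carry over unchanged to vllm-metax:

```shell
# Hedged example: serve a base model with LoRA adapters enabled,
# using upstream vLLM CLI flags (assumed unchanged in vllm-metax).
# "my-adapter" and the adapter path are hypothetical placeholders.
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora/adapter \
  --max-lora-rank 64
```

Requests can then target the adapter by passing `"model": "my-adapter"` in the OpenAI-compatible API, while the base model remains addressable under its own name.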