Description
Prerequisites
- I am running the latest code. Mention the version if possible as well.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
Tensor Parallelism is a model-parallelism technique used in Large Language Model (LLM) inference to distribute the model's tensor computations (e.g., matrix multiplications) across multiple devices (like GPUs or TPUs). This allows different parts of the model's layers to be processed in parallel, improving inference speed and scalability.
Key Features:
- Model Splitting: Splits model layers (especially large weight matrices) across multiple devices.
- Distributed Computation: Performs tensor operations in parallel, reducing computation time.
- Communication Overhead: Requires inter-device communication (e.g., using AllReduce) to synchronize results.
- Efficient Scaling: Enables inference on larger models that don't fit on a single device.
Use Case: Ideal for large-scale LLM inference where model size exceeds a single GPU's memory capacity.
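To make the splitting and AllReduce steps above concrete, here is a minimal sketch of a tensor-parallel matrix multiplication, simulated with NumPy on a single host. The two "device" shards, the toy dimensions, and the column-/row-parallel layout are illustrative assumptions for this sketch, not a description of llama.cpp internals.

```python
# Minimal tensor-parallelism sketch: simulate 2 "devices" on one host with NumPy.
# Toy shapes and shard layout are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_devices = 2
d_model, d_ff = 8, 16                      # toy hidden and feed-forward sizes

x = rng.standard_normal((1, d_model))      # activations, replicated on every device
W1 = rng.standard_normal((d_model, d_ff))  # first projection (split column-wise)
W2 = rng.standard_normal((d_ff, d_model))  # second projection (split row-wise)

# Model splitting + distributed computation: each device holds a column slice
# of W1 and computes its part of the intermediate activation independently.
W1_shards = np.split(W1, n_devices, axis=1)
h_shards = [x @ w for w in W1_shards]

# Each device holds the matching row slice of W2 and produces a *partial*
# output of full width.
W2_shards = np.split(W2, n_devices, axis=0)
partial_outputs = [h @ w for h, w in zip(h_shards, W2_shards)]

# Communication overhead: summing the partial outputs is the AllReduce step.
# On real hardware this is an inter-device collective (e.g. NCCL all-reduce).
y_parallel = sum(partial_outputs)

# Reference: the same computation on a single "device".
y_single = (x @ W1) @ W2
assert np.allclose(y_parallel, y_single)
print("tensor-parallel output matches single-device output")
```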
Motivation
The performance of the current approach (--split-mode row) is much worse than vLLM or mlc-llm.
On a 4xP100 platform, running the Qwen2.5-72B-4bit model with vLLM or mlc-llm achieves a generation speed of approximately 20 tok/s. In contrast, llama.cpp with "--split-mode row" reaches only about 10 tok/s, roughly 50% of the former.
mlc-llm development is less active and supports fewer models.
In its upcoming 1.0 release, vLLM will drop support for a large amount of Turing-generation and older hardware.
Possible Implementation
No response