
Feature Request: Multi NUMA Tensor Parallel #663

@aikitoria

Description


Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

By splitting the weights across NUMA nodes and then running tensor-parallel inference between those nodes, bandwidth utilization can be significantly improved on multi-socket systems, circumventing the problem of overloading the inter-socket link when the weights are shared globally. This was also recently implemented by sglang: https://lmsys.org/blog/2025-07-14-intel-xeon-optimization/#multi-numa-parallelism
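To make the idea concrete, here is a minimal sketch (not the project's implementation) of a NUMA-aware, row-wise tensor-parallel matrix-vector product using libnuma. The 2-node layout, the `Shard` struct, and `shard_matvec` are illustrative assumptions only; the point is that each socket reads weights it allocated on its own node, so no weight traffic crosses the inter-socket link.

```cpp
// Sketch: row-wise weight sharding across NUMA nodes (assumes libnuma, -lnuma).
#include <numa.h>
#include <vector>
#include <cstdio>
#include <cstring>

struct Shard {
    int    node;   // NUMA node this shard lives on
    float *w;      // rows x cols slice of the weight matrix, resident on `node`
    int    rows, cols;
};

// Multiply only the locally resident rows; execution is pinned to the shard's
// node so weight reads stay in local memory.
static void shard_matvec(const Shard &s, const float *x, float *y_part) {
    numa_run_on_node(s.node);
    for (int r = 0; r < s.rows; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < s.cols; ++c) acc += s.w[r * s.cols + c] * x[c];
        y_part[r] = acc;
    }
}

int main() {
    if (numa_available() < 0) { std::fprintf(stderr, "no NUMA support\n"); return 1; }

    const int rows = 4096, cols = 4096;
    const int nodes = 2;                        // assume a 2-socket system
    const int rows_per_shard = rows / nodes;
    const size_t shard_bytes = (size_t) rows_per_shard * cols * sizeof(float);

    std::vector<Shard> shards(nodes);
    for (int n = 0; n < nodes; ++n) {
        shards[n] = { n, nullptr, rows_per_shard, cols };
        // allocate each weight slice on its own node (row-wise split of W)
        shards[n].w = (float *) numa_alloc_onnode(shard_bytes, n);
        std::memset(shards[n].w, 0, shard_bytes);
    }

    std::vector<float> x(cols, 1.0f), y(rows, 0.0f);
    // run shards sequentially here for clarity; a real version would use one
    // thread pool per node and concatenate/reduce the partial outputs
    for (int n = 0; n < nodes; ++n)
        shard_matvec(shards[n], x.data(), y.data() + n * rows_per_shard);

    for (int n = 0; n < nodes; ++n) numa_free(shards[n].w, shard_bytes);
    return 0;
}
```

In an actual integration, each NUMA node would also get its own pinned thread pool, and only the small partial activations (not the weights) would be exchanged between sockets, which is what keeps the inter-socket link lightly loaded.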

Motivation

Likely faster on multi-socket systems, since each socket would read only its local weight shard instead of pulling globally shared weights over the inter-socket link.

Possible Implementation

No response


Labels

enhancement (New feature or request)
