
[feat]: add --kt-numa-nodes for explicit NUMA node mapping#28

Open
ErvinXie wants to merge 1 commit into main from feat/kt-numa-nodes

Conversation


ErvinXie commented Mar 18, 2026

Summary

  • Add --kt-numa-nodes CLI parameter to ServerArgs for specifying NUMA node IDs
  • Thread numa_nodes through KTConfig dataclass to KTMoEWrapper instantiation
  • Parse comma-separated string (e.g., "1" or "0,1") into List[int]
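The comma-separated parsing described above could look roughly like this (a sketch; `parse_kt_numa_nodes` is a hypothetical helper name, not necessarily the function used in ServerArgs):

```python
from typing import List, Optional


def parse_kt_numa_nodes(value: Optional[str]) -> Optional[List[int]]:
    """Parse a comma-separated NUMA node string such as "1" or "0,1".

    Returns None when the flag is omitted, which preserves the
    existing default behavior downstream.
    """
    if value is None:
        return None
    try:
        nodes = [int(part) for part in value.split(",")]
    except ValueError:
        raise ValueError(
            f"--kt-numa-nodes expects comma-separated integers, got {value!r}"
        )
    if any(n < 0 for n in nodes):
        raise ValueError("NUMA node IDs must be non-negative")
    return nodes
```

With argparse, such a helper can be passed as `type=` on the argument so invalid input fails at parse time rather than at wrapper construction.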

Motivation

Companion to kvcache-ai/ktransformers#1891.

Currently subpool_numa_map in kt-kernel defaults to [0, 1, ..., threadpool_count-1]. This makes it impossible to bind a KTransformers instance to a specific NUMA node (e.g., node 1) without external numactl.
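Concretely, the default described above amounts to the following (an illustration of the behavior, not kt-kernel's actual implementation):

```python
from typing import List


def default_subpool_numa_map(threadpool_count: int) -> List[int]:
    # Default behavior: thread pool i is mapped to NUMA node i.
    return list(range(threadpool_count))


# With a single thread pool, the map is always [0], so the instance
# cannot be bound to NUMA node 1 without external numactl.
```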

Usage

Deploy two independent instances on a dual-NUMA machine, each bound to a different NUMA node:

# Instance 1: bind to NUMA node 0
python -m sglang.launch_server \
  --model /path/to/model \
  --kt-threadpool-count 1 --kt-numa-nodes 0 \
  --kt-cpuinfer 48 \
  --port 30000 \
  ...

# Instance 2: bind to NUMA node 1
python -m sglang.launch_server \
  --model /path/to/model \
  --kt-threadpool-count 1 --kt-numa-nodes 1 \
  --kt-cpuinfer 48 \
  --port 30001 \
  ...

You can also specify multiple NUMA nodes in a custom order:

# Reverse order
python -m sglang.launch_server \
  --kt-threadpool-count 2 --kt-numa-nodes 1,0 \
  ...

If --kt-numa-nodes is not specified, the behavior is unchanged.

Tested on

  • AMD EPYC 9355 dual-socket (2 NUMA nodes, 128 threads)
  • Verified CPUInfer creates worker pool on correct NUMA node
  • Verified backward compatibility

Test plan

  • from sglang.srt.server_args import ServerArgs imports successfully
  • --kt-numa-nodes is accepted by argparse and parsed correctly
  • Backward compatible: omitting --kt-numa-nodes behaves as before
  • End-to-end with companion kt-kernel PR on a multi-NUMA machine

🤖 Generated with Claude Code

Add --kt-numa-nodes parameter to ServerArgs and thread it through
KTConfig to KTMoEWrapper. This allows users to specify which NUMA
node IDs to bind to, enabling multi-instance deployment on different
NUMA nodes without external numactl workarounds.

Usage: --kt-threadpool-count 1 --kt-numa-nodes 1
(binds to NUMA node 1 instead of defaulting to node 0)

Companion to kvcache-ai/ktransformers#1891
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the KTransformers integration by providing explicit control over NUMA node mapping. It introduces a new command-line argument that allows users to specify which NUMA nodes KTransformers thread pools should utilize. This change is crucial for optimizing performance and resource allocation on systems with multiple NUMA nodes, enabling more granular control over how KTransformers instances are distributed across hardware resources.

Highlights

  • New CLI Parameter: Introduced --kt-numa-nodes CLI parameter to ServerArgs for explicit NUMA node mapping.
  • NUMA Node Configuration: Threaded the numa_nodes configuration through the KTConfig dataclass to the KTMoEWrapper instantiation.
  • Input Parsing: Implemented parsing of a comma-separated string (e.g., "1" or "0,1") into a list of integers for NUMA node IDs.
  • Enhanced Multi-NUMA Support: Enabled users to bind KTransformers instances to specific NUMA nodes, facilitating optimized resource utilization on multi-NUMA machines.
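As a sketch of how the option might thread through from the config to the wrapper (the field and class shapes below are assumptions based on the names in this PR, not the actual sglang code):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class KTConfig:
    threadpool_count: int = 1
    # None keeps kt-kernel's default map [0, 1, ..., threadpool_count - 1].
    numa_nodes: Optional[List[int]] = None


class KTMoEWrapper:
    def __init__(self, config: KTConfig) -> None:
        # Explicit mapping wins; otherwise fall back to the identity default.
        self.subpool_numa_map = (
            config.numa_nodes
            if config.numa_nodes is not None
            else list(range(config.threadpool_count))
        )
```

Keeping the field `Optional` with a `None` default is what makes the change backward compatible: omitting `--kt-numa-nodes` reproduces the old mapping exactly.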



@gemini-code-assist

Warning: Gemini encountered an error creating the review. You can try again by commenting /gemini review.
