
Conversation

@zxpdemonio
Collaborator

Description

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

Checklist

  • I have performed a self-review of my own code.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zxpdemonio, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly optimizes the data retrieval process for tensors within the Mooncake store's Python integration. By introducing a new get_allocated_internal method, the Python bindings for get_tensor() and batch_get_tensor() can now directly access data buffers provided by the underlying C++ client. This refactoring removes a redundant data copy operation that previously occurred in the Python wrapper, leading to improved performance and reduced memory overhead when handling tensor data. While the underlying C++ client still performs an initial allocation and copy, this change ensures that no additional copies are made specifically within the Python integration layer.

Highlights

  • New Internal Data Retrieval Method: Introduced a new get_allocated_internal method across the PyClient interface and its concrete implementations (DummyClient, RealClient) to handle direct buffer allocation and data retrieval.
  • Zero-Copy Optimization in Python Bindings: Refactored the get_tensor() and batch_get_tensor() functions within the mooncake-integration/store/store_py.cpp Python wrapper to utilize the new get_allocated_internal method. This change eliminates an intermediate data copy operation that previously occurred when transferring data from the store's internal buffer to the Python-exposed buffer, thus providing a 'zero-copy' benefit at the Python binding layer.
  • Flexible Slice Allocation: Added an overloaded allocateSlices function in client_buffer.hpp and client_buffer.cpp that can now work directly with char* buffers, enhancing flexibility in memory management for data slices.
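
For readers unfamiliar with the slice layer, here is a minimal, self-contained sketch of what a char*-based allocateSlices overload can look like; the Slice layout and the chunk-size source are simplifying assumptions for illustration, not the actual mooncake-store definitions.

    #include <cstddef>
    #include <vector>

    // Assumed slice shape: a pointer into a registered buffer plus its length.
    // The real mooncake-store Slice may carry additional fields.
    struct Slice {
        void *ptr;
        size_t size;
    };

    // Carve non-owning slices out of one contiguous, caller-owned buffer.
    // `chunk_sizes` stands in for the per-chunk lengths that the real
    // allocateSlices(slices, replica, data_ptr) overload reads from the
    // replica descriptor.
    void allocateSlices(std::vector<Slice> &slices,
                        const std::vector<size_t> &chunk_sizes, char *buffer) {
        size_t offset = 0;
        slices.clear();
        slices.reserve(chunk_sizes.size());
        for (size_t len : chunk_sizes) {
            slices.push_back(Slice{buffer + offset, len});
            offset += len;
        }
    }
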
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a 'zero-copy' optimization for get_tensor() and batch_get_tensor() by adding a new get_allocated_internal method. This avoids an extra memory copy when retrieving tensors. The changes to get_tensor() and the new C++ backend methods look good, but the implementation for batch_get_tensor() has some critical issues, including double data fetching, a performance regression due to loss of batching, and incorrect GIL handling. I've also suggested an improvement in the new get_allocated_internal function to use modern C++ for safer memory management.
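
On the GIL point, the usual pybind11 pattern is to drop the GIL only around the blocking C++ transfer and to hold it again before touching any Python objects. The snippet below is a generic sketch of that pattern; blocking_batch_get and the wrapper signature are placeholders, not the actual store_py.cpp code.

    #include <pybind11/pybind11.h>
    #include <string>
    #include <vector>

    namespace py = pybind11;

    // Placeholder for the blocking C++ client call behind batch_get_tensor().
    static int blocking_batch_get(const std::vector<std::string> &keys, char *dst) {
        (void)keys;
        (void)dst;
        return 0;
    }

    static py::object batch_get_wrapper(const std::vector<std::string> &keys,
                                        char *dst) {
        int rc;
        {
            // Release the GIL while the long-running transfer executes so other
            // Python threads can run; no Python objects may be touched here.
            py::gil_scoped_release release;
            rc = blocking_batch_get(keys, dst);
        }
        // The GIL is held again at this point, so building Python objects is safe.
        return rc == 0 ? py::object(py::int_(rc)) : py::object(py::none());
    }
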

Comment on lines 2018 to 2055
    char *data_ptr = new char[total_length];
    if (!data_ptr) {
        LOG(ERROR) << "Failed to allocate memory for length: " << total_length;
        return tl::unexpected(ErrorCode::INTERNAL_ERROR);
    }

    // register the buffer
    auto register_result = register_buffer_internal(
        reinterpret_cast<void *>(data_ptr), total_length);
    if (!register_result) {
        LOG(ERROR) << "Failed to register buffer";
        delete[] data_ptr;
        return tl::unexpected(register_result.error());
    }

    // Create slices for the allocated buffer
    std::vector<Slice> slices;
    allocateSlices(slices, replica, data_ptr);

    // Get the object data
    auto get_result = client_->Get(key, query_result.value(), slices);

    // unregister the buffer for whatever cases
    auto unregister_result =
        unregister_buffer_internal(reinterpret_cast<void *>(data_ptr));
    if (!unregister_result) {
        LOG(WARNING) << "Failed to unregister buffer";
    }

    if (!get_result) {
        delete[] data_ptr;
        LOG(ERROR) << "Get failed for key: " << key;
        return tl::unexpected(get_result.error());
    }

    // return the data ptr transferring the ownership to the caller
    data_length = total_length;
    return data_ptr;

medium

This function manually manages the memory for data_ptr using new[] and delete[]. This can be error-prone, especially with multiple return paths. Using std::unique_ptr would leverage RAII for automatic memory management, making the code safer and cleaner by removing the need for manual delete[] calls in error paths.

    auto data_ptr = std::make_unique<char[]>(total_length);

    // register the buffer
    auto register_result = register_buffer_internal(
        reinterpret_cast<void *>(data_ptr.get()), total_length);
    if (!register_result) {
        LOG(ERROR) << "Failed to register buffer";
        return tl::unexpected(register_result.error());
    }

    // Create slices for the allocated buffer
    std::vector<Slice> slices;
    allocateSlices(slices, replica, data_ptr.get());

    // Get the object data
    auto get_result = client_->Get(key, query_result.value(), slices);

    // unregister the buffer for whatever cases
    auto unregister_result =
        unregister_buffer_internal(reinterpret_cast<void *>(data_ptr.get()));
    if (!unregister_result) {
        LOG(WARNING) << "Failed to unregister buffer";
    }

    if (!get_result) {
        LOG(ERROR) << "Get failed for key: " << key;
        return tl::unexpected(get_result.error());
    }

    // return the data ptr transferring the ownership to the caller
    data_length = total_length;
    return data_ptr.release();

@stmatengss
Collaborator

I will fix the code format.

@github-actions github-actions bot added the Store label Dec 11, 2025
@zxpdemonio zxpdemonio force-pushed the cruz/zero_copy branch 4 times, most recently from 9c82bb6 to 4fa75d6 on December 13, 2025 at 07:34
@stmatengss
Collaborator

@zxpdemonio Don't forget to use clang-format to pass the code checks.

@zxpdemonio
Collaborator Author

Test results for batch_get_tensor_into() (test case 3), compared with standard batch get (test case 1) and TP batch get (test case 2):

test_benchmark_01_batch_put_get (main.TestMooncakeBenchmark.test_benchmark_01_batch_put_get)
Benchmark: Standard Batch Put/Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running Standard Batch Benchmark (5 iters) ---
👉 [Result] Standard Batch Put | Avg Time: 0.3623s | Throughput: 23.71 Gbps
👉 [Result] Standard Batch Get | Avg Time: 1.2176s | Throughput: 7.06 Gbps
ok
test_benchmark_02_tp_batch (main.TestMooncakeBenchmark.test_benchmark_02_tp_batch)
Benchmark: TP Batch Put/Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running TP Batch Benchmark (TP=4) ---
👉 [Result] TP Batch Put (TP=4) | Avg Time: 0.4113s | Throughput: 20.88 Gbps
👉 [Result] TP Batch Get (TP=4) | Avg Time: 0.7431s | Throughput: 11.56 Gbps
ok
test_benchmark_03_batch_put_get_into (main.TestMooncakeBenchmark.test_benchmark_03_batch_put_get_into)
Benchmark: Zero copy Batch Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running zero copy Batch Benchmark (5 iters) ---
👉 [Result] Standard Batch Put | Avg Time: 0.3047s | Throughput: 28.19 Gbps
👉 [Result] Zero copy Batch Get | Avg Time: 0.3683s | Throughput: 23.33 Gbps
ok
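
For reference (assuming each tensor is exactly 64 MiB), one iteration moves 16 × 64 MiB = 1 GiB ≈ 8.59 Gb, so the 0.3683 s zero-copy batch get works out to about 8.59 / 0.3683 ≈ 23.3 Gbps and the 1.2176 s standard batch get to about 7.1 Gbps, consistent with the reported throughput figures.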

@XucSh XucSh self-assigned this Dec 15, 2025
@zxpdemonio
Collaborator Author

Added zero copy for get tensor with TP. Test results:

Benchmark: Standard Batch Put/Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running Standard Batch Benchmark (5 iters) ---
👉 [Result] Standard Batch Put | Avg Time: 0.3640s | Throughput: 23.60 Gbps
👉 [Result] Standard Batch Get | Avg Time: 0.9804s | Throughput: 8.76 Gbps
ok
test_benchmark_02_tp_batch (main.TestMooncakeBenchmark.test_benchmark_02_tp_batch)
Benchmark: TP Batch Put/Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running TP Batch Benchmark (TP=4) ---
👉 [Result] TP Batch Put (TP=4) | Avg Time: 0.3409s | Throughput: 25.19 Gbps
👉 [Result] TP Batch Get (TP=4) | Avg Time: 0.7106s | Throughput: 12.09 Gbps
ok
test_benchmark_03_batch_put_get_into (main.TestMooncakeBenchmark.test_benchmark_03_batch_put_get_into)
Benchmark: Zero copy Batch Get. ...
[Gen] Generating 16 tensors (~64MB each)...
--- Running zero copy Batch Benchmark (5 iters) ---
👉 [Result] Standard Batch Put | Avg Time: 0.2715s | Throughput: 31.64 Gbps
👉 [Result] Zero copy Batch Get | Avg Time: 0.3664s | Throughput: 23.44 Gbps
ok
test_benchmark_04_batch_put_get_into_with_tp (main.TestMooncakeBenchmark.test_benchmark_04_batch_put_get_into_with_tp)
Benchmark: Zero copy Batch Get with tp. ...
[Gen] Generating 16 tensors (~64MB each)...

--- Running zero copy Batch Benchmark (TP=4, 5 iters) ---
👉 [Result] Standard Batch Put with tp (TP=4) | Avg Time: 0.3373s | Throughput: 25.47 Gbps
👉 [Result] Zero copy Batch Get with tp (TP=4) | Avg Time: 0.3833s | Throughput: 22.41 Gbps

Copilot AI left a comment

Pull request overview

This PR implements zero-copy tensor retrieval operations for the Mooncake distributed store. The main enhancement is adding new *_into API methods that allow tensors to be retrieved directly into pre-allocated, user-managed buffers, avoiding intermediate memory allocations and copies. This is particularly beneficial for high-performance inference scenarios where memory bandwidth and latency are critical.
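
To make the "pre-allocated, user-managed buffers" idea concrete, here is a minimal pybind11 sketch of how an into-style method can accept a raw buffer address from Python; ToyClient, the module name, and the argument names are illustrative only and do not reflect the actual store_py.cpp signatures.

    #include <pybind11/pybind11.h>
    #include <cstdint>
    #include <cstring>
    #include <string>

    namespace py = pybind11;

    // Toy stand-in for the store client; the real PyClient lives in store_py.cpp.
    struct ToyClient {
        // Write the value for `key` into the caller-provided buffer. A real client
        // would issue the transfer here; this sketch just zero-fills the region.
        int64_t get_into(const std::string &key, uintptr_t dst_ptr, size_t dst_size) {
            (void)key;
            std::memset(reinterpret_cast<void *>(dst_ptr), 0, dst_size);
            return static_cast<int64_t>(dst_size);  // bytes written
        }
    };

    PYBIND11_MODULE(toy_store, m) {
        py::class_<ToyClient>(m, "ToyClient")
            .def(py::init<>())
            // The destination address arrives from Python as a plain integer
            // (e.g. a tensor's data_ptr()), so no extra copy happens at the
            // binding layer.
            .def("get_tensor_into", &ToyClient::get_into, py::arg("key"),
                 py::arg("dst_ptr"), py::arg("dst_size"));
    }

From Python, such a call would then look roughly like client.get_tensor_into(key, tensor.data_ptr(), tensor.numel() * tensor.element_size()), with the caller keeping the destination tensor alive for the duration of the call.
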

Key Changes:

  • Added four new zero-copy API methods: get_tensor_into, batch_get_tensor_into, get_tensor_into_with_tp, and batch_get_tensor_into_with_tp
  • Implemented array_creators_without_free infrastructure to create numpy array views without memory ownership transfer
  • Added comprehensive functional and performance tests for all new zero-copy operations
  • Increased default buffer sizes for testing (4 GiB → 16 GiB for the global segment, 2 GiB → 8 GiB for the local buffer)

Reviewed changes

Copilot reviewed 4 out of 6 changed files in this pull request and generated 18 comments.

Show a summary per file
File | Description
scripts/test_tensor_api.py | Added test utilities (DTYPE_MAP, verify_tensor_equality) and comprehensive test cases for zero-copy operations with single/batch tensors and tensor parallelism support. Updated benchmark tests and increased default buffer sizes.
mooncake-integration/store/store_py.cpp | Implemented four new zero-copy methods that write directly to user-provided buffers, including tensor parallel variants. Added Python bindings for the new API methods.
mooncake-integration/integration_utils.h | Added create_typed_array_without_free and array_creators_without_free to create numpy arrays without memory ownership transfer for zero-copy operations (a sketch of this pattern follows the table).
mooncake-store/src/dummy_client.cpp | Whitespace-only change (added newline at end of file).
mooncake-store/include/dummy_client.h | Whitespace-only change (added newline at end of file).
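
As referenced in the integration_utils.h row above, the usual pybind11 way to hand a C++-owned buffer to numpy without transferring ownership is to attach a base capsule with a no-op destructor. This is a generic sketch of that idiom, not the actual create_typed_array_without_free implementation.

    #include <pybind11/numpy.h>
    #include <pybind11/pybind11.h>
    #include <cstddef>

    namespace py = pybind11;

    // Wrap `nbytes` of float32 data starting at buffer + offset as a numpy array.
    // The capsule's empty destructor means numpy never frees the memory, so the
    // buffer's real owner (the store client) stays responsible for its lifetime.
    static py::array make_float32_view(char *buffer, size_t offset, size_t nbytes) {
        py::capsule base(buffer, [](void * /*unused*/) { /* no-op: not owned */ });
        auto *data = reinterpret_cast<float *>(buffer + offset);
        const auto n = static_cast<py::ssize_t>(nbytes / sizeof(float));
        return py::array_t<float>({n}, {static_cast<py::ssize_t>(sizeof(float))},
                                  data, base);
    }
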


Comment on lines 23 to 24
DEFAULT_GLOBAL_SEGMENT_SIZE = 16 * 1024 * 1024 * 1024 # 4 GiB
DEFAULT_LOCAL_BUFFER_SIZE = 8 * 1024 * 1024 * 1024 # 2 GB

Copilot AI Dec 17, 2025

The comment indicates the old values (4 GiB, 2 GB) but the constants have been changed to 16 GiB and 8 GB. The comments should be updated to reflect the new values to avoid confusion.

Suggested change
DEFAULT_GLOBAL_SEGMENT_SIZE = 16 * 1024 * 1024 * 1024 # 4 GiB
DEFAULT_LOCAL_BUFFER_SIZE = 8 * 1024 * 1024 * 1024 # 2 GB
DEFAULT_GLOBAL_SEGMENT_SIZE = 16 * 1024 * 1024 * 1024 # 16 GiB
DEFAULT_LOCAL_BUFFER_SIZE = 8 * 1024 * 1024 * 1024 # 8 GiB


for rank in range(tp_size):
batch_size = len(keys)
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot

Copilot AI Dec 17, 2025

The comment says "1GB per tensor slot" but the buffer_spacing is set to 64 MB. The comment should be updated to match the actual buffer size.

Suggested change
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot
buffer_spacing = 64 * 1024 * 1024 # 64MB per tensor slot

split_dim = 0
batch_size = len(self.keys)
self.store.remove_all()
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot

Copilot AI Dec 17, 2025

The comment says "1GB per tensor slot" but the buffer_spacing is set to 64 MB. The comment should be updated to match the actual buffer size.

Suggested change
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot
buffer_spacing = 64 * 1024 * 1024 # 64MB per tensor slot

Comment on lines 559 to 565
np_array = array_creators_without_free[dtype_index](
static_cast<char *>(buffer), sizeof(TensorMetadata),
tensor_size);
} else {
LOG(ERROR) << "Unsupported dtype enum: " << dtype_index;
return pybind11::none();
}

Copilot AI Dec 17, 2025

The code uses array_creators_without_free, but the bounds check is inconsistent: it should check against the size of the array it actually indexes. Line 677 has the same pattern. Both locations check array_creators.size() in the condition but use array_creators_without_free in the actual call, which could be confusing. Consider checking array_creators_without_free.size() for consistency.

Comment on lines 676 to 509
if (dtype_index >= 0 &&
dtype_index < static_cast<int>(array_creators.size())) {
// This call MUST take ownership of exported_data
np_array = array_creators_without_free[dtype_index](
static_cast<char *>(buffer), sizeof(TensorMetadata),
tensor_size);
} else {

Copilot AI Dec 17, 2025

The code uses array_creators_without_free but checks against array_creators.size() in the condition. For consistency and clarity, consider checking array_creators_without_free.size() instead, matching the array being used.


def verify_tensor_equality(original, received, rtol=0, atol=0, verbose=True):
"""
compare two tensors。

Copilot AI Dec 17, 2025

Chinese punctuation used in the comment. The period (。) should be replaced with an English period (.).

Suggested change
compare two tensors。
compare two tensors.

def test_benchmark_03_batch_put_get_into(self):
"""Benchmark: Zero copy Batch Get."""
self.store.remove_all()
buffer_spacing = 300 * 1024 * 1024 # 1GB per tensor slot

Copilot AI Dec 17, 2025

The comment says "1GB per tensor slot" but the buffer_spacing is set to 300 MB. The comment should be updated to match the actual buffer size.

Suggested change
buffer_spacing = 300 * 1024 * 1024 # 1GB per tensor slot
buffer_spacing = 300 * 1024 * 1024 # 300MB per tensor slot

registered_buffers = [] # Keep track of (ptr, size) for cleanup

for rank in range(tp_size):
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot

Copilot AI Dec 17, 2025

The comment says "1GB per tensor slot" but the buffer_spacing is set to 64 MB. The comment should be updated to match the actual buffer size.

Suggested change
buffer_spacing = 64 * 1024 * 1024 # 1GB per tensor slot
buffer_spacing = 64 * 1024 * 1024 # 64MB per tensor slot

import argparse
import unittest
import torch
import struct

Copilot AI Dec 17, 2025

Import of 'struct' is not used.

Suggested change
import struct

Signed-off-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
@XucSh XucSh merged commit 1a5983e into kvcache-ai:main Dec 19, 2025
14 checks passed
nickyc975 pushed a commit to nickyc975/Mooncake that referenced this pull request Jan 11, 2026