Conversation


@dczhu dczhu commented Jan 22, 2026

Pull Request Description

Save and reuse the KV cache of the unaligned tail part between prefill (P) and decode (D).

Related Issues

Resolves: #[Insert issue number(s)]

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@gemini-code-assist

Summary of Changes

Hello @dczhu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the aibrix_pd_reuse_connector to improve performance by enabling the reuse of KV cache for unaligned tail parts of prompts. This change refines how token lengths are calculated and introduces specific handling for these unaligned segments, ensuring more efficient KV cache management during prompt and decode operations.

Highlights

  • Unaligned KV Cache Reuse: The AIBrixPDReuseConnector now supports saving and reusing the unaligned tail part of the KV cache, which is crucial for optimizing performance in prompt and decode disaggregation scenarios.
  • Refined Token Length Calculation: The logic for determining the number of tokens to process or check for existence has been updated. It now uses prompt_len - aligned_context_len (referred to as expected_tokens_len), ensuring that the entire prompt, including previously unhandled unaligned sections, is properly considered for KV cache operations (see the arithmetic sketch after this list).
  • Explicit Unaligned Token Handling: New mechanisms have been introduced within the _put_kv_cache_to_shfs method to explicitly allocate, offload, and store KV cache for any remaining unaligned tokens, ensuring their proper management and reuse.
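
For concreteness, here is a minimal sketch of the alignment arithmetic above. The block size and prompt length are illustrative assumptions, not values from the PR; only the variable names follow the diff.

    # Tail-alignment arithmetic (illustrative values; cache_block_ntokens
    # and prompt_len are assumptions, not taken from the PR).
    cache_block_ntokens = 16   # tokens per cache block (assumed)
    prompt_len = 100           # example prompt length

    # Tokens covered by fully aligned cache blocks: 100 // 16 * 16 == 96.
    aligned_context_len = (prompt_len // cache_block_ntokens) * cache_block_ntokens

    # What the connector must now also handle: the unaligned remainder.
    expected_tokens_len = prompt_len - aligned_context_len   # 4-token tail

    def round_up(n: int, multiple: int) -> int:
        # Assumed to match the round_up() helper used in the diff below.
        return ((n + multiple - 1) // multiple) * multiple

    # The tail is zero-padded up to a full block before it is cached.
    padded_tail_len = round_up(expected_tokens_len, cache_block_ntokens)
    print(aligned_context_len, expected_tokens_len, padded_tail_len)  # 96 4 16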


@dczhu dczhu requested review from DwyaneShi and Jeffwan January 22, 2026 00:04

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a performance optimization by enabling the reuse of KV cache for unaligned tail parts between prefill and decode stages. The core change involves modifying the KV connector to process the entire prompt, including non-block-aligned tails, rather than just aligned chunks. This is implemented by adding logic to pad the unaligned tail to a full block on the sending side before caching. While the changes on the sending side seem correct, I've identified a critical issue where the receiving side does not appear to handle this new logic for unaligned tails, which would lead to incomplete cache loading. I've also included a couple of suggestions to improve code quality and maintainability.

Comment on lines 1108 to 1187
+        # Handle remaining unaligned tokens
+        remaining_tokens_start = aligned_context_len + total_sent
+        if remaining_tokens_start < prompt_len:
+            remaining_tokens = seq_all_tokens[remaining_tokens_start:prompt_len]
+            remaining_len = len(remaining_tokens)
+
+            # For alloc
+            remaining_tokens_rounded = round_up(
+                remaining_len,
+                self.cache_block_ntokens
+            )
+            # Extend tokens to full block
+            remaining_tokens_extended = remaining_tokens + [0] * (
+                remaining_tokens_rounded - remaining_len
+            )
+
+            remaining_prefix = seq_all_tokens[:remaining_tokens_start]
+
+            exists_status = self.cache.exists(
+                remaining_prefix, remaining_tokens_extended
+            )
+            if exists_status.is_ok():
+                num_existing_tokens = exists_status.value
+                if num_existing_tokens >= remaining_len:
+                    return
+
+            # For remaining unaligned KV cache (rounded up to block size)
+            status = self.cache.allocate_for(
+                remaining_prefix, remaining_tokens_extended
+            )
+            if not status.is_ok():
+                log_every_n_seconds(
+                    logger,
+                    logging.ERROR,
+                    f"Failed to allocate for remaining unaligned part: %s",
+                    3,
+                    str(status)
+                )
+                return
+
+            handle = status.value
+            tensors = handle.to_tensors()
+            allocated_length = len(tensors) * self.cache_block_ntokens
+
+            # Slot mapping for remaining unaligned part (no padding)
+            remaining_slot_mapping = seq_cached_meta.context_slot_mapping[
+                remaining_tokens_start:remaining_tokens_start + remaining_len
+            ]
+
+            with perf_timer() as get_kernel_offload_dur_ms:
+                reshape_and_offload_multi_layer(
+                    tensors,
+                    self.layers_kv_caches,
+                    remaining_slot_mapping,
+                    self.engine_block_ntokens,
+                    self.kv_cache_dtype,
+                    self.k_scales,
+                    self.v_scales,
+                    self.block_layout.name,
+                )
+
+            # Full block with padding
+            status = self.cache.put(
+                remaining_prefix,
+                remaining_tokens_extended[:allocated_length],
+                handle
+            )
+            if not status.is_ok():
+                log_every_n_seconds(
+                    logger,
+                    logging.ERROR,
+                    f"Failed to put remaining unaligned part to cache: %s",
+                    3,
+                    str(status)
+                )
+                return
+
+            put_ntokens = status.get()
+            total_sent += min(put_ntokens, remaining_len)
+
Contributor

medium

The new block of code to handle remaining unaligned tokens is quite large (around 80 lines) and is nested inside the _send_kv_to_cache_impl method. To improve readability and maintainability of this function, consider extracting this logic into a separate private helper method, for example, _send_unaligned_tail_to_cache.
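
A possible shape for that extraction (a sketch only; the helper name, parameters, and call site below are assumptions, not code from this PR):

    # Sketch of the suggested extraction; name and signature are assumed.
    def _send_unaligned_tail_to_cache(
        self,
        seq_all_tokens,            # full token sequence
        seq_cached_meta,           # carries context_slot_mapping
        remaining_tokens_start,    # aligned_context_len + total_sent
        prompt_len,
    ):
        # Pad the unaligned tail to a full cache block, offload its KV data,
        # and put it into the cache; return the number of tail tokens sent.
        # Body: the ~80-line block from the diff above, unchanged.
        ...

    # Hypothetical call site inside _send_kv_to_cache_impl:
    # total_sent += self._send_unaligned_tail_to_cache(
    #     seq_all_tokens,
    #     seq_cached_meta,
    #     aligned_context_len + total_sent,
    #     prompt_len,
    # )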

+                remaining_tokens_start:remaining_tokens_start + remaining_len
+            ]
+
+            with perf_timer() as get_kernel_offload_dur_ms:
Contributor

medium

The variable get_kernel_offload_dur_ms is assigned but never used. To avoid confusion and improve code clarity, you should either use it (e.g., for logging the duration) or remove the assignment by using an underscore _ to indicate it's intentionally ignored.

            with perf_timer() as _:

@dczhu dczhu force-pushed the dczhu/pd-reuse-tail branch from adff722 to 4048436 on January 29, 2026 01:38
+                    chunk_len - num_existing_tokens
+                    < self.cache_block_ntokens
+                ):
+                    continue
Collaborator

Which case will get into this if branch?

+                    : new_chunk_prefix_len + chunk_len
+                ]
+                # Re-calc tokens_to_alloc after adjusting chunk_tokens
+                if is_unaligned:
Collaborator

This branch seems unreachable, right? If is_unaligned is true, it will always hit L1061.

+                    continue
+                if is_unaligned:
+                    # For unaligned, check if we have enough existing tokens
+                    if num_existing_tokens >= chunk_len:
Collaborator

What if num_existing_tokens < chunk_len?

@dczhu dczhu force-pushed the dczhu/pd-reuse-tail branch 2 times, most recently from 1129d31 to b05037e on February 4, 2026 00:53
@dczhu dczhu force-pushed the dczhu/pd-reuse-tail branch from b05037e to fc17469 on February 4, 2026 01:08