
Commit 80dfd67 (parent: 8fa324d)

Remove unnecessary code

Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>

File tree

1 file changed: +0 additions, −7 deletions
  • tensorrt_llm/_torch/attention_backend/trtllm.py

tensorrt_llm/_torch/attention_backend/trtllm.py

@@ -904,13 +904,6 @@ def prepare_flash_mla(self) -> None:
         self.block_ids_per_seq[:self.num_generations, :num_blocks].copy_(
             block_ids_per_seq[self.num_contexts:], non_blocking=True)
 
-        self.kv_lens_cuda_runtime = self.kv_lens_cuda[:self.num_seqs]
-        self.kv_lens_runtime = self.kv_lens[:self.num_seqs]
-        self.prompt_lens_cuda_runtime = self.prompt_lens_cuda[:self.num_seqs]
-        self.prompt_lens_cpu_runtime = self.prompt_lens_cpu[:self.num_seqs]
-        self.host_request_types_runtime = self.host_request_types[:self.
-                                                                  num_seqs]
-
     def pre_process_for_chunked_prefill(
         self,
         chunked_seq_len: torch.Tensor,
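The deleted lines all followed the same pattern: slicing a preallocated buffer down to the number of active sequences, which in PyTorch produces a zero-copy view rather than a new tensor. A minimal sketch of that pattern, with illustrative names and sizes (max_num_seqs and the 1-D layout are assumptions, not taken from the source file):

```python
import torch

# Preallocated buffer sized for the maximum batch; the name mirrors the
# diff, the size and dtype here are illustrative assumptions.
max_num_seqs = 8
kv_lens = torch.zeros(max_num_seqs, dtype=torch.int32)

# The removed lines built per-step "runtime" views limited to the active
# sequences; basic slicing returns a view, so no data is copied.
num_seqs = 3
kv_lens_runtime = kv_lens[:num_seqs]

# Writes through the view are visible in the underlying buffer, and the
# view shares storage with it.
kv_lens_runtime.fill_(5)
assert kv_lens[:num_seqs].tolist() == [5, 5, 5]
assert kv_lens[num_seqs:].sum().item() == 0
assert kv_lens_runtime.data_ptr() == kv_lens.data_ptr()
```

Because such views are cheap to recreate on demand, caching them as extra attributes adds state without saving meaningful work, which is consistent with the commit message "Remove unnecessary code".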
