1 parent 45d8e91 commit f1dcc35
tensorrt_llm/_torch/auto_deploy/transformations/library/kvcache.py
@@ -68,6 +68,7 @@ def insert_cached_attention(
     if not source_attn_nodes:
         # If there are no nodes for kv cache insertion found, return current graph
+        ad_logger.info("No source attention nodes found, skipping cache insertion.")
         return egm

     # Sanity check
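The guard this commit touches can be sketched in isolation: when no attention nodes qualify for KV-cache insertion, the transformation logs a message and returns the graph module unchanged instead of proceeding. The sketch below is a minimal, self-contained approximation, not the real transformation: `ad_logger` is stood in by a plain `logging` logger, and the graph argument `egm` is a placeholder object rather than an actual `torch.fx.GraphModule`.

```python
import logging

# Stand-in for TensorRT-LLM's ad_logger (assumption: a standard logger suffices
# to illustrate the behavior; the real module wires its own logging).
ad_logger = logging.getLogger("auto_deploy")
logging.basicConfig(level=logging.INFO)


def insert_cached_attention(egm, source_attn_nodes):
    """Sketch of the guard added by this commit: if no source attention
    nodes are found, log and return the current graph unchanged."""
    if not source_attn_nodes:
        # If there are no nodes for kv cache insertion found, return current graph
        ad_logger.info("No source attention nodes found, skipping cache insertion.")
        return egm
    # ... the actual cache-insertion logic would rewrite the graph here ...
    return egm


graph = object()  # placeholder for the exported GraphModule
result = insert_cached_attention(graph, [])
assert result is graph  # early return: the same graph object comes back untouched
```

The early return matters because later steps (the "Sanity check" in the diff and the cache insertion itself) assume at least one matching attention node; logging at INFO level makes the skip visible without treating it as an error.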