
Conversation

@jackzhxng
Contributor

Summary

[PLEASE REMOVE] See CONTRIBUTING.md's Pull Requests for ExecuTorch PR guidelines.

[PLEASE REMOVE] If this PR closes an issue, please add a Fixes #<issue-id> line.

[PLEASE REMOVE] If this PR introduces a fix or feature that should be in the upcoming release notes, please add a "Release notes: " label. For a list of available release notes labels, check out CONTRIBUTING.md's Pull Requests.

Test plan

[PLEASE REMOVE] How did you test this PR? Please write down any manual commands you used and note down tests that you have written if applicable.

@pytorch-bot

pytorch-bot commented Nov 18, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6926

Note: Links to docs will display an error until the docs builds have been completed.

❗ 2 Active SEVs

There are 2 currently active SEVs. If your PR is affected, please view them below:

❌ 13 New Failures

As of commit a3daba3 with merge base b4ab76f:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Nov 18, 2024
Comment on lines +145 to +151
# if self.use_kv_cache:
# print("Setting up KV cache on the model...")
# self.model_.setup_caches(
# batch_size=1,
# dtype=self.dtype,
# decoder_max_seq_len=self.max_seq_len,
# )
Contributor Author

We need to do this because the source transform happens after the model is set up, and we need to call the swapped-in ET attention's setup_cache function. So we move the setup_caches call to after the source transform.
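For context, a minimal sketch of the ordering this change implies. Only setup_caches and its arguments come from this PR's diff; the loader and transform names are illustrative placeholders, not the PR's actual helpers.

# Illustrative ordering only; the helper names here are hypothetical.
model = load_torchtune_model(checkpoint)   # model set up, but no caches yet
model = apply_source_transforms(model)     # swaps in the ET attention
model.setup_caches(                        # dispatches to the swapped-in attention's setup_cache
    batch_size=1,
    dtype=dtype,
    decoder_max_seq_len=max_seq_len,
)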

Comment on lines +567 to +575
if args.model in TORCHTUNE_DEFINED_MODELS:
if args.use_kv_cache:
print("Setting up the KV cache...")
model_manager.model.setup_caches(
batch_size=1,
dtype=dtype_override.to_torch_dtype(),
decoder_max_seq_len=args.max_seq_length,
)
return model_manager
Contributor Author

setup_caches is moved here.
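That is, by the time setup_caches runs in the export entry point, the source transforms have already swapped in the ET attention, so the KV caches are allocated by the modules that will actually be exported.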

batch_size=batch_size,
max_seq_len=max_seq_len,
num_kv_heads=self.num_kv_heads,
# self.kv_cache = InferenceKVCache(
Contributor

can you try adding from executorch.extension.llm.custom_ops import * here and see if that works?
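For reference, a hedged sketch of that suggestion, assuming the module path resolves as written; the wildcard import is used only for its side effect of registering ExecuTorch's custom ops with PyTorch:

# Register ExecuTorch's custom ops as an import side effect (reviewer's suggestion),
# so the swapped-in attention can resolve them at export/load time.
from executorch.extension.llm.custom_ops import *  # noqa: F401,F403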

@jackzhxng closed this Feb 13, 2025