Fix cache_positions tensor size in TextLLMRunner #12476
Conversation
In the Hugging Face causal LM forward convention, `cache_position` should have the same length as `input_ids`. The previous logic allocated `cache_position` based on method metadata, which by default equals the maximum length of this tensor (normally the max context length). This change aligns the size of `cache_position` with that of `input_ids`. [ghstack-poisoned]
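For context, a minimal sketch of the Hugging Face convention being matched here (the name `past_seen_tokens` and the shapes are illustrative assumptions, not the runner's actual code):

```python
import torch

past_seen_tokens = 8                     # tokens already in the KV cache (hypothetical)
input_ids = torch.tensor([[42, 7, 13]])  # current input chunk, shape [1, 3]

# One position per token in the current chunk -- length 3, not max context length.
cache_position = torch.arange(past_seen_tokens, past_seen_tokens + input_ids.shape[1])
assert cache_position.numel() == input_ids.shape[1]
```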
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12476
Note: Links to docs will display an error until the docs builds have been completed. ⏳ No Failures, 6 Pending as of commit efd7096 with merge base 042eb1a. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #12476 by @larryliu0820
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/larryliu0820/68/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/68/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/68/orig
@diff-train-skip-merge
Co-authored-by: Mengwei Liu <[email protected]>
Co-authored-by: Siddartha Pothapragada <[email protected]>
(cherry picked from commit aa44c06)
Stack from ghstack (oldest at bottom):
In the Hugging Face causal LM forward convention, `cache_position` should have the same length as `input_ids`. The previous logic allocated `cache_position` based on method metadata, which by default equals the maximum length of this tensor (normally the max context length). This change aligns the size of `cache_position` with that of `input_ids`.
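To make the sizing change concrete, here is a hedged before/after sketch in Python (the actual TextLLMRunner is C++; `max_context_len`, `start_pos`, and `num_new_tokens` are hypothetical stand-ins for the method metadata and runner state):

```python
import torch

max_context_len = 2048  # maximum tensor length reported by method metadata (hypothetical)
start_pos = 8           # tokens already in the KV cache (hypothetical)
num_new_tokens = 3      # tokens in the current input_ids chunk

# Before: cache_position buffer sized from method metadata (the tensor's maximum length).
cache_position_before = torch.zeros(max_context_len, dtype=torch.long)

# After: cache_position sized to match the current input_ids chunk.
cache_position_after = torch.arange(start_pos, start_pos + num_new_tokens, dtype=torch.long)

assert cache_position_before.numel() == max_context_len
assert cache_position_after.numel() == num_new_tokens
```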