Add last_token_pos in llama_transformer #11793
Conversation
Summary: Add last_token_pos to the forward options.

Purpose:
* The final norm and the lm-head output can be computed from the last valid token at prefill.
* If the input sequence length is fixed because an accelerator does not support dynamic shapes, the last token of the input is not guaranteed to be valid.
* An additional index is therefore needed to select the last valid token when computing the final norm and output.

Differential Revision: D76440105
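For context, here is a minimal sketch of the idea described above. It is not the ExecuTorch implementation; the module, argument names, and shapes are illustrative assumptions. It shows how a `last_token_pos` argument can select the last valid token of a fixed-length (padded) prefill input before the final norm and lm-head.

```python
# Illustrative sketch only: names and shapes are assumptions, not the
# actual llama_transformer API.
from typing import Optional

import torch
import torch.nn as nn


class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size: int = 128, dim: int = 32) -> None:
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(
        self,
        tokens: torch.Tensor,  # [batch, seq_len], padded to a fixed length
        last_token_pos: Optional[torch.Tensor] = None,  # [batch], index of last valid token
    ) -> torch.Tensor:
        h = self.layer(self.embed(tokens))  # [batch, seq_len, dim]
        if last_token_pos is not None:
            # With a fixed (padded) sequence length, position seq_len - 1 may be
            # padding, so gather the hidden state at the last *valid* position.
            batch_idx = torch.arange(h.size(0), device=h.device)
            h = h[batch_idx, last_token_pos].unsqueeze(1)  # [batch, 1, dim]
        # The final norm and lm-head run only on the selected token(s).
        return self.lm_head(self.norm(h))


if __name__ == "__main__":
    model = TinyTransformerLM()
    tokens = torch.randint(0, 128, (2, 16))  # fixed seq_len = 16
    last_pos = torch.tensor([9, 13])         # last valid token per sequence
    logits = model(tokens, last_token_pos=last_pos)
    print(logits.shape)                      # torch.Size([2, 1, 128])
```

Without such an index, a fixed-shape prefill would compute the output logits from a padding position, which is exactly the situation the PR summary describes.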
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11793
Note: Links to docs will display an error until the docs builds have been completed.
❌ 4 New Failures, 1 Unrelated Failure as of commit b526c43 with merge base 3c05b6c.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following job failed but was present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D76440105
This PR needs a
JacobSzwejbka left a comment
@larryliu0820 any thoughts?
I'm not the author of this code! Closing it...
Summary: Add last_token_pos to the forward options.

Purpose:
* The final norm and the lm-head output can be computed from the last valid token at prefill.
* If the input sequence length is fixed because an accelerator does not support dynamic shapes, the last token of the input is not guaranteed to be valid.
* An additional index is therefore needed to select the last valid token when computing the final norm and output.

Reviewed By: JacobSzwejbka

Differential Revision: D76440105

Pull Request resolved: #12239