Add last_token_pos in llama_transformer (#11793) #12239
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12239
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 Unrelated Failures) As of commit ce3d29b with merge base 97047c0.
BROKEN TRUNK: the following jobs failed but were also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D76440105
Summary: Add last_token_pos to the forward options.
Purpose:
* The final norm and the lm-head output can be computed on only the last valid token at prefill.
* When the input sequence length is fixed because an accelerator does not support dynamic shapes, the last token of the (padded) input is not guaranteed to be valid.
* An additional pointer is therefore needed to select the last valid token before running the final norm and lm-head output.
Reviewed By: JacobSzwejbka
Differential Revision: D76440105
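As a rough illustration of the idea (a minimal sketch, not the actual executorch llama_transformer code), the snippet below shows how a `last_token_pos` entry in the forward options could be used to pick the last valid token before the final norm and lm-head when the prefill input is padded to a fixed length. The class and attribute names (`TransformerTail`, `final_norm`, `lm_head`, `attn_options`) are hypothetical.

```python
from typing import Optional

import torch
import torch.nn as nn


class TransformerTail(nn.Module):
    """Toy stand-in for the end of a llama-style model: final norm + lm-head."""

    def __init__(self, dim: int, vocab_size: int):
        super().__init__()
        self.final_norm = nn.LayerNorm(dim)  # stand-in for the model's last norm
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)

    def forward(self, hidden: torch.Tensor, attn_options: Optional[dict] = None) -> torch.Tensor:
        # hidden: [batch, seq_len, dim]; seq_len may be a fixed, padded prefill length.
        last_token_pos = None if attn_options is None else attn_options.get("last_token_pos")

        if last_token_pos is not None:
            # Keep only the last *valid* token per sequence; later positions are padding.
            batch_idx = torch.arange(hidden.size(0), device=hidden.device)
            hidden = hidden[batch_idx, last_token_pos].unsqueeze(1)  # [batch, 1, dim]
        # With no last_token_pos, fall back to computing logits for every position.

        return self.lm_head(self.final_norm(hidden))
```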
6273e89 to 13eb7d4 (Compare)
13eb7d4 to 9bb4371 (Compare)
9bb4371 to ce3d29b (Compare)
Differential Revision: D76440105
Pull Request resolved: pytorch#12239
Summary:
Add last_token_pos to the forward options.
Reviewed By: JacobSzwejbka
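For completeness, here is a hedged caller-side example (building on the hypothetical `TransformerTail` sketch above) of how `last_token_pos` could be supplied during prefill when the backend requires a fixed input length. The fixed length, padding scheme, and option key are illustrative assumptions, not the PR's exact interface.

```python
import torch

# Assume the accelerator only accepts a fixed prefill length (no dynamic shapes).
FIXED_SEQ_LEN = 128
DIM, VOCAB_SIZE = 64, 32000

prompt_tokens = torch.tensor([[101, 2009, 2003, 1037, 3231]])  # 5 valid tokens
num_valid = prompt_tokens.size(1)

# Right-pad the prompt to the fixed length the backend expects.
padded = torch.zeros(1, FIXED_SEQ_LEN, dtype=torch.long)
padded[:, :num_valid] = prompt_tokens

# Index of the last valid (non-pad) token for each sequence in the batch.
last_token_pos = torch.tensor([num_valid - 1])

# Stand-in for the transformer body's hidden states over the padded input.
hidden = torch.randn(1, FIXED_SEQ_LEN, DIM)

tail = TransformerTail(dim=DIM, vocab_size=VOCAB_SIZE)
logits = tail(hidden, attn_options={"last_token_pos": last_token_pos})
print(logits.shape)  # torch.Size([1, 1, 32000]): logits only for the last valid token
```

Without such a pointer, the model would take the hidden state at the final (padded) position, which is not guaranteed to correspond to a real token when shapes are fixed.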