Conversation

@larryliu0820 (Contributor) commented Jun 24, 2025

Stack from ghstack (oldest at bottom):

For Hugging Face models, `forward()` takes `tokens` as well as `cache_positions`, which is a list of cache indices. This differs from the `.pte` files that `export_llama` produces, which take `tokens` and `input_pos`, where `input_pos` is a scalar tensor.

This PR adds support inside `text_decoder_runner.cpp` for handling both shapes of `input_pos`/`cache_positions`.

To keep the logic generic without relying on extra metadata, the runner now inspects the method meta and the input tensor info to decide whether to feed `input_pos` or `cache_positions`.

Differential Revision: [D77203700](https://our.internmc.facebook.com/intern/diff/D77203700/)
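The dispatch can be illustrated with a small, self-contained sketch; this is not the actual `text_decoder_runner.cpp` code, and the helper name `make_position_input` and its parameters (`start_pos`, `seq_len`, `expected_numel`) are hypothetical. It assumes the expected element count of the positions input has already been read from the exported method's tensor metadata: a single element means the model expects a scalar `input_pos`, more than one means it expects a `cache_positions` vector covering the current chunk.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only, not the runner's real API.
// `expected_numel` is assumed to come from the exported method's input tensor
// metadata for the positions argument (e.g. its reported size/numel).
std::vector<int64_t> make_position_input(
    int64_t start_pos,      // position of the first token in this chunk
    int64_t seq_len,        // number of tokens fed in this step
    size_t expected_numel) {
  if (expected_numel == 1) {
    // export_llama-style model: a single scalar start position (input_pos).
    return {start_pos};
  }
  // Hugging Face-style model: one cache index per token (cache_positions).
  std::vector<int64_t> cache_positions(static_cast<size_t>(seq_len));
  for (int64_t i = 0; i < seq_len; ++i) {
    cache_positions[static_cast<size_t>(i)] = start_pos + i;
  }
  return cache_positions;
}
```

The actual runner would presumably also match the dtype and rank reported by the method meta when wrapping this data in a tensor before execution.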

@pytorch-bot commented Jun 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11869

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 9481d79 with merge base 222d9e3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

larryliu0820 added a commit that referenced this pull request Jun 24, 2025
ghstack-source-id: 292265667
Pull Request resolved: #11869
@facebook-github-bot added the CLA Signed label Jun 24, 2025
@facebook-github-bot commented

This pull request was exported from Phabricator. Differential Revision: D77203700

@larryliu0820 added the release notes: llm label Jun 24, 2025
larryliu0820 added a commit that referenced this pull request Jun 24, 2025
Pull Request resolved: #11869

ghstack-source-id: 292466463
larryliu0820 added a commit that referenced this pull request Jun 24, 2025
Pull Request resolved: #11869

ghstack-source-id: 292469061
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292517675
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292523573
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292529628
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292529864
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292546578
larryliu0820 added a commit that referenced this pull request Jun 25, 2025
Pull Request resolved: #11869

ghstack-source-id: 292560636

@facebook-github-bot merged commit 84c41a8 into gh/larryliu0820/67/base Jun 25, 2025
102 of 104 checks passed
@facebook-github-bot deleted the gh/larryliu0820/67/head branch June 25, 2025 12:11
larryliu0820 added a commit that referenced this pull request Jun 27, 2025
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #11869 by @larryliu0820
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/larryliu0820/67/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/67/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/larryliu0820/66/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/67/orig
@diff-train-skip-merge

---------

Co-authored-by: Mengwei Liu <[email protected]>
Co-authored-by: Mengwei Liu <[email protected]>

Labels

CLA Signed · fb-exported · release notes: llm
