model : support LiquidAI LFM2 hybrid family #13805
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13805
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 3 Pending as of commit 78798c8 with merge base 497f59c. The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @tdakhran! Thank you for your pull request and welcome to our community.
Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.
Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA Signed. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
jackzhxng left a comment:
Looks good to me, can you try running with the C++ runner? https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-3-run-on-your-computer-to-validate
Referenced diff line:
    def lfm_2_tune_to_meta(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
I would just call this lfm_2_to_meta; I think some of the others are only named that way because we used to use TorchTune to load the checkpoints.
Referenced diff lines:
    from torchtune.models.convert_weights import get_mapped_key
    ...
    _LFM_2_FROM_META = {
Suggested change:
    - _LFM_2_FROM_META = {
    + _LFM_2_TO_META = {
And reverse keys and values.
Referenced diff lines:
        Dict[str, torch.Tensor]: State dict in Meta's format.
        """
        converted_state_dict = {}
        inverted_mapping_dict = {v: k for k, v in _LFM_2_FROM_META.items()}
Suggested change:
    - inverted_mapping_dict = {v: k for k, v in _LFM_2_FROM_META.items()}
Remove; use _LFM_2_TO_META.
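For illustration, a minimal sketch of the direction this suggestion points in: key the map as checkpoint-name -> Meta-name so no inversion is needed at conversion time. The dict entries, key patterns, and regex handling below are hypothetical placeholders that only mirror what torchtune's get_mapped_key does; they are not the actual LFM2 mapping in this PR.

```python
import re
from typing import Dict

import torch

# Hypothetical entries; "{}" stands for the layer index, torchtune-style.
_LFM_2_TO_META = {
    "model.embed_tokens.weight": "tok_embeddings.weight",
    "model.layers.{}.operator_norm.weight": "layers.{}.attention_norm.weight",
    # ... remaining entries elided ...
}


def lfm_2_to_meta(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Convert an LFM2 checkpoint state dict into Meta's naming convention."""
    converted_state_dict = {}
    for key, value in state_dict.items():
        # Abstract away the numeric layer index so the key can be looked up in the map.
        abstract_key = re.sub(r"\.(\d+)\.", ".{}.", key)
        match = re.search(r"\.(\d+)\.", key)
        new_key = _LFM_2_TO_META[abstract_key]
        if match is not None:
            new_key = new_key.format(match.group(1))
        converted_state_dict[new_key] = value
    return converted_state_dict
```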
Please rebase to main and resolve the conflicts.
Don't merge yet, I will have to import internally and run a few internal tests.
Force-pushed from 73aa3b8 to 32441b0 (compare).
@jackzhxng @mergennachin thanks for the review, I've rebased and addressed the feedback. All commands in README.md work, including the C++ runner. However, the C++ runner output differs from the Python runner.
Force-pushed from 32441b0 to d6f6dee (compare).
jackzhxng left a comment:
Thanks for the work @tdakhran!
@mergennachin has imported this pull request. If you are a Meta employee, you can view this in D81491136.
Ready to merge. One last thing @tdakhran, can you run the linter?
examples/models/lfm2/short_conv.py (outdated)
Referenced diff lines:
    Bx = torch.cat([self.conv_state, Bx], dim=-1)  # (batch_size, dim, seq_len + L_cache - 1)
    ...
    ## Update the conv_state
    new_conv_state = Bx[..., -(self.conv.weight.size(-1) - 1) :]  # (batch_size, dim, L_cache - 1)
Why not directly use L_cache - 1 instead of querying it from conv.weight?
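For reference, a minimal sketch of the suggested simplification, assuming the module already receives L_cache in its constructor; names other than L_cache, conv, and conv_state are placeholders, not the PR's actual code.

```python
import torch


class ShortConv(torch.nn.Module):
    def __init__(self, dim: int, L_cache: int = 3):
        super().__init__()
        self.L_cache = L_cache  # stored once, so no need to query conv.weight later
        # Depthwise causal conv with kernel size L_cache (placeholder configuration).
        self.conv = torch.nn.Conv1d(dim, dim, kernel_size=L_cache, groups=dim, bias=False)
        # Cached suffix of previous inputs, length L_cache - 1.
        self.register_buffer("conv_state", torch.zeros(1, dim, L_cache - 1))

    def _update_conv_state(self, Bx: torch.Tensor) -> None:
        # Slice with the stored length instead of self.conv.weight.size(-1) - 1.
        with torch.no_grad():
            self.conv_state.copy_(Bx[..., -(self.L_cache - 1) :])
```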
Referenced diff lines:
    if model_args.layer_types and model_args.layer_types[layer_id] == "conv":
        layers.append(
            ShortConvBlock(dim=model_args.dim, hidden_dim=model_args.hidden_dim, norm_eps=model_args.norm_eps)
        )
        continue
nit: could you just use if... else? Why the continue pattern?
With a continue pattern, the existing code doesn't need to be modified, resulting in a smaller diff; will change to if/else.
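As a rough sketch of the if/else shape being asked for; ShortConvBlock and TransformerBlock are assumed to be in scope from the PR's modules, and the TransformerBlock constructor arguments are an assumption rather than the exact signature in the repo.

```python
import torch


def build_layers(model_args, rope) -> torch.nn.ModuleList:
    # Hybrid architectures interleave short-conv blocks with attention blocks,
    # selected per layer via model_args.layer_types.
    layers = torch.nn.ModuleList()
    for layer_id in range(model_args.n_layers):
        if model_args.layer_types and model_args.layer_types[layer_id] == "conv":
            layers.append(
                ShortConvBlock(
                    dim=model_args.dim,
                    hidden_dim=model_args.hidden_dim,
                    norm_eps=model_args.norm_eps,
                )
            )
        else:
            layers.append(TransformerBlock(model_args, rope))  # assumed signature
    return layers
```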
Left some small comments. Are there any perf numbers on iOS or Android?
Add support for the [LiquidAI LFM2](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38) model family. For more information about the models, please read [the blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
- Support hybrid LFM2-350M, LFM2-700M, and LFM2-1.2B models.
- Add `ShortConvBlock`.
- Modify `construct_transformer` to construct hybrid architectures.
- Move `FeedForward` to avoid a cyclic dependency.
Force-pushed from 83622a7 to e008d6c (compare).
@jackzhxng I ran the linter and fixed the PR-related files.
Perf numbers for iOS and Android for each model are available at https://leap.liquid.ai/models. Here are the numbers for LFM2-1.2B:
@tdakhran - still a few minor lints: https://github.com/pytorch/executorch/actions/runs/17426424686/job/49500257182?pr=13805
Had an impression that ...
Thanks a lot @tdakhran. We'll take it from here on merging. There are a few internal tests that are failing, but we'll do a forward fix and patch on top of this.
@jackzhxng has imported this pull request. If you are a Meta employee, you can view this in D81593776.
Summary:
Add support for the [LiquidAI LFM2](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38) model family. For more information about the models, please read [the blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
- Support hybrid LFM2-350M, LFM2-700M, and LFM2-1.2B models.
- Add `ShortConvBlock`.
- Modify `construct_transformer` to construct hybrid architectures.
- Move `FeedForward` to avoid a cyclic dependency.

Instructions are in `examples/models/lfm2/README.md`.

Pull Request resolved: pytorch#13805

Test Plan:
All commands in `README.md` are tested.

```
❯ python -m examples.models.llama.runner.native \
    --model lfm2_700m \
    --pte lfm2_700m_8da4w.pte \
    --tokenizer ~/.cache/huggingface/hub/models--LiquidAI--LFM2-700M/snapshots/ab260293733f05dd4ce22399bea1cae2cf9b272d/tokenizer.json \
    --tokenizer_config ~/.cache/huggingface/hub/models--LiquidAI--LFM2-700M/snapshots/ab260293733f05dd4ce22399bea1cae2cf9b272d/tokenizer_config.json \
    --prompt "<|startoftext|><|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n" \
    --params examples/models/lfm2/config/lfm2_700m_config.json \
    --max_len 128 \
    -kv \
    --temperature 0.3
...
I'm an AI designed to assist with generating text based on the prompts you provide. I'm a type of language model, but I don't have a physical form or consciousness. I operate based on complex algorithms and vast amounts of training data. How can I help you today? If you have a specific question or need assistance with something, feel free to ask!
...
```

Differential Revision: D81593776

Pulled By: jackzhxng
Summary:
Pull Request resolved: pytorch#13916

Fix internal tests for the external diff generated by pytorch#13805, which adds LiquidAI's LFM2 model to ExecuTorch.

Reviewed By: mergennachin

Differential Revision: D81491136
Referenced diff lines:
    Bx = torch.cat(
        [self.conv_state, Bx], dim=-1
    )  # (batch_size, dim, seq_len + L_cache - 1)
    ...
    ## Update the conv_state
    new_conv_state = Bx[
        ..., -(self.L_cache - 1) :
    ]  # (batch_size, dim, L_cache - 1)
    with torch.no_grad():
        self.conv_state.copy_(new_conv_state)
BTW, it looks like you want a ring-buffer kind of update here, which could maybe be done a bit more efficiently. However, it complicates the conv computation.
A ring buffer of size N stores N entries. Here we store a single conv state, effectively a ring buffer of size 1, which is semantically equivalent to a registered buffer.
Isn't this
    new_conv_state = Bx[
        ..., -(self.L_cache - 1) :
    ]  # (batch_size, dim, L_cache - 1)
effectively slicing Bx for the last L_cache - 1 entries after the concat, which sounds like a sliding window of size L_cache?
Oh, I see, you mean Bx as a ring buffer, that's right. I had self.conv_state in mind as a ring buffer.
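To make the sliding-window reading concrete, here is a minimal decode-step sketch using the same names and shapes as the diff (Bx, conv_state, L_cache); everything else is a simplified assumption, not the actual ExecuTorch implementation.

```python
import torch


def conv_step(
    conv: torch.nn.Conv1d,
    conv_state: torch.Tensor,  # (batch, dim, L_cache - 1): last inputs from the previous call
    Bx: torch.Tensor,          # (batch, dim, seq_len): new gated inputs
    L_cache: int,
) -> torch.Tensor:
    # Prepend the cached suffix so the causal conv sees full context for the new tokens.
    Bx = torch.cat([conv_state, Bx], dim=-1)  # (batch, dim, seq_len + L_cache - 1)
    # Keep only the most recent L_cache - 1 inputs for the next call: a sliding window
    # over Bx rather than a ring buffer with an explicit write pointer.
    with torch.no_grad():
        conv_state.copy_(Bx[..., -(L_cache - 1) :])
    # With kernel_size == L_cache and no padding, the output length equals seq_len.
    return conv(Bx)
```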
Summary
Add support for the LiquidAI LFM2 model family. For more information about the models, please read the blog post.
- Add `ShortConvBlock`.
- Modify `construct_transformer` to construct hybrid architectures.
Instructions are in `examples/models/lfm2/README.md`.
Test plan
All commands in `README.md` are tested.