
Conversation

@tdakhran
Contributor

Summary

Add support for the [LiquidAI LFM2](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38) model family. For more information about the models, please read [the blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).

  • Support hybrid LFM2-350M, LFM2-700M, and LFM2-1.2B models.
  • Add `ShortConvBlock` (sketched below).
  • Modify `construct_transformer` to construct hybrid architectures.
  • Move `FeedForward` to avoid a cyclic dependency.
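
For intuition, here is a minimal, hedged sketch of what a gated short-convolution layer with an incremental-decoding cache can look like. It mirrors the `conv_state` update pattern discussed in the review comments below; the class name, the upstream gating, and the batch-size-1 cache are assumptions for illustration, not the merged implementation.

```
import torch
import torch.nn as nn


class ShortConvSketch(nn.Module):
    """Sketch of a short-conv layer: a depthwise causal conv over the last
    few tokens, standing in for attention in the hybrid layers."""

    def __init__(self, dim: int, L_cache: int = 3):
        super().__init__()
        self.L_cache = L_cache
        # Depthwise conv: each channel sees only its own last L_cache inputs.
        self.conv = nn.Conv1d(dim, dim, kernel_size=L_cache, groups=dim, bias=False)
        # Cache of the previous L_cache - 1 inputs (assumes batch_size == 1).
        self.register_buffer("conv_state", torch.zeros(1, dim, L_cache - 1))

    def forward(self, Bx: torch.Tensor) -> torch.Tensor:
        # Bx: (1, dim, seq_len), already gated upstream in the real block.
        Bx = torch.cat([self.conv_state, Bx], dim=-1)
        with torch.no_grad():
            # Sliding window: keep the trailing L_cache - 1 columns as state.
            self.conv_state.copy_(Bx[..., -(self.L_cache - 1) :])
        return self.conv(Bx)  # (1, dim, seq_len)
```

Because the depthwise conv only ever needs the last `L_cache - 1` inputs, the cache stays tiny compared to a KV cache, which is the appeal of the hybrid architecture.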

Instructions are in examples/models/lfm2/README.md.

Test plan

Every command in `README.md` doubles as a test.

❯ python -m examples.models.llama.runner.native \
  --model lfm2_700m \
  --pte lfm2_700m_8da4w.pte \
  --tokenizer ~/.cache/huggingface/hub/models--LiquidAI--LFM2-700M/snapshots/ab260293733f05dd4ce22399bea1cae2cf9b272d/tokenizer.json \
  --tokenizer_config ~/.cache/huggingface/hub/models--LiquidAI--LFM2-700M/snapshots/ab260293733f05dd4ce22399bea1cae2cf9b272d/tokenizer_config.json \
  --prompt "<|startoftext|><|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n" \
  --params examples/models/lfm2/config/lfm2_700m_config.json \
  --max_len 128 \
  -kv \
  --temperature 0.3

...
I'm an AI designed to assist with generating text based on the prompts you provide. I'm a type of language model, but I don't have a physical form or consciousness. I operate based on complex algorithms and vast amounts of training data. How can I help you today? If you have a specific question or need assistance with something, feel free to ask!
...

@pytorch-bot

pytorch-bot bot commented Aug 29, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13805

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 3 Pending

As of commit 78798c8 with merge base 497f59c:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla bot commented Aug 29, 2025

Hi @tdakhran!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

Contributor

@jackzhxng jackzhxng left a comment


}


def lfm_2_tune_to_meta(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
Contributor


I would just call this `lfm_2_to_meta`; I think some of the others are only named that way because we used to use TorchTune to load the checkpoints.


from torchtune.models.convert_weights import get_mapped_key

_LFM_2_FROM_META = {
Contributor


Suggested change:

- _LFM_2_FROM_META = {
+ _LFM_2_TO_META = {

And reverse the keys and values.

Dict[str, torch.Tensor]: State dict in Meta's format.
"""
converted_state_dict = {}
inverted_mapping_dict = {v: k for k, v in _LFM_2_FROM_META.items()}
Contributor


Suggested change:

- inverted_mapping_dict = {v: k for k, v in _LFM_2_FROM_META.items()}

Remove; use _LFM_2_TO_META instead.
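
Putting both suggestions together, a minimal sketch of the renamed mapping and converter might look like this. The two mapping entries are illustrative placeholders, not the full LFM2 key set; `get_mapped_key` is the torchtune helper imported in the reviewed file above.

```
from typing import Dict

import torch
from torchtune.models.convert_weights import get_mapped_key

# Illustrative source-key -> Meta-key mapping; "{}" abstracts the layer index.
_LFM_2_TO_META = {
    "model.embed_tokens.weight": "tok_embeddings.weight",
    "model.layers.{}.self_attn.q_proj.weight": "layers.{}.attention.wq.weight",
}


def lfm_2_to_meta(state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Convert an LFM2 state dict to Meta's format. With the mapping stored
    in the TO_META direction, no inverted dict is needed."""
    converted_state_dict = {}
    for key, value in state_dict.items():
        converted_state_dict[get_mapped_key(key, _LFM_2_TO_META)] = value
    return converted_state_dict
```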

@mergennachin
Contributor

Please rebase to main and resolve the conflicts

Don't merge yet, I will have to import internally and run a few internal tests.

@mergennachin mergennachin added the ciflow/trunk and release notes: llm labels Aug 29, 2025
@tdakhran tdakhran force-pushed the tarek/feat/lfm2_upstream branch from 73aa3b8 to 32441b0 on August 30, 2025 18:53
@tdakhran
Contributor Author

@jackzhxng @mergennachin thanks for the review, I've rebased and addressed the feedback.

All commands in README.md work, including the C++ runner. However, the C++ runner output differs from the Python runner.

@tdakhran tdakhran force-pushed the tarek/feat/lfm2_upstream branch from 32441b0 to d6f6dee on September 1, 2025 17:59
@meta-cla meta-cla bot added the CLA Signed label Sep 1, 2025
Contributor

@jackzhxng jackzhxng left a comment


Thanks for the work @tdakhran!

@facebook-github-bot
Contributor

@mergennachin has imported this pull request. If you are a Meta employee, you can view this in D81491136.

@jackzhxng
Contributor

Ready to merge; one last thing @tdakhran, can you lint? `lintrunner -a`

Bx = torch.cat([self.conv_state, Bx], dim=-1) # (batch_size, dim, seq_len + L_cache - 1)

## Update the conv_state
new_conv_state = Bx[..., -(self.conv.weight.size(-1) - 1) :] # (batch_size, dim, L_cache - 1)
Contributor


why not directly use L_cache - 1 instead of querying from conv.weight?

Comment on lines 248 to 252
if model_args.layer_types and model_args.layer_types[layer_id] == "conv":
layers.append(
ShortConvBlock(dim=model_args.dim, hidden_dim=model_args.hidden_dim, norm_eps=model_args.norm_eps)
)
continue
Contributor


nit: can you not just do if ... else? Why the continue pattern?

Contributor Author


With a continue pattern, the existing code doesn't need to be modified, resulting in a smaller diff. I'll change it to if/else.
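
For reference, the if/else shape would look roughly like this; it is a sketch, and `TransformerBlock` stands in for whatever the existing fall-through path appends, not necessarily the merged diff:

```
for layer_id in range(model_args.n_layers):
    if model_args.layer_types and model_args.layer_types[layer_id] == "conv":
        layers.append(
            ShortConvBlock(
                dim=model_args.dim,
                hidden_dim=model_args.hidden_dim,
                norm_eps=model_args.norm_eps,
            )
        )
    else:
        # Assumed existing attention path.
        layers.append(TransformerBlock(layer_id, model_args))
```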

@kimishpatel
Contributor

Left some small comments. Are there any perf numbers on iOS or Android?

@tdakhran tdakhran force-pushed the tarek/feat/lfm2_upstream branch from 83622a7 to e008d6c on September 3, 2025 07:34
@tdakhran
Contributor Author

tdakhran commented Sep 3, 2025

@jackzhxng ran the linter and fixed PR-related files.
@kimishpatel, addressed the comments.

Left some small comments. Are there any perf numbers on iOS or Android?

Perf numbers for iOS and Android for each model are available at https://leap.liquid.ai/models.

Here are the numbers for LFM2-1.2B

| Device           | Context | TTFT    | Decode   |
|------------------|---------|---------|----------|
| Galaxy S24 Ultra | short   | 471 ms  | 39 tok/s |
| Galaxy S24 Ultra | long    | 3310 ms | 35 tok/s |
| iPhone 16 Pro    | short   | 418 ms  | 70 tok/s |
| iPhone 16 Pro    | long    | 2668 ms | 52 tok/s |

@mergennachin
Contributor

@tdakhran
Contributor Author

tdakhran commented Sep 3, 2025

@tdakhran - still a few minor lints -- https://github.com/pytorch/executorch/actions/runs/17426424686/job/49500257182?pr=13805

I had the impression that `lintrunner -a` fixes them all; pushed the fix. Thanks for noticing!

@mergennachin
Contributor

Thanks a lot @tdakhran

We'll take it from here on merging. There are a few internal tests that are failing, but we'll do a forward fix and patch on top of this.

@facebook-github-bot
Contributor

@jackzhxng has imported this pull request. If you are a Meta employee, you can view this in D81593776.

jackzhxng added a commit to jackzhxng/executorch that referenced this pull request Sep 3, 2025
jackzhxng added a commit to jackzhxng/executorch that referenced this pull request Sep 3, 2025
Summary: Fix internal tests for the external diff generated by pytorch#13805, which adds LiquidAI's LFM2 model to ExecuTorch (pull request pytorch#13916).

Reviewed By: mergennachin

Differential Revision: D81491136
Comment on lines +64 to +73
Bx = torch.cat(
    [self.conv_state, Bx], dim=-1
)  # (batch_size, dim, seq_len + L_cache - 1)

## Update the conv_state
new_conv_state = Bx[
    ..., -(self.L_cache - 1) :
]  # (batch_size, dim, L_cache - 1)
with torch.no_grad():
    self.conv_state.copy_(new_conv_state)
Contributor


BTW, it looks like you want a ring-buffer kind of update here, which could maybe be done a bit more efficiently. However, it complicates the conv computation.

Contributor Author


A ring buffer of size N stores N entries. Here we store a single conv state, effectively a ring buffer of size 1, which is semantically equivalent to a registered buffer.

Contributor


isn't this

        new_conv_state = Bx[
            ..., -(self.L_cache - 1) :
        ]  # (batch_size, dim, L_cache - 1)

effectively slicing Bx for the last L_cache - 1 entries after the concat, which sounds like a sliding window of size L_cache?

Contributor Author


Oh, I see, you mean Bx as a ring buffer, that's right. I had self.conv_state in mind as a ring buffer.
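
To make the sliding-window semantics concrete, here is a tiny standalone illustration (invented values; batch and dim of 1):

```
import torch

L_cache = 3
conv_state = torch.tensor([[[1.0, 2.0]]])   # previous L_cache - 1 = 2 inputs
Bx = torch.tensor([[[3.0, 4.0, 5.0]]])      # 3 new gated inputs

Bx = torch.cat([conv_state, Bx], dim=-1)    # window over the whole stream
new_conv_state = Bx[..., -(L_cache - 1) :]  # trailing L_cache - 1 entries
print(Bx.squeeze().tolist())                # [1.0, 2.0, 3.0, 4.0, 5.0]
print(new_conv_state.squeeze().tolist())    # [4.0, 5.0]
```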

@facebook-github-bot
Contributor

@jackzhxng has imported this pull request. If you are a Meta employee, you can view this in D81593776.

@jackzhxng jackzhxng added the release notes: examples label and removed the release notes: llm label Sep 4, 2025
@facebook-github-bot
Contributor

@jackzhxng has imported this pull request. If you are a Meta employee, you can view this in D81593776.

@jackzhxng jackzhxng merged commit 8607c89 into pytorch:main Sep 5, 2025
116 of 117 checks passed
@tdakhran tdakhran deleted the tarek/feat/lfm2_upstream branch September 5, 2025 14:50