Llama-4 mapping #20
Conversation
Walkthrough
The pull request extends the tensor mapping configuration in `gguf-py/gguf/tensor_mapping.py` with mappings for the Llama-4 architecture.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant U as User
    participant TM as TensorNameMap
    U->>TM: Request tensor mapping for a given key
    alt Lookup in mappings_cfg
        TM->>TM: Retrieve mapping (embed_tokens, lm_head, norm)
    end
    alt Lookup in block_mappings_cfg
        TM->>TM: Retrieve mapping (layers: input_layernorm, self_attn, feed_forward, etc.)
    end
    TM-->>U: Return the mapped configuration
```
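The real lookup tables live in `gguf-py/gguf/tensor_mapping.py`; the sketch below is a minimal, hypothetical illustration of the two-tier lookup the diagram describes. The dict entries, the `{bid}` placeholder convention, and the `map_tensor_name` helper are illustrative assumptions, not the actual gguf-py API.

```python
from typing import Optional

# Model-level mappings: GGUF name -> candidate source tensor names.
# Entries are illustrative, loosely following Hugging Face naming.
mappings_cfg = {
    "token_embd": ("model.embed_tokens",),
    "output": ("lm_head",),
    "output_norm": ("model.norm",),
}

# Per-block mappings, with {bid} standing in for the layer index.
block_mappings_cfg = {
    "attn_norm": ("model.layers.{bid}.input_layernorm",),
    "attn_q": ("model.layers.{bid}.self_attn.q_proj",),
    "ffn_norm": ("model.layers.{bid}.post_attention_layernorm",),
}

def map_tensor_name(name: str, n_blocks: int = 2) -> Optional[str]:
    """Resolve a source tensor name to a GGUF name, or None if unmapped."""
    # Model-level lookup first (embed_tokens, lm_head, norm).
    for gguf_name, candidates in mappings_cfg.items():
        if name in candidates:
            return gguf_name
    # Then per-block lookup, expanding the block-index placeholder.
    for gguf_name, candidates in block_mappings_cfg.items():
        for bid in range(n_blocks):
            if name in (c.format(bid=bid) for c in candidates):
                return f"blk.{bid}.{gguf_name}"
    return None

print(map_tensor_name("model.embed_tokens"))               # token_embd
print(map_tensor_name("model.layers.1.self_attn.q_proj"))  # blk.1.attn_q
```

Extending a model family in this scheme is then just a matter of adding the new architecture's source names to the candidate tuples, which matches the shape of the change this PR describes.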
Files selected for processing (1): gguf-py/gguf/tensor_mapping.py
@danielhanchen oh sorry I didn't notice this PR. Thanks a lot!! I'll "forward" this PR to upstream llama.cpp (doing a git cherry-pick)