Commit 93d7e5d

chore(model gallery): Add entry for Magistral Small 1.2 with mmproj (#8248)
Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
Parent: ff5a54b

File tree

1 file changed (+45, −0)

gallery/index.yaml

Lines changed: 45 additions & 0 deletions
@@ -12002,6 +12002,51 @@
     - filename: mistralai_Magistral-Small-2509-Q4_K_M.gguf
       sha256: 1d638bc931de30d29fc73ad439206ff185f76666a096e7ad723866a20f78728d
       uri: huggingface://bartowski/mistralai_Magistral-Small-2509-GGUF/mistralai_Magistral-Small-2509-Q4_K_M.gguf
+- !!merge <<: *mistral03
+  name: "mistralai_magistral-small-2509-multimodal"
+  urls:
+    - https://huggingface.co/mistralai/Magistral-Small-2509
+    - https://huggingface.co/unsloth/Magistral-Small-2509-GGUF
+  description: |
+    Magistral Small 1.2
+    Building upon Mistral Small 3.2 (2506), with added reasoning capabilities, undergoing SFT from Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters.
+
+    Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
+
+    Learn more about Magistral in our blog post.
+
+    The model was presented in the paper Magistral.
+
+    Quantization from unsloth, using their recommended parameters as defaults and including mmproj for multimodality.
+  tags:
+    - llm
+    - gguf
+    - gpu
+    - mistral
+    - cpu
+    - function-calling
+    - multimodal
+  overrides:
+    context_size: 40960
+    parameters:
+      model: llama-cpp/models/Magistral-Small-2509-Q4_K_M.gguf
+      temperature: 0.7
+      repeat_penalty: 1.0
+      top_k: -1
+      top_p: 0.95
+    backend: llama-cpp
+    known_usecases:
+      - chat
+    mmproj: llama-cpp/mmproj/mmproj-F32.gguf
+    options:
+      - use_jinja:true
+  files:
+    - filename: llama-cpp/models/Magistral-Small-2509-Q4_K_M.gguf
+      sha256: 6d3e5f2a83ed9d64bd3382fb03be2f6e0bc7596a9de16e107bf22f959891945b
+      uri: huggingface://unsloth/Magistral-Small-2509-GGUF/Magistral-Small-2509-Q4_K_M.gguf
+    - filename: llama-cpp/mmproj/mmproj-F32.gguf
+      sha256: 5861a0938164a7e56cd137a8fcd49a300b9e00861f7f1cb5dfcf2483d765447c
+      uri: huggingface://unsloth/Magistral-Small-2509-GGUF/mmproj-F32.gguf
 - &mudler
   url: "github:mudler/LocalAI/gallery/mudler.yaml@master" ### START mudler's LocalAI specific-models
   name: "LocalAI-llama3-8b-function-call-v0.2"

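Each `files` item pins a sha256 checksum so a downloaded GGUF can be verified against the gallery. A standalone sketch of that verification step (not LocalAI's own code; the local path in the comment is a hypothetical):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file in 1 MiB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical check against the first file in the entry above; the local
# filename is an assumption, the expected digest is the one from the diff:
# sha256_of("Magistral-Small-2509-Q4_K_M.gguf") should equal
# "6d3e5f2a83ed9d64bd3382fb03be2f6e0bc7596a9de16e107bf22f959891945b"
```

Streaming in chunks keeps memory flat even for multi-gigabyte quantized model files.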