     - filename: mistralai_Magistral-Small-2509-Q4_K_M.gguf
       sha256: 1d638bc931de30d29fc73ad439206ff185f76666a096e7ad723866a20f78728d
       uri: huggingface://bartowski/mistralai_Magistral-Small-2509-GGUF/mistralai_Magistral-Small-2509-Q4_K_M.gguf
+- !!merge <<: *mistral03
+  name: "mistralai_magistral-small-2509-multimodal"
+  urls:
+    - https://huggingface.co/mistralai/Magistral-Small-2509
+    - https://huggingface.co/unsloth/Magistral-Small-2509-GGUF
+  description: |
+    Magistral Small 1.2
+    Building on Mistral Small 3.2 (2506) with added reasoning capabilities, trained with SFT on Magistral Medium traces followed by RL, it is a small, efficient reasoning model with 24B parameters.
+
+    Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
+
+    Learn more about Magistral in our blog post.
+
+    The model was presented in the paper Magistral.
+
+    Quantized by unsloth, using their recommended parameters as defaults and including an mmproj file for multimodality.
+  tags:
+    - llm
+    - gguf
+    - gpu
+    - mistral
+    - cpu
+    - function-calling
+    - multimodal
+  overrides:
+    context_size: 40960
+    parameters:
+      model: llama-cpp/models/Magistral-Small-2509-Q4_K_M.gguf
+      temperature: 0.7
+      repeat_penalty: 1.0
+      top_k: -1
+      top_p: 0.95
+    backend: llama-cpp
+    known_usecases:
+      - chat
+    mmproj: llama-cpp/mmproj/mmproj-F32.gguf
+    options:
+      - use_jinja:true
+  files:
+    - filename: llama-cpp/models/Magistral-Small-2509-Q4_K_M.gguf
+      sha256: 6d3e5f2a83ed9d64bd3382fb03be2f6e0bc7596a9de16e107bf22f959891945b
+      uri: huggingface://unsloth/Magistral-Small-2509-GGUF/Magistral-Small-2509-Q4_K_M.gguf
+    - filename: llama-cpp/mmproj/mmproj-F32.gguf
+      sha256: 5861a0938164a7e56cd137a8fcd49a300b9e00861f7f1cb5dfcf2483d765447c
+      uri: huggingface://unsloth/Magistral-Small-2509-GGUF/mmproj-F32.gguf
 - &mudler
   url: "github:mudler/LocalAI/gallery/mudler.yaml@master" ### START mudler's LocalAI specific-models
   name: "LocalAI-llama3-8b-function-call-v0.2"
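Each `files` entry above pins an exact sha256, so a downloaded quant can be verified before it is loaded. A minimal sketch of such a check (the local path in the usage comment is hypothetical, not part of the gallery entry):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks so a multi-GB
    GGUF file never has to be loaded into RAM at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()


# Digest pinned by the gallery entry above for the Q4_K_M quant.
EXPECTED = "6d3e5f2a83ed9d64bd3382fb03be2f6e0bc7596a9de16e107bf22f959891945b"

# Usage (path is illustrative):
#   assert sha256_of("Magistral-Small-2509-Q4_K_M.gguf") == EXPECTED
```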
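The `huggingface://` URIs in the entry appear to name `<org>/<repo>/<file>` on the Hub. Assuming the file lives on the repo's `main` branch, such a URI maps onto the Hub's standard `resolve` download endpoint; a sketch of that mapping (an assumption for illustration, not LocalAI's actual downloader code):

```python
def resolve_hf_uri(uri: str) -> str:
    """Map huggingface://<org>/<repo>/<file> to a direct download URL.

    Assumes the file is on the repo's main branch, which is how the
    gallery URIs above appear to be used.
    """
    prefix = "huggingface://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a huggingface URI: {uri}")
    org, repo, filename = uri[len(prefix):].split("/", 2)
    return f"https://huggingface.co/{org}/{repo}/resolve/main/{filename}"


# Example with the mmproj URI from the entry above:
# resolve_hf_uri("huggingface://unsloth/Magistral-Small-2509-GGUF/mmproj-F32.gguf")
# -> "https://huggingface.co/unsloth/Magistral-Small-2509-GGUF/resolve/main/mmproj-F32.gguf"
```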