static/llms-full.txt (46 additions & 16 deletions)
@@ -3117,7 +3117,7 @@ Now that we have our document library agent ready, we can search them on demand
 
 ```py
 response = client.beta.conversations.start(
-    agent_id=image_agent.id,
+    agent_id=library_agent.id,
     inputs="How does the vision encoder for pixtral 12b work"
 )
 ```
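
As a quick follow-up to the corrected snippet above, here is a hedged sketch of running the call end to end and inspecting the reply. The client setup, the placeholder agent id, and the `response.outputs` attribute are assumptions based on the surrounding Mistral docs, not part of this diff.

```py
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")    # placeholder key
library_agent_id = "YOUR_LIBRARY_AGENT_ID"  # id of the document library agent created earlier

response = client.beta.conversations.start(
    agent_id=library_agent_id,
    inputs="How does the vision encoder for pixtral 12b work",
)

# Assumption: the conversation response exposes a list of output entries.
for entry in response.outputs:
    print(entry)
```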
@@ -11711,11 +11711,11 @@ Currently we have two reasoning models:
 - `magistral-medium-latest`: Our more powerful reasoning model balancing performance and cost.
 
 :::info
-Currently, `-latest` points to `-2507`, our most recent version of our reasoning models. If you were previously using `-2506`, a **migration** regarding the thinking chunks is required.
-- `-2507` **(new)**: Uses tokenized thinking chunks via control tokens, providing the thinking traces in different types of content chunks.
+Currently, `-latest` points to `-2509`, the most recent version of our reasoning models. If you were previously using `-2506`, a **migration** regarding the thinking chunks is required.
+- `-2507` & `-2509` **(new)**: Use tokenized thinking chunks via control tokens, providing the thinking traces in different types of content chunks.
 - `-2506` **(old)**: Used `<think>\n` and `\n</think>\n` tags as strings to encapsulate the thinking traces for input and output within the same content type.
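
To make the thinking-chunk migration described above concrete, here is a small illustrative sketch (not from the docs) that normalizes both response formats into plain strings. The chunk field names follow the JSON excerpts shown further down this page and should be treated as assumptions.

```py
import re


def split_reasoning(content):
    """Return (thinking, answer) from a Magistral message content.

    - 2507/2509 style: `content` is a list of typed chunks, e.g. a `thinking`
      chunk wrapping text parts plus a final `text` chunk with the answer.
    - 2506 style: `content` is a single string with the trace wrapped in
      `<think>` / `</think>` tags, each followed by a newline.
    """
    if isinstance(content, str):  # old -2506 format
        match = re.search(r"<think>\n(.*?)\n</think>\n?", content, re.DOTALL)
        thinking = match.group(1) if match else ""
        answer = re.sub(r"<think>\n.*?\n</think>\n?", "", content, flags=re.DOTALL)
        return thinking, answer.strip()

    # New -2507 / -2509 format: typed content chunks (assumed dict-shaped here).
    thinking_parts, answer_parts = [], []
    for chunk in content:
        if chunk["type"] == "thinking":
            thinking_parts.extend(part["text"] for part in chunk["thinking"] if part["type"] == "text")
        elif chunk["type"] == "text":
            answer_parts.append(chunk["text"])
    return "\n".join(thinking_parts), "\n".join(answer_parts)
```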
@@ -11794,7 +11794,33 @@ To have the best performance out of our models, we recommend having the followin
 <summary><b>System Prompt</b></summary>
 
 <Tabs groupId="version">
-<TabItem value="2507" label="2507 (new)" default>
+<TabItem value="2509" label="2509 (new)" default>
+```json
+{
+    "role": "system",
+    "content": [
+        {
+            "type": "text",
+            "text": "# HOW YOU SHOULD THINK AND ANSWER\n\nFirst draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.\n\nYour thinking process must follow the template below:"
+        },
+        {
+            "type": "thinking",
+            "thinking": [
+                {
+                    "type": "text",
+                    "text": "Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response to the user."
+                }
+            ]
+        },
+        {
+            "type": "text",
+            "text": "Here, provide a self-contained response."
@@ -11920 +11946 @@
 The output of the model will include different chunks of content, but mostly a `thinking` type with the reasoning traces and a `text` type with the answer like so:
 ```json
 "content": [
@@ -13101,10 +13127,12 @@ Vision capabilities enable models to analyze images and provide insights based o
 For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../document_ai/document_ai_overview).
 
 ## Models with Vision Capabilities:
+- Mistral Medium 3.1 2508 (`mistral-medium-latest`)
+- Mistral Small 3.2 2506 (`mistral-small-latest`)
+- Magistral Small 1.2 2509 (`magistral-small-latest`)
+- Magistral Medium 1.2 2509 (`magistral-medium-latest`)
 - Pixtral 12B (`pixtral-12b-latest`)
 - Pixtral Large 2411 (`pixtral-large-latest`)
-- Mistral Medium 2505 (`mistral-medium-latest`)
-- Mistral Small 2503 (`mistral-small-latest`)
 
 ## Passing an Image URL
 If the image is hosted online, you can simply provide the URL of the image in the request. This method is straightforward and does not require any encoding.
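
For readers following along, here is a minimal sketch of the pattern this section describes: passing a hosted image URL directly in the request to a vision-capable model. The URL and prompt are placeholders; verify the chunk shape against the current API reference.

```py
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")  # placeholder

# The image is referenced by URL only; no download or base64 encoding needed.
response = client.chat.complete(
    model="mistral-medium-latest",  # any model from the vision list above
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": "https://example.com/picture.png"},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)
```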
@@ -16214,17 +16242,16 @@ Mistral provides two types of models: open models and premier models.
 
 | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
 | Mistral Medium 3.1 | | :heavy_check_mark: | Our frontier-class multimodal model released August 2025. Improving tone and performance. Read more about Medium 3 in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2508` | 25.08|
-| Magistral Medium 1.1 | | :heavy_check_mark: | Our frontier-class reasoning model released July 2025. | 40k | `magistral-medium-2507` | 25.07|
+| Magistral Medium 1.2 | | :heavy_check_mark: | Our frontier-class reasoning model update released September 2025 with vision support. | 128k | `magistral-medium-2509` | 25.09|
 | Codestral 2508 | | :heavy_check_mark: | Our cutting-edge language model for coding released at the end of July 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-25-08/) | 256k | `codestral-2508` | 25.08|
 | Voxtral Mini Transcribe | | :heavy_check_mark: | An efficient audio input model, fine-tuned and optimized for transcription purposes only. | | `voxtral-mini-2507` via `audio/transcriptions` | 25.07|
 | Devstral Medium | | :heavy_check_mark: | An enterprise-grade text model that excels at using tools to explore codebases, editing multiple files and powering software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral-2507) | 128k | `devstral-medium-2507` | 25.07|
 | Mistral OCR 2505 | | :heavy_check_mark: | Our OCR service powering our Document AI stack that enables our users to extract interleaved text and images | | `mistral-ocr-2505` | 25.05|
-| Magistral Medium 1 | | :heavy_check_mark: | Our first frontier-class reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-medium-2506` | 25.06|
 | Ministral 3B | | :heavy_check_mark: | World’s best edge model. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-2410` | 24.10|
 | Ministral 8B | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with extremely high performance/price ratio. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-2410` | 24.10|
 | Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
-| Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding with the second version released January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501/) | 256k | `codestral-2501` | 25.01|
 | Mistral Large 2.1 |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our top-tier large model for high-complexity tasks with the latest version released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-2411` | 24.11|
+| Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding released in January 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501) | 256k | `codestral-2501` | 25.01|
 | Pixtral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our first frontier-class multimodal model released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `pixtral-large-2411` | 24.11|
 | Mistral Small 2| :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md) | :heavy_check_mark: | Our updated small version, released September 2024. Learn more in our [blog post](https://mistral.ai/news/september-24-release) | 32k | `mistral-small-2407` | 24.07|
 | Mistral Embed | | :heavy_check_mark: | Our state-of-the-art semantic embedding model for extracting representations of text extracts | 8k | `mistral-embed` | 23.12|
@@ -16235,15 +16262,13 @@ Mistral provides two types of models: open models and premier models.
 
 | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
-| Magistral Small 1.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our small reasoning model released July 2025. | 40k | `magistral-small-2507` | 25.07|
+| Magistral Small 1.2 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our small reasoning model released September 2025 with vision support. | 128k | `magistral-small-2509` | 25.09|
 | Voxtral Small | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our first model with audio input capabilities for instruct use cases. | 32k | `voxtral-small-2507` | 25.07|
 | Voxtral Mini | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A mini version of our first audio input model. | 32k | `voxtral-mini-2507` | 25.07|
 | Mistral Small 3.2 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | An update to our previous small model, released June 2025. | 128k | `mistral-small-2506` | 25.06|
-| Magistral Small 1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our first small reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-small-2506` | 25.06|
 | Devstral Small 1.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | An update to our open source model that excels at using tools to explore codebases, editing multiple files and powering software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral-2507) | 128k | `devstral-small-2507` | 25.07|
 | Mistral Small 3.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A new leader in the small models category with image understanding capabilities, released March 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-small-3-1/) | 128k | `mistral-small-2503` | 25.03|
 | Mistral Small 3| :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A new leader in the small models category, released January 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-small-3) | 32k | `mistral-small-2501` | 25.01|
-| Devstral Small 1| :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A 24B text model, open source model that excels at using tools to explore codebases, editing multiple files and power software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral/) | 128k | `devstral-small-2505` | 25.05|
 | Pixtral 12B | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A 12B model with image understanding capabilities in addition to text. Learn more in our [blog post](https://mistral.ai/news/pixtral-12b/)| 128k | `pixtral-12b-2409` | 24.09|
 | Mistral Nemo 12B | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our best multilingual open source model released July 2024. Learn more in our [blog post](https://mistral.ai/news/mistral-nemo/) | 128k | `open-mistral-nemo`| 24.07|
@@ -16255,8 +16280,8 @@ it is recommended to use the dated versions of the Mistral AI API.
 Additionally, be prepared for the deprecation of certain endpoints in the coming months.
 
 Here are the details of the available versions:
-- `magistral-medium-latest`: currently points to `magistral-medium-2507`.
-- `magistral-small-latest`: currently points to `magistral-small-2507`.
+- `magistral-medium-latest`: currently points to `magistral-medium-2509`.
+- `magistral-small-latest`: currently points to `magistral-small-2509`.
 - `mistral-medium-latest`: currently points to `mistral-medium-2508`.
 - `mistral-large-latest`: currently points to `mistral-medium-2508`, previously `mistral-large-2411`.
 - `pixtral-large-latest`: currently points to `pixtral-large-2411`.
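
To illustrate the recommendation above about using dated versions (a sketch, not part of the diff): request a dated model name instead of a `-latest` alias so behaviour does not silently change when the alias is re-pointed.

```py
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")  # placeholder

response = client.chat.complete(
    model="magistral-medium-2509",  # pinned dated version instead of "magistral-medium-latest"
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```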
@@ -16300,6 +16325,11 @@ To prepare for model retirements and version upgrades, we recommend that custome