
Commit d1f63fc

Merge pull request #2223 from MicrosoftDocs/main
1/9/2025 AM Publish
2 parents 6c9883f + 8845abc · commit d1f63fc

30 files changed (+206 / -143 lines)

articles/ai-services/document-intelligence/language-support/prebuilt.md

Lines changed: 1 addition & 1 deletion
@@ -207,7 +207,7 @@ Azure AI Document Intelligence models provide multilingual document processing s
 | • Spanish (`es`) |Spain (`es`)|
 | • Swedish (`sv`) | Sweden (`se`)|
 | • Thai (`th`) | Thailand (`th`)|
-| • Turkish (`tr`) | Turkey (`tr`)|
+| • Turkish (`tr`) | Türkiye (`tr`)|
 | • Ukrainian (`uk`) | Ukraine (`uk`)|
 | • Vietnamese (`vi`) | Vietnam (`vi`)|

articles/ai-services/language-service/language-detection/language-support.md

Lines changed: 1 addition & 1 deletion
@@ -187,7 +187,7 @@ If you have content expressed in a less frequently used language, you can try La
 | Kannada | `kn` | `Latn`, `Knda` |
 | Malayalam | `ml` | `Latn`, `Mlym` |
 | Marathi | `mr` | `Latn`, `Deva` |
-| Oriya | `or` | `Latn`, `Orya` |
+| Odia | `or` | `Latn`, `Orya` |
 | Punjabi | `pa` | `Latn`, `Guru` |
 | Tamil | `ta` | `Latn`, `Taml` |
 | Telugu | `te` | `Latn`, `Telu` |

articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md

Lines changed: 1 addition & 1 deletion
@@ -115,7 +115,7 @@ Consider the following scenarios:
 
 * *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
 
-    This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
+    This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries/regions and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
 
 | Country/Region | Language | Income |
 |---------|----------|--------|
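For readers skimming this change, the roll-up rule can be illustrated with a short, purely hypothetical Python sketch (the rows, income figures, and helper function are invented for illustration and are not Metrics Advisor code or API):

```python
# Hypothetical illustration of the "NULL only" roll-up setting described above.
# Row values are invented; this is not Metrics Advisor code.
rows = [
    {"Country/Region": "US", "Language": "EN-US", "Income": 120.0},
    {"Country/Region": None, "Language": "EN-US", "Income": 300.0},  # NULL: read as the pre-summed aggregate across countries/regions
    {"Country/Region": "CA", "Language": "EN-US", "Income": 80.0},
    {"Country/Region": "",   "Language": "EN-US", "Income": 100.0},  # empty: read as an ordinary row, possibly incomplete data
]

def is_rollup_row(row: dict, setting: str = "NULL only") -> bool:
    """Return True when the row should be read as an already-rolled-up aggregate."""
    if setting == "NULL only":
        return row["Country/Region"] is None
    if setting == "NULL or Empty (Default)":
        return row["Country/Region"] in (None, "")
    return False

for row in rows:
    print(row, "->", "aggregate" if is_rollup_row(row) else "ordinary")
```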

articles/ai-services/openai/assistants-reference.md

Lines changed: 56 additions & 0 deletions
@@ -42,6 +42,62 @@ Create an assistant with a model and instructions.
 | response_format | string or object | Optional | Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting this parameter to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON. Importantly, when using JSON mode, you must also instruct the model to produce JSON yourself using a system or user message. Without this instruction, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Additionally, the message content may be partially cut off if you use `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length. |
 | tool_resources | object | Optional | A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs. |
 
+### response_format types
+
+**string**
+
+`auto` is the default value.
+
+**object**
+
+Possible `type` values: `text`, `json_object`, `json_schema`.
+
+***json_schema***
+
+| Name | Type | Description | Default | Required/Optional |
+|--- |--- |--- |--- |--- |
+| `description` | string | A description of what the response format is for, used by the model to determine how to respond in the format. | | Optional |
+| `name` | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | | Required |
+| `schema` | object | The schema for the response format, described as a JSON Schema object. | | Optional |
+| `strict` | boolean or null | Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`. | false | Optional |
+
+### tool_resources properties
+
+**code_interpreter**
+
+| Name | Type | Description | Default |
+|--- |--- |--- |--- |
+| `file_ids` | array | A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool. | `[]` |
+
+**file_search**
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `vector_store_ids` | array | The vector store attached to this thread. There can be a maximum of 1 vector store attached to the thread. | Optional |
+| `vector_stores` | array | A helper to create a vector store with file_ids and attach it to this thread. There can be a maximum of 1 vector store attached to the thread. | Optional |
+
+***vector_stores***
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `file_ids` | array | A list of file IDs to add to the vector store. There can be a maximum of 10,000 files in a vector store. | Optional |
+| `chunking_strategy` | object | The chunking strategy used to chunk the file(s). If not set, the auto strategy is used. | Optional |
+| `metadata` | map | Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. | Optional |
+
+***chunking_strategy***
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |---|
+| `Auto Chunking Strategy` | object | The default strategy. This strategy currently uses a `max_chunk_size_tokens` of `800` and `chunk_overlap_tokens` of `400`. `type` is always `auto`. | Required |
+| `Static Chunking Strategy` | object | `type` is always `static`. | Required |
+
+***Static Chunking Strategy***
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `max_chunk_size_tokens` | integer | The maximum number of tokens in each chunk. The default value is `800`. The minimum value is `100` and the maximum value is `4096`. | Required |
+| `chunk_overlap_tokens` | integer | The number of tokens that overlap between chunks. The default value is `400`. Note that the overlap must not exceed half of `max_chunk_size_tokens`. | Required |
+
 ### Returns
 
 An [assistant](#assistant-object) object.
articles/ai-services/openai/concepts/models.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ The Azure OpenAI `o1` and `o1-mini` models are specifically designed to tackle r
 
 | Model ID | Description | Max Request (tokens) | Training Data (up to) |
 | --- | :--- |:--- |:---: |
-| `o1` (2024-12-17) | The most capable model in the o1 series, offering enhanced reasoning abilities. <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> | Input: 200,000 <br> Output: 100,000 | |
+| `o1` (2024-12-17) | The most capable model in the o1 series, offering [enhanced reasoning abilities](../how-to/reasoning.md). <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** | Input: 200,000 <br> Output: 100,000 | Oct 2023 |
 |`o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
 | `o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |

articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md

Lines changed: 4 additions & 4 deletions
@@ -166,7 +166,7 @@ Added support and general availability for new voices in the following locales:
 | Locale (BCP-47) | Language | Text to speech voices |
 | ----- | ----- | ----- |
 | `as-IN` | Assamese (India) | `as-IN-YashicaNeural` (Female)<br/>`as-IN-PriyomNeural` (Male) |
-| `or-IN` | Oriya (India) | `or-IN-SubhasiniNeural` (Female)<br/>`or-IN-SukantNeural` (Male) |
+| `or-IN` | Odia (India) | `or-IN-SubhasiniNeural` (Female)<br/>`or-IN-SukantNeural` (Male) |
 | `pa-IN` | Punjabi (India) | `pa-IN-OjasNeural` (Male)<br/>`pa-IN-VaaniNeural` (Female) |
 
 The one voice in this table is generally available and supports only the 'en-IN' locale.
@@ -293,7 +293,7 @@ Text to speech avatar is now generally available. For more information, see [tex
 | `pt-PT`| Portuguese (Portugal)|
 | `sv-SE`| Swedish (Sweden)|
 | `th-TH`| Thai (Thailand)|
-| `tr-TR`| Turkish (Turkey)|
+| `tr-TR`| Turkish (Türkiye)|
 | `zh-CN`| Chinese (Mandarin, Simplified)|
 | `zh-HK`| Chinese (Cantonese, Traditional)|
 | `zh-TW`| Chinese (Taiwanese Mandarin, Traditional)|
@@ -306,8 +306,8 @@ Text to speech avatar is now generally available. For more information, see [tex
 
 | Locale | Language | Text to speech voices |
 |--------|-----------------|-------------------------|
-| `or-IN` | Oriya (India) | `or-IN-SubhasiniNeural` (Female) |
-| `or-IN` | Oriya (India) | `or-IN-SukantNeural` (Male) |
+| `or-IN` | Odia (India) | `or-IN-SubhasiniNeural` (Female) |
+| `or-IN` | Odia (India) | `or-IN-SukantNeural` (Male) |
 | `pa-IN` | Punjabi (India) | `pa-IN-VaaniNeural` (Female) |
 | `pa-IN` | Punjabi (India) | `pa-IN-OjasNeural` (Male) |
 | `as-IN` | Assamese (India)| `as-IN-YashicaNeural` (Female) |
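As a usage note, the voices listed above are selected by name through the Speech SDK. The following is a minimal sketch, assuming the Python Speech SDK; the subscription key, region, and sample text are placeholders:

```python
# Minimal sketch: synthesize speech with one of the newly listed Odia voices.
# Subscription key and region are placeholders; error handling is omitted.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "or-IN-SubhasiniNeural"  # voice name from the table above

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("ନମସ୍କାର").get()
print(result.reason)
```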

articles/ai-studio/how-to/deploy-models-cohere-command.md

Lines changed: 1 addition & 1 deletion
@@ -2129,7 +2129,7 @@ For more examples of how to use Cohere models, see the following examples and tu
 | Description | Language | Sample |
 |-------------------------------------------|-------------------|-----------------------------------------------------------------|
 | Web requests | Bash | [Command-R](https://aka.ms/samples/cohere-command-r/webrequests) - [Command-R+](https://aka.ms/samples/cohere-command-r-plus/webrequests) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
 | Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
 | OpenAI SDK (experimental) | Python | [Link](https://aka.ms/samples/cohere-command/openaisdk) |
 | LangChain | Python | [Link](https://aka.ms/samples/cohere/langchain) |
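The "Azure AI Inference package for Python" samples linked in this table (and in the similar tables below) follow the chat-completions pattern sketched here; the serverless endpoint URL, key, and prompt are placeholders:

```python
# Minimal sketch: call a serverless Cohere Command deployment with the
# azure-ai-inference package. Endpoint and key are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.<region>.models.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a retrieval-augmented generation pipeline does."),
    ],
)
print(response.choices[0].message.content)
```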

articles/ai-studio/how-to/deploy-models-cohere-embed.md

Lines changed: 1 addition & 1 deletion
@@ -631,7 +631,7 @@ Cohere Embed V3 models can optimize the embeddings based on its use case.
 | Description | Language | Sample |
 |-------------------------------------------|-------------------|-----------------------------------------------------------------|
 | Web requests | Bash | [cohere-embed.ipynb](https://aka.ms/samples/embed-v3/webrequests) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
 | Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
 | OpenAI SDK (experimental) | Python | [Link](https://aka.ms/samples/cohere-embed/openaisdk) |
 | LangChain | Python | [Link](https://aka.ms/samples/cohere-embed/langchain) |

articles/ai-studio/how-to/deploy-models-jais.md

Lines changed: 1 addition & 1 deletion
@@ -1169,7 +1169,7 @@ For more examples of how to use Jais models, see the following examples and tuto
 
 | Description | Language | Sample |
 |-------------------------------------------|-------------------|-----------------------------------------------------------------|
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
 | Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
 
 ## Cost and quota considerations for Jais models deployed as serverless API endpoints

articles/ai-studio/how-to/deploy-models-mistral-nemo.md

Lines changed: 1 addition & 1 deletion
@@ -2016,7 +2016,7 @@ For more examples of how to use Mistral models, see the following examples and t
 | Description | Language | Sample |
 |-------------------------------------------|-------------------|-----------------------------------------------------------------|
 | CURL request | Bash | [Link](https://aka.ms/mistral-large/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
 | Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
 | Python web requests | Python | [Link](https://aka.ms/mistral-large/webrequests-sample) |
 | OpenAI SDK (experimental) | Python | [Link](https://aka.ms/mistral-large/openaisdk) |
