
Commit eb74609

Revert "Phi 4 mini updates only"
This reverts commit 0154b59.
1 parent 0154b59 commit eb74609

File tree: 3 files changed (+65, -17 lines)


articles/ai-studio/how-to/deploy-models-phi-4.md

Lines changed: 63 additions & 15 deletions
@@ -29,6 +29,18 @@ The Phi-4 family of small language models (SLMs) is a collection of instruction-
 
 The Phi-4 family chat models include the following models:
 
+# [Phi-4-multimodal-instruct](#tab/phi-4-multimodal-instruct)
+
+Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, and generates text outputs. The model underwent an enhancement process, incorporating both supervised fine-tuning, and direct preference optimization to support precise instruction adherence and safety measures.
+
+The Phi-4-multimodal-instruct model comes in the following variant with a 128K token length.
+
+
+The following models are available:
+
+* [Phi-4-multimodal-instruct](https://aka.ms/azureai/landing/Phi-4-multimodal-instruct)
+
+
 # [Phi-4-mini-instruct](#tab/phi-4-mini-instruct)
 
 Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
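The added description states that Phi-4-multimodal-instruct accepts text, image, and audio inputs. For context, here is a minimal sketch of an image-plus-text request using the `azure-ai-inference` Python package that the article's Python examples rely on. The endpoint and key environment variables, the image file name, and the multimodal content types (`TextContentItem`, `ImageContentItem`, `ImageUrl`) are assumptions to verify against the published article and SDK version, not the article's own sample.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ImageContentItem,
    ImageUrl,
    TextContentItem,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Phi-4-multimodal-instruct deployment.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# One user message that combines text with a local image; chart.png is a placeholder file.
response = client.complete(
    messages=[
        UserMessage(
            content=[
                TextContentItem(text="Summarize what this chart shows."),
                ImageContentItem(
                    image_url=ImageUrl.load(
                        image_file="chart.png",
                        image_format="png",
                        detail="auto",  # assumed optional detail level
                    )
                ),
            ]
        )
    ],
)

print(response.choices[0].message.content)
```

Audio input would follow the same content-item pattern, but the exact type names should be checked against the SDK release you use.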
@@ -155,7 +167,7 @@ print("Model provider name:", model_info.model_provider_name)
 ```
 
 ```console
-Model name: Phi-4-mini-instruct
+Model name: Phi-4-multimodal-instruct
 Model type: chat-completions
 Model provider name: Microsoft
 ```
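This hunk only changes the model name in the sample output. For reference, the call that produces that output in the article's Python tab looks like the following sketch; the endpoint and key environment variable names are placeholders.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# Ask the endpoint which model is deployed behind it; with a
# Phi-4-multimodal-instruct deployment this prints the values shown above.
model_info = client.get_model_info()
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
print("Model provider name:", model_info.model_provider_name)
```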
@@ -176,7 +188,7 @@ response = client.complete(
 ```
 
 > [!NOTE]
-> Phi-4-mini-instruct and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Phi-4-multimodal-instruct, Phi-4-mini-instruct, and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:
 
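Because none of the models named in this note accept `role="system"`, a practical alternative to relying on the automatic translation is to fold the system instruction into the first user turn and then check that the reply actually honors it. A minimal sketch with the `azure-ai-inference` Python client, using placeholder endpoint and key values:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                # placeholder
)

system_instruction = "You are a helpful assistant that answers concisely."

# Prepend the instruction to the user prompt instead of sending a system message,
# then inspect the reply to confirm the model followed the instruction.
response = client.complete(
    messages=[
        UserMessage(
            content=f"{system_instruction}\n\nHow many languages are in the world?"
        ),
    ],
)
print(response.choices[0].message.content)
```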
@@ -192,7 +204,7 @@ print("\tCompletion tokens:", response.usage.completion_tokens)
 
 ```console
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Phi-4-mini-instruct
+Model: Phi-4-multimodal-instruct
 Usage:
     Prompt tokens: 19
     Total tokens: 91
@@ -344,6 +356,18 @@ except HttpResponseError as ex:
 
 The Phi-4 family chat models include the following models:
 
+# [Phi-4-multimodal-instruct](#tab/phi-4-multimodal-instruct)
+
+Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, and generates text outputs. The model underwent an enhancement process, incorporating both supervised fine-tuning, and direct preference optimization to support precise instruction adherence and safety measures.
+
+The Phi-4-multimodal-instruct model comes in the following variant with a 128K token length.
+
+
+The following models are available:
+
+* [Phi-4-multimodal-instruct](https://aka.ms/azureai/landing/Phi-4-multimodal-instruct)
+
+
 # [Phi-4-mini-instruct](#tab/phi-4-mini-instruct)
 
 Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
@@ -468,7 +492,7 @@ console.log("Model provider name: ", model_info.body.model_provider_name)
 ```
 
 ```console
-Model name: Phi-4-mini-instruct
+Model name: Phi-4-multimodal-instruct
 Model type: chat-completions
 Model provider name: Microsoft
 ```
@@ -491,7 +515,7 @@ var response = await client.path("/chat/completions").post({
 ```
 
 > [!NOTE]
-> Phi-4-mini-instruct and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Phi-4-multimodal-instruct, Phi-4-mini-instruct, and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:
 
@@ -511,7 +535,7 @@ console.log("\tCompletion tokens:", response.body.usage.completion_tokens);
 
 ```console
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Phi-4-mini-instruct
+Model: Phi-4-multimodal-instruct
 Usage:
     Prompt tokens: 19
     Total tokens: 91
@@ -682,6 +706,18 @@ catch (error) {
 
 The Phi-4 family chat models include the following models:
 
+# [Phi-4-multimodal-instruct](#tab/phi-4-multimodal-instruct)
+
+Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, and generates text outputs. The model underwent an enhancement process, incorporating both supervised fine-tuning, and direct preference optimization to support precise instruction adherence and safety measures.
+
+The Phi-4-multimodal-instruct model comes in the following variant with a 128K token length.
+
+
+The following models are available:
+
+* [Phi-4-multimodal-instruct](https://aka.ms/azureai/landing/Phi-4-multimodal-instruct)
+
+
 # [Phi-4-mini-instruct](#tab/phi-4-mini-instruct)
 
 Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
@@ -821,7 +857,7 @@ Console.WriteLine($"Model provider name: {modelInfo.Value.ModelProviderName}");
 ```
 
 ```console
-Model name: Phi-4-mini-instruct
+Model name: Phi-4-multimodal-instruct
 Model type: chat-completions
 Model provider name: Microsoft
 ```
@@ -843,7 +879,7 @@ Response<ChatCompletions> response = client.Complete(requestOptions);
 ```
 
 > [!NOTE]
-> Phi-4-mini-instruct and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Phi-4-multimodal-instruct, Phi-4-mini-instruct, and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:
 
@@ -859,7 +895,7 @@ Console.WriteLine($"\tCompletion tokens: {response.Value.Usage.CompletionTokens}
 
 ```console
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Phi-4-mini-instruct
+Model: Phi-4-multimodal-instruct
 Usage:
     Prompt tokens: 19
     Total tokens: 91
@@ -1032,6 +1068,18 @@ catch (RequestFailedException ex)
 
 The Phi-4 family chat models include the following models:
 
+# [Phi-4-multimodal-instruct](#tab/phi-4-multimodal-instruct)
+
+Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, and generates text outputs. The model underwent an enhancement process, incorporating both supervised fine-tuning, and direct preference optimization to support precise instruction adherence and safety measures.
+
+The Phi-4-multimodal-instruct model comes in the following variant with a 128K token length.
+
+
+The following models are available:
+
+* [Phi-4-multimodal-instruct](https://aka.ms/azureai/landing/Phi-4-multimodal-instruct)
+
+
 # [Phi-4-mini-instruct](#tab/phi-4-mini-instruct)
 
 Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
@@ -1122,7 +1170,7 @@ The response is as follows:
 
 ```json
 {
-    "model_name": "Phi-4-mini-instruct",
+    "model_name": "Phi-4-multimodal-instruct",
     "model_type": "chat-completions",
     "model_provider_name": "Microsoft"
 }
@@ -1148,7 +1196,7 @@ The following example shows how you can create a basic chat completions request
 ```
 
 > [!NOTE]
-> Phi-4-mini-instruct and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Phi-4-multimodal-instruct, Phi-4-mini-instruct, and Phi-4 don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:
 
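For readers following the REST tab, the request behind these JSON snippets can be reproduced with plain HTTP. The sketch below uses Python's `requests`; the endpoint URL and the `Authorization: Bearer <key>` header are assumptions about the deployment's auth scheme (some deployments use an `api-key` header instead), and no `system` message is sent, per the note above.

```python
import requests

ENDPOINT = "https://<your-endpoint>.inference.ai.azure.com"  # placeholder
API_KEY = "<your-key>"                                        # placeholder

payload = {
    "messages": [
        {"role": "user", "content": "How many languages are in the world?"}
    ],
}

# Assumed auth header; adjust to your deployment's scheme if it differs.
resp = requests.post(
    f"{ENDPOINT}/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
resp.raise_for_status()

data = resp.json()
print(data["model"])                             # for example, Phi-4-multimodal-instruct
print(data["choices"][0]["message"]["content"])  # assistant reply
print(data["usage"])                             # prompt/completion/total tokens
```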
@@ -1158,7 +1206,7 @@ The response is as follows, where you can see the model's usage statistics:
     "id": "0a1234b5de6789f01gh2i345j6789klm",
     "object": "chat.completion",
     "created": 1718726686,
-    "model": "Phi-4-mini-instruct",
+    "model": "Phi-4-multimodal-instruct",
     "choices": [
         {
             "index": 0,
@@ -1215,7 +1263,7 @@ You can visualize how streaming generates content:
     "id": "23b54589eba14564ad8a2e6978775a39",
     "object": "chat.completion.chunk",
     "created": 1718726371,
-    "model": "Phi-4-mini-instruct",
+    "model": "Phi-4-multimodal-instruct",
     "choices": [
         {
             "index": 0,
@@ -1238,7 +1286,7 @@ The last message in the stream has `finish_reason` set, indicating the reason fo
     "id": "23b54589eba14564ad8a2e6978775a39",
     "object": "chat.completion.chunk",
     "created": 1718726371,
-    "model": "Phi-4-mini-instruct",
+    "model": "Phi-4-multimodal-instruct",
     "choices": [
         {
             "index": 0,
@@ -1289,7 +1337,7 @@ Explore other parameters that you can specify in the inference client. For a ful
     "id": "0a1234b5de6789f01gh2i345j6789klm",
     "object": "chat.completion",
     "created": 1718726686,
-    "model": "Phi-4-mini-instruct",
+    "model": "Phi-4-multimodal-instruct",
     "choices": [
         {
             "index": 0,

articles/ai-studio/how-to/model-catalog-overview.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Gretel | Not available | Gretel-Navigator
 Healthcare AI family Models | MedImageParse<BR> MedImageInsight<BR> CxrReportGen<BR> Virchow<BR> Virchow2<BR> Prism<BR> BiomedCLIP-PubMedBERT<BR> microsoft-llava-med-v1.5<BR> m42-health-llama3-med4<BR> biomistral-biomistral-7b<BR> microsoft-biogpt-large-pub<BR> microsoft-biomednlp-pub<BR> stanford-crfm-biomedlm<BR> medicalai-clinicalbert<BR> microsoft-biogpt<BR> microsoft-biogpt-large<BR> microsoft-biomednlp-pub<BR> | Not Available
 JAIS | Not available | jais-30b-chat
 Meta Llama family models | Llama-3.3-70B-Instruct<BR> Llama-3.2-3B-Instruct<BR> Llama-3.2-1B-Instruct<BR> Llama-3.2-1B<BR> Llama-3.2-90B-Vision-Instruct<BR> Llama-3.2-11B-Vision-Instruct<BR> Llama-3.1-8B-Instruct<BR> Llama-3.1-8B<BR> Llama-3.1-70B-Instruct<BR> Llama-3.1-70B<BR> Llama-3-8B-Instruct<BR> Llama-3-70B<BR> Llama-3-8B<BR> Llama-Guard-3-1B<BR> Llama-Guard-3-8B<BR> Llama-Guard-3-11B-Vision<BR> Llama-2-7b<BR> Llama-2-70b<BR> Llama-2-7b-chat<BR> Llama-2-13b-chat<BR> CodeLlama-7b-hf<BR> CodeLlama-7b-Instruct-hf<BR> CodeLlama-34b-hf<BR> CodeLlama-34b-Python-hf<BR> CodeLlama-34b-Instruct-hf<BR> CodeLlama-13b-Instruct-hf<BR> CodeLlama-13b-Python-hf<BR> Prompt-Guard-86M<BR> CodeLlama-70b-hf<BR> | Llama-3.3-70B-Instruct<BR> Llama-3.2-90B-Vision-Instruct<br> Llama-3.2-11B-Vision-Instruct<br> Llama-3.1-8B-Instruct<br> Llama-3.1-70B-Instruct<br> Llama-3.1-405B-Instruct<br> Llama-3-8B-Instruct<br> Llama-3-70B-Instruct<br> Llama-2-7b<br> Llama-2-7b-chat<br> Llama-2-70b<br> Llama-2-70b-chat<br> Llama-2-13b<br> Llama-2-13b-chat<br>
-Microsoft Phi family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> Phi-3-vision-128k-Instruct <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct
+Microsoft Phi family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> Phi-3-vision-128k-Instruct <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct <br> Phi-4-multimodal-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct <br> Phi-4-multimodal-instruct
 Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Ministral-3B <br> Mistral-NeMo
 Nixtla | Not available | TimeGEN-1
 
articles/ai-studio/includes/region-availability-maas.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ Llama 3.1 405B Instruct | [Microsoft Managed countries/regions](/partner-center
 
 | Model | Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
 |---------|---------|---------|---------|
-Phi-4 <br> Phi-4-mini-instruct | Not applicable | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Phi-4 <br> Phi-4-mini-instruct <br> Phi-4-multimodal-instruct | Not applicable | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
 Phi-3.5-vision-Instruct | Not applicable | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
 Phi-3.5-MoE-Instruct | Not applicable | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | East US 2 |
 Phi-3.5-Mini-Instruct | Not applicable | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | East US 2 | East US 2 |
