
Commit 576f080

Author: Jill Grant
Merge pull request #285102 from msakande/phi-3.5-fix-PR-warnings
fix warnings from PR #284763
2 parents 3250994 + 4fd130b, commit 576f080

11 files changed: +44 -44 lines

articles/ai-studio/how-to/deploy-models-phi-3-5-moe.md

Lines changed: 1 addition & 1 deletion

@@ -599,7 +599,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-This example also use the following namespaces but you may not always need them:
+This example also uses the following namespaces but you may not always need them:
 
 
 ```csharp

articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md

Lines changed: 10 additions & 10 deletions

@@ -27,7 +27,7 @@ The Phi-3.5 small language models (SLMs) are a collection of instruction-tuned g
 
 ## Phi-3.5 chat model with vision
 
-Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens)that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -298,7 +298,7 @@ import IPython.display as Disp
 Disp.Image(requests.get(image_url).content)
 ```
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 

@@ -347,7 +347,7 @@ Usage:
 
 ## Phi-3.5 chat model with vision
 
-Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens)that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -632,7 +632,7 @@ img.src = data_url;
 document.body.appendChild(img);
 ```
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 
 Now, create a chat completion request with the image:

@@ -690,7 +690,7 @@ Usage:
 
 ## Phi-3.5 chat model with vision
 
-Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens)that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -741,7 +741,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-This example also use the following namespaces but you may not always need them:
+This example also uses the following namespaces but you may not always need them:
 
 
 ```csharp

@@ -980,7 +980,7 @@ string dataUrl = $"data:image/{imageFormat};base64,{imageBase64}";
 
 Visualize the image:
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 

@@ -1030,7 +1030,7 @@ Usage:
 
 ## Phi-3.5 chat model with vision
 
-Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens)that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -1333,11 +1333,11 @@ Phi-3.5-vision-Instruct can reason across text and images and generate text comp
 To see this capability, download an image and encode the information as `base64` string. The resulting data should be inside of a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs):
 
 > [!TIP]
-> You will need to construct the data URL using an scripting or programming language. This tutorial use [this sample image](../media/how-to/sdks/slms-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
+> You will need to construct the data URL using an scripting or programming language. This tutorial use [this sample image](../media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
 
 Visualize the image:
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 
articles/ai-studio/how-to/deploy-models-phi-3-vision.md

Lines changed: 10 additions & 10 deletions

@@ -27,7 +27,7 @@ The Phi-3 family of small language models (SLMs) is a collection of instruction-
 
 ## Phi-3 chat model with vision
 
-Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -298,7 +298,7 @@ import IPython.display as Disp
 Disp.Image(requests.get(image_url).content)
 ```
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 

@@ -347,7 +347,7 @@ Usage:
 
 ## Phi-3 chat model with vision
 
-Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -632,7 +632,7 @@ img.src = data_url;
 document.body.appendChild(img);
 ```
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 
 Now, create a chat completion request with the image:

@@ -690,7 +690,7 @@ Usage:
 
 ## Phi-3 chat model with vision
 
-Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -741,7 +741,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-This example also use the following namespaces but you may not always need them:
+This example also uses the following namespaces but you may not always need them:
 
 
 ```csharp

@@ -980,7 +980,7 @@ string dataUrl = $"data:image/{imageFormat};base64,{imageBase64}";
 
 Visualize the image:
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 

@@ -1030,7 +1030,7 @@ Usage:
 
 ## Phi-3 chat model with vision
 
-Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly-available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
+Phi-3 Vision is a lightweight, state-of-the-art, open multimodal model. The model was built upon datasets that include synthetic data and filtered, publicly available websites - with a focus on high-quality, reasoning-dense data, both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) that it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
 
 
 You can learn more about the models in their respective model card:

@@ -1333,11 +1333,11 @@ Phi-3-vision-128k-Instruct can reason across text and images and generate text c
 To see this capability, download an image and encode the information as `base64` string. The resulting data should be inside of a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs):
 
 > [!TIP]
-> You will need to construct the data URL using an scripting or programming language. This tutorial use [this sample image](../media/how-to/sdks/slms-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
+> You will need to construct the data URL using an scripting or programming language. This tutorial use [this sample image](../media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
 
 Visualize the image:
 
-:::image type="content" source="../media/how-to/sdks/slms-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/slms-chart-example.jpg":::
+:::image type="content" source="../media/how-to/sdks/small-language-models-chart-example.jpg" alt-text="A chart displaying the relative capabilities between large language models and small language models." lightbox="../media/how-to/sdks/small-language-models-chart-example.jpg":::
 
 Now, create a chat completion request with the image:
 

articles/ai-studio/how-to/deploy-models-phi-3.md

Lines changed: 1 addition & 1 deletion

@@ -785,7 +785,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-This example also use the following namespaces but you may not always need them:
+This example also uses the following namespaces but you may not always need them:
 
 
 ```csharp

articles/machine-learning/how-to-deploy-models-phi-3-5-moe.md

Lines changed: 1 addition & 1 deletion

@@ -599,7 +599,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-This example also use the following namespaces but you may not always need them:
+This example also uses the following namespaces but you may not always need them:
 
 
 ```csharp
