Commit b8ffd2a

Update fine-tune-phi-3.md
1 parent d90d46c commit b8ffd2a

File tree: 1 file changed (+7 −5 lines)


articles/ai-studio/how-to/fine-tune-phi-3.md

Lines changed: 7 additions & 5 deletions
```diff
@@ -21,7 +21,7 @@ The Phi-3 family of SLMs is a collection of instruction-tuned generative text mo
 
 [!INCLUDE [models-preview](../includes/models-preview.md)]
 
-## [Phi-3-mini](#tab/phi-3-mini)
+# [Phi-3-mini](#tab/phi-3-mini)
 
 
 Phi-3 Mini is a 3.8B-parameter, lightweight, state-of-the-art open model built upon datasets used for Phi-2 - synthetic data and filtered websites - with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each variant can support.
@@ -31,7 +31,7 @@ Phi-3 Mini is a 3.8B-parameter, lightweight, state-of-the-art open model built
 The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3 Mini-4K-Instruct and Phi-3 Mini-128K-Instruct showcased robust, state-of-the-art performance among models with less than 13 billion parameters.
 
 
-## [Phi-3-medium](#tab/phi-3-medium)
+# [Phi-3-medium](#tab/phi-3-medium)
 Phi-3 Medium is a 14B-parameter, lightweight, state-of-the-art open model. Phi-3-Medium was trained with Phi-3 datasets that include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties.
 
 The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
@@ -42,7 +42,8 @@ The model belongs to the Phi-3 model family, and the Medium version comes in two
 The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4k-Instruct and Phi-3-Medium-128k-Instruct showcased robust, state-of-the-art performance among models with less than 13 billion parameters.
 
 
-## [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3.5)
+
 Phi-3.5-mini-Instruct is a 3.8B-parameter model that enhances multilingual support and reasoning capability, and offers an extended context length of 128K tokens.
 
 Phi-3.5-MoE-Instruct features 16 experts and 6.6B active parameters. This model delivers high performance, reduced latency, multilingual support, and robust safety measures, surpassing the capabilities of larger models while maintaining the efficacy of the Phi models.
@@ -64,7 +65,7 @@ The following models are available in Azure AI Studio for Phi 3 when fine-tuning
 
 Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
 
-## [Phi-3-medium](#tab/phi-3-medium)
+# [Phi-3-medium](#tab/phi-3-medium)
 
 The following models are available in Azure AI Studio for Phi 3 when fine-tuning as a service with pay-as-you-go:
 
@@ -73,7 +74,7 @@ The following models are available in Azure AI Studio for Phi 3 when fine-tuning
 
 Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
 
-## [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3.5)
 
 The following models are available in Azure AI Studio for Phi 3.5 when fine-tuning as a service with pay-as-you-go:
 
@@ -243,6 +244,7 @@ To fine-tune a Phi-3.5 model:
 1. Review your selections and proceed to train your model.
 
 Once your model is fine-tuned, you can deploy the model and use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+
 ---
 
 ## Cleaning up your fine-tuned models
```
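The heading changes in this commit reflect the tabbed-conceptual convention used by Microsoft Learn markdown: each tab in a group is introduced by a single-hash heading of the form `# [Tab title](#tab/tab-anchor)`, and the group is terminated by a `---` line, which is why the `##` headings were demoted to `#` and a closing `---` was added. A minimal sketch of that tab syntax (the tab titles here are illustrative, not from the commit):

```markdown
# [Phi-3-mini](#tab/phi-3-mini)

Content rendered when the Phi-3-mini tab is selected.

# [Phi-3-medium](#tab/phi-3-medium)

Content rendered when the Phi-3-medium tab is selected.

---
```

With `##` headings, the docs renderer treats these lines as ordinary section headings rather than tab labels, so the tab group does not render; the single-hash form is required for the tab control to appear.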
