
Commit c60f4b7

Update fine-tune-phi-3.md
1 parent fd9ee30 commit c60f4b7

File tree

1 file changed: +8 -27 lines changed

articles/ai-studio/how-to/fine-tune-phi-3.md

Lines changed: 8 additions & 27 deletions
@@ -25,8 +25,8 @@ The Phi-3 family of SLMs is a collection of instruction-tuned generative text mo

 Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model built upon datasets used for Phi-2 - synthetic data and filtered websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-3 model family, and the Mini version comes in two variants 4K and 128K which is the context length (in tokens) it can support.

-- [Phi-3-mini-4k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-4k-instruct/version/4/registry/azureml)
-- [Phi-3-mini-128k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-128k-instruct/version/4/registry/azureml)
+- [Phi-3-mini-4k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-4k-instruct/version/4/registry/azureml) (preview)
+- [Phi-3-mini-128k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-128k-instruct/version/4/registry/azureml) (preview)

 The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct and Phi-3 Mini-128K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.

@@ -36,44 +36,25 @@ Phi-3 Medium is a 14B parameters, lightweight, state-of-the-art open model. Phi-

 The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.

-- Phi-3-medium-4k-Instruct
-- Phi-3-medium-128k-Instruct
+- Phi-3-medium-4k-Instruct (preview)
+- Phi-3-medium-128k-Instruct (preview)

 The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4k-Instruct and Phi-3-Medium-128k-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)


 Phi-3.5-mini-Instruct is a 3.8B parameter model enhances multi-lingual support, reasoning capability, and offers an extended context length of 128K tokens

 Phi-3.5-MoE-Instruct. Featuring 16 experts and 6.6B active parameters, this model delivers high performance, reduced latency, multi-lingual support, and robust safety measures, surpassing the capabilities of larger models while maintaining the efficacy of the Phi models.

-- Phi-3.5-mini-Instruct
-- Phi-3.5-MoE-Instruct
+- Phi-3.5-mini-Instruct (preview)
+- Phi-3.5-MoE-Instruct (preview)

 The models underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3.5-mini-Instruct and Phi-3.5-MoE-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.


-## [Phi-3-mini](#tab/phi-3-mini)
-
-The following models are available in Azure AI Studio for Phi 3 when fine-tuning as a service with pay-as-you-go:
-
-- `Phi-3-mini-4k-instruct` (preview)
-- `Phi-3-mini-128k-instruct` (preview)
-
-Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
-
-# [Phi-3-medium](#tab/phi-3-medium)
-
-The following models are available in Azure AI Studio for Phi 3 when fine-tuning as a service with pay-as-you-go:
-
-- `Phi-3-medium-4k-instruct` (preview)
-- `Phi-3-medium-128k-instruct` (preview)
-
-Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
-
-
 ---

 ### Prerequisites
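The removed lines above cover fine-tuning Phi-3 as a service with pay-as-you-go billing. Chat fine-tuning jobs of this kind are typically fed JSONL training data with one chat example per line; the sketch below only illustrates that common layout, and the `messages` schema is a convention assumed here rather than anything confirmed by this commit, so check the fine-tuning article for the exact format the service expects.

```python
import json

# Hypothetical training examples using the common chat-style layout
# (one "messages" list per example). Field names are an assumption,
# not taken from this commit.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and select Reset password."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "Where can I download my invoices?"},
            {"role": "assistant", "content": "Invoices are listed under Billing > Invoice history."},
        ]
    },
]

# Write one JSON object per line (JSONL), the shape commonly uploaded as training data.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```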
@@ -199,7 +180,7 @@ To fine-tune a Phi-3 model:

 Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)


 To fine-tune a Phi-3.5 model:

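The context line above notes that a fine-tuned model can be deployed and then called from your own application. A minimal sketch of such a call follows; the endpoint URL, the `/v1/chat/completions` route, and the Bearer authorization header are placeholder assumptions rather than values from this commit, so substitute the endpoint details shown for your own deployment in Azure AI Studio.

```python
import requests

# Placeholder values - copy the real URL and key from your deployment's details page.
ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com/v1/chat/completions"  # assumed route
API_KEY = "<your-endpoint-key>"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Phi-3 model family in one sentence."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

# The auth scheme can differ by deployment type; a Bearer token is assumed here.
response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Response shape is assumed to follow the common chat-completions format.
print(response.json()["choices"][0]["message"]["content"])
```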