
Commit 08cfa68

amend tab ID
1 parent 3d6fac6 commit 08cfa68

File tree

1 file changed: +4 −4 lines


articles/ai-studio/how-to/deploy-models-phi-3.md

Lines changed: 4 additions & 4 deletions
@@ -46,7 +46,7 @@ The following models are available:
 * [Phi-3-medium-128k-Instruct](https://aka.ms/azureai/landing/Phi-3-medium-128k-Instruct)


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)

 Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a focus on high quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters, and is a dense decoder-only transformer model using the same tokenizer as Phi-3 Mini.

@@ -369,7 +369,7 @@ The following models are available:
 * [Phi-3-medium-128k-Instruct](https://aka.ms/azureai/landing/Phi-3-medium-128k-Instruct)


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)

 Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a focus on high quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters, and is a dense decoder-only transformer model using the same tokenizer as Phi-3 Mini.

@@ -715,7 +715,7 @@ The following models are available:
 * [Phi-3-medium-128k-Instruct](https://aka.ms/azureai/landing/Phi-3-medium-128k-Instruct)


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)

 Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a focus on high quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters, and is a dense decoder-only transformer model using the same tokenizer as Phi-3 Mini.

@@ -1073,7 +1073,7 @@ The following models are available:
 * [Phi-3-medium-128k-Instruct](https://aka.ms/azureai/landing/Phi-3-medium-128k-Instruct)


-# [Phi-3.5](#tab/phi-3.5)
+# [Phi-3.5](#tab/phi-3-5)

 Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a focus on high quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters, and is a dense decoder-only transformer model using the same tokenizer as Phi-3 Mini.

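For context (this rationale is not stated in the commit itself): in the docs tabbed-content syntax used by files like this one, tab IDs in the `#tab/...` anchor are, to my understanding, restricted to lowercase letters, numbers, and hyphens, which would be why `#tab/phi-3.5` (with a period) is amended to `#tab/phi-3-5` in each tab group. A minimal sketch of a tab group under that assumption:

```markdown
# [Phi-3](#tab/phi-3)

Content shown when the Phi-3 tab is selected.

# [Phi-3.5](#tab/phi-3-5)

Content shown when the Phi-3.5 tab is selected.

---
```

The display text in brackets can keep the period; only the ID after `#tab/` needs to avoid it, and using the same ID in every tab group on the page keeps tab selection in sync, which is why the same one-character fix recurs at all four locations.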
