Commit 5af1161

Merge pull request #1070 from ssalgadodev/patch-16
Update concept-model-distillation.md
2 parents bb677fb + 5c340ab commit 5af1161

File tree

1 file changed (+6, −3 lines)


articles/ai-studio/concepts/concept-model-distillation.md

Lines changed: 6 additions & 3 deletions
```diff
@@ -1,5 +1,5 @@
 ---
-title: Distillation in AI Studio
+title: Distillation in AI Studio (preview)
 titleSuffix: Azure AI Studio
 description: Learn how to do distillation in Azure AI Studio.
 manager: scottpolly
@@ -13,7 +13,9 @@ author: ssalgadodev
 ms.custom: references_regions
 ---
 
-# Distillation in Azure AI Studio
+# Distillation in Azure AI Studio (preview)
+
+[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
 In Azure AI Studio, you can use distillation to efficiently train a student model.
 
@@ -31,7 +33,8 @@ The main steps in knowledge distillation are:
 
 ## Sample notebook
 
-You can use the [sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/distillation) to see how to perform distillation. In this sample notebook, the teacher model uses the Meta Llama 3.1 405B instruction model, and the student model uses the Meta Llama 3.1 8B instruction model.
+Distillation in AI Studio is currently only available through a notebook experience. You can use the [sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/distillation) to see how to perform distillation. Model distillation is available for Microsoft models and a selection of OSS (open-source software) models available in the model catalog. In this sample notebook, the teacher model uses the Meta Llama 3.1 405B instruction model, and the student model uses the Meta Llama 3.1 8B instruction model.
+
 
 
 We used an advanced prompt during synthetic data generation. The advanced prompt incorporates chain-of-thought (CoT) reasoning, which results in higher-accuracy data labels in the synthetic data. This labeling further improves the accuracy of the distilled model.
```
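For context, the synthetic-data-generation step that the edited article describes (a teacher model labels prompts via a chain-of-thought prompt, and only the final labels become the student's training data) can be sketched roughly as below. This is a minimal illustration, not code from the sample notebook: `COT_PROMPT`, `call_teacher`, and `extract_label` are hypothetical names, and the stubbed teacher response stands in for a real call to a hosted teacher model such as Meta Llama 3.1 405B Instruct.

```python
# Hypothetical sketch of distillation data generation with a CoT prompt.
import json

# A CoT-style prompt: the reasoning improves label accuracy, but only the
# final label is kept for the student's training set.
COT_PROMPT = (
    "Answer the question. Think step by step, then give the final answer "
    "on the last line as 'Answer: <label>'.\n\nQuestion: {question}"
)

def call_teacher(prompt: str) -> str:
    # Stub standing in for a real teacher-model endpoint call.
    return "The review praises the product, so the sentiment is positive.\nAnswer: positive"

def extract_label(completion: str) -> str:
    # Keep only the final label; discard the chain-of-thought reasoning.
    return completion.rsplit("Answer:", 1)[-1].strip()

def build_training_set(questions: list) -> list:
    rows = []
    for q in questions:
        completion = call_teacher(COT_PROMPT.format(question=q))
        rows.append({"prompt": q, "label": extract_label(completion)})
    return rows

rows = build_training_set(["Is this review positive or negative? 'Great phone!'"])
print(json.dumps(rows[0]))
```

The resulting prompt/label rows would then be used to fine-tune the smaller student model.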
