Commit 9a6cf8c

Merge pull request #275010 from MicrosoftDocs/main

5/10/2024 AM Publish

2 parents: 1e4403a + fdb9cff

File tree

95 files changed (+1925, −1155 lines)

articles/ai-services/openai/how-to/fine-tuning.md

10 additions, 5 deletions:

```diff
@@ -7,10 +7,10 @@ manager: nitinme
 ms.service: azure-ai-openai
 ms.custom: build-2023, build-2023-dataai, devx-track-python
 ms.topic: how-to
-ms.date: 02/22/2024
+ms.date: 05/03/2024
 author: mrbullwinkle
 ms.author: mbullwin
-zone_pivot_groups: openai-fine-tuning
+zone_pivot_groups: openai-fine-tuning-new
 ---
 
 # Customize a model with fine-tuning
@@ -26,10 +26,15 @@ In contrast to few-shot learning, fine tuning improves the model by training on
 
 We use LoRA, or low rank approximation, to fine-tune models in a way that reduces their complexity without significantly affecting their performance. This method works by approximating the original high-rank matrix with a lower rank one, thus only fine-tuning a smaller subset of "important" parameters during the supervised training phase, making the model more manageable and efficient. For users, this makes training faster and more affordable than other techniques.
 
-
 ::: zone pivot="programming-language-studio"
 
-[!INCLUDE [Studio fine-tuning](../includes/fine-tuning-studio.md)]
+[!INCLUDE [Azure OpenAI Studio fine-tuning](../includes/fine-tuning-studio.md)]
+
+::: zone-end
+
+::: zone pivot="programming-language-ai-studio"
+
+[!INCLUDE [AI Studio fine-tuning](../includes/fine-tuning-openai-in-ai-studio.md)]
 
 ::: zone-end
 
@@ -65,8 +70,8 @@ If your file upload fails, you can view the error message under "data files"
 
 - **Bad data:** A poorly curated or unrepresentative dataset will produce a low-quality model. Your model may learn inaccurate or biased patterns from your dataset. For example, if you are training a chatbot for customer service, but only provide training data for one scenario (e.g. item returns) it will not know how to respond to other scenarios. Or, if your training data is bad (contains incorrect responses), your model will learn to provide incorrect results.
 
-
 ## Next steps
 
 - Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md).
 - Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models)
+- Learn more about [Azure OpenAI quotas](../quotas-limits.md)
```
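The low-rank approximation idea the changed article describes can be sketched in a few lines of NumPy: instead of updating a full weight matrix `W`, LoRA trains a low-rank product `B @ A`, so far fewer parameters change. The layer size and rank below are illustrative, not Azure OpenAI's actual configuration.

```python
# Minimal LoRA sketch: train only a rank-r update B @ A on top of a
# frozen weight matrix W. Shapes and rank are illustrative assumptions.
import numpy as np

d_out, d_in, r = 512, 512, 8           # illustrative layer size and rank

W = np.random.randn(d_out, d_in)       # frozen pretrained weights
A = np.random.randn(r, d_in) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection (starts at 0,
                                       # so training begins at the base model)

def adapted_forward(x):
    """Forward pass with the low-rank update applied: (W + B @ A) @ x."""
    return W @ x + B @ (A @ x)

full_params = W.size                   # 262144 params for a full fine-tune
lora_params = A.size + B.size          # 8192 params for LoRA, ~3% of full
print(f"full fine-tune params: {full_params}")
print(f"LoRA params:           {lora_params}")
```

Because `B` starts at zero, the adapted layer initially computes exactly `W @ x`; training then moves only `A` and `B`, which is why the article can claim faster, cheaper training without touching the full weight matrix.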

articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md

284 additions, 0 deletions (large diffs are not rendered by default).

Seven image files also changed (147 KB, 408 KB, 139 KB, 200 KB, 201 KB, 152 KB, 129 KB); previews are not rendered here.

articles/ai-services/speech-service/spx-basics.md

1 addition, 1 deletion:

````diff
@@ -165,7 +165,7 @@ spx translate --microphone --source en-US --target ru-RU
 When you're translating into multiple languages, separate the language codes with a semicolon (`;`).
 
 ```console
-spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
+spx translate --microphone --source en-US --target 'ru-RU;fr-FR;es-ES'
 ```
 
 If you want to save the output of your translation, use the `--output` flag. In this example, you also read from a file.
````
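This change quotes the target list because in POSIX-style shells an unquoted `;` terminates the command, so only the first language code would reach `spx` and the shell would try to run `fr-FR` and `es-ES` as separate commands. A quick way to see the difference, using `printf` as a stand-in for any CLI:

```shell
# printf stands in here for an arbitrary CLI such as spx; the point is how
# the shell parses the argument, not what the program does with it.

# Quoted: the whole list arrives as one argument.
printf 'target=%s\n' 'ru-RU;fr-FR;es-ES'
# prints: target=ru-RU;fr-FR;es-ES

# Unquoted (don't do this): the shell splits the line at each ';' and
# attempts to execute fr-FR and es-ES as commands of their own.
```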
