
Commit c02db2b ("fix warning")

1 parent 72c5e37 commit c02db2b

File tree

3 files changed: +9, -9 lines changed

articles/ai-services/openai/how-to/fine-tuning-vision.md

Lines changed: 7 additions & 7 deletions

@@ -1,7 +1,7 @@
 ---
-title: 'Customize a model with Azure OpenAI Service'
+title: 'Vision customization'
 titleSuffix: Azure OpenAI
-description: Learn how to create your own customized model with Azure OpenAI Service by using Python, the REST APIs, or Azure AI Foundry portal.
+description: Learn how to fine-tune a model with image inputs.
 #services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
@@ -13,17 +13,17 @@ ms.author: mbullwin
 zone_pivot_groups: openai-fine-tuning
 ---

-## Vision fine-tuning
+# Vision fine-tuning

 Fine-tuning is also possible with images in your JSONL files. Just as you can send one or many image inputs to chat completions, you can include those same message types within your training data. Images can be provided either as publicly accessible URLs or data URIs containing [base64 encoded images](/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest#call-the-chat-completion-apis).

-### Image dataset requirements
+## Image dataset requirements

 - Your training file can contain a maximum of 50,000 examples that contain images (not including text examples).
 - Each example can have at most 64 images.
 - Each image can be at most 10 MB.

-### Format
+## Format

 Images must be:
@@ -37,7 +37,7 @@ You cannot include images as output from messages with the assistant role.

 As with all fine-tuning training your example file requires at least 10 examples.

-#### Example file format
+### Example file format

 ```json
 {
@@ -59,7 +59,7 @@ As with all fine-tuning training your example file requires at least 10 examples
 ```


-### Content moderation policy
+## Content moderation policy

 We scan your images before training to ensure that they comply with our usage policy [Transparency Note](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text). This may introduce latency in file validation before fine tuning begins.
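The rewritten page describes training examples that carry images either as public URLs or as base64 data URIs, with limits of 64 images per example and 10 MB per image. The sketch below (not part of this commit; the helper names `image_to_data_uri` and `build_example` are my own) shows one way to assemble and sanity-check such a JSONL line, assuming the standard chat-completions `image_url` content-part shape.

```python
import base64
import json

# Documented limits from the vision fine-tuning page (assumed current).
MAX_IMAGES_PER_EXAMPLE = 64
MAX_IMAGE_BYTES = 10 * 1024 * 1024  # 10 MB

def image_to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URI for an image_url part."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

def build_example(prompt: str, answer: str, images: list[bytes]) -> str:
    """Return one JSONL line: a chat-style example with user text + images."""
    if len(images) > MAX_IMAGES_PER_EXAMPLE:
        raise ValueError(f"at most {MAX_IMAGES_PER_EXAMPLE} images per example")
    for img in images:
        if len(img) > MAX_IMAGE_BYTES:
            raise ValueError("each image can be at most 10 MB")
    content = [{"type": "text", "text": prompt}]
    content += [
        {"type": "image_url", "image_url": {"url": image_to_data_uri(img)}}
        for img in images
    ]
    example = {
        "messages": [
            {"role": "user", "content": content},
            # Per the page, images cannot appear in assistant-role output.
            {"role": "assistant", "content": answer},
        ]
    }
    return json.dumps(example)

# Placeholder bytes stand in for a real encoded image file.
line = build_example("What is shown?", "A red square.", [b"\x89PNG fake bytes"])
```

A real training file would repeat such lines (at least 10 examples, per the page) into one `.jsonl` upload.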

articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ Take a moment to review the fine-tuning workflow for using Azure AI Foundry:

 1. Prepare your training and validation data.
 1. Use the **Fine-tune model** wizard in Azure AI Foundry portal to train your custom model.
-1. [Select a model](#select-the-base-model).
+1. Select a model to fine-tune.
 1. [Choose your training data](#choose-your-training-data).
 1. Optionally, [choose your validation data](#choose-your-validation-data).
 1. Optionally, [configure your parameters](#configure-your-parameters) for your fine-tuning job.

articles/ai-services/openai/whats-new.md

Lines changed: 1 addition & 1 deletion

@@ -92,7 +92,7 @@ To learn more about the advanced `o1` series models see, [getting started with o

 ### Preference fine-tuning (preview)

-[Direct preference optimization (DPO)](./how-to/fine-tuning.md#direct-preference-optimization-dpo-preview) is a new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike reinforcement learning from human feedback (RLHF), DPO does not require fitting a reward model and uses simpler data (binary preferences) for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. DPO is especially useful in scenarios where subjective elements like tone, style, or specific content preferences are important. We’re excited to announce the public preview of DPO in Azure OpenAI Service, starting with the `gpt-4o-2024-08-06` model.
+[Direct preference optimization (DPO)](./how-to/fine-tuning-direct-preference-optimization.md) is a new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike reinforcement learning from human feedback (RLHF), DPO does not require fitting a reward model and uses simpler data (binary preferences) for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. DPO is especially useful in scenarios where subjective elements like tone, style, or specific content preferences are important. We’re excited to announce the public preview of DPO in Azure OpenAI Service, starting with the `gpt-4o-2024-08-06` model.

 For fine-tuning model region availability, see the [models page](./concepts/models.md#fine-tuning-models).
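As background for the DPO paragraph above (not part of this commit): the technique's standard objective, from the original DPO literature, trains the policy directly on binary preferences instead of a learned reward. For a prompt $x$ with preferred response $y_w$ and dispreferred response $y_l$:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\!\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here $\pi_\theta$ is the model being fine-tuned, $\pi_{\mathrm{ref}}$ a frozen reference model, $\sigma$ the logistic function, and $\beta$ a temperature controlling deviation from the reference; this is why no separate reward model is fit.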
