Commit 86db219
Merge pull request #4663 from voutilad/ft-dev-tier
Initial pass at introducing Developer Tier for FT AOAI models.
2 parents d09db52 + 0212f16

8 files changed: +239 −88 lines changed

articles/ai-services/openai/how-to/deployment-types.md

Lines changed: 12 additions & 1 deletion
@@ -13,6 +13,8 @@ ms.author: mbullwin
Azure OpenAI provides customers with choices on the hosting structure that fits their business and usage patterns. The service offers two main types of deployments: **standard** and **provisioned**. For a given deployment type, customers can align their workloads with their data processing requirements by choosing an Azure geography (`Standard` or `Provisioned-Managed`), a Microsoft-specified data zone (`DataZone-Standard` or `DataZone Provisioned-Managed`), or Global (`Global-Standard` or `Global Provisioned-Managed`) processing options.

For fine-tuned models, an additional `Developer` deployment type provides a cost-efficient means of custom model evaluation, but without data residency.

All deployments can perform the exact same inference operations; however, the billing, scale, and performance are substantially different. As part of your solution design, you need to make two key decisions:

- **Data processing location**
@@ -146,9 +148,18 @@ You can use the following policy to disable access to any Azure OpenAI deploymen
}
```

## Developer (for fine-tuned models)

> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `Developer`

Fine-tuned models support a Developer deployment specifically designed to support custom model evaluation. It offers no data residency guarantees, nor does it offer an SLA. To learn more about using the Developer deployment type, see the [fine-tuning guide](./fine-tune-test.md).
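To show where the SKU name fits, here is a minimal sketch of an ARM deployment request body using the `Developer` SKU. This is an illustration, not text from the article; the model name below is a hypothetical fine-tuned model ID.

```python
import json

# Sketch of an ARM deployment request body for a Developer deployment.
# The model name is a hypothetical fine-tuned model ID, shown for illustration.
deployment_body = {
    "sku": {"name": "Developer", "capacity": 50},
    "properties": {
        "model": {
            "format": "OpenAI",
            "name": "gpt41-mini-candidate-01.ft-b044a9d3cf9c4228b5d393567f693b83",
            "version": "1",
        }
    },
}

print(json.dumps(deployment_body, indent=2))
```

The `sku.name` field is where `Developer` replaces the `standard` or `developer`-style SKU value used by other deployment types.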
## Deploy models
- :::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Foundry portal with three deployment types highlighted." lightbox="../media/deployment-types/deploy-models-new.png":::
+ :::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Foundry portal with three deployment types highlighted.":::

To learn about creating resources and deploying models, see the [resource creation guide](./create-resource.md).

articles/ai-services/openai/how-to/fine-tune-test.md

Lines changed: 214 additions & 0 deletions
@@ -0,0 +1,214 @@
---
title: 'Test a fine-tuned model'
titleSuffix: Azure OpenAI
description: Learn how to test your fine-tuned model with Azure OpenAI Service by using Python, the REST APIs, or Azure AI Foundry portal.
manager: nitinme
ms.service: azure-ai-openai
ms.custom: build-2025
ms.topic: how-to
ms.date: 05/20/2025
author: voutilad
ms.author: davevoutila
---
# Deploy a fine-tuned model for testing (Preview)

After you've fine-tuned a model, you may want to test its quality via the Chat Completions API or the [Evaluations](./evaluations.md) service.

A Developer Tier deployment allows you to deploy your new model without the hourly hosting fee incurred by Standard or Global deployments. The only charges incurred are per-token. Consult the [pricing page](https://aka.ms/aoaipricing) for the most up-to-date pricing.

> [!IMPORTANT]
> Developer Tier offers no availability SLA and no [data residency](https://aka.ms/data-residency) guarantees. If you require an SLA or data residency, choose an alternative [deployment type](./deployment-types.md) for testing your model.
>
> Developer Tier deployments have a fixed lifetime of **24 hours**. Learn more [below](#clean-up-your-deployment) about the deployment lifecycle.
## Deploy your fine-tuned model

## [Portal](#tab/portal)

To deploy your model candidate, select the fine-tuned model to deploy, and then select **Deploy**.

The **Deploy model** dialog box opens. In the dialog box, enter your **Deployment name** and then select **Developer** from the deployment type drop-down. Select **Create** to start the deployment of your custom model.

:::image type="content" source="../media/fine-tuning/developer.png" alt-text="Screenshot showing selecting Developer deployment in AI Foundry.":::

You can monitor the progress of your new deployment on the **Deployments** pane in the Azure AI Foundry portal.
## [Python](#tab/python)

```python
import json
import os
import requests

token = os.getenv("TOKEN")  # assumes you stored your temporary authorization token in the TOKEN environment variable
subscription = "<YOUR_SUBSCRIPTION_ID>"
resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
model_deployment_name = "gpt41-mini-candidate-01"  # custom deployment name that you'll use to reference the model when making inference calls

deploy_params = {'api-version': "2024-10-21"}
deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}

deploy_data = {
    "sku": {"name": "developer", "capacity": 50},
    "properties": {
        "model": {
            "format": "OpenAI",
            "name": "<FINE_TUNED_MODEL>",  # retrieve this value from the previous call; it will look like gpt41-mini-candidate-01.ft-b044a9d3cf9c4228b5d393567f693b83
            "version": "1"
        }
    }
}
deploy_data = json.dumps(deploy_data)

request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'

print('Creating a new deployment...')

r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)

print(r)
print(r.reason)
print(r.json())
```
77+
|variable | Definition|
78+
|--------------|-----------|
79+
| token | There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account#az-account-get-access-token()). You can use this token as your temporary authorization token for API testing. We recommend storing this in a new environment variable. |
80+
| subscription | The subscription ID for the associated Azure OpenAI resource. |
81+
| resource_group | The resource group name for your Azure OpenAI resource. |
82+
| resource_name | The Azure OpenAI resource name. |
83+
| model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
84+
| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt41-mini-candidate-01.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
85+
86+
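Deployment creation is asynchronous, so the `PUT` may return before the deployment is ready to serve traffic. The following sketch waits for provisioning to settle by polling the same deployment URL; it assumes the `request_url`, `deploy_params`, and `deploy_headers` variables from the Python example, and the polling interval and the set of transitional states are assumptions, not an official list.

```python
import time
import requests

def wait_for_deployment(request_url, params, headers, timeout_s=600, interval_s=15):
    """Poll the ARM deployment resource until provisioningState leaves a transitional state."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        body = requests.get(request_url, params=params, headers=headers).json()
        state = body.get("properties", {}).get("provisioningState")
        if state not in (None, "Creating", "Accepted"):
            return state  # typically "Succeeded" or "Failed"
        time.sleep(interval_s)
    raise TimeoutError("Deployment did not finish provisioning in time")
```

Call it as `wait_for_deployment(request_url, deploy_params, deploy_headers)` after the `PUT` above and proceed only when it returns `"Succeeded"`.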
## [REST](#tab/rest)

The following example shows how to use the REST API to create a model deployment for your customized model. You specify the name of the deployment in the request URL.

```bash
curl -X PUT "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "sku": {"name": "developer", "capacity": 50},
    "properties": {
        "model": {
            "format": "OpenAI",
            "name": "<FINE_TUNED_MODEL>",
            "version": "1"
        }
    }
}'
```
|Variable | Definition|
|--------------|-----------|
| token | There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account#az-account-get-access-token()). You can use this token as your temporary authorization token for API testing. We recommend storing this in a new environment variable. |
| subscription | The subscription ID for the associated Azure OpenAI resource. |
| resource_group | The resource group name for your Azure OpenAI resource. |
| resource_name | The Azure OpenAI resource name. |
| model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that's referenced in your code when making chat completion calls. |
| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt41-mini-candidate-01.ft-b044a9d3cf9c4228b5d393567f693b83`. Add that value to the request body. Alternatively, you can deploy a checkpoint by passing the checkpoint ID, which appears in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`. |
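The identifiers in the table follow a recognizable shape. As a purely illustrative aid (the patterns below are inferred from the example IDs in this article, not an official format specification), a quick sanity check can catch truncated copy/paste values before you send the request:

```python
import re

# Patterns inferred from the example IDs in the tables above; illustrative only,
# not an official Azure OpenAI identifier specification.
FT_MODEL = re.compile(r"^.+\.ft-[0-9a-f]{32}$")
FT_CHECKPOINT = re.compile(r"^ftchkpt-[0-9a-f]{32}$")

def looks_like_deployable_id(value: str) -> bool:
    """Return True if value resembles a fine-tuned model ID or a checkpoint ID."""
    return bool(FT_MODEL.match(value) or FT_CHECKPOINT.match(value))
```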
### Deploy a model with Azure CLI

The following example shows how to use the Azure CLI to deploy your customized model. With the Azure CLI, you must specify a name for the deployment of your customized model. For more information about how to use the Azure CLI to deploy customized models, see [`az cognitiveservices account deployment`](/cli/azure/cognitiveservices/account/deployment).

To run this Azure CLI command in a console window, you must replace the following _\<placeholders>_ with the corresponding values for your customized model:

| Placeholder | Value |
| --- | --- |
| _\<YOUR_AZURE_SUBSCRIPTION>_ | The name or ID of your Azure subscription. |
| _\<YOUR_RESOURCE_GROUP>_ | The name of your Azure resource group. |
| _\<YOUR_RESOURCE_NAME>_ | The name of your Azure OpenAI resource. |
| _\<YOUR_DEPLOYMENT_NAME>_ | The name you want to use for your model deployment. |
| _\<YOUR_FINE_TUNED_MODEL_ID>_ | The name of your customized model. |

```azurecli
az cognitiveservices account deployment create \
  --resource-group <YOUR_RESOURCE_GROUP> \
  --name <YOUR_RESOURCE_NAME> \
  --deployment-name <YOUR_DEPLOYMENT_NAME> \
  --model-name <YOUR_FINE_TUNED_MODEL_ID> \
  --model-version "1" \
  --model-format OpenAI \
  --sku-capacity "50" \
  --sku-name "Developer"
```

---
## Use your deployed fine-tuned model

## [Portal](#tab/portal)

After your custom model deploys, you can use it like any other deployed model. You can use the **Playgrounds** in the [Azure AI Foundry portal](https://ai.azure.com) to experiment with your new deployment. You can continue to use the same parameters with your custom model, such as `temperature` and `max_tokens`, as you can with other deployed models.

:::image type="content" source="../media/fine-tuning/chat-playground.png" alt-text="Screenshot of the Playground pane in Azure AI Foundry portal, with sections highlighted." lightbox="../media/fine-tuning/chat-playground.png":::

You can also use the [Evaluations](./evaluations.md) service to create and run model evaluations against your deployed model candidate, as well as against other model versions.
## [Python](#tab/python)

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01"
)

response = client.chat.completions.create(
    model="gpt41-mini-candidate-01",  # the custom deployment name you chose for your fine-tuned model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)

print(response.choices[0].message.content)
```
## [REST](#tab/rest)

```bash
curl $AZURE_OPENAI_ENDPOINT/openai/deployments/<deployment_name>/chat/completions?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure AI services support this too?"}]}'
```

---
## Clean up your deployment

Developer deployments auto-delete regardless of activity. Each deployment has a fixed lifetime of **24 hours**, after which it's subject to removal. Deleting a deployment doesn't delete or affect the underlying customized model, and you can redeploy the customized model at any time.

To delete a deployment manually, use the Azure AI Foundry portal or the [Azure CLI](/cli/azure/cognitiveservices/account/deployment?preserve-view=true#az-cognitiveservices-account-deployment-delete).

To use the [Deployments - Delete REST API](/rest/api/aiservices/accountmanagement/deployments/delete?view=rest-aiservices-accountmanagement-2024-10-01&tabs=HTTP&preserve-view=true), send an HTTP `DELETE` request to the deployment resource. As with creating deployments, you must include the following parameters:

- Azure subscription ID
- Azure resource group name
- Azure OpenAI resource name
- Name of the deployment to delete

The following REST API example deletes a deployment:

```bash
curl -X DELETE "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>"
```
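For symmetry with the Python creation example earlier, the same deletion can be sketched with `requests`. This is a sketch under the same assumptions as before (control-plane bearer token, placeholder resource values); it is not an official SDK helper.

```python
import requests

def delete_deployment(token, subscription, resource_group, resource_name,
                      deployment_name, api_version="2024-10-21"):
    """Send an ARM DELETE for the given Azure OpenAI deployment and return the response."""
    request_url = (
        f"https://management.azure.com/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{resource_name}"
        f"/deployments/{deployment_name}"
    )
    return requests.delete(
        request_url,
        params={"api-version": api_version},
        headers={"Authorization": f"Bearer {token}"},
    )

# Usage (placeholders are illustrative):
# delete_deployment("<TOKEN>", "<YOUR_SUBSCRIPTION_ID>", "<YOUR_RESOURCE_GROUP_NAME>",
#                   "<YOUR_AZURE_OPENAI_RESOURCE_NAME>", "gpt41-mini-candidate-01")
```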
## Next steps

- [Deploy for production](./fine-tuning-deploy.md)
- Understand [Azure OpenAI quotas and limits](./quota.md)
- Read more about other [Azure OpenAI deployment types](./deployment-types.md)

articles/ai-services/openai/includes/fine-tuning-python.md

Lines changed: 3 additions & 51 deletions
@@ -274,59 +274,11 @@ Look for your loss to decrease over time, and your accuracy to increase. If you
## Deploy a fine-tuned model

- When the fine-tuning job succeeds, the value of the `fine_tuned_model` variable in the response body is set to the name of your customized model. Your model is now also available for discovery from the [list Models API](/rest/api/azureopenai/models/list). However, you can't issue completion calls to your customized model until your customized model is deployed. You must deploy your customized model to make it available for use with completion calls.
+ Once you're satisfied with the metrics from your fine-tuning job, or you just want to move on to inference, you must deploy the model.

- Unlike the previous SDK commands, deployment must be done using the control plane API, which requires separate authorization, a different API path, and a different API version.
-
- |variable | Definition|
- |--------------|-----------|
- | token | There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account#az-account-get-access-token()). You can use this token as your temporary authorization token for API testing. We recommend storing this in a new environment variable. |
- | subscription | The subscription ID for the associated Azure OpenAI resource. |
- | resource_group | The resource group name for your Azure OpenAI resource. |
- | resource_name | The Azure OpenAI resource name. |
- | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
- | fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
-
- ```python
- import json
- import os
- import requests
-
- token = os.getenv("<TOKEN>")
- subscription = "<YOUR_SUBSCRIPTION_ID>"
- resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
- resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
- model_deployment_name = "gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.
-
- deploy_params = {'api-version': "2024-10-01"} # control plane API version rather than dataplane API for this call
- deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
-
- deploy_data = {
-     "sku": {"name": "standard", "capacity": 1},
-     "properties": {
-         "model": {
-             "format": "OpenAI",
-             "name": "<fine_tuned_model>", # retrieve this value from the previous call; it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
-             "version": "1"
-         }
-     }
- }
- deploy_data = json.dumps(deploy_data)
-
- request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'
-
- print('Creating a new deployment...')
-
- r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
-
- print(r)
- print(r.reason)
- print(r.json())
- ```
-
- Learn more about cross region deployment and use the deployed model [here](../how-to/fine-tuning-deploy.md#use-your-deployed-fine-tuned-model).
+ If you're deploying for further validation, consider deploying for [testing](../how-to/fine-tune-test.md?tabs=python) using a Developer deployment.

+ If you're ready to deploy for production or have particular data residency needs, follow our [deployment guide](../how-to/fine-tuning-deploy.md?tabs=python).
## Continuous fine-tuning
