
Commit 2f2c21d

Merge pull request #272031 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents b96cac6 + b8a60fd commit 2f2c21d

File tree: 5 files changed (+69 −53 lines)

- articles/ai-services/openai/gpt-v-quickstart.md
- articles/ai-services/openai/includes/get-key-endpoint.md
- articles/ai-services/openai/tutorials/fine-tune.md
- articles/azure-app-configuration/quickstart-java-spring-app.md
- articles/communication-services/how-tos/call-automation/custom-context.md

articles/ai-services/openai/gpt-v-quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: 'Quickstart: Use GPT-4 Turbo with Vision on your images and videos with the Azure Open AI Service'
+title: 'Quickstart: Use GPT-4 Turbo with Vision on your images and videos with the Azure OpenAI Service'
 titleSuffix: Azure OpenAI
 description: Use this article to get started using Azure OpenAI to deploy and use the GPT-4 Turbo with Vision model.
 services: cognitive-services

articles/ai-services/openai/includes/get-key-endpoint.md

Lines changed: 1 addition & 1 deletion
@@ -20,6 +20,6 @@ To successfully make a call against Azure OpenAI, you need an **endpoint** and a
 | `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com/`.|
 | `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|

-Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
+Go to your resource in the Azure portal. The **Keys & Endpoint** section can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.

 :::image type="content" source="../media/quickstarts/endpoint.png" alt-text="Screenshot of the overview UI for an Azure OpenAI resource in the Azure portal with the endpoint and access keys location circled in red." lightbox="../media/quickstarts/endpoint.png":::
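
As a side note for reviewers of this hunk, a minimal sketch of how the copied endpoint and key are typically consumed follows. It is not part of the commit: it assumes the values were saved to the `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` environment variables (the same names used by the fine-tuning tutorial in this commit) and that the OpenAI Python library 1.x is installed; the deployment name is a placeholder.

```python
# Sketch only (not part of this commit): read the endpoint and key the article
# tells you to copy, assuming they were stored in environment variables.
import os

from openai import AzureOpenAI  # requires the openai package, version 1.x

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),  # e.g. https://docs-test-001.openai.azure.com/
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),          # KEY1 or KEY2 both work
    api_version="2024-02-01",
)

# Placeholder deployment name; replace with a deployment that exists in your resource.
response = client.chat.completions.create(
    model="my-gpt-35-turbo-deployment",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```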

articles/ai-services/openai/tutorials/fine-tune.md

Lines changed: 59 additions & 43 deletions
@@ -7,7 +7,7 @@ manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: tutorial
 ms.date: 10/16/2023
-author: mrbullwinkle
+author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
 ms.custom:
@@ -29,15 +29,15 @@ In this tutorial you learn how to:

 ## Prerequisites

-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- Access granted to Azure OpenAI in the desired Azure subscription Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.
+- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
+- Access granted to Azure OpenAI in the desired Azure subscription Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.
 - Python 3.8 or later version
-- The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`.
+- The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`, `numpy`.
 - The OpenAI Python library should be at least version: `0.28.1`.
 - [Jupyter Notebooks](https://jupyter.org/)
 - An Azure OpenAI resource in a [region where `gpt-35-turbo-0613` fine-tuning is available](../concepts/models.md). If you don't have a resource the process of creating one is documented in our resource [deployment guide](../how-to/create-resource.md).
 - Fine-tuning access requires **Cognitive Services OpenAI Contributor**.
-- If you do not already have access to view quota, and deploy models in Azure OpenAI Studio you will require [additional permissions](../how-to/role-based-access-control.md).
+- If you do not already have access to view quota, and deploy models in Azure OpenAI Studio you will require [additional permissions](../how-to/role-based-access-control.md).


 > [!IMPORTANT]
@@ -50,7 +50,7 @@ In this tutorial you learn how to:
 # [OpenAI Python 1.x](#tab/python-new)

 ```cmd
-pip install openai requests tiktoken
+pip install openai requests tiktoken numpy
 ```

 # [OpenAI Python 0.28.1](#tab/python)
@@ -60,7 +60,7 @@ pip install openai requests tiktoken
 If you haven't already, you need to install the following libraries:

 ```cmd
-pip install "openai==0.28.1" requests tiktoken
+pip install "openai==0.28.1" requests tiktoken numpy
 ```

 ---
@@ -72,11 +72,11 @@ pip install "openai==0.28.1" requests tiktoken
 # [Command Line](#tab/command-line)

 ```CMD
-setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
 ```

 ```CMD
-setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
+setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
 ```

 # [PowerShell](#tab/powershell)
@@ -159,6 +159,8 @@ Create the files in the same directory that you're running the Jupyter Notebook,
 Now you need to run some preliminary checks on our training and validation files.

 ```python
+# Run preliminary checks
+
 import json

 # Load the training set
@@ -203,6 +205,8 @@ In this case we only have 10 training and 10 validation examples so while this w
 Now you can then run some additional code from OpenAI using the tiktoken library to validate the token counts. Individual examples need to remain under the `gpt-35-turbo-0613` model's input token limit of 4096 tokens.

 ```python
+# Validate token counts
+
 import json
 import tiktoken
 import numpy as np
@@ -248,7 +252,7 @@ for file in files:
 messages = ex.get("messages", {})
 total_tokens.append(num_tokens_from_messages(messages))
 assistant_tokens.append(num_assistant_tokens_from_messages(messages))
-
+
 print_distribution(total_tokens, "total tokens")
 print_distribution(assistant_tokens, "assistant tokens")
 print('*' * 50)
@@ -294,9 +298,9 @@ import os
 from openai import AzureOpenAI

 client = AzureOpenAI(
-azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
-api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-api_version="2024-02-01" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+api_key = os.getenv("AZURE_OPENAI_API_KEY"),
+api_version = "2024-02-01" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
 )

 training_file_name = 'training_set.jsonl'
@@ -305,12 +309,12 @@ validation_file_name = 'validation_set.jsonl'
 # Upload the training and validation dataset files to Azure OpenAI with the SDK.

 training_response = client.files.create(
-file=open(training_file_name, "rb"), purpose="fine-tune"
+file = open(training_file_name, "rb"), purpose="fine-tune"
 )
 training_file_id = training_response.id

 validation_response = client.files.create(
-file=open(validation_file_name, "rb"), purpose="fine-tune"
+file = open(validation_file_name, "rb"), purpose="fine-tune"
 )
 validation_file_id = validation_response.id

@@ -322,10 +326,11 @@ print("Validation file ID:", validation_file_id)

 ```Python
 # Upload fine-tuning files
+
 import openai
 import os

-openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
 openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
 openai.api_type = 'azure'
 openai.api_version = '2024-02-01' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
@@ -336,12 +341,12 @@ validation_file_name = 'validation_set.jsonl'
 # Upload the training and validation dataset files to Azure OpenAI with the SDK.

 training_response = openai.File.create(
-file=open(training_file_name, "rb"), purpose="fine-tune", user_provided_filename="training_set.jsonl"
+file = open(training_file_name, "rb"), purpose="fine-tune", user_provided_filename="training_set.jsonl"
 )
 training_file_id = training_response["id"]

 validation_response = openai.File.create(
-file=open(validation_file_name, "rb"), purpose="fine-tune", user_provided_filename="validation_set.jsonl"
+file = open(validation_file_name, "rb"), purpose="fine-tune", user_provided_filename="validation_set.jsonl"
 )
 validation_file_id = validation_response["id"]

@@ -365,10 +370,12 @@ Now that the fine-tuning files have been successfully uploaded you can submit yo
 # [OpenAI Python 1.x](#tab/python-new)

 ```python
+# Submit fine-tuning training job
+
 response = client.fine_tuning.jobs.create(
-training_file=training_file_id,
-validation_file=validation_file_id,
-model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+training_file = training_file_id,
+validation_file = validation_file_id,
+model = "gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
 )

 job_id = response.id
@@ -385,10 +392,12 @@ print(response.model_dump_json(indent=2))
 # [OpenAI Python 0.28.1](#tab/python)

 ```python
+# Submit fine-tuning training job
+
 response = openai.FineTuningJob.create(
-training_file=training_file_id,
-validation_file=validation_file_id,
-model="gpt-35-turbo-0613",
+training_file = training_file_id,
+validation_file = validation_file_id,
+model = "gpt-35-turbo-0613",
 )

 job_id = response["id"]
@@ -446,7 +455,7 @@ status = response.status
 # If the job isn't done yet, poll it every 10 seconds.
 while status not in ["succeeded", "failed"]:
 time.sleep(10)
-
+
 response = client.fine_tuning.jobs.retrieve(job_id)
 print(response.model_dump_json(indent=2))
 print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
@@ -480,7 +489,7 @@ status = response["status"]
 # If the job isn't done yet, poll it every 10 seconds.
 while status not in ["succeeded", "failed"]:
 time.sleep(10)
-
+
 response = openai.FineTuningJob.retrieve(job_id)
 print(response)
 print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
@@ -531,7 +540,7 @@ To get the full results, run the following:
 # [OpenAI Python 1.x](#tab/python-new)

 ```python
-#Retrieve fine_tuned_model name
+# Retrieve fine_tuned_model name

 response = client.fine_tuning.jobs.retrieve(job_id)

@@ -542,7 +551,7 @@ fine_tuned_model = response.fine_tuned_model
 # [OpenAI Python 0.28.1](#tab/python)

 ```python
-#Retrieve fine_tuned_model name
+# Retrieve fine_tuned_model name

 response = openai.FineTuningJob.retrieve(job_id)

@@ -571,20 +580,22 @@ Alternatively, you can deploy your fine-tuned model using any of the other commo
 [!INCLUDE [Fine-tuning deletion](../includes/fine-tune.md)]

 ```python
+# Deploy fine-tuned model
+
 import json
 import requests

-token= os.getenv("TEMP_AUTH_TOKEN")
-subscription = "<YOUR_SUBSCRIPTION_ID>"
+token = os.getenv("TEMP_AUTH_TOKEN")
+subscription = "<YOUR_SUBSCRIPTION_ID>"
 resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
 resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
-model_deployment_name ="YOUR_CUSTOM_MODEL_DEPLOYMENT_NAME"
+model_deployment_name = "YOUR_CUSTOM_MODEL_DEPLOYMENT_NAME"

-deploy_params = {'api-version': "2023-05-01"}
+deploy_params = {'api-version': "2023-05-01"}
 deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}

 deploy_data = {
-"sku": {"name": "standard", "capacity": 1},
+"sku": {"name": "standard", "capacity": 1},
 "properties": {
 "model": {
 "format": "OpenAI",
@@ -619,18 +630,20 @@ After your fine-tuned model is deployed, you can use it like any other deployed
 # [OpenAI Python 1.x](#tab/python-new)

 ```python
+# Use the deployed customized model
+
 import os
 from openai import AzureOpenAI

 client = AzureOpenAI(
-azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
-api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-api_version="2024-02-01"
+azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+api_key = os.getenv("AZURE_OPENAI_API_KEY"),
+api_version = "2024-02-01"
 )

 response = client.chat.completions.create(
-model="gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
-messages=[
+model = "gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
+messages = [
 {"role": "system", "content": "You are a helpful assistant."},
 {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
 {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
@@ -644,16 +657,19 @@ print(response.choices[0].message.content)
 # [OpenAI Python 0.28.1](#tab/python)

 ```python
+# Use the deployed customized model
+
 import os
 import openai
+
 openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
 openai.api_version = "2024-02-01"
 openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")

 response = openai.ChatCompletion.create(
-engine="gpt-35-turbo-ft", # engine = "Custom deployment name you chose for your fine-tuning model"
-messages=[
+engine = "gpt-35-turbo-ft", # engine = "Custom deployment name you chose for your fine-tuning model"
+messages = [
 {"role": "system", "content": "You are a helpful assistant."},
 {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
 {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
@@ -673,14 +689,14 @@ Unlike other types of Azure OpenAI models, fine-tuned/customized models have [an

 Deleting the deployment won't affect the model itself, so you can re-deploy the fine-tuned model that you trained for this tutorial at any time.

-You can delete the deployment in [Azure OpenAI Studio](https://oai.azure.com/), via [REST API](/rest/api/aiservices/accountmanagement/deployments/delete?tabs=HTTP), [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-delete()), or other supported deployment methods.
+You can delete the deployment in [Azure OpenAI Studio](https://oai.azure.com/), via [REST API](/rest/api/aiservices/accountmanagement/deployments/delete?tabs=HTTP), [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-delete()), or other supported deployment methods.

 ## Troubleshooting

 ### How do I enable fine-tuning? Create a custom model is greyed out in Azure OpenAI Studio?

 In order to successfully access fine-tuning you need **Cognitive Services OpenAI Contributor assigned**. Even someone with high-level Service Administrator permissions would still need this account explicitly set in order to access fine-tuning. For more information please review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
-
+
 ## Next steps

 - Learn more about [fine-tuning in Azure OpenAI](../how-to/fine-tuning.md)
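
One note for reviewers of the "Validate token counts" hunk above: the changed lines call `num_tokens_from_messages`, `num_assistant_tokens_from_messages`, and `print_distribution`, whose definitions sit outside the diff context. The sketch below is not part of the commit; it follows the common OpenAI cookbook pattern and assumes tiktoken's `cl100k_base` encoding, so treat it as illustrative rather than the tutorial's exact code.

```python
# Illustrative helpers (not from this commit), in the spirit of the tutorial's
# token-counting cell: count chat tokens with tiktoken and summarize with numpy.
import numpy as np
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by gpt-35-turbo

def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    # Per-message overhead constants follow the OpenAI cookbook convention.
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    return num_tokens + 3  # replies are primed with an assistant header

def num_assistant_tokens_from_messages(messages):
    # Only count tokens the model is trained to produce (assistant turns).
    return sum(len(encoding.encode(m["content"])) for m in messages if m["role"] == "assistant")

def print_distribution(values, name):
    print(f"#### Distribution of {name}:")
    print(f"min / max: {min(values)}, {max(values)}")
    print(f"mean / median: {np.mean(values)}, {np.median(values)}")
```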

articles/azure-app-configuration/quickstart-java-spring-app.md

Lines changed: 7 additions & 7 deletions
@@ -6,7 +6,7 @@ author: mrm9084
 ms.service: azure-app-configuration
 ms.devlang: java
 ms.topic: quickstart
-ms.date: 09/27/2023
+ms.date: 04/12/2024
 ms.custom: devx-track-java, mode-api, devx-track-extended-java
 ms.author: mametcal
 #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place.
@@ -86,13 +86,13 @@ To install the Spring Cloud Azure Config starter module, add the following depen

 To use the Spring Cloud Azure Config starter to have your application communicate with the App Configuration store that you create, configure the application by using the following steps.

-1. Create a new Java file named *MessageProperties.java*, and add the following lines:
+1. Create a new Java file named *MyProperties.java*, and add the following lines:

 ```java
 import org.springframework.boot.context.properties.ConfigurationProperties;

 @ConfigurationProperties(prefix = "config")
-public class MessageProperties {
+public class MyProperties {
 private String message;

 public String getMessage() {
@@ -113,9 +113,9 @@ To use the Spring Cloud Azure Config starter to have your application communicat

 @RestController
 public class HelloController {
-private final MessageProperties properties;
+private final MyProperties properties;

-public HelloController(MessageProperties properties) {
+public HelloController(MyProperties properties) {
 this.properties = properties;
 }

@@ -126,13 +126,13 @@ To use the Spring Cloud Azure Config starter to have your application communicat
 }
 ```

-1. In the main application Java file, add `@EnableConfigurationProperties` to enable the *MessageProperties.java* configuration properties class to take effect and register it with the Spring container.
+1. In the main application Java file, add `@EnableConfigurationProperties` to enable the *MyProperties.java* configuration properties class to take effect and register it with the Spring container.

 ```java
 import org.springframework.boot.context.properties.EnableConfigurationProperties;

 @SpringBootApplication
-@EnableConfigurationProperties(MessageProperties.class)
+@EnableConfigurationProperties(MyProperties.class)
 public class DemoApplication {
 public static void main(String[] args) {
 SpringApplication.run(DemoApplication.class, args);

articles/communication-services/how-tos/call-automation/custom-context.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ For all the code samples, `client` is CallAutomationClient object that can be cr
 ## Technical parameters
 Call Automation supports up to 5 custom SIP headers and 1000 custom VOIP headers. Additionally, developers can include a dedicated User-To-User header as part of SIP headers list.

-The custom SIP header key must start with a mandatory ‘X-MS-Custom-’ prefix. The maximum length of a SIP header key is 64 chars, including the X-MS-Custom prefix. The SIP header key may consist of alphanumeric characters and a few selected symbols which includes ".", "!", "%", "\*", "_", "+", "~", "-". The maximum length of SIP header value is 256 chars. The same limitations apply when configuring the SIP headers on your SBC. The SIP header value may consist of alphanumeric characters and a few selected symbols which includes "=", ";", ".", "!", "%", "*", "_", "+", "~", "-".
+The custom SIP header key must start with a mandatory ‘X-MS-Custom-’ prefix. The maximum length of a SIP header key is 64 chars, including the X-MS-Custom prefix. The SIP header key may consist of alphanumeric characters and a few selected symbols which includes `.`, `!`, `%`, `*`, `_`, `+`, `~`, `-`. The maximum length of SIP header value is 256 chars. The same limitations apply when configuring the SIP headers on your SBC. The SIP header value may consist of alphanumeric characters and a few selected symbols which includes `=`, `;`, `.`, `!`, `%`, `*`, `_`, `+`, `~`, `-`.

 The maximum length of a VOIP header key is 64 chars. These headers can be sent without ‘x-MS-Custom’ prefix. The maximum length of VOIP header value is 1024 chars.
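
For readers of the hunk above, the documented limits translate into a simple check. The sketch below is not part of the commit and is only an interpretation of the prose constraints (prefix, length caps, allowed symbols); it is not an API from the Call Automation SDK.

```python
# Sketch only: validate a custom SIP header against the limits described above.
import re

# Key: mandatory X-MS-Custom- prefix, then alphanumerics plus . ! % * _ + ~ -
KEY_PATTERN = re.compile(r"^X-MS-Custom-[A-Za-z0-9.!%*_+~-]+$")
# Value: alphanumerics plus = ; . ! % * _ + ~ -
VALUE_PATTERN = re.compile(r"^[A-Za-z0-9=;.!%*_+~-]+$")

def is_valid_custom_sip_header(key: str, value: str) -> bool:
    return (
        len(key) <= 64
        and len(value) <= 256
        and bool(KEY_PATTERN.match(key))
        and bool(VALUE_PATTERN.match(value))
    )

print(is_valid_custom_sip_header("X-MS-Custom-OrderId", "12345"))  # True
print(is_valid_custom_sip_header("OrderId", "12345"))              # False, prefix missing
```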
