Commit 8fd5d11

Merge pull request #263316 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents: 255d9e6 + 6c45772

File tree

7 files changed: +33 −24 lines changed

articles/ai-services/computer-vision/how-to/shelf-analyze.md

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ To analyze a shelf image, do the following steps:
 1. Copy the following `curl` command into a text editor.

     ```bash
-    curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
+    curl -X PUT -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
     'url':'<your_url_string>'
     }"
     ```
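The fix switches from `curl.exe` to `curl -X PUT`, making the HTTP verb explicit. As a cross-check, here is a sketch in Python that builds (but never sends) the same PUT request; the endpoint, key, run name, and image URL are hypothetical stand-ins for the doc's `<placeholders>`:

```python
import json
import urllib.request

# Hypothetical values standing in for <endpoint>, <your_run_name>, etc.
endpoint = "https://my-resource.cognitiveservices.azure.com"
run_name = "my-shelf-run"
url = (
    f"{endpoint}/computervision/productrecognition/"
    f"ms-pretrained-product-detection/runs/{run_name}"
    "?api-version=2023-04-01-preview"
)

body = json.dumps({"url": "https://example.com/shelf.jpg"}).encode()
request = urllib.request.Request(
    url,
    data=body,
    method="PUT",  # the verb the corrected curl command now sends explicitly
    headers={
        "Ocp-Apim-Subscription-Key": "placeholder-key",
        "Content-Type": "application/json",
    },
)

# Built but deliberately not sent; urlopen(request) would perform the call.
assert request.get_method() == "PUT"
assert "api-version=2023-04-01-preview" in request.full_url
```

Without `-X PUT` (or an explicit `method=` here), a request carrying a body defaults to POST, which is why the original command needed the fix.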

articles/ai-services/openai/how-to/switching-endpoints.md

Lines changed: 20 additions & 16 deletions
@@ -31,10 +31,11 @@ We recommend using environment variables. If you haven't done this before our [P
 <td>

 ```python
+import os
 from openai import OpenAI

 client = OpenAI(
-    api_key=os.environ["OPENAI_API_KEY"]
+    api_key=os.getenv("OPENAI_API_KEY")
 )


@@ -51,7 +52,7 @@ from openai import AzureOpenAI
 client = AzureOpenAI(
     api_key=os.getenv("AZURE_OPENAI_KEY"),
     api_version="2023-12-01-preview",
-    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
 )
 ```

@@ -71,10 +72,11 @@ client = AzureOpenAI(
 <td>

 ```python
+import os
 from openai import OpenAI

 client = OpenAI(
-    api_key=os.environ["OPENAI_API_KEY"]
+    api_key=os.getenv("OPENAI_API_KEY")
 )


@@ -93,7 +95,9 @@ client = OpenAI(
 from azure.identity import DefaultAzureCredential, get_bearer_token_provider
 from openai import AzureOpenAI

-token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
+token_provider = get_bearer_token_provider(
+    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)

 api_version = "2023-12-01-preview"
 endpoint = "https://my-resource.openai.azure.com"

@@ -126,7 +130,7 @@ OpenAI uses the `model` keyword argument to specify what model to use. Azure Ope
 ```python
 completion = client.completions.create(
     model="gpt-3.5-turbo-instruct",
-    prompt="<prompt>")
+    prompt="<prompt>"
 )

 chat_completion = client.chat.completions.create(

@@ -135,8 +139,8 @@ chat_completion = client.chat.completions.create(
 )

 embedding = client.embeddings.create(
-    input="<input>",
-    model="text-embedding-ada-002"
+    model="text-embedding-ada-002",
+    input="<input>"
 )
 ```

@@ -146,17 +150,17 @@ embedding = client.embeddings.create(
 ```python
 completion = client.completions.create(
     model="gpt-35-turbo-instruct", # This must match the custom deployment name you chose for your model.
-    prompt=<"prompt">
+    prompt="<prompt>"
 )

 chat_completion = client.chat.completions.create(
     model="gpt-35-turbo", # model = "deployment_name".
-    messages=<"messages">
+    messages="<messages>"
 )

 embedding = client.embeddings.create(
-    input = "<input>",
-    model= "text-embedding-ada-002" # model = "deployment_name".
+    model="text-embedding-ada-002", # model = "deployment_name".
+    input="<input>"
 )
 ```

@@ -179,8 +183,8 @@ OpenAI and Azure OpenAI currently support input arrays up to 2048 input items fo
 inputs = ["A", "B", "C"]

 embedding = client.embeddings.create(
-input=inputs,
-model="text-embedding-ada-002"
+    input=inputs,
+    model="text-embedding-ada-002"
 )


@@ -193,9 +197,9 @@ embedding = client.embeddings.create(
 inputs = ["A", "B", "C"] #max array size=2048

 embedding = client.embeddings.create(
-input=inputs,
-model="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
-#engine="text-embedding-ada-002"
+    input=inputs,
+    model="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
+    # engine="text-embedding-ada-002"
 )

 ```
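Several of the edits above replace `os.environ["..."]` with `os.getenv("...")`. The practical difference can be shown with the standard library alone (the variable name below is a made-up placeholder):

```python
import os

VAR = "HYPOTHETICAL_UNSET_VAR"  # placeholder; ensure it is absent for the demo
os.environ.pop(VAR, None)

# os.getenv returns None (or an explicit default) for a missing variable...
assert os.getenv(VAR) is None
assert os.getenv(VAR, "fallback") == "fallback"

# ...while os.environ[...] raises KeyError for the same missing variable.
try:
    os.environ[VAR]
    raised = False
except KeyError:
    raised = True
assert raised
```

With `getenv`, a misconfigured environment reaches the client constructor as `None` and can be reported there, rather than stopping the script with a bare `KeyError` at the lookup.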

articles/ai-services/openai/how-to/use-blocklists.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ The configurable content filters are sufficient for most content moderation need

 - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
 - Once you have your Azure subscription, create an Azure OpenAI resource in the Azure portal to get your token, key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
-- The resource takes a few minutes to deploy. After it finishes, sSelect **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
+- The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
 - [Azure CLI](/cli/azure/install-azure-cli) installed
 - [cURL](https://curl.haxx.se/) installed

@@ -279,4 +279,4 @@ You can also create custom blocklists in the Azure OpenAI Studio as part of your

 - Read more about [content filtering categories and severity levels](/azure/ai-services/openai/concepts/content-filter?tabs=python) with Azure OpenAI Service.

-- Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs)](/azure/ai-services/openai/concepts/red-teaming) article.
+- Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs)](/azure/ai-services/openai/concepts/red-teaming) article.

articles/azure-portal/azure-portal-dashboards-create-programmatically.md

Lines changed: 4 additions & 2 deletions
@@ -81,8 +81,10 @@ Declare required template metadata and the parameters at the top of the JSON tem
     }
   },
   "variables": {},
-
-... rest of template omitted ...
+  "resources": [
+    ... rest of template omitted ...
+  ]
+}
 ```

 Once you've configured your template, deploy it using any of the following methods:
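The edit above matters for validity: without the `"resources"` array and the closing `}`, the snippet is not well-formed JSON. A minimal sketch (a trimmed placeholder body, not the full dashboard template) confirming the corrected shape parses:

```python
import json

# Skeleton mirroring the corrected snippet; property values are placeholders.
template_text = """
{
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": []
}
"""

template = json.loads(template_text)  # would raise json.JSONDecodeError if malformed
assert "resources" in template
assert isinstance(template["resources"], list)
```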

articles/machine-learning/how-to-secure-training-vnet.md

Lines changed: 2 additions & 2 deletions
@@ -285,7 +285,7 @@ serverless_compute:
 Update workspace:

 ```azurecli
-az ml workspace update -n <workspace-name> -g <resource-group-name> -file serverlesscomputevnetsettings.yml
+az ml workspace update -n <workspace-name> -g <resource-group-name> --file serverlesscomputevnetsettings.yml
 ```

 ```yaml

@@ -452,7 +452,7 @@ serverless_compute:
 Update workspace:

 ```azurecli
-az ml workspace update -n <workspace-name> -g <resource-group-name> -file serverlesscomputevnetsettings.yml
+az ml workspace update -n <workspace-name> -g <resource-group-name> --file serverlesscomputevnetsettings.yml
 ```

 ```yaml
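Both hunks fix the same one-character slip: `--file` is the long-option spelling. A generic sketch of why the single-dash form fails in argparse-style parsers (the Azure CLI is built on an argparse-based framework; this models the behavior, it is not the `az` parser itself):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--file")

# The double-dash long option parses as intended.
args = parser.parse_args(["--file", "serverlesscomputevnetsettings.yml"])
assert args.file == "serverlesscomputevnetsettings.yml"

# The single-dash form is not matched to --file and is rejected outright.
try:
    parser.parse_args(["-file", "serverlesscomputevnetsettings.yml"])
    rejected = False
except SystemExit:
    rejected = True
assert rejected
```

A single dash introduces short options, so `-file` is never matched against the long option `--file`; the parser reports it as unrecognized and exits.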

articles/storage/blobs/immutable-storage-overview.md

Lines changed: 3 additions & 0 deletions
@@ -146,6 +146,9 @@ When you enable blob inventory, Azure Storage generates an inventory report on a

 For more information about blob inventory, see [Azure Storage blob inventory](blob-inventory.md).

+> [!NOTE]
+> You can't configure an inventory policy in an account if support for version-level immutability is enabled on that account, or if support for version-level immutability is enabled on the destination container that is defined in the inventory policy.
+
 ## Pricing

 There is no additional capacity charge for using immutable storage. Immutable data is priced in the same way as mutable data. For pricing details on Azure Blob Storage, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).

articles/virtual-machines/nd-h100-v5-series.md

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ Ubuntu 20.04: 5.4.0-1046-azure

 | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU Memory GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max network bandwidth | Max NICs |
 |---------------------|------|------------|------------------------|----------------------------|----------------|----------------|-----------------------------------------|------------------------------|----------|
-| Standard_ND96isr_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 80,000 Mbps | 8 |
+| Standard_ND96isr_H100_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 80,000 Mbps | 8 |

 [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
4949
