Commit f3a218f

Author: RoseHJM (committed)
Resolving merge conflict in ToC
2 parents 8e77dfa + 03ac0e9

114 files changed: +2473 −2400 lines

.openpublishing.publish.config.json — 6 additions, 0 deletions

@@ -1189,6 +1189,12 @@
       "url": "https://github.com/Azure-Samples/explore-iot-operations",
       "branch": "main",
       "branch_mapping": {}
+    },
+    {
+      "path_to_root": "SupportArticles-docs",
+      "url": "https://github.com/MicrosoftDocs/SupportArticles-docs",
+      "branch": "main",
+      "branch_mapping": {}
     }
   ],
   "branch_target_mapping": {

.openpublishing.redirection.json — 12 additions, 2 deletions

@@ -1712,12 +1712,22 @@
   },
   {
     "source_path_from_root": "/articles/guides/developer/index.md",
-    "redirect_url": "/azure/guides/developer/azure-developer-guide",
+    "redirect_url": "/azure/developer/",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/guides/developer/azure-developer-guide.md",
+    "redirect_url": "/azure/developer/",
     "redirect_document_id": false
   },
   {
     "source_path_from_root": "/articles/guides/operations/index.md",
-    "redirect_url": "/azure/guides/operations/azure-operations-guide",
+    "redirect_url": "/azure/developer/intro/azure-developer-key-concepts",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/guides/operations/azure-operations-guide.md",
+    "redirect_url": "/azure/developer/intro/azure-developer-key-concepts",
     "redirect_document_id": false
   },
   {
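The redirect entries added above are simple source-path to target-URL pairs. As a rough illustration only (the actual resolution is performed by the Open Publishing build system, not by this code), the mapping behaves like a dictionary lookup:

```python
# Hypothetical in-memory form of the redirect entries added in this commit.
# The real Open Publishing system reads these from .openpublishing.redirection.json.
redirects = {
    "/articles/guides/developer/index.md": "/azure/developer/",
    "/articles/guides/developer/azure-developer-guide.md": "/azure/developer/",
    "/articles/guides/operations/index.md": "/azure/developer/intro/azure-developer-key-concepts",
    "/articles/guides/operations/azure-operations-guide.md": "/azure/developer/intro/azure-developer-key-concepts",
}

def resolve(source_path):
    """Return the redirect target for a retired article path, or None."""
    return redirects.get(source_path)

print(resolve("/articles/guides/developer/index.md"))  # -> /azure/developer/
```

Note that both the `index.md` and the named-article paths for each guide now point at the same target, which is why the commit adds two entries per guide.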

articles/ai-services/document-intelligence/how-to-guides/includes/v4-0/python-sdk.md — 14 additions, 14 deletions

@@ -98,11 +98,11 @@ def analyze_read():
     # sample document
     formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/read.png"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-read", formUrl
     )
     result = poller.result()

@@ -174,11 +174,11 @@ def analyze_layout():
     # sample document
     formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-layout", formUrl
     )
     result = poller.result()

@@ -295,9 +295,9 @@ def analyze_general_documents():
     docUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"

     # create your `DocumentIntelligenceClient` instance and `AzureKeyCredential` variable
-    document_intelligence_client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+    client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-document", docUrl)
     result = poller.result()

@@ -416,11 +416,11 @@ def analyze_tax_us_w2():
     # sample document
     formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/w2.png"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-tax.us.w2", formUrl
     )
     w2s = poller.result()

@@ -747,11 +747,11 @@ def analyze_invoice():

     invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-invoice", invoiceUrl)
     invoices = poller.result()

@@ -1027,10 +1027,10 @@ def analyze_receipts():
     # sample document
     receiptUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/receipt.png"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )
-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-receipt", receiptUrl, locale="en-US"
     )
     receipts = poller.result()

@@ -1125,11 +1125,11 @@ def analyze_identity_documents():
     # sample document
     identityUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/identity_documents.png"

-    document_intelligence_client = DocumentIntelligenceClient(
+    client = DocumentIntelligenceClient(
         endpoint=endpoint, credential=AzureKeyCredential(key)
     )

-    poller = document_intelligence_client.begin_analyze_document(
+    poller = client.begin_analyze_document(
         "prebuilt-idDocument", identityUrl
     )
     id_documents = poller.result()
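Every hunk above is the same mechanical rename of the client variable; the call pattern itself is unchanged. A rough sketch of that pattern, using a mock in place of the real `DocumentIntelligenceClient` (the real class lives in the `azure-ai-documentintelligence` package and needs a live endpoint and key, which this sketch does not assume):

```python
from unittest.mock import MagicMock

# Mock stands in for DocumentIntelligenceClient(endpoint=..., credential=...);
# the real client makes a network call and returns a long-running-operation poller.
client = MagicMock()
client.begin_analyze_document.return_value.result.return_value = {"content": "sample text"}

# The pattern used throughout the updated samples: start analysis, then poll for the result.
poller = client.begin_analyze_document("prebuilt-read", "https://example.com/read.png")
result = poller.result()
print(result["content"])
```

The short `client` name keeps each sample line under the docs line-length conventions without changing what the code does.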

articles/ai-services/document-intelligence/quickstarts/includes/javascript-sdk.md — 5 additions, 5 deletions

@@ -146,7 +146,7 @@ Extract text, selection marks, text styles, table structures, and bounding regio
 :::moniker range="doc-intel-4.0.0"

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");

 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
 const key = "<your-key>";

@@ -156,7 +156,7 @@ Extract text, selection marks, text styles, table structures, and bounding regio
 const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"

 async function main() {
-  const client = DocumentIntelligence(endpoint, new AzureKeyCredential(key));
+  const client = DocumentIntelligenceClient(endpoint, new AzureKeyCredential(key));

   const poller = await client.beginAnalyzeDocument("prebuilt-layout", formUrl);

@@ -311,7 +311,7 @@ In this example, we analyze an invoice using the **prebuilt-invoice** model.

 ```javascript

-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");

 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
 const key = "<your-key>";

@@ -321,9 +321,9 @@ const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-doc
 invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"

 async function main() {
-  const client = DocumentIntelligence(endpoint, new AzureKeyCredential(key));
+  const client = DocumentIntelligenceClient(endpoint, new AzureKeyCredential(key));

-  const poller = await client.beginAnalyzeDocumentFromUrl("prebuilt-invoice", invoiceUrl);
+  const poller = await client.beginAnalyzeDocument("prebuilt-invoice", invoiceUrl);
 if (pages.length <= 0) {
   console.log("No pages were extracted from the document.");
 } else {

articles/ai-services/openai/concepts/use-your-data.md — 1 addition, 0 deletions

@@ -621,6 +621,7 @@ You can use Azure OpenAI On Your Data with an Azure OpenAI resource in the follo

 * `gpt-4` (0314)
 * `gpt-4` (0613)
+* `gpt-4` (0125)
 * `gpt-4-32k` (0314)
 * `gpt-4-32k` (0613)
 * `gpt-4` (1106-preview)

articles/ai-services/openai/whats-new.md — 4 additions, 0 deletions

@@ -28,6 +28,10 @@ Fine-tuning is now available in East US 2 with support for:

 Check the [models page](concepts/models.md#fine-tuning-models), for the latest information on model availability and fine-tuning support in each region.

+### GPT-4 (0125) is available for Azure OpenAI On Your Data
+
+You can now use the GPT-4 (0125) model in [available regions](./concepts/models.md#public-cloud-regions) with Azure OpenAI On Your Data.
+
 ## March 2024

 ### Risks & Safety monitoring in Azure OpenAI Studio

articles/aks/ai-toolchain-operator.md — 1 addition, 1 deletion

@@ -196,7 +196,7 @@ The following sections describe how to create an AKS cluster with the AI toolcha
 1. Deploy the Falcon 7B-instruct model from the KAITO model repository using the `kubectl apply` command.

     ```azurecli-interactive
-    kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b-instruct.yaml
+    kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/inference/kaito_workspace_falcon_7b-instruct.yaml
     ```

 2. Track the live resource changes in your workspace using the `kubectl get` command.

articles/aks/spot-node-pool.md — 26 additions, 15 deletions

@@ -5,20 +5,19 @@ ms.topic: article
 ms.date: 03/29/2023
 author: schaffererin
 ms.author: schaffererin
-
 ms.subservice: aks-nodes
 #Customer intent: As a cluster operator or developer, I want to learn how to add an Azure Spot node pool to an AKS Cluster.
 ---

 # Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster

-A Spot node pool is a node pool backed by an [Azure Spot Virtual machine scale set][vmss-spot]. With Spot VMs in your AKS cluster, you can take advantage of unutilized Azure capacity with significant cost savings. The amount of available unutilized capacity varies based on many factors, such as node size, region, and time of day.
+In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.

-When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available and deploys a Spot scale set that backs the Spot node pool in a single default domain. There's no SLA for the Spot nodes. There are no high availability guarantees. If Azure needs capacity back, the Azure infrastructure will evict the Spot nodes.
+A Spot node pool is a node pool backed by an [Azure Spot Virtual Machine scale set][vmss-spot]. With Spot VMs in your AKS cluster, you can take advantage of unutilized Azure capacity with significant cost savings. The amount of available unutilized capacity varies based on many factors, such as node size, region, and time of day.

-Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.
+When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available and deploys a Spot scale set that backs the Spot node pool in a single default domain. There's no SLA for the Spot nodes. There are no high availability guarantees. If Azure needs capacity back, the Azure infrastructure evicts the Spot nodes.

-In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.
+Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads might be good candidates to schedule on a Spot node pool.

 ## Before you begin

@@ -32,18 +31,18 @@
 The following limitations apply when you create and manage AKS clusters with a Spot node pool:

 * A Spot node pool can't be a default node pool, it can only be used as a secondary pool.
-* The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
+* You can't upgrade the control plane and node pools at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
 * A Spot node pool must use Virtual Machine Scale Sets.
 * You can't change `ScaleSetPriority` or `SpotMaxPrice` after creation.
 * When setting `SpotMaxPrice`, the value must be *-1* or a *positive value with up to five decimal places*.
-* A Spot node pool will have the `kubernetes.azure.com/scalesetpriority:spot` label, the taint `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, and the system pods will have anti-affinity.
+* A Spot node pool has the `kubernetes.azure.com/scalesetpriority:spot` label, the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint, and the system pods have anti-affinity.
 * You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a Spot node pool.

 ## Add a Spot node pool to an AKS cluster

 When adding a Spot node pool to an existing cluster, it must be a cluster with multiple node pools enabled. When you create an AKS cluster with multiple node pools enabled, you create a node pool with a `priority` of `Regular` by default. To add a Spot node pool, you must specify `Spot` as the value for `priority`. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools].

-* Create a node pool with a `priority` of `Spot` using the [az aks nodepool add][az-aks-nodepool-add] command.
+* Create a node pool with a `priority` of `Spot` using the [`az aks nodepool add`][az-aks-nodepool-add] command.

     ```azurecli-interactive
     az aks nodepool add \

@@ -68,19 +67,19 @@ The previous command also enables the [cluster autoscaler][cluster-autoscaler],
 > [!IMPORTANT]
 > Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. We recommend you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on a Spot node pool. For example, the above command adds a taint of `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so only pods with a corresponding toleration are scheduled on this node.

-### Verify the Spot node pool
+## Verify the Spot node pool

-* Verify your node pool has been added using the [`az aks nodepool show`][az-aks-nodepool-show] command and confirming the `scaleSetPriority` is `Spot`.
+* Verify your node pool was added using the [`az aks nodepool show`][az-aks-nodepool-show] command and confirming the `scaleSetPriority` is `Spot`.

-    ```azurecli
+    ```azurecli-interactive
     az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
     ```

-### Schedule a pod to run on the Spot node
+## Schedule a pod to run on the Spot node

 To schedule a pod to run on a Spot node, you can add a toleration and node affinity that corresponds to the taint applied to your Spot node.

-The following example shows a portion of a YAML file that defines a toleration corresponding to the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint and a node affinity corresponding to the `kubernetes.azure.com/scalesetpriority=spot` label used in the previous step.
+The following example shows a portion of a YAML file that defines a toleration corresponding to the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint and a node affinity corresponding to the `kubernetes.azure.com/scalesetpriority=spot` label used in the previous step with `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution` node affinity rules:

 ```yaml
 spec:

@@ -100,10 +99,22 @@
         operator: In
         values:
         - "spot"
-  ...
+      preferredDuringSchedulingIgnoredDuringExecution:
+      - weight: 1
+        preference:
+          matchExpressions:
+          - key: another-node-label-key
+            operator: In
+            values:
+            - another-node-label-value
 ```

-When you deploy a pod with this toleration and node affinity, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
+When you deploy a pod with this toleration and node affinity, Kubernetes successfully schedules the pod on the nodes with the taint and label applied. In this example, the following rules apply:
+
+* The node *must* have a label with the key `kubernetes.azure.com/scalesetpriority`, and the value of that label *must* be `spot`.
+* The node *preferably* has a label with the key `another-node-label-key`, and the value of that label *must* be `another-node-label-value`.
+
+For more information, see [Assigning pods to nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).

 ## Upgrade a Spot node pool
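The required/preferred affinity rules added in the spot-node-pool diff can be sketched as a simplified matcher. This is an illustration only, not the real kube-scheduler logic (which handles operators, taint effects, and weighted scoring in far more detail); the label keys mirror the YAML example in the diff:

```python
# Simplified sketch of the Spot-node scheduling rules described above.
# Real scheduling is performed by kube-scheduler.

def node_allows(node_labels, pod_tolerates_spot):
    """A Spot node (NoSchedule taint) admits only pods that tolerate it."""
    if node_labels.get("kubernetes.azure.com/scalesetpriority") == "spot":
        return pod_tolerates_spot
    return True  # non-Spot nodes have no spot taint to tolerate

def affinity_score(node_labels):
    """Required rule must pass (else node is filtered); preferred rule adds weight 1."""
    if node_labels.get("kubernetes.azure.com/scalesetpriority") != "spot":
        return -1  # requiredDuringSchedulingIgnoredDuringExecution fails
    score = 0
    if node_labels.get("another-node-label-key") == "another-node-label-value":
        score += 1  # preferredDuringSchedulingIgnoredDuringExecution matched
    return score

spot_node = {"kubernetes.azure.com/scalesetpriority": "spot"}
print(node_allows(spot_node, pod_tolerates_spot=True))  # True
print(affinity_score(spot_node))  # 0
```

The sketch makes the two-tier behavior concrete: the required label filters nodes outright, while the preferred label only ranks the survivors.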
(binary file changed — 307 KB, preview not loaded)