
Commit 2d34e4b

Merge branch 'MicrosoftDocs:main' into doc/mysql-consistent-backup

2 parents: 04b65ea + 89d6b24

15 files changed: +174 −38 lines

articles/aks/nat-gateway.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -67,10 +67,10 @@ az group create --name myresourcegroup --location southcentralus
 ```azurecli-interactive
 az aks create \
-    --resource-group myresourcegroup \
+    --resource-group myResourceGroup \
     --name natcluster \
     --node-count 3 \
-    --outbound-type managedNATGateway \
+    --outbound-type managedNATGateway \
     --nat-gateway-managed-outbound-ip-count 2 \
     --nat-gateway-idle-timeout 30
 ```
````

articles/blockchain/workbench/includes/retire.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -2,9 +2,9 @@
 author: PatAltimore
 ms.service: azure-blockchain
 ms.topic: include
-ms.date: 02/18/2022
-ms.author: patricka
+ms.date: 04/19/2022
+ms.author: sunir
 ---
 
 > [!IMPORTANT]
-> On August 16, 2022, Azure Blockchain Workbench will be retired. Please migrate workloads to ConsenSys [Quorum Blockchain Service](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) prior to the retirement date. Select the **Contact me** button on the [Quorum Blockchain Service Azure Marketplace page](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) to contact ConsenSys to learn about their offerings for your requirements.
+> On October 31, 2022, Azure Blockchain Workbench will be retired. Please migrate workloads to ConsenSys [Quorum Blockchain Service](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) prior to the retirement date. Select the **Contact me** button on the [Quorum Blockchain Service Azure Marketplace page](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) to contact ConsenSys to learn about their offerings for your requirements.
````

articles/container-apps/deploy-visual-studio-code.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -120,7 +120,7 @@ Now that you have a container app environment in Azure you can create a containe
 
 9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
 
-10) Leave the default value of 80 for the port, and then select **Enter** to complete the workflow.
+10) Enter a value of 3000 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3000.
 
 During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Click this link to view your app in the browser.
````

articles/cosmos-db/sql/troubleshoot-changefeed-functions.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -4,7 +4,7 @@ description: Common issues, workarounds, and diagnostic steps, when using the Az
 author: ealsur
 ms.service: cosmos-db
 ms.subservice: cosmosdb-sql
-ms.date: 03/28/2022
+ms.date: 04/14/2022
 ms.author: maquaran
 ms.topic: troubleshooting
 ms.reviewer: sngun
@@ -53,6 +53,10 @@ The previous versions of the Azure Cosmos DB Extension did not support using a l
 
 This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
 
+### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
+
+This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+
 ### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
 
 This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
````
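The two startup errors added above reduce to two configuration rules: with Azure AD identities the trigger can't create its own lease container, and a partitioned lease container must be keyed on `/id`. A minimal client-side sketch of those rules (the dictionary keys are hypothetical stand-ins for the trigger settings, not an Azure SDK API):

```python
# Sanity checks mirroring the two startup errors described above.
# The config dict shape is hypothetical; it stands in for the Functions
# trigger settings and the lease container's properties.

def lease_config_errors(config: dict) -> list[str]:
    """Return the startup errors the trigger would raise, if any."""
    errors = []
    # Azure AD identities can't perform non-data (management) operations,
    # so the trigger must not try to create the lease container itself.
    if config.get("uses_aad_identity") and config.get("create_lease_container_if_not_exists"):
        errors.append("Forbidden (403); Substatus: 5300: cannot create the lease container with an AAD token")
    # A partitioned lease container must use /id as its partition key path.
    if config.get("lease_partition_key_path") not in (None, "/id"):
        errors.append("The lease collection, if partitioned, must have partition key equal to id.")
    return errors

# A lease container partitioned on /leaseId, with AAD auth and
# auto-creation enabled, trips both checks.
bad = {
    "uses_aad_identity": True,
    "create_lease_container_if_not_exists": True,
    "lease_partition_key_path": "/leaseId",
}
print(len(lease_config_errors(bad)))  # → 2
```

Fixing the configuration means pre-creating the lease container with `/id` as its partition key and leaving auto-creation off when AAD identities are in use.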

articles/cosmos-db/sql/troubleshoot-forbidden.md

Lines changed: 6 additions & 5 deletions
````diff
@@ -4,7 +4,7 @@ description: Learn how to diagnose and fix forbidden exceptions.
 author: ealsur
 ms.service: cosmos-db
 ms.subservice: cosmosdb-sql
-ms.date: 10/06/2021
+ms.date: 04/14/2022
 ms.author: maquaran
 ms.topic: troubleshooting
 ms.reviewer: sngun
@@ -17,13 +17,13 @@ The HTTP status code 403 represents the request is forbidden to complete.
 
 ## Firewall blocking requests
 
-Data plane requests can come to Cosmos DB via the following 3 paths.
+Data plane requests can come to Cosmos DB via the following three paths.
 
 - Public internet (IPv4)
 - Service endpoint
 - Private endpoint
 
-When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above 3 paths the request came to Cosmos DB.
+When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above three paths the request came to Cosmos DB.
 
 - `Request originated from client IP {...} through public internet.`
 - `Request originated from client VNET through service endpoint.`
@@ -58,7 +58,7 @@ Partition key reached maximum size of {...} GB
 This error means that your current [partitioning design](../partitioning-overview.md#logical-partitions) and workload is trying to store more than the allowed amount of data for a given partition key value. There is no limit to the number of logical partitions in your container, but the size of data each logical partition can store is limited. You can reach out to support for clarification.
 
 ## Non-data operations are not allowed
-This scenario happens when non-data [operations are disallowed in the account](../how-to-setup-rbac.md#permission-model). On this scenario, it's common to see errors like the ones below:
+This scenario happens when [attempting to perform non-data operations](../how-to-setup-rbac.md#permission-model) using Azure Active Directory (Azure AD) identities. In this scenario, it's common to see errors like the following:
 
 ```
 Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
@@ -68,7 +68,8 @@ Forbidden (403); Substatus: 5300; The given request [PUT ...] cannot be authoriz
 ```
 
 ### Solution
-Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell. Or reallow execution of non-data operations.
+Perform the operation through Azure Resource Manager, the Azure portal, Azure CLI, or Azure PowerShell.
+If you are using the [Azure Functions Cosmos DB Trigger](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), make sure the `CreateLeaseContainerIfNotExists` property of the trigger isn't set to `true`. Using Azure AD identities blocks any non-data operation, such as creating the lease container.
 
 ## Next steps
 * Configure [IP Firewall](../how-to-configure-firewall.md).
````
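The firewall section above notes that the 403 error text names the path a blocked request used. As a rough illustration of reading that text mechanically (only the two message formats quoted in the article are handled; this is not an SDK helper):

```python
# Sketch: classify which network path a blocked Cosmos DB data plane
# request used, based on the 403 error text quoted in the article.
# Messages outside the two quoted formats are left as "unknown".

def request_path(error_message: str) -> str:
    if "through public internet" in error_message:
        return "public internet"
    if "through service endpoint" in error_message:
        return "service endpoint"
    return "unknown"

msg = "Request originated from client IP {...} through public internet."
print(request_path(msg))  # → public internet
```

Knowing the path tells you which firewall configuration (IP range, service endpoint, or private endpoint) to review.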

articles/databox/data-box-troubleshoot-share-access.md

Lines changed: 6 additions & 2 deletions
````diff
@@ -7,7 +7,7 @@ author: v-dalc
 ms.service: databox
 ms.subservice: pod
 ms.topic: troubleshooting
-ms.date: 08/23/2021
+ms.date: 04/15/2022
 ms.author: alkohli
 ---
 
@@ -63,7 +63,11 @@ The failed connection attempts may include background processes, such as retries
 
 **Suggested resolution.** To connect to an SMB share after a share account lockout, do these steps:
 
-1. Verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
+1. If the dashboard status indicates the device is locked, unlock the device from the top command bar and retry the connection.
+
+   :::image type="content" source="media/data-box-troubleshoot-share-access/dashboard-locked.png" alt-text="Screenshot of the dashboard locked status.":::
+
+1. If you are still unable to connect to an SMB share after unlocking your device, verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
 
    ![Screenshot of Access Share And Copy Data screen for an SMB share on a Data Box. Copy icons for the account, username, and password are highlighted.](media/data-box-troubleshoot-share-access/get-share-credentials-01.png)
````

Two binary image files changed (73.4 KB and 18.5 KB).

articles/hdinsight/spark/apache-spark-jupyter-notebook-kernels.md

Lines changed: 8 additions & 2 deletions
````diff
@@ -4,14 +4,14 @@ description: Learn about the PySpark, PySpark3, and Spark kernels for Jupyter No
 ms.service: hdinsight
 ms.topic: how-to
 ms.custom: hdinsightactive,hdiseo17may2017,seoapr2020
-ms.date: 04/24/2020
+ms.date: 04/18/2022
 ---
 
 # Kernels for Jupyter Notebook on Apache Spark clusters in Azure HDInsight
 
 HDInsight Spark clusters provide kernels that you can use with the Jupyter Notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
 
-- **PySpark** - for applications written in Python2.
+- **PySpark** - for applications written in Python2. (Applicable only for Spark 2.4 version clusters.)
 - **PySpark3** - for applications written in Python3.
 - **Spark** - for applications written in Scala.
 
@@ -38,6 +38,12 @@ An Apache Spark cluster in HDInsight. For instructions, see [Create Apache Spark
 
 :::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark.png" alt-text="Kernels for Jupyter Notebook on Spark" border="true":::
 
+> [!NOTE]
+> For Spark 3.1, only **PySpark3** or **Spark** will be available.
+
+:::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark-for-hdi-4-0.png" alt-text="Kernels for Jupyter Notebook on Spark HDI 4.0" border="true":::
+
 4. A notebook opens with the kernel you selected.
 
 ## Benefits of using the kernels
````
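The kernel availability change above (three kernels on Spark 2.4, with the Python 2 **PySpark** kernel dropped on Spark 3.1) can be summarized as a small lookup; the version keys are illustrative, not an HDInsight API:

```python
# Kernel availability by Spark cluster version, per the article:
# the Python 2 PySpark kernel exists only on Spark 2.4 clusters.
KERNELS = {
    "2.4": ["PySpark", "PySpark3", "Spark"],
    "3.1": ["PySpark3", "Spark"],  # Python 2 kernel removed
}

def available_kernels(spark_version: str) -> list[str]:
    """Return the Jupyter kernels offered for a given Spark version."""
    return KERNELS.get(spark_version, [])

print(available_kernels("3.1"))  # → ['PySpark3', 'Spark']
```

In practice, this means Python 2 notebooks must be ported to Python 3 (PySpark3) before moving to a Spark 3.1 cluster.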

articles/service-fabric/how-to-managed-cluster-stateless-node-type.md

Lines changed: 50 additions & 12 deletions
````diff
@@ -1,23 +1,27 @@
 ---
 title: Deploy a Service Fabric managed cluster with stateless node types
 description: Learn how to create and deploy stateless node types in Service Fabric managed clusters
-ms.topic: how-to
-ms.date: 2/14/2022
+ms.topic: conceptual
+ms.date: 4/11/2022
+author: craftyhouse
+ms.author: micraft
+ms.service: service-fabric
 ---
 # Deploy a Service Fabric managed cluster with stateless node types
 
-Service Fabric node types come with an inherent assumption that at some point of time, stateful services might be placed on the nodes. Stateless node types relax this assumption for a node type. Relaxing this assumption enables node stateless node types to benefit from faster scale-out operations by removing some of the restrictions on repair and maintenance operations.
+Service Fabric node types come with an inherent assumption that at some point in time, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type. This allows the node type to benefit from features such as faster scale-out operations, support for automatic OS upgrades, Spot VMs, and scaling out to more than 100 nodes in a node type.
 
-* Primary node types cannot be configured to be stateless.
+* Primary node types can't be configured to be stateless.
 * Stateless node types require an API version of **2021-05-01** or later.
-* This will automatically set the **multipleplacementgroup** property to **true** which you can [learn more here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
+* This will automatically set the **multipleplacementgroup** property to **true**, which you can [learn more about here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
 * This enables support for up to 1000 nodes for the given node type.
 * Stateless node types can utilize a VM SKU temporary disk.
 
-Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+## Enabling stateless node types in a Service Fabric managed cluster
+
+To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless, in the cluster.
 
-## Enable stateless node types in a Service Fabric managed cluster
-To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, it is required to have at least one primary node type, which is not stateless in the cluster.
+Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
 
 * The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later.
@@ -44,11 +48,45 @@ To set one or more node types as stateless in a node type resource, set the **is
 }
 ```
 
-## Configure stateless node types with multiple Availability Zones
-To configure a Stateless node type spanning across multiple availability zones follow [Service Fabric clusters across availability zones](.\service-fabric-cross-availability-zones.md).
+## Enabling stateless node types using Spot VMs in a Service Fabric managed cluster (Preview)
+
+[Azure Spot Virtual Machines on scale sets](../virtual-machine-scale-sets/use-spot.md) enable users to take advantage of unused compute capacity at significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict these Azure Spot Virtual Machine instances. Therefore, Spot VM node types are great for workloads that can handle interruptions and don't need to be completed within a specific time frame. Recommended workloads include development, testing, batch processing jobs, big data, or other large-scale stateless scenarios.
+
+To set one or more stateless node types to use Spot VMs, set both the **isStateless** and **IsSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless, in the cluster. Stateless node types configured to use Spot VMs have the Eviction Policy set to 'Delete'.
+
+Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+
+* The Service Fabric managed cluster resource apiVersion should be **2022-02-01-preview** or later.
+
+```json
+{
+  "apiVersion": "[variables('sfApiVersion')]",
+  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+  "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+  "location": "[resourcegroup().location]",
+  "dependsOn": [
+    "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+  ],
+  "properties": {
+    "isStateless": true,
+    "isPrimary": false,
+    "IsSpotVM": true,
+    "vmImagePublisher": "[parameters('vmImagePublisher')]",
+    "vmImageOffer": "[parameters('vmImageOffer')]",
+    "vmImageSku": "[parameters('vmImageSku')]",
+    "vmImageVersion": "[parameters('vmImageVersion')]",
+    "vmSize": "[parameters('nodeTypeSize')]",
+    "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+    "dataDiskSizeGB": "[parameters('nodeTypeDataDiskSizeGB')]"
+  }
+}
+```
+
+## Configure stateless node types for zone resiliency
+To configure a stateless node type for zone resiliency, you must [configure managed cluster zone spanning](how-to-managed-cluster-availability-zones.md) at the cluster level.
 
 >[!NOTE]
-> The zonal resiliency property must be set at the cluster level, and this property cannot be changed in place.
+> The zonal resiliency property must be set at the cluster level, and this property can't be changed in place.
 
 ## Temporary disk support
 Stateless node types can be configured to use temporary disk as the data disk instead of a Managed Disk. Using a temporary disk can reduce costs for stateless workloads. To configure a stateless node type to use the temporary disk set the **useTempDataDisk** property to **true**.
@@ -82,7 +120,7 @@ Stateless node types can be configured to use temporary disk as the data disk in
 
 ## Migrate to using stateless node types in a cluster
-For all migration scenarios, a new stateless node type needs to be added. Existing node type cannot be migrated to be stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
+For all migration scenarios, a new stateless node type needs to be added. An existing node type can't be migrated to be stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
 
 ## Next steps
````
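Taken together, the Service Fabric constraints in the article above (no stateless primaries, Spot VM node types must also be stateless, and at least one non-stateless primary node type per cluster) can be sketched as a validation pass over node type definitions. The dict keys mirror the ARM property names from the template, but the helper itself is hypothetical:

```python
# Sketch of the node type rules stated in the article, applied to plain
# dicts that mirror the ARM "properties" block shown in the template.

def validate_node_types(node_types: list[dict]) -> list[str]:
    """Return rule violations for a proposed set of managed cluster node types."""
    problems = []
    for nt in node_types:
        # Primary node types can't be configured to be stateless.
        if nt.get("isPrimary") and nt.get("isStateless"):
            problems.append("primary node type cannot be stateless")
        # Spot VM node types require isStateless to also be true.
        if nt.get("IsSpotVM") and not nt.get("isStateless"):
            problems.append("IsSpotVM requires isStateless")
    # The cluster needs at least one primary node type that is not stateless.
    if not any(nt.get("isPrimary") and not nt.get("isStateless") for nt in node_types):
        problems.append("cluster needs a primary node type that is not stateless")
    return problems

# A non-stateless primary plus a stateless Spot node type is valid.
cluster = [
    {"isPrimary": True, "isStateless": False},
    {"isPrimary": False, "isStateless": True, "IsSpotVM": True},
]
print(validate_node_types(cluster))  # → []
```

A template that marked its only primary node type stateless would fail two of these checks, which matches the deployment-time restrictions the article describes.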
