## articles/blockchain/workbench/includes/retire.md (+3 −3)

```diff
@@ -2,9 +2,9 @@
 author: PatAltimore
 ms.service: azure-blockchain
 ms.topic: include
-ms.date: 02/18/2022
-ms.author: patricka
+ms.date: 04/19/2022
+ms.author: sunir
 ---
 
 > [!IMPORTANT]
-> On August 16, 2022, Azure Blockchain Workbench will be retired. Please migrate workloads to ConsenSys [Quorum Blockchain Service](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) prior to the retirement date. Select the **Contact me** button on the [Quorum Blockchain Service Azure Marketplace page](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) to contact ConsenSys to learn about their offerings for your requirements.
+> On October 31, 2022, Azure Blockchain Workbench will be retired. Please migrate workloads to ConsenSys [Quorum Blockchain Service](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) prior to the retirement date. Select the **Contact me** button on the [Quorum Blockchain Service Azure Marketplace page](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.qbs-contact-me) to contact ConsenSys to learn about their offerings for your requirements.
```
## articles/container-apps/deploy-visual-studio-code.md (+1 −1)

```diff
@@ -120,7 +120,7 @@ Now that you have a container app environment in Azure you can create a containe
 
 9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
 
-10) Leave the default value of 80 for the port, and then select **Enter** to complete the workflow.
+10) Enter a value of 3000 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3000.
 
 During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Click this link to view your app in the browser.
```
## articles/cosmos-db/sql/troubleshoot-changefeed-functions.md (+5 −1)

```diff
@@ -4,7 +4,7 @@ description: Common issues, workarounds, and diagnostic steps, when using the Az
 author: ealsur
 ms.service: cosmos-db
 ms.subservice: cosmosdb-sql
-ms.date: 03/28/2022
+ms.date: 04/14/2022
 ms.author: maquaran
 ms.topic: troubleshooting
 ms.reviewer: sngun
@@ -53,6 +53,10 @@ The previous versions of the Azure Cosmos DB Extension did not support using a l
 
 This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
 
+### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
+
+This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+
 ### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
 
 This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
```
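For the 403/5300 case the diff adds above, the fix is to stop the trigger from auto-creating the lease container. A hedged sketch of what the relevant `function.json` binding might look like with auto-creation disabled follows; the database, collection, and connection-setting names are placeholders, and the exact property name (`createLeaseCollectionIfNotExists` in the v2/v3 extension binding) should be verified against your installed extension version:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "documents",
      "connectionStringSetting": "CosmosConnection",
      "databaseName": "mydb",
      "collectionName": "mycoll",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": false
    }
  ]
}
```

With auto-creation off, the lease container must be created ahead of time through a control-plane tool, since Azure AD data-plane identities cannot perform that management operation.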
## articles/cosmos-db/sql/troubleshoot-forbidden.md (+6 −5)

````diff
@@ -4,7 +4,7 @@ description: Learn how to diagnose and fix forbidden exceptions.
 author: ealsur
 ms.service: cosmos-db
 ms.subservice: cosmosdb-sql
-ms.date: 10/06/2021
+ms.date: 04/14/2022
 ms.author: maquaran
 ms.topic: troubleshooting
 ms.reviewer: sngun
@@ -17,13 +17,13 @@ The HTTP status code 403 represents the request is forbidden to complete.
 
 ## Firewall blocking requests
 
-Data plane requests can come to Cosmos DB via the following 3 paths.
+Data plane requests can come to Cosmos DB via the following three paths.
 
 - Public internet (IPv4)
 - Service endpoint
 - Private endpoint
 
-When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above 3 paths the request came to Cosmos DB.
+When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above three paths the request came to Cosmos DB.
 
 - `Request originated from client IP {...} through public internet.`
 - `Request originated from client VNET through service endpoint.`
@@ -58,7 +58,7 @@ Partition key reached maximum size of {...} GB
 
 This error means that your current [partitioning design](../partitioning-overview.md#logical-partitions) and workload is trying to store more than the allowed amount of data for a given partition key value. There is no limit to the number of logical partitions in your container but the size of data each logical partition can store is limited. You can reach out to support for clarification.
 
 ## Non-data operations are not allowed
-This scenario happens when non-data [operations are disallowed in the account](../how-to-setup-rbac.md#permission-model). In this scenario, it's common to see errors like the ones below:
+This scenario happens when [attempting to perform non-data operations](../how-to-setup-rbac.md#permission-model) using Azure Active Directory (Azure AD) identities. In this scenario, it's common to see errors like the ones below:
 
 ```
 Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
@@ -68,7 +68,8 @@ Forbidden (403); Substatus: 5300; The given request [PUT ...] cannot be authoriz
 ```
 
 ### Solution
-Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell. Or reallow execution of non-data operations.
+Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell.
+
+If you are using the [Azure Functions Cosmos DB Trigger](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), make sure the `CreateLeaseContainerIfNotExists` property of the trigger isn't set to `true`. Using Azure AD identities blocks any non-data operation, such as creating the lease container.
````
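As one concrete way to follow the solution above, the lease container can be created through the control plane with the Azure CLI. The resource-group, account, and database names below are placeholders for illustration:

```azurecli
az cosmosdb sql container create \
  --resource-group myResourceGroup \
  --account-name myCosmosAccount \
  --database-name mydb \
  --name leases \
  --partition-key-path "/id"
```

Control-plane calls like this are authorized through Azure RBAC on the management endpoint, so they succeed even when the account disallows non-data operations over the data-plane endpoint.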
## articles/databox/data-box-troubleshoot-share-access.md (+6 −2)

```diff
@@ -7,7 +7,7 @@ author: v-dalc
 ms.service: databox
 ms.subservice: pod
 ms.topic: troubleshooting
-ms.date: 08/23/2021
+ms.date: 04/15/2022
 ms.author: alkohli
 ---
 
@@ -63,7 +63,11 @@ The failed connection attempts may include background processes, such as retries
 
 **Suggested resolution.** To connect to an SMB share after a share account lockout, do these steps:
 
-1. Verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
+1. If the dashboard status indicates the device is locked, unlock the device from the top command bar and retry the connection.
+
+   :::image type="content" source="media/data-box-troubleshoot-share-access/dashboard-locked.png" alt-text="Screenshot of the dashboard locked status.":::
+
+1. If you are still unable to connect to an SMB share after unlocking your device, verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
 
 
```
```diff
 # Kernels for Jupyter Notebook on Apache Spark clusters in Azure HDInsight
 
 HDInsight Spark clusters provide kernels that you can use with the Jupyter Notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
 
--**PySpark** - for applications written in Python2.
+-**PySpark** - for applications written in Python2. (Applicable only for Spark 2.4 version clusters)
 -**PySpark3** - for applications written in Python3.
 -**Spark** - for applications written in Scala.
 
@@ -38,6 +38,12 @@ An Apache Spark cluster in HDInsight. For instructions, see [Create Apache Spark
 
 :::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark.png" alt-text="Kernels for Jupyter Notebook on Spark" border="true":::
 
+> [!NOTE]
+> For Spark 3.1, only **PySpark3** or **Spark** will be available.
+>
+:::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark-for-hdi-4-0.png" alt-text="Kernels for Jupyter Notebook on Spark HDI4.0" border="true":::
```
## articles/service-fabric/how-to-managed-cluster-stateless-node-type.md (+50 −12)

```diff
@@ -1,23 +1,27 @@
 ---
 title: Deploy a Service Fabric managed cluster with stateless node types
 description: Learn how to create and deploy stateless node types in Service Fabric managed clusters
-ms.topic: how-to
-ms.date: 2/14/2022
+ms.topic: conceptual
+ms.date: 4/11/2022
+author: craftyhouse
+ms.author: micraft
+ms.service: service-fabric
 ---
 # Deploy a Service Fabric managed cluster with stateless node types
 
-Service Fabric node types come with an inherent assumption that at some point of time, stateful services might be placed on the nodes. Stateless node types relax this assumption for a node type. Relaxing this assumption enables stateless node types to benefit from faster scale-out operations by removing some of the restrictions on repair and maintenance operations.
+Service Fabric node types come with an inherent assumption that at some point in time, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type. This allows the node type to benefit from features such as faster scale-out operations, support for Automatic OS Upgrades, Spot VMs, and scaling out to more than 100 nodes in a node type.
 
-* Primary node types cannot be configured to be stateless.
+* Primary node types can't be configured to be stateless.
 * Stateless node types require an API version of **2021-05-01** or later.
-* This will automatically set the **multipleplacementgroup** property to **true**, which you can [learn more here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
+* This will automatically set the **multipleplacementgroup** property to **true**, which you can [learn more about here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
 * This enables support for up to 1000 nodes for the given node type.
 * Stateless node types can utilize a VM SKU temporary disk.
 
-Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+## Enabling stateless node types in a Service Fabric managed cluster
+
+To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless, in the cluster.
 
-## Enable stateless node types in a Service Fabric managed cluster
-To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, it is required to have at least one primary node type, which is not stateless, in the cluster.
+Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
 
 * The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later.
```
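To make the **isStateless** setting concrete, a hedged, trimmed fragment of what the node type resource might look like is sketched below. The resource name expression and instance count are placeholder assumptions; consult the linked sample templates for a complete, authoritative definition:

```json
{
  "apiVersion": "2021-05-01",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
  "properties": {
    "isStateless": true,
    "isPrimary": false,
    "vmInstanceCount": 5
  }
}
```

Remember that at least one node type in the cluster must remain primary and non-stateless, so a fragment like this applies only to a secondary node type.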
````diff
@@ -44,11 +48,45 @@ To set one or more node types as stateless in a node type resource, set the **is
 }
 ```
 
-## Configure stateless node types with multiple Availability Zones
-To configure a Stateless node type spanning across multiple availability zones, follow [Service Fabric clusters across availability zones](.\service-fabric-cross-availability-zones.md).
+## Enabling stateless node types using Spot VMs in a Service Fabric managed cluster (Preview)
+
+[Azure Spot Virtual Machines on scale sets](../virtual-machine-scale-sets/use-spot.md) enables users to take advantage of unused compute capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict these Azure Spot Virtual Machine instances. Therefore, Spot VM node types are great for workloads that can handle interruptions and don't need to be completed within a specific time frame. Recommended workloads include development, testing, batch processing jobs, big data, or other large-scale stateless scenarios.
+
+To set one or more stateless node types to use Spot VM, set both **isStateless** and **IsSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless, in the cluster. Stateless node types configured to use Spot VMs have Eviction Policy set to 'Delete'.
+
+Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+
+* The Service Fabric managed cluster resource apiVersion should be **2022-02-01-preview** or later.
````
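A hedged sketch of the Spot VM variant of the node type fragment follows. The `isSpotVM` property casing, name expression, and instance count are assumptions to verify against the linked sample templates:

```json
{
  "apiVersion": "2022-02-01-preview",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
  "properties": {
    "isStateless": true,
    "isSpotVM": true,
    "isPrimary": false,
    "vmInstanceCount": 5
  }
}
```

Because eviction policy for such node types is fixed to 'Delete', any workload placed here must tolerate nodes disappearing without notice.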
```diff
+## Configure stateless node types for zone resiliency
+To configure a Stateless node type for zone resiliency, you must [configure managed cluster zone spanning](how-to-managed-cluster-availability-zones.md) at the cluster level.
 
 >[!NOTE]
->The zonal resiliency property must be set at the cluster level, and this property cannot be changed in place.
+>The zonal resiliency property must be set at the cluster level, and this property can't be changed in place.
 
 ## Temporary disk support
 Stateless node types can be configured to use temporary disk as the data disk instead of a Managed Disk. Using a temporary disk can reduce costs for stateless workloads. To configure a stateless node type to use the temporary disk, set the **useTempDataDisk** property to **true**.
```
```diff
@@ -82,7 +120,7 @@ Stateless node types can be configured to use temporary disk as the data disk in
 
 
 ## Migrate to using stateless node types in a cluster
-For all migration scenarios, a new stateless node type needs to be added. Existing node type cannot be migrated to be stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
+For all migration scenarios, a new stateless node type needs to be added. An existing node type can't be migrated to be stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
```