
Commit ffc2e18

Merge branch 'MicrosoftDocs:main' into main
2 parents 229696b + b497743

25 files changed (+166 −91 lines)

articles/azure-netapp-files/data-protection-disaster-recovery-options.md

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ Using snapshot technology, you can replicate your Azure NetApp Files across desi
 - Data availability and redundancy for remote data processing and user access
 - Efficient storage-based data replication without load on compute infrastructure

-To learn more, see [How volumes and snapshots are replicated cross-region for DR](snapshots-introduction.md#how-volumes-and-snapshots-are-replicated-cross-region-for-disaster-recovery-and-business-continuity). To get started with cross-region replication, see [Create cross-region replication for Azure NetApp Files](cross-region-replication-create-peering.md).
+To learn more, see [How volumes and snapshots are replicated cross-region for DR](snapshots-introduction.md#how-volumes-and-snapshots-are-replicated-for-disaster-recovery-and-business-continuity). To get started with cross-region replication, see [Create cross-region replication for Azure NetApp Files](cross-region-replication-create-peering.md).

 ## Cross-zone replication

articles/azure-netapp-files/manage-cool-access.md

Lines changed: 0 additions & 1 deletion

@@ -28,7 +28,6 @@ The storage with cool access feature provides options for the “coolness period
 * To prevent data retrieval from the cool tier to the hot tier during sequential read operations (for example, antivirus or other file scanning operations), set the cool access retrieval policy to **Default** or **Never**. For more information, see [Enable cool access on a new volume](#enable-cool-access-on-a-new-volume).
 * After the capacity pool is configured with the option to support cool access volumes, the setting can't be disabled at the _capacity pool_ level. You can turn on or turn off the cool access setting at the _volume_ level anytime. Turning off the cool access setting at the volume level stops further tiering of data.
 * Files moved to the cool tier remain there after you disable cool access on a volume. You must perform an I/O operation on _each_ file to return it to the warm tier.
-* You can't use [large volumes](large-volumes-requirements-considerations.md) with cool access.
 * For the maximum number of volumes supported for cool access per subscription per region, see [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits).
 * Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) and [cross-zone replication](cross-zone-replication-introduction.md):
   * The cool access setting on the destination volume is updated automatically to match the source volume whenever the setting is changed on the source volume or during authorization. The setting is also updated automatically when a reverse resync of the replication is performed, but only if the destination volume is in a cool access-enabled capacity pool. Changes to the cool access setting on the destination volume don't affect the setting on the source volume.

articles/azure-netapp-files/snapshots-introduction.md

Lines changed: 1 addition & 1 deletion

@@ -81,7 +81,7 @@ You can use several methods to create and maintain snapshots:
 * Snapshot policies, via the [Azure portal](snapshots-manage-policy.md), [REST API](/rest/api/netapp/snapshotpolicies), [Azure CLI](/cli/azure/netappfiles/snapshot/policy), or [PowerShell](/powershell/module/az.netappfiles/new-aznetappfilessnapshotpolicy) tools
 * Application-consistent snapshot tooling such as [AzAcSnap](azacsnap-introduction.md) or third-party solutions

-## How volumes and snapshots are replicated cross-region for disaster recovery and business continuity
+## How volumes and snapshots are replicated for disaster recovery and business continuity

 Azure NetApp Files supports [cross-region replication](cross-region-replication-introduction.md) for disaster recovery (DR) purposes and [cross-zone replication](cross-zone-replication-introduction.md) for business continuity. Azure NetApp Files cross-region replication and cross-zone replication both use SnapMirror technology: only changed blocks are sent over the network in a compressed, efficient format. After replication is initiated between volumes, the entire volume contents (that is, the actual stored data blocks) are transferred only once. This operation is called a *baseline transfer*. After the initial transfer, only changed blocks (as captured in snapshots) are transferred. The result is an asynchronous 1:1 replica of the source volume, including all snapshots. This behavior follows a full and incremental-forever replication mechanism. It minimizes the amount of data required for replication, saving data transfer costs, and shortens replication time. You can achieve a smaller Recovery Point Objective (RPO), because more snapshots can be created and transferred more frequently with minimal data transfers. Further, it removes the need for host-based replication mechanisms, avoiding virtual machine and software license costs.
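The baseline-plus-incremental-forever transfer described above can be sketched conceptually. This is a minimal illustration only: the block IDs, content hashes, and function name are invented for the example and are not NetApp's implementation.

```python
def blocks_to_transfer(source_snapshots, last_replicated):
    """Return the set of block IDs to send in the next replication update.

    source_snapshots: list of snapshots, oldest first, each a dict mapping
    block_id -> content_hash. If nothing has been replicated yet, the whole
    newest snapshot is sent (the baseline transfer); afterwards only blocks
    whose content changed since the last replicated snapshot are sent.
    """
    newest = source_snapshots[-1]
    if last_replicated is None:
        return set(newest)                      # baseline: every block, once
    previous = source_snapshots[last_replicated]
    return {b for b, h in newest.items()
            if previous.get(b) != h}            # incremental: changed blocks only

snap0 = {1: "a", 2: "b", 3: "c"}
snap1 = {1: "a", 2: "B", 3: "c", 4: "d"}        # block 2 changed, block 4 added
assert blocks_to_transfer([snap0], None) == {1, 2, 3}   # baseline transfer
assert blocks_to_transfer([snap0, snap1], 0) == {2, 4}  # incremental update
```

Because each update ships only the changed blocks, frequent snapshot transfers stay small, which is what makes the smaller RPO affordable.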

Binary file changed (image, 375 KB)

articles/azure-vmware/toc.yml

Lines changed: 6 additions & 4 deletions

@@ -33,13 +33,15 @@ items:
     href: tutorial-configure-networking.md
   - name: 4 - Access a private cloud
     href: tutorial-access-private-cloud.md
-  - name: 5 - Create an NSX network segment
+  - name: 5 - Create an NSX Tier-1 Gateway
+    href: tutorial-nsx-tier-1-gateway.md
+  - name: 6 - Create an NSX network segment
     href: tutorial-nsx-t-network-segment.md
-  - name: 6 - Peer on-premises to private cloud
+  - name: 7 - Peer on-premises to private cloud
     href: tutorial-expressroute-global-reach-private-cloud.md
-  - name: 7 - Scale a private cloud
+  - name: 8 - Scale a private cloud
     href: tutorial-scale-private-cloud.md
-  - name: 8 - Delete a private cloud
+  - name: 9 - Delete a private cloud
     href: tutorial-delete-private-cloud.md
  - name: Cost optimization
    items:
Lines changed: 64 additions & 0 deletions

@@ -0,0 +1,64 @@
---
title: Tutorial - Create an NSX Tier-1 Gateway
description: Learn how to create a Tier-1 Gateway.
ms.topic: tutorial
ms.service: azure-vmware
ms.date: 12/11/2024
ms.custom: engagement-fy25
---

# Tutorial: Create an NSX Tier-1 Gateway

After deploying Azure VMware Solution, you can create additional Tier-1 Gateways from the NSX Manager. Once configured, the additional Tier-1 Gateway is visible in the NSX Manager. By default, NSX comes pre-provisioned with an NSX Tier-0 Gateway in **Active/Active** mode and a default Tier-1 Gateway in **Active/Standby** mode.

In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create an additional NSX Tier-1 Gateway in the NSX Manager
> * Configure the High Availability (HA) mode on a Tier-1 Gateway

## Prerequisites

An Azure VMware Solution private cloud with access to the NSX Manager interface. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.

## Use NSX Manager to create a Tier-1 Gateway

A Tier-1 Gateway is typically connected to a Tier-0 Gateway in the northbound direction and to segments in the southbound direction.

1. With the CloudAdmin account, sign in to the NSX Manager.
2. In NSX Manager, select **Networking** > **Tier-1 Gateways**.
3. Select **Add Tier-1 Gateway**.
4. Enter a name for the gateway.
5. Select the **HA Mode** for the Tier-1 Gateway. Choose between **Active/Standby**, **Active/Active**, or **Distributed Only**:

   | HA Mode | Description |
   | :--------- | :------------- |
   | Active/Standby | One active instance and one standby instance. The standby instance takes over if the active instance fails. |
   | Active/Active | Both instances are active and can handle traffic simultaneously. |
   | Distributed Only | No centralized instances; routing is distributed across all transport nodes. |

6. Select a Tier-0 Gateway to connect this Tier-1 Gateway to, creating a multi-tier topology.
7. Select an NSX Edge cluster if you want this Tier-1 Gateway to host stateful services such as NAT, load balancer, or firewall.
8. After you select an NSX Edge cluster, a toggle gives you the option to select NSX Edge nodes.
9. If you selected an NSX Edge cluster, select a failover mode or accept the default.

   | Option | Description |
   | :----- | :---------- |
   | Preemptive | If the preferred NSX Edge node fails and recovers, it preempts its peer and becomes the active node. The peer changes its state to standby. |
   | Non-preemptive | If the preferred NSX Edge node fails and recovers, it checks whether its peer is the active node. If so, the preferred node doesn't preempt its peer and becomes the standby node. This is the default option. |

   :::image type="content" source="media/nsxt/nsx-create-tier-1.png" alt-text="Diagram showing the creation of a new Tier-1 Gateway in NSX Manager." border="false" lightbox="media/nsxt/nsx-create-tier-1.png":::

10. Select **Save**.

## Next steps

In this tutorial, you created an additional Tier-1 Gateway to use in your Azure VMware Solution private cloud.
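The preemptive versus non-preemptive failover behavior described in step 9 can be sketched as a small decision function. The node names and flags here are illustrative only, not an NSX API.

```python
def active_after_recovery(preferred_recovered: bool, preemptive: bool,
                          current_active: str, preferred: str = "edge-1") -> str:
    """Decide which NSX Edge node is active after the preferred node recovers.

    Conceptual sketch only: "edge-1"/"edge-2" are hypothetical node names.
    """
    if not preferred_recovered:
        return current_active      # preferred node still down; nothing changes
    if preemptive:
        return preferred           # preemptive: preferred node reclaims the active role
    return current_active          # non-preemptive: the current active node keeps the role

# Failover moved the active role to edge-2; preferred node edge-1 then recovers:
assert active_after_recovery(True, True, "edge-2") == "edge-1"   # preemptive
assert active_after_recovery(True, False, "edge-2") == "edge-2"  # non-preemptive
```

The non-preemptive default avoids a second traffic disruption when the preferred node comes back, at the cost of not automatically returning to the preferred placement.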

articles/synapse-analytics/get-started-analyze-sql-pool.md

Lines changed: 31 additions & 6 deletions

@@ -4,29 +4,38 @@ description: In this tutorial, use the NYC Taxi sample data to explore SQL pool'
 author: whhender
 ms.author: whhender
 ms.reviewer: whhender, wiassaf
-ms.date: 10/16/2023
+ms.date: 12/11/2024
 ms.service: azure-synapse-analytics
 ms.subservice: sql
 ms.topic: tutorial
 ms.custom: engagement-fy23
 ---

-# Analyze data with dedicated SQL pools
+# Tutorial: Analyze data with dedicated SQL pools

 In this tutorial, use the NYC Taxi data to explore a dedicated SQL pool's capabilities.

+> [!div class="checklist"]
+> * Deploy a dedicated SQL pool
+> * Load data into the pool
+> * Explore the data you've loaded
+
+## Prerequisites
+
+* This tutorial assumes you've completed the steps in the earlier quickstarts. Specifically, it uses the `contosodatalake` resource created in the [Create a Synapse workspace quickstart](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account).
+
 ## Create a dedicated SQL pool

 1. In Synapse Studio, on the left-side pane, select **Manage** > **SQL pools** under **Analytics pools**.
 1. Select **New**.
 1. For **Dedicated SQL pool name** select `SQLPOOL1`.
 1. For **Performance level** choose **DW100C**.
-1. Select **Review + create** > **Create**. Your dedicated SQL pool will be ready in a few minutes.
+1. Select **Review + create** > **Create**. Your dedicated SQL pool will be ready in a few minutes.

 Your dedicated SQL pool is associated with a SQL database that's also called `SQLPOOL1`.

 1. Navigate to **Data** > **Workspace**.
-1. You should see a database named **SQLPOOL1**. If you do not see it, select **Refresh**.
+1. You should see a database named **SQLPOOL1**. If you don't see it, select **Refresh**.

 A dedicated SQL pool consumes billable resources as long as it's active. You can pause the pool later to reduce costs.
@@ -83,13 +92,20 @@
     ,IDENTITY_INSERT = 'OFF'
 )
 ```
+
+> [!TIP]
+> If you get an error that reads `Login failed for user '<token-identified principal>'`, you need to set your Microsoft Entra ID admin.
+> 1. In the Azure portal, search for your Synapse workspace.
+> 1. Under **Settings**, select **Microsoft Entra ID**.
+> 1. Select **Set admin** and set a Microsoft Entra ID admin.
+
 1. Select the **Run** button to execute the script.
 1. This script finishes in less than 60 seconds. It loads 2 million rows of NYC Taxi data into a table called `dbo.NYCTaxiTripSmall`.

 ## Explore the NYC Taxi data in the dedicated SQL pool

 1. In Synapse Studio, go to the **Data** hub.
-1. Go to **SQLPOOL1** > **Tables**.
+1. Go to **SQLPOOL1** > **Tables**. (If you don't see it in the menu, refresh the page.)
 1. Right-click the **dbo.NYCTaxiTripSmall** table and select **New SQL Script** > **Select TOP 100 Rows**.
 1. Wait while a new SQL script is created and runs.
 1. At the top of the SQL script, **Connect to** is automatically set to the SQL pool called **SQLPOOL1**.
@@ -110,7 +126,16 @@ A dedicated SQL pool consumes billable resources as long as it's active. You can

 This query creates a table `dbo.PassengerCountStats` with aggregate data from the `trip_distance` field, then queries the new table. The data shows how the total trip distances and average trip distance relate to the number of passengers.
 1. In the SQL script result window, change the **View** to **Chart** to see a visualization of the results as a line chart. Change **Category column** to `PassengerCount`.
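The grouping the query performs can be illustrated in plain Python. The column names follow the tutorial (`trip_distance`, `PassengerCount`), but the sample rows below are made-up values, not actual NYC Taxi data.

```python
from collections import defaultdict

# Hypothetical sample rows: (passenger_count, trip_distance)
trips = [(1, 2.5), (1, 3.5), (2, 4.0), (2, 6.0), (3, 9.0)]

totals = defaultdict(lambda: [0.0, 0])     # passenger_count -> [sum, row count]
for passengers, distance in trips:
    totals[passengers][0] += distance
    totals[passengers][1] += 1

# Mirrors SUM(trip_distance) and AVG(trip_distance) ... GROUP BY PassengerCount
stats = {p: {"SumTripDistance": s, "AvgTripDistance": s / n}
         for p, (s, n) in totals.items()}

assert stats[1] == {"SumTripDistance": 6.0, "AvgTripDistance": 3.0}
assert stats[2] == {"SumTripDistance": 10.0, "AvgTripDistance": 5.0}
```

Each key in `stats` corresponds to one row of `dbo.PassengerCountStats`, which is what the chart view then plots against `PassengerCount`.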
-
+
+## Clean up
+
+Pause your dedicated SQL pool to reduce costs.
+
+1. Navigate to **Manage** in your Synapse workspace.
+1. Select **SQL pools**.
+1. Hover over **SQLPOOL1** and select the **Pause** button.
+1. Confirm to pause.
+
 ## Next step

 > [!div class="nextstepaction"]

articles/synapse-analytics/spark/apache-spark-pool-configurations.md

Lines changed: 4 additions & 4 deletions

@@ -1,14 +1,14 @@
 ---
 title: Apache Spark pool concepts
 description: Introduction to Apache Spark pool sizes and configurations in Azure Synapse Analytics.
-ms.topic: conceptual
+ms.topic: concept-article
 ms.service: azure-synapse-analytics
 ms.subservice: spark
 ms.custom: references_regions
 author: guyhay
 ms.author: guyhay
 ms.reviewer: whhender
-ms.date: 09/07/2022
+ms.date: 12/06/2024
 ---

 # Apache Spark pool configurations in Azure Synapse Analytics

@@ -53,7 +53,7 @@ Autoscale for Apache Spark pools allows automatic scale up and down of compute r
 Apache Spark pools now support elastic pool storage. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach extra disks if needed. Apache Spark pools use temporary disk storage while the pool is instantiated. Spark jobs write shuffle map outputs, shuffle data, and spilled data to local VM disks. Examples of operations that use the local disk are sort, cache, and persist. When temporary VM disk space runs out, Spark jobs can fail with an “Out of Disk Space” error (java.io.IOException: No space left on device). With “Out of Disk Space” errors, much of the burden of preventing job failures shifts to the customer, who must reconfigure the Spark jobs (for example, tweak the number of partitions) or clusters (for example, add more nodes to the cluster). These errors might not be consistent, and the user might end up experimenting heavily by running production jobs. This process can be expensive for the user in multiple dimensions:

 * Wasted time. Customers are required to experiment heavily with job configurations via trial and error and are expected to understand Spark’s internal metrics to make the correct decision.
-* Wasted resources. Since production jobs can process varying amounts of data, Spark jobs can fail non-deterministically if resources aren't over-provisioned. For instance, consider the problem of data skew, which could result in a few nodes requiring more disk space than others. Currently in Synapse, each node in a cluster gets the same size of disk space, and increasing disk space across all nodes isn't an ideal solution and leads to tremendous waste.
+* Wasted resources. Since production jobs can process varying amounts of data, Spark jobs can fail nondeterministically if resources aren't over-provisioned. For instance, consider the problem of data skew, which could result in a few nodes requiring more disk space than others. Currently in Synapse, each node in a cluster gets the same size of disk space, and increasing disk space across all nodes isn't an ideal solution and leads to tremendous waste.
 * Slowdown in job execution. In the hypothetical scenario where we solve the problem by autoscaling nodes (assuming costs aren't an issue to the end customer), adding a compute node is still expensive (takes a few minutes) as opposed to adding storage (takes a few seconds).

 No action is required by you, and you should see fewer job failures as a result.

@@ -65,7 +65,7 @@ No action is required by you, plus you should see fewer job failures as a result
 The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether autoscale is enabled or disabled. This setting can be altered after pool creation, although active sessions will need to be restarted.
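The idle-timeout behavior the automatic pause feature describes can be sketched as a tiny policy object. This is a conceptual illustration only; the class and method names are invented and this is not the Synapse implementation.

```python
import time

class AutoPausePolicy:
    """Illustrative sketch of an idle-timeout auto-pause decision."""

    def __init__(self, idle_minutes: int):
        self.idle_minutes = idle_minutes
        self.last_activity = time.monotonic()

    def record_activity(self) -> None:
        """Call whenever a session submits work to the pool."""
        self.last_activity = time.monotonic()

    def should_pause(self, now: float) -> bool:
        """True once the pool has been idle for the configured period."""
        return (now - self.last_activity) >= self.idle_minutes * 60

pool = AutoPausePolicy(idle_minutes=15)
assert not pool.should_pause(pool.last_activity + 5 * 60)   # 5 idle minutes: keep running
assert pool.should_pause(pool.last_activity + 16 * 60)      # 16 idle minutes: pause
```

Note the decision depends only on idle time, matching the doc's point that auto-pause is independent of autoscale.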

-## Next steps
+## Related content

 * [Azure Synapse Analytics](../index.yml)
 * [Apache Spark Documentation](https://spark.apache.org/docs/3.2.1/)
