
Commit e23a8b0

rebranded articles.

1 parent 2ca7f9e commit e23a8b0

7 files changed: +43 -36 lines changed

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md

Lines changed: 11 additions & 9 deletions
@@ -15,10 +15,11 @@ ms.custom: azure-synapse

# Monitoring resource utilization and query activity in Azure Synapse Analytics

- Azure Synapse Analytics provides a rich monitoring experience within the Azure portal to surface insights regarding your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs. The portal also enables you to integrate with other Azure monitoring services such as Azure Monitor (logs) with Log analytics to provide a holistic monitoring experience for not only your data warehouse but also your entire Azure analytics platform for an integrated monitoring experience. This documentation describes what monitoring capabilities are available to optimize and manage your analytics platform with SQL Analytics.
+ Azure Synapse Analytics provides a rich monitoring experience within the Azure portal to surface insights regarding your data warehouse workload. The Azure portal is the recommended tool for monitoring your data warehouse, as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs. The portal also enables you to integrate with other Azure monitoring services, such as Azure Monitor (logs) with Log Analytics, to provide a holistic monitoring experience for not only your data warehouse but your entire Azure analytics platform. This documentation describes what monitoring capabilities are available to optimize and manage your analytics platform.

## Resource utilization

- The following metrics are available in the Azure portal for SQL Analytics. These metrics are surfaced through [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/platform/data-collection#metrics).
+ The following metrics are available in the Azure portal for Synapse SQL. These metrics are surfaced through [Azure Monitor](../../azure-monitor/platform/data-collection.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json#metrics).

| Metric Name | Description | Aggregation Type |
@@ -43,21 +44,22 @@ The following metrics are available in the Azure portal for SQL Analytics. These

Things to consider when viewing metrics and setting alerts:

- - DWU used represents only a **high-level representation of usage** across the SQL pool and is not meant to be a comprehensive indicator of utilization. To determine whether to scale up or down, consider all factors which can be impacted by DWU such as concurrency, memory, tempdb, and adaptive cache capacity. We recommend [running your workload at different DWU settings](https://docs.microsoft.com/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview#finding-the-right-size-of-data-warehouse-units) to determine what works best to meet your business objectives.
+ - DWU used represents only a **high-level representation of usage** across the SQL pool and is not meant to be a comprehensive indicator of utilization. To determine whether to scale up or down, consider all factors that can be impacted by DWU, such as concurrency, memory, tempdb, and adaptive cache capacity. We recommend [running your workload at different DWU settings](sql-data-warehouse-manage-compute-overview.md#finding-the-right-size-of-data-warehouse-units) to determine what works best to meet your business objectives.
- Failed and successful connections are reported for a particular data warehouse - not for the logical server.
- Memory percentage reflects utilization even if the data warehouse is in an idle state - it does not reflect active workload memory consumption. Use and track this metric along with others (tempdb, gen2 cache) to make a holistic decision on whether scaling for additional cache capacity will increase workload performance to meet your requirements.

## Query activity
- For a programmatic experience when monitoring SQL Analytics via T-SQL, the service provides a set of Dynamic Management Views (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
- To view the list of DMVs that SQL Analytics provides, refer to this [documentation](https://docs.microsoft.com/azure/sql-data-warehouse/sql-data-warehouse-reference-tsql-system-views#sql-data-warehouse-dynamic-management-views-dmvs).
+ For a programmatic experience when monitoring Synapse SQL via T-SQL, the service provides a set of Dynamic Management Views (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
+ To view the list of DMVs that apply to Synapse SQL, refer to this [documentation](sql-data-warehouse-reference-tsql-system-views.md#sql-data-warehouse-dynamic-management-views-dmvs).
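To make the DMV-based approach above concrete, here is a minimal sketch of an active-query check. `sys.dm_pdw_exec_requests` is a documented dedicated SQL pool DMV; the column selection and status filter are illustrative rather than taken from this article.

```sql
-- A minimal sketch: list the longest-running requests that are still active.
-- total_elapsed_time is reported in milliseconds.
SELECT TOP 10
    request_id,
    session_id,
    [status],
    submit_time,
    total_elapsed_time,
    command
FROM sys.dm_pdw_exec_requests
WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY total_elapsed_time DESC;
```

From here, a `request_id` of interest can be fed into other DMVs, such as `sys.dm_pdw_request_steps`, to drill into the distributed plan.
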
## Metrics and diagnostics logging
- Both metrics and logs can be exported to Azure Monitor, specifically the [Azure Monitor logs](https://docs.microsoft.com/azure/log-analytics/log-analytics-overview) component and can be programmatically accessed through [log queries](https://docs.microsoft.com/azure/log-analytics/log-analytics-tutorial-viewdata). The log latency for SQL Analytics is about 10-15 minutes. For more details on the factors impacting latency, visit the following documentation.
+ Both metrics and logs can be exported to Azure Monitor, specifically the [Azure Monitor logs](../../azure-monitor/log-query/log-query-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) component, and can be programmatically accessed through [log queries](../../azure-monitor/log-query/get-started-portal.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The log latency for Synapse SQL is about 10-15 minutes. For more details on the factors impacting latency, visit the following documentation.

## Next steps

The following How-to guides describe common scenarios and use cases when monitoring and managing your data warehouse:

- - [Monitor your data warehouse workload with DMVs](https://docs.microsoft.com/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor)
+ - [Monitor your data warehouse workload with DMVs](sql-data-warehouse-manage-monitor.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ At this point, you have a simple environment where any check-in to your source c

## Next steps

- - Explore [SQL Analytics MPP architecture](massively-parallel-processing-mpp-architecture.md)
+ - Explore [Synapse SQL pool MPP architecture](massively-parallel-processing-mpp-architecture.md)
- Quickly [create a SQL pool](create-data-warehouse-portal.md)
- [Load sample data](load-data-from-azure-blob-storage-using-polybase.md)
- Explore [Videos](/azure/sql-data-warehouse/sql-data-warehouse-videos)

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-best-practices-transactions.md

Lines changed: 18 additions & 13 deletions
@@ -1,6 +1,6 @@
---
title: Optimizing transactions
- description: Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
+ description: Learn how to optimize the performance of your transactional code in Synapse SQL while minimizing risk for long rollbacks.
services: synapse-analytics
author: XiaoyuMSFT
manager: craigg
@@ -13,19 +13,22 @@ ms.reviewer: igorstan
ms.custom: seo-lt-2019, azure-synapse
---

- # Optimizing transactions in SQL Analytics
- Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
+ # Optimizing transactions in Synapse SQL
+ Learn how to optimize the performance of your transactional code in Synapse SQL while minimizing risk for long rollbacks.

## Transactions and logging

- Transactions are an important component of a relational database engine. SQL Analytics uses transactions during data modification. These transactions can be explicit or implicit. Single INSERT, UPDATE, and DELETE statements are all examples of implicit transactions. Explicit transactions use BEGIN TRAN, COMMIT TRAN, or ROLLBACK TRAN. Explicit transactions are typically used when multiple modification statements need to be tied together in a single atomic unit.
- SQL Analytics commits changes to the database using transaction logs. Each distribution has its own transaction log. Transaction log writes are automatic. There is no configuration required. However, whilst this process guarantees the write it does introduce an overhead in the system. You can minimize this impact by writing transactionally efficient code. Transactionally efficient code broadly falls into two categories.
+ Transactions are an important component of a relational database engine. Transactions are used during data modification. These transactions can be explicit or implicit. Single INSERT, UPDATE, and DELETE statements are all examples of implicit transactions. Explicit transactions use BEGIN TRAN, COMMIT TRAN, or ROLLBACK TRAN, and are typically used when multiple modification statements need to be tied together in a single atomic unit (a brief sketch appears after the list below).
+ Changes to the database are tracked using transaction logs. Each distribution has its own transaction log. Transaction log writes are automatic, with no configuration required. However, while this process guarantees the write, it does introduce overhead in the system. You can minimize this impact by writing transactionally efficient code, which broadly falls into three categories:

* Use minimal logging constructs whenever possible
* Process data using scoped batches to avoid singular long running transactions
* Adopt a partition switching pattern for large modifications to a given partition

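As a brief illustration of the explicit form described above, the following wraps two dependent modifications into a single atomic unit. The table and column names are hypothetical, not taken from this article.

```sql
-- A minimal sketch: both updates commit or roll back together.
-- dbo.Accounts, Balance, and AccountId are illustrative names.
BEGIN TRAN;

    UPDATE dbo.Accounts
    SET Balance = Balance - 100
    WHERE AccountId = 1;

    UPDATE dbo.Accounts
    SET Balance = Balance + 100
    WHERE AccountId = 2;

COMMIT TRAN;
```
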
## Minimal vs. full logging
Unlike fully logged operations, which use the transaction log to keep track of every row change, minimally logged operations keep track of extent allocations and meta-data changes only. Therefore, minimal logging involves logging only the information that is required to roll back the transaction after a failure, or for an explicit request (ROLLBACK TRAN). As much less information is tracked in the transaction log, a minimally logged operation performs better than a similarly sized fully logged operation. Furthermore, because fewer writes go to the transaction log, a much smaller amount of log data is generated, making it more I/O efficient.
The transaction safety limits only apply to fully logged operations.
@@ -36,6 +39,7 @@ The transaction safety limits only apply to fully logged operations.
>

## Minimally logged operations

The following operations are capable of being minimally logged:
* CREATE TABLE AS SELECT ([CTAS](sql-data-warehouse-develop-ctas.md))
@@ -73,14 +77,13 @@ CTAS and INSERT...SELECT are both bulk load operations. However, both are influe
It is worth noting that any writes to update secondary or non-clustered indexes will always be fully logged operations.

> [!IMPORTANT]
- > A SQL Analytics database has 60 distributions. Therefore, assuming all rows are evenly distributed and landing in a single partition, your batch will need to contain 6,144,000 rows or larger to be minimally logged when writing to a Clustered Columnstore Index. If the table is partitioned and the rows being inserted span partition boundaries, then you will need 6,144,000 rows per partition boundary assuming even data distribution. Each partition in each distribution must independently exceed the 102,400 row threshold for the insert to be minimally logged into the distribution.
+ > A Synapse SQL pool database has 60 distributions. Therefore, assuming all rows are evenly distributed and landing in a single partition, your batch will need to contain 6,144,000 rows or more (60 distributions × 102,400 rows) to be minimally logged when writing to a Clustered Columnstore Index. If the table is partitioned and the rows being inserted span partition boundaries, then you will need 6,144,000 rows per partition boundary, assuming even data distribution. Each partition in each distribution must independently exceed the 102,400-row threshold for the insert to be minimally logged into the distribution.
>

Loading data into a non-empty table with a clustered index can often involve a mixture of fully logged and minimally logged rows. A clustered index is a balanced tree (b-tree) of pages. If the page being written to already contains rows from another transaction, then these writes will be fully logged. However, if the page is empty, then the write to that page will be minimally logged.

## Optimizing deletes
- DELETE is a fully logged operation. If you need to delete a large amount of data in a table or a partition, it often makes more sense to `SELECT` the data you wish to keep, which can be run as a minimally logged operation. To select the data, create a new table with [CTAS](sql-data-warehouse-develop-ctas.md). Once created, use [RENAME](/sql/t-sql/statements/rename-transact-sql) to swap out your old table with the newly created table.
+ DELETE is a fully logged operation. If you need to delete a large amount of data in a table or a partition, it often makes more sense to `SELECT` the data you wish to keep, which can be run as a minimally logged operation. To select the data, create a new table with [CTAS](sql-data-warehouse-develop-ctas.md). Once created, use [RENAME](/sql/t-sql/statements/rename-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest) to swap out your old table with the newly created table.

```sql
-- Delete all sales transactions for Promotions except PromotionKey 2.
@@ -111,7 +114,7 @@ RENAME OBJECT [dbo].[FactInternetSales_d] TO [FactInternetSales];
```

## Optimizing updates
- UPDATE is a fully logged operation. If you need to update a large number of rows in a table or a partition, it can often be far more efficient to use a minimally logged operation such as [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) to do so.
+ UPDATE is a fully logged operation. If you need to update a large number of rows in a table or a partition, it can often be far more efficient to use a minimally logged operation such as [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest) to do so.

In the example below, a full table update has been converted to a CTAS so that minimal logging is possible.

@@ -172,7 +175,7 @@ DROP TABLE [dbo].[FactInternetSales_old]
```

> [!NOTE]
- > Re-creating large tables can benefit from using SQL Analytics workload management features. For more information, see [Resource classes for workload management](resource-classes-for-workload-management.md).
+ > Re-creating large tables can benefit from using Synapse SQL pool workload management features. For more information, see [Resource classes for workload management](resource-classes-for-workload-management.md).
>
>
@@ -400,7 +403,8 @@ END
```

## Pause and scaling guidance
- SQL Analytics lets you [pause, resume, and scale](sql-data-warehouse-manage-compute-overview.md) your SQL pool on demand. When you pause or scale your SQL pool, it is important to understand that any in-flight transactions are terminated immediately; causing any open transactions to be rolled back. If your workload had issued a long running and incomplete data modification prior to the pause or scale operation, then this work will need to be undone. This undoing might impact the time it takes to pause or scale your SQL pool.
+ Synapse SQL lets you [pause, resume, and scale](sql-data-warehouse-manage-compute-overview.md) your SQL pool on demand. When you pause or scale your SQL pool, it is important to understand that any in-flight transactions are terminated immediately, causing any open transactions to be rolled back. If your workload issued a long-running, incomplete data modification before the pause or scale operation, that work will need to be undone. This undoing might impact the time it takes to pause or scale your SQL pool.

> [!IMPORTANT]
> Both `UPDATE` and `DELETE` are fully logged operations and so these undo/redo operations can take significantly longer than equivalent minimally logged operations.
@@ -409,9 +413,10 @@ SQL Analytics lets you [pause, resume, and scale](sql-data-warehouse-manage-comp

The best scenario is to let in-flight data modification transactions complete prior to pausing or scaling the SQL pool. However, this scenario might not always be practical. To mitigate the risk of a long rollback, consider one of the following options:

- * Rewrite long running operations using [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse)
+ * Rewrite long-running operations using [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)
* Break the operation into chunks, operating on a subset of the rows (one possible pattern is sketched after this list)

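For the chunking option, one possible shape is a loop that modifies a bounded key range per transaction, so a pause or scale only ever has to roll back the current batch. This is a sketch under assumed names (`dbo.FactInternetSales`, an integer `OrderKey`, the batch size); it is not an example from the article itself.

```sql
-- A minimal sketch: delete unwanted rows in bounded batches so each
-- transaction stays small and any rollback is short. All names and the
-- batch size are hypothetical.
DECLARE @batch BIGINT = 1000000;
DECLARE @lo    BIGINT = 1;
DECLARE @max   BIGINT;

SELECT @max = MAX(OrderKey) FROM dbo.FactInternetSales;

WHILE @lo <= @max
BEGIN
    DELETE FROM dbo.FactInternetSales
    WHERE OrderKey >= @lo
      AND OrderKey <  @lo + @batch
      AND PromotionKey <> 2;  -- rows we no longer want to keep

    SET @lo = @lo + @batch;
END;
```

Each iteration commits independently, so an interruption loses at most one batch of work rather than the whole operation.
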
## Next steps

- See [Transactions in SQL Analytics](sql-data-warehouse-develop-transactions.md) to learn more about isolation levels and transactional limits. For an overview of other Best Practices, see [SQL Data Warehouse Best Practices](sql-data-warehouse-best-practices.md).
+ See [Transactions in Synapse SQL](sql-data-warehouse-develop-transactions.md) to learn more about isolation levels and transactional limits. For an overview of other best practices, see [SQL Data Warehouse Best Practices](sql-data-warehouse-best-practices.md).