articles/sql-data-warehouse/sql-data-warehouse-develop-best-practices-transactions.md
---
title: Optimizing transactions
description: Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
services: sql-data-warehouse
author: XiaoyuMSFT
manager: craigg
ms.date: 04/19/2018
ms.author: xiaoyul
ms.reviewer: igorstan
ms.custom: seo-lt-2019, azure-synapse
---
# Optimizing transactions in SQL Analytics
Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
## Transactions and logging
Transactions are an important component of a relational database engine. SQL Analytics uses transactions during data modification. These transactions can be explicit or implicit. Single INSERT, UPDATE, and DELETE statements are all examples of implicit transactions. Explicit transactions use BEGIN TRAN, COMMIT TRAN, or ROLLBACK TRAN. Explicit transactions are typically used when multiple modification statements need to be tied together in a single atomic unit.
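As a sketch, an explicit transaction that ties two modification statements into a single atomic unit looks like the following (the table names, columns, and values here are hypothetical):

```sql
-- Hypothetical example: both statements commit or roll back together.
BEGIN TRAN;

    INSERT INTO dbo.FactSales (SaleKey, Amount)
    VALUES (1, 100.00);

    UPDATE dbo.DimAccount
    SET Balance = Balance - 100.00
    WHERE AccountKey = 1;

COMMIT TRAN;
```

If either statement fails before `COMMIT TRAN`, issuing `ROLLBACK TRAN` undoes both modifications.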
SQL Analytics commits changes to the database using transaction logs. Each distribution has its own transaction log. Transaction log writes are automatic; no configuration is required. However, while this process guarantees the write, it does introduce overhead in the system. You can minimize this impact by writing transactionally efficient code, which broadly falls into two categories:
* Use minimal logging constructs whenever possible
* Process data using scoped batches to avoid singular long running transactions
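The second point can be sketched as follows: rather than deleting a year of data in one long-running transaction, scope the work into smaller batches so each commit, and any potential rollback, stays small (the table, column, and date boundaries are hypothetical; choose batch boundaries that suit your data):

```sql
-- Hypothetical sketch: delete one month at a time so each
-- transaction stays small and any rollback stays short.
DECLARE @start date = '2017-01-01';

WHILE @start < '2018-01-01'
BEGIN
    DELETE FROM dbo.FactSales
    WHERE SaleDate >= @start
      AND SaleDate < DATEADD(month, 1, @start);

    SET @start = DATEADD(month, 1, @start);
END;
```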
It is worth noting that any writes to update secondary or non-clustered indexes will always be fully logged operations.
> [!IMPORTANT]
> A SQL Analytics database has 60 distributions. Therefore, assuming all rows are evenly distributed and land in a single partition, your batch needs to contain 6,144,000 rows or more to be minimally logged when writing to a clustered columnstore index. If the table is partitioned and the inserted rows span partition boundaries, you need 6,144,000 rows per partition boundary, assuming even data distribution. Each partition in each distribution must independently exceed the 102,400-row threshold for the insert to be minimally logged into the distribution.
>
>
```sql
DROP TABLE [dbo].[FactInternetSales_old]
```
> [!NOTE]
> Re-creating large tables can benefit from using SQL Analytics workload management features. For more information, see [Resource classes for workload management](resource-classes-for-workload-management.md).
>
>
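For example, assuming a hypothetical `loaduser` login, you can assign a larger resource class with `sp_addrolemember` before rebuilding a big table so the operation gets more memory:

```sql
-- Hypothetical user; adds it to the largerc resource class
-- so the table rebuild runs with a larger memory grant.
EXEC sp_addrolemember 'largerc', 'loaduser';
```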
## Pause and scaling guidance
SQL Analytics lets you [pause, resume, and scale](sql-data-warehouse-manage-compute-overview.md) your SQL pool on demand. When you pause or scale your SQL pool, it is important to understand that any in-flight transactions are terminated immediately, causing any open transactions to be rolled back. If your workload issued a long-running, incomplete data modification before the pause or scale operation, that work has to be undone, which might increase the time it takes to pause or scale your SQL pool.
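Before pausing or scaling, you can check for in-flight activity with the `sys.dm_pdw_exec_requests` dynamic management view, for example:

```sql
-- List running requests so you can let data modifications
-- finish (or end them deliberately) before a pause or scale.
SELECT request_id, status, command, total_elapsed_time
FROM sys.dm_pdw_exec_requests
WHERE status = 'Running';
```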
> [!IMPORTANT]
> Both `UPDATE` and `DELETE` are fully logged operations and so these undo/redo operations can take significantly longer than equivalent minimally logged operations.
>
>
The best scenario is to let in-flight data modification transactions complete before pausing or scaling your SQL pool. However, this scenario might not always be practical. To mitigate the risk of a long rollback, consider one of the following options:
* Rewrite long-running operations using [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse)
* Break the operation into chunks, operating on a subset of the rows
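As a sketch of the first option, a large fully logged `UPDATE` can often be rewritten as a minimally logged CTAS plus a metadata-only table swap (all object names and the distribution column here are hypothetical):

```sql
-- Hypothetical rewrite: build the updated result with CTAS
-- (minimally logged), then swap it in with renames.
CREATE TABLE dbo.FactSales_new
WITH (DISTRIBUTION = HASH(SaleKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT SaleKey,
       Amount * 1.1 AS Amount   -- the "update" applied in the SELECT
FROM dbo.FactSales;

RENAME OBJECT dbo.FactSales TO FactSales_old;
RENAME OBJECT dbo.FactSales_new TO FactSales;
DROP TABLE dbo.FactSales_old;
```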
## Next steps
See [Transactions in SQL Analytics](sql-data-warehouse-develop-transactions.md) to learn more about isolation levels and transactional limits. For an overview of other best practices, see [SQL Data Warehouse best practices](sql-data-warehouse-best-practices.md).