Commit 620ed03

update article10
1 parent cce4ad6 commit 620ed03

File tree

1 file changed (+11, -10 lines)


articles/sql-data-warehouse/sql-data-warehouse-develop-best-practices-transactions.md

Lines changed: 11 additions & 10 deletions
@@ -1,6 +1,6 @@
 ---
 title: Optimizing transactions
-description: Learn how to optimize the performance of your transactional code in Azure SQL Data Warehouse while minimizing risk for long rollbacks.
+description: Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
 services: sql-data-warehouse
 author: XiaoyuMSFT
 manager: craigg
@@ -11,15 +11,16 @@ ms.date: 04/19/2018
 ms.author: xiaoyul
 ms.reviewer: igorstan
 ms.custom: seo-lt-2019
+ms.custom: azure-synapse
 ---
 
-# Optimizing transactions in Azure SQL Data Warehouse
-Learn how to optimize the performance of your transactional code in Azure SQL Data Warehouse while minimizing risk for long rollbacks.
+# Optimizing transactions in SQL Analytics
+Learn how to optimize the performance of your transactional code in SQL Analytics while minimizing risk for long rollbacks.
 
 ## Transactions and logging
-Transactions are an important component of a relational database engine. SQL Data Warehouse uses transactions during data modification. These transactions can be explicit or implicit. Single INSERT, UPDATE, and DELETE statements are all examples of implicit transactions. Explicit transactions use BEGIN TRAN, COMMIT TRAN, or ROLLBACK TRAN. Explicit transactions are typically used when multiple modification statements need to be tied together in a single atomic unit.
+Transactions are an important component of a relational database engine. SQL Analytics uses transactions during data modification. These transactions can be explicit or implicit. Single INSERT, UPDATE, and DELETE statements are all examples of implicit transactions. Explicit transactions use BEGIN TRAN, COMMIT TRAN, or ROLLBACK TRAN. Explicit transactions are typically used when multiple modification statements need to be tied together in a single atomic unit.
 
-Azure SQL Data Warehouse commits changes to the database using transaction logs. Each distribution has its own transaction log. Transaction log writes are automatic. There is no configuration required. However, whilst this process guarantees the write it does introduce an overhead in the system. You can minimize this impact by writing transactionally efficient code. Transactionally efficient code broadly falls into two categories.
+SQL Analytics commits changes to the database using transaction logs. Each distribution has its own transaction log. Transaction log writes are automatic. There is no configuration required. However, whilst this process guarantees the write it does introduce an overhead in the system. You can minimize this impact by writing transactionally efficient code. Transactionally efficient code broadly falls into two categories.
 
 * Use minimal logging constructs whenever possible
 * Process data using scoped batches to avoid singular long running transactions
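The changed paragraph above distinguishes implicit from explicit transactions. As a minimal sketch of the explicit form, using a hypothetical pair of staging tables that are not from the article, tying two modifications into one atomic unit looks like:

```sql
-- Hypothetical example: move order 42 between two staging tables atomically.
-- If anything fails before COMMIT TRAN, ROLLBACK TRAN undoes both statements.
BEGIN TRAN;

DELETE FROM [dbo].[StageInbox]
WHERE [OrderKey] = 42;

INSERT INTO [dbo].[StageProcessed] ([OrderKey], [ProcessedDate])
VALUES (42, GETDATE());

COMMIT TRAN;
```

By contrast, either statement run on its own would be its own implicit transaction, logged and committed independently.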
@@ -73,7 +74,7 @@ CTAS and INSERT...SELECT are both bulk load operations. However, both are influe
 It is worth noting that any writes to update secondary or non-clustered indexes will always be fully logged operations.
 
 > [!IMPORTANT]
-> SQL Data Warehouse has 60 distributions. Therefore, assuming all rows are evenly distributed and landing in a single partition, your batch will need to contain 6,144,000 rows or larger to be minimally logged when writing to a Clustered Columnstore Index. If the table is partitioned and the rows being inserted span partition boundaries, then you will need 6,144,000 rows per partition boundary assuming even data distribution. Each partition in each distribution must independently exceed the 102,400 row threshold for the insert to be minimally logged into the distribution.
+> A SQL Analytics database has 60 distributions. Therefore, assuming all rows are evenly distributed and landing in a single partition, your batch will need to contain 6,144,000 rows or larger to be minimally logged when writing to a Clustered Columnstore Index. If the table is partitioned and the rows being inserted span partition boundaries, then you will need 6,144,000 rows per partition boundary assuming even data distribution. Each partition in each distribution must independently exceed the 102,400 row threshold for the insert to be minimally logged into the distribution.
 >
 >
 
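The 6,144,000-row figure in the note above is simply the product of the two constants the note itself cites. As a quick illustrative check, not part of the article:

```sql
-- 102,400 rows: minimal-logging threshold per rowgroup in one distribution.
-- 60: number of distributions every batch is spread across.
SELECT 60 * 102400 AS MinimallyLoggedBatchSize;  -- 6,144,000 rows
-- If the insert spans N partitions, the requirement multiplies: N * 6,144,000 rows.
```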
@@ -172,7 +173,7 @@ DROP TABLE [dbo].[FactInternetSales_old]
 ```
 
 > [!NOTE]
-> Re-creating large tables can benefit from using SQL Data Warehouse workload management features. For more information, see [Resource classes for workload management](resource-classes-for-workload-management.md).
+> Re-creating large tables can benefit from using SQL Analytics workload management features. For more information, see [Resource classes for workload management](resource-classes-for-workload-management.md).
 >
 >
 
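The context line `DROP TABLE [dbo].[FactInternetSales_old]` in the hunk above is the tail of the article's re-create-with-CTAS pattern. Its overall shape is roughly the following sketch; the distribution column and the `SELECT *` list are placeholders here, not the article's exact listing:

```sql
-- 1. Rebuild the table with a minimally logged CTAS.
CREATE TABLE [dbo].[FactInternetSales_new]
WITH (DISTRIBUTION = HASH([ProductKey]), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM [dbo].[FactInternetSales];

-- 2. Swap old and new with metadata-only renames.
RENAME OBJECT [dbo].[FactInternetSales] TO [FactInternetSales_old];
RENAME OBJECT [dbo].[FactInternetSales_new] TO [FactInternetSales];

-- 3. Remove the original once the swap is verified.
DROP TABLE [dbo].[FactInternetSales_old];
```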
@@ -400,18 +401,18 @@ END
 ```
 
 ## Pause and scaling guidance
-Azure SQL Data Warehouse lets you [pause, resume, and scale](sql-data-warehouse-manage-compute-overview.md) your data warehouse on demand. When you pause or scale your SQL Data Warehouse, it is important to understand that any in-flight transactions are terminated immediately; causing any open transactions to be rolled back. If your workload had issued a long running and incomplete data modification prior to the pause or scale operation, then this work will need to be undone. This undoing might impact the time it takes to pause or scale your Azure SQL Data Warehouse database.
+SQL Analytics lets you [pause, resume, and scale](sql-data-warehouse-manage-compute-overview.md) your SQL pool on demand. When you pause or scale your SQL pool, it is important to understand that any in-flight transactions are terminated immediately; causing any open transactions to be rolled back. If your workload had issued a long running and incomplete data modification prior to the pause or scale operation, then this work will need to be undone. This undoing might impact the time it takes to pause or scale your SQL pool.
 
 > [!IMPORTANT]
 > Both `UPDATE` and `DELETE` are fully logged operations and so these undo/redo operations can take significantly longer than equivalent minimally logged operations.
 >
 >
 
-The best scenario is to let in flight data modification transactions complete prior to pausing or scaling SQL Data Warehouse. However, this scenario might not always be practical. To mitigate the risk of a long rollback, consider one of the following options:
+The best scenario is to let in flight data modification transactions complete prior to pausing or scaling SQL pool. However, this scenario might not always be practical. To mitigate the risk of a long rollback, consider one of the following options:
 
 * Rewrite long running operations using [CTAS](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse)
 * Break the operation into chunks; operating on a subset of the rows

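The "break the operation into chunks" bullet in the hunk above can be sketched as a loop of small, independently committed batches; the table, filter column, and year range here are hypothetical:

```sql
-- Hypothetical chunked delete: each iteration is a separate implicit transaction,
-- so an interrupting pause or scale only has to roll back one small batch.
DECLARE @Year INT = 2010;

WHILE @Year <= 2013
BEGIN
    DELETE FROM [dbo].[FactInternetSales]
    WHERE [OrderYear] = @Year;  -- one year of rows per batch

    SET @Year = @Year + 1;
END;
```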
 ## Next steps
-See [Transactions in SQL Data Warehouse](sql-data-warehouse-develop-transactions.md) to learn more about isolation levels and transactional limits. For an overview of other Best Practices, see [SQL Data Warehouse Best Practices](sql-data-warehouse-best-practices.md).
+See [Transactions in SQL Analytics](sql-data-warehouse-develop-transactions.md) to learn more about isolation levels and transactional limits. For an overview of other Best Practices, see [SQL Data Warehouse Best Practices](sql-data-warehouse-best-practices.md).
