Commit bb927dc

Merge branch 'release-synapse-current' into sql_analytics_naming2

2 parents c300b2f + a308d4c, commit bb927dc

14 files changed: +53 -38 lines changed

articles/sql-database/transparent-data-encryption-byok-azure-sql.md

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@ If the key that is needed for restoring a backup is no longer available to the t

To mitigate it, run the [Get-AzSqlServerKeyVaultKey](/powershell/module/az.sql/get-azsqlserverkeyvaultkey) cmdlet for target SQL Database logical server or [Get-AzSqlInstanceKeyVaultKey](/powershell/module/az.sql/get-azsqlinstancekeyvaultkey) for target managed instance to return the list of available keys and identify the missing ones. To ensure all backups can be restored, make sure the target server for the restore has access to all of keys needed. These keys don't need to be marked as TDE protector.

-To learn more about backup recovery for SQL Database, see [Recover an Azure SQL database](sql-database-recovery-using-backups.md). To learn more about backup recovery for SQL Pool, see [Recover a SQL Pool](../synapse-analytics/sql-data-warehouse/backup-and-restore.md). For SQL Server's native backup/restore with managed instance, see [Quickstart: Restore a database to a Managed Instance](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-get-started-restore)
+To learn more about backup recovery for SQL Database, see [Recover an Azure SQL database](sql-database-recovery-using-backups.md). To learn more about backup recovery for SQL pool, see [Recover a SQL pool](../synapse-analytics/sql-data-warehouse/backup-and-restore.md). For SQL Server's native backup/restore with managed instance, see [Quickstart: Restore a database to a Managed Instance](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-get-started-restore)

Additional consideration for log files: Backed up log files remain encrypted with the original TDE protector, even if it was rotated and the database is now using a new TDE protector. At restore time, both keys will be needed to restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key will be needed at restore time, even if the database has been changed to use service-managed TDE in the meantime.

articles/synapse-analytics/breadcrumb/toc.yml

Lines changed: 8 additions & 0 deletions
@@ -46,4 +46,12 @@
  tocHref: /azure/active-directory
  topicHref: /azure/synapse-analytics/index

+- name: Azure
+  tocHref: /azure/
+  topicHref: /azure/index
+  items:
+  - name: Synapse Analytics
+    tocHref: /azure/cosmos-db
+    topicHref: /azure/synapse-analytics/index
+

articles/synapse-analytics/metadata/database.md

Lines changed: 2 additions & 2 deletions
@@ -91,5 +91,5 @@ Verify the schema for the newly created database in the results.
- [Learn more about Azure Synapse Analytics' shared metadata](overview.md)
- [Learn more about Azure Synapse Analytics' shared metadata Tables](table.md)

-<!-- - [Learn more about the Synchronization with SQL Analytics on-demand](overview.md)
-- [Learn more about the Synchronization with SQL Analytics pools](overview.md)-->
+<!-- - [Learn more about the Synchronization with SQL on-demand](overview.md)
+- [Learn more about the Synchronization with SQL pools](overview.md)-->

articles/synapse-analytics/overview-cheat-sheet.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ The Azure Synapse Analytics cheat sheet will guide you through the basic concept
| Nouns and verbs | What it does |
|:--- |:--- |
| **Synapse Workspace (preview)** | A securable collaboration boundary for doing cloud-based enterprise analytics in Azure. A workspace is deployed in a specific region and has an associated ADLS Gen2 account and file system (for storing temporary data). A workspace is under a resource group. |
-| **SQL Analytics** | Run analytics with pools or with on-demand capabilities. |
+| **Synapse SQL** | Run analytics with pools or with on-demand capabilities. |
| **SQL pool** | 0-to-N SQL provisioned resources with their corresponding databases can be deployed in a workspace. Each SQL pool has an associated database. A SQL pool can be scaled, paused and resumed manually or automatically. A SQL pool can scale from 100 DWU up to 30,000 DWU. |
| **SQL on-demand (preview)** | Distributed data processing system built for large-scale data that lets you run T-SQL queries over data in data lake. It is serverless so you don't need to manage infrastructure. |
|**Apache Spark** | Spark run-time used in a Spark pool. The current version supported is Spark 2.4 with Python 3.6.1, Scala 2.11.12, .NET support for Apache Spark 0.5 and Delta Lake 0.3. |
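
To illustrate the manual scaling mentioned for **SQL pool** above, a minimal sketch follows. The pool name `mySqlPool` and the target service objective are hypothetical; the statement is run against the `master` database of the logical server that hosts the pool.

```sql
-- Sketch only: scale a SQL pool named mySqlPool to a different service objective.
ALTER DATABASE [mySqlPool] MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
```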

articles/synapse-analytics/overview-what-is.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ Azure Synapse removes the traditional technology barriers between using SQL and

Azure Synapse comes built-in with the same Data Integration engine and experiences as Azure Data Factory, allowing you to create rich data pipelines without using a separate orchestration engine.

-* Move data between Synapse and 85+ on-premises data sources
+* Move data between Azure Synapse and 90+ on-premises data sources
* Orchestrate Notebooks, Pipelines, Spark jobs, SQL Scripts, Stored procedures
* Code-Free ETL with Data flow activities

articles/synapse-analytics/security/how-to-set-up-access-control.md

Lines changed: 2 additions & 2 deletions
@@ -109,11 +109,11 @@ Users in each role need to complete the following steps:
| | Step | Workspace admins | Spark admins | SQL admins |
| --- | --- | --- | --- | --- |
| 1 | Upload a parquet file into CNT1 | YES | YES | YES |
-| 2 | Read the parquet file using SQL on demand | YES | NO | YES |
+| 2 | Read the parquet file using SQL on-demand | YES | NO | YES |
| 3 | Create a Spark pool | YES [1] | YES [1] | NO |
| 4 | Reads the parquet file with a Notebook | YES | YES | NO |
| 5 | Create a pipeline from the Notebook and Trigger the pipeline to run now | YES | NO | NO |
-| 6 | Create a SQL Pool and run a SQL script such as &quot;SELECT 1&quot; | YES [1] | NO | YES[1] |
+| 6 | Create a SQL pool and run a SQL script such as &quot;SELECT 1&quot; | YES [1] | NO | YES[1] |

> [!NOTE]
> [1] To create SQL or Spark pools the user must have at least Contributor role on the Synapse workspace.
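
For step 2 in the table, reading the parquet file with SQL on-demand can be sketched as follows. The storage account name and file path are placeholders, not values from the article; only the container name CNT1 comes from the table above.

```sql
-- Sketch only: query a parquet file uploaded to the CNT1 container with SQL on-demand.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/cnt1/sample.parquet',
    FORMAT = 'PARQUET'
) AS [result];
```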

articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ Tools and services you can use to move data to Azure Storage:

- [Azure ExpressRoute](../../expressroute/expressroute-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) service enhances network throughput, performance, and predictability. ExpressRoute is a service that routes your data through a dedicated private connection to Azure. ExpressRoute connections do not route data through the public internet. The connections offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the public internet.
- [AZCopy utility](../../storage/common/storage-choose-data-transfer-solution.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) moves data to Azure Storage over the public internet. This works if your data sizes are less than 10 TB. To perform loads on a regular basis with AZCopy, test the network speed to see if it is acceptable.
-- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with SQL Analytics, see [Loading data for SQL Analytics](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with SQL pool, see [Loading data for SQL pool](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).

## 3. Prepare the data for loading

articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md

Lines changed: 12 additions & 12 deletions
@@ -1,6 +1,6 @@
---
title: Design guidance for replicated tables
-description: Recommendations for designing replicated tables in Synapse SQL
+description: Recommendations for designing replicated tables in Synapse SQL pool
services: synapse-analytics
author: XiaoyuMSFT
manager: craigg

@@ -13,27 +13,27 @@ ms.reviewer: igorstan
ms.custom: seo-lt-2019, azure-synapse
---

-# Design guidance for using replicated tables in SQL Analytics
+# Design guidance for using replicated tables in Synapse SQL pool

-This article gives recommendations for designing replicated tables in your SQL Analytics schema. Use these recommendations to improve query performance by reducing data movement and query complexity.
+This article gives recommendations for designing replicated tables in your Synapse SQL pool schema. Use these recommendations to improve query performance by reducing data movement and query complexity.

> [!VIDEO https://www.youtube.com/embed/1VS_F37GI9U]

## Prerequisites

-This article assumes you are familiar with data distribution and data movement concepts in SQL Analytics.  For more information, see the [architecture](massively-parallel-processing-mpp-architecture.md) article.
+This article assumes you are familiar with data distribution and data movement concepts in SQL pool.  For more information, see the [architecture](massively-parallel-processing-mpp-architecture.md) article.

As part of table design, understand as much as possible about your data and how the data is queried.  For example, consider these questions:

- How large is the table?
- How often is the table refreshed?
-- Do I have fact and dimension tables in a SQL Analytics database?
+- Do I have fact and dimension tables in a SQL pool database?

## What is a replicated table?

A replicated table has a full copy of the table accessible on each Compute node. Replicating a table removes the need to transfer data among Compute nodes before a join or aggregation. Since the table has multiple copies, replicated tables work best when the table size is less than 2 GB compressed. 2 GB is not a hard limit. If the data is static and does not change, you can replicate larger tables.

-The following diagram shows a replicated table that is accessible on each Compute node. In SQL Analytics, the replicated table is fully copied to a distribution database on each Compute node.
+The following diagram shows a replicated table that is accessible on each Compute node. In SQL pool, the replicated table is fully copied to a distribution database on each Compute node.

![Replicated table](./media/design-guidance-for-replicated-tables/replicated-table.png "Replicated table")

@@ -47,8 +47,8 @@ Consider using a replicated table when:
Replicated tables may not yield the best query performance when:

- The table has frequent insert, update, and delete operations. The data manipulation language (DML) operations require a rebuild of the replicated table. Rebuilding frequently can cause slower performance.
-- The SQL Analytics database is scaled frequently. Scaling a SQL Analytics database changes the number of Compute nodes, which incurs rebuilding the replicated table.
-- The table has a large number of columns, but data operations typically access only a small number of columns. In this scenario, instead of replicating the entire table, it might be more effective to distribute the table, and then create an index on the frequently accessed columns. When a query requires data movement, SQL Analytics only moves data for the requested columns.
+- The SQL pool database is scaled frequently. Scaling a SQL pool database changes the number of Compute nodes, which incurs rebuilding the replicated table.
+- The table has a large number of columns, but data operations typically access only a small number of columns. In this scenario, instead of replicating the entire table, it might be more effective to distribute the table, and then create an index on the frequently accessed columns. When a query requires data movement, SQL pool only moves data for the requested columns.

## Use replicated tables with simple query predicates

@@ -119,7 +119,7 @@ We re-created `DimDate` and `DimSalesTerritory` as replicated tables, and ran th

## Performance considerations for modifying replicated tables

-SQL Analytics implements a replicated table by maintaining a master version of the table. It copies the master version to the first distribution database on each Compute node. When there is a change, SQL Analytics first updates the master version, then it rebuilds the tables on each Compute node. A rebuild of a replicated table includes copying the table to each Compute node and then building the indexes. For example, a replicated table on a DW2000c has 5 copies of the data. A master copy and a full copy on each Compute node. All data is stored in distribution databases. SQL Analytics uses this model to support faster data modification statements and flexible scaling operations.
+SQL pool implements a replicated table by maintaining a master version of the table. It copies the master version to the first distribution database on each Compute node. When there is a change, the master version is updated first, then the tables on each Compute node are rebuilt. A rebuild of a replicated table includes copying the table to each Compute node and then building the indexes. For example, a replicated table on a DW2000c has 5 copies of the data. A master copy and a full copy on each Compute node. All data is stored in distribution databases. SQL pool uses this model to support faster data modification statements and flexible scaling operations.

Rebuilds are required after:

@@ -136,7 +136,7 @@ The rebuild does not happen immediately after data is modified. Instead, the reb

### Use indexes conservatively

-Standard indexing practices apply to replicated tables. SQL Analytics rebuilds each replicated table index as part of the rebuild. Only use indexes when the performance gain outweighs the cost of rebuilding the indexes.
+Standard indexing practices apply to replicated tables. SQL pool rebuilds each replicated table index as part of the rebuild. Only use indexes when the performance gain outweighs the cost of rebuilding the indexes.

### Batch data load

@@ -188,7 +188,7 @@ SELECT TOP 1 * FROM [ReplicatedTable]

To create a replicated table, use one of these statements:

-- [CREATE TABLE (SQL Analytics)](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)
-- [CREATE TABLE AS SELECT (SQL Analytics)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)
+- [CREATE TABLE (SQL pool)](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)
+- [CREATE TABLE AS SELECT (SQL pool)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest)

For an overview of distributed tables, see [distributed tables](sql-data-warehouse-tables-distribute.md).
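
As a hedged illustration of the statements linked above: a replicated copy of a dimension table can be created with CREATE TABLE AS SELECT and the `DISTRIBUTION = REPLICATE` option. The target table name is hypothetical; `DimSalesTerritory` is the dimension table mentioned in the performance example earlier in the article.

```sql
-- Sketch only: create a replicated copy of an existing dimension table with CTAS.
-- DISTRIBUTION = REPLICATE places a full copy of the table on each Compute node.
CREATE TABLE [dbo].[DimSalesTerritory_Replicated]
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM [dbo].[DimSalesTerritory];
```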

articles/synapse-analytics/sql-data-warehouse/fivetran-quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ ms.custom: seo-lt-2019, azure-synapse

# Quickstart: Fivetran with data warehouse

-This quickstart describes how to set up a new Fivetran user to work with an Azure Synapse Analytics data warehouse provisioned with a SQL Pool. The article assumes that you have an existing data warehouse.
+This quickstart describes how to set up a new Fivetran user to work with an Azure Synapse Analytics data warehouse provisioned with a SQL pool. The article assumes that you have an existing data warehouse.

## Set up a connection

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ Currently Azure Active Directory users are not shown in SSDT Object Explorer. As

### Find the details

-* The steps to configure and use Azure Active Directory authentication are nearly identical for Azure SQL Database and SQL Analytics in Azure Synapse. Follow the detailed steps in the topic [Connecting to SQL Database or SQL Pool By Using Azure Active Directory Authentication](../../sql-database/sql-database-aad-authentication.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+* The steps to configure and use Azure Active Directory authentication are nearly identical for Azure SQL Database and SQL Analytics in Azure Synapse. Follow the detailed steps in the topic [Connecting to SQL Database or SQL pool By Using Azure Active Directory Authentication](../../sql-database/sql-database-aad-authentication.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
* Create custom database roles and add users to the roles. Then grant granular permissions to the roles. For more information, see [Getting Started with Database Engine Permissions](/sql/relational-databases/security/authentication-access/getting-started-with-database-engine-permissions?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest).

## Next steps
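
The second bullet above describes the custom-role pattern; a minimal T-SQL sketch follows. The role and user names are hypothetical, and the Azure AD user must already exist in the directory.

```sql
-- Sketch only: create a custom database role, add an Azure AD user to it,
-- and grant granular permissions to the role rather than to individual users.
CREATE USER [analyst@contoso.com] FROM EXTERNAL PROVIDER;
CREATE ROLE [report_readers];
ALTER ROLE [report_readers] ADD MEMBER [analyst@contoso.com];
GRANT SELECT ON SCHEMA::dbo TO [report_readers];
```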
