
Commit ac32929

Merge pull request #108353 from Kat-Campise/sql_dw_rename
Sql dw rename
2 parents 7367ed6 + efe57ad commit ac32929

13 files changed: +82 -66 lines changed

articles/synapse-analytics/overview-faq.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ A: Azure Synapse has the following capabilities:

 ### Q: How does Azure Synapse Analytics relate to Azure SQL Data Warehouse

-A: Azure Synapse Analytics is an evolution of Azure SQL Data Warehouse into an analytics platform. This platform combines data exploration, ingestion, transformation, preparation, and serving analytics layer.
+A: Azure Synapse Analytics is an evolution of Azure SQL Data Warehouse into an analytics platform, which includes SQL pool as the data warehouse solution. This platform combines data exploration, ingestion, transformation, preparation, and serving analytics layer.

 ## Use cases

articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md

Lines changed: 4 additions & 4 deletions
@@ -80,7 +80,7 @@ The following image is an example of how you can write a PySpark query using the

 You cannot reference data or variables directly across different languages in a Synapse Studio notebook. In Spark, a temporary table can be referenced across languages. Here is an example of how to read a `Scala` DataFrame in `PySpark` and `SparkSQL` using a Spark temp table as a workaround.

-1. In Cell 1, read a DataFrame from SQL DW connector using Scala and create a temporary table.
+1. In Cell 1, read a DataFrame from SQL pool connector using Scala and create a temporary table.

 ```scala
 %%scala
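This hunk stops at the top of the Scala cell; the walkthrough's later cells read the registered temp table back from the other languages. A minimal sketch of the Spark SQL side, assuming the Scala cell registers a temp view named `mydataframetable` (a hypothetical name) and that the notebook supports the `%%sql` cell magic:

```sql
%%sql
-- Hypothetical cell: read the temp table registered by the Scala cell above.
-- The name mydataframetable is an assumption; use whatever name the Scala cell
-- passed to createOrReplaceTempView.
SELECT * FROM mydataframetable
```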
@@ -207,7 +207,7 @@ You can specify the timeout duration, the number, and the size of executors to g

 ## Bring data to a notebook

-You can load data from Azure Blob Storage, Azure Data Lake Store Gen 2, and SQL Data Warehouse as shown in the code samples below.
+You can load data from Azure Blob Storage, Azure Data Lake Store Gen 2, and SQL pool as shown in the code samples below.

 ### Read a CSV from Azure Data Lake Store Gen2 as a Spark DataFrame

@@ -252,7 +252,7 @@ df = spark.read.option("header", "true") \

 ### Read data from the primary storage account

-You can access data in the primary storage account directly. There's no need to provide the secret keys. In Data Explorer, right-click on a file and select **New notebook** to see a new notebook with data extractor auto-generated.
+You can access data in the primary storage account directly. There's no need to provide the secret keys. In Data Explorer, right-click on a file and select **New notebook** to see a new notebook with data extractor autogenerated.

 ![data-to-cell](./media/apache-spark-development-using-notebooks/synapse-data-to-cell.png)

@@ -309,7 +309,7 @@ displayHTML(html)

 ## Save notebooks

-You have the option to save a single notebook or all notebooks in your workspace.
+You can save a single notebook or all notebooks in your workspace.

 1. To save changes you made to a single notebook, select the **Publish** button on the notebook command bar.

articles/synapse-analytics/sql-analytics/data-loading-best-practices.md

Lines changed: 6 additions & 5 deletions
@@ -59,7 +59,8 @@ Run loads under static rather than dynamic resource classes. Using the static re

 ## Allowing multiple users to load

-There is often a need to have multiple users load data into a data warehouse. Loading with the [CREATE TABLE AS SELECT (Transact-SQL)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) requires CONTROL permissions of the database. The CONTROL permission gives control access to all schemas. You might not want all loading users to have control access on all schemas. To limit permissions, use the DENY CONTROL statement.
+There is often a need to have multiple users load data into a data warehouse. Loading with the [CREATE TABLE AS SELECT (Transact-SQL)](https://docs.microsoft.com/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=aps-pdw-2016-au7
+) requires CONTROL permissions of the database. The CONTROL permission gives control access to all schemas. You might not want all loading users to have control access on all schemas. To limit permissions, use the DENY CONTROL statement.

 For example, consider database schemas, schema_A for dept A, and schema_B for dept B. Let database users user_A and user_B be users for PolyBase loading in dept A and B, respectively. They both have been granted CONTROL database permissions. The creators of schema A and B now lock down their schemas using DENY:
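The DENY statements themselves fall outside this hunk. A minimal sketch of the lockdown the paragraph describes, using its hypothetical schema_A/schema_B and user_A/user_B names:

```sql
-- Sketch only: each schema owner denies CONTROL on their schema to the other
-- department's loading user.
DENY CONTROL ON SCHEMA :: schema_A TO user_B;
DENY CONTROL ON SCHEMA :: schema_B TO user_A;
```

Because an explicit DENY takes precedence over the broader GRANT, each loading user keeps CONTROL only on the schemas that have not been locked down.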

@@ -84,7 +85,7 @@ Columnstore indexes require large amounts of memory to compress data into high-q

 - Load enough rows to completely fill new rowgroups. During a bulk load, every 1,048,576 rows get compressed directly into the columnstore as a full rowgroup. Loads with fewer than 102,400 rows send the rows to the deltastore where rows are held in a b-tree index. If you load too few rows, they might all go to the deltastore and not get compressed immediately into columnstore format.

 ## Increase batch size when using SQLBulkCopy API or BCP
-As mentioned before, loading with PolyBase will provide the highest throughput with SQL Data Warehouse. If you cannot use PolyBase to load and must use the SQLBulkCopy API (or BCP) you should consider increasing batch size for better throughput - a good rule of thumb is a batch size between 100K to 1M rows.
+As mentioned before, loading with PolyBase will provide the highest throughput with Synapse SQL pool. If you cannot use PolyBase to load and must use the SQLBulkCopy API (or BCP) you should consider increasing batch size for better throughput - a good rule of thumb is a batch size between 100K to 1M rows.

 ## Handling loading failures

@@ -94,13 +95,13 @@ To fix the dirty records, ensure that your external table and external file form

 ## Inserting data into a production table

-A one-time load to a small table with an [INSERT statement](/sql/t-sql/statements/insert-transact-sql), or even a periodic reload of a look-up might perform good enough with a statement like `INSERT INTO MyLookup VALUES (1, 'Type 1')`. However, singleton inserts are not as efficient as performing a bulk load.
+A one-time load to a small table with an [INSERT statement](https://docs.microsoft.com/sql/t-sql/statements/insert-transact-sql?view=sql-server-ver15), or even a periodic reload of a look-up might perform good enough with a statement like `INSERT INTO MyLookup VALUES (1, 'Type 1')`. However, singleton inserts are not as efficient as performing a bulk load.

 If you have thousands or more single inserts throughout the day, batch the inserts so you can bulk load them. Develop your processes to append the single inserts to a file, and then create another process that periodically loads the file.

 ## Creating statistics after the load

-To improve query performance, it's important to create statistics on all columns of all tables after the first load, or substantial changes occur in the data. This can be done manually or you can enable [auto-create statistics](https://docs.microsoft.com/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics#automatic-creation-of-statistic).
+To improve query performance, it's important to create statistics on all columns of all tables after the first load, or substantial changes occur in the data. This can be done manually or you can enable [auto-create statistics](../sql-data-warehouse/sql-data-warehouse-tables-statistics.md?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json).

 For a detailed explanation of statistics, see [Statistics](development-tables-statistics.md). The following example shows how to manually create statistics on five columns of the Customer_Speed table.
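The five-column example sits outside this hunk. A minimal sketch of the pattern it describes, with placeholder statistic and column names on the Customer_Speed table:

```sql
-- Sketch only: one single-column statistic per column. The statistic and
-- column names are placeholders, not the article's actual example.
CREATE STATISTICS stats_col1 ON Customer_Speed (col1);
CREATE STATISTICS stats_col2 ON Customer_Speed (col2);
CREATE STATISTICS stats_col3 ON Customer_Speed (col3);
CREATE STATISTICS stats_col4 ON Customer_Speed (col4);
CREATE STATISTICS stats_col5 ON Customer_Speed (col5);
```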

@@ -118,7 +119,7 @@ It is good security practice to change the access key to your blob storage on a

 To rotate Azure Storage account keys:

-For each storage account whose key has changed, issue [ALTER DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/alter-database-scoped-credential-transact-sql).
+For each storage account whose key has changed, issue [ALTER DATABASE SCOPED CREDENTIAL](https://docs.microsoft.com/sql/t-sql/statements/alter-database-scoped-credential-transact-sql?view=azure-sqldw-latest).

 Example:
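The example itself falls outside this hunk. A minimal sketch of the credential update the paragraph describes, with placeholder credential and identity names:

```sql
-- Sketch only: re-point the existing credential at the regenerated storage key.
-- The credential name, identity, and key value are placeholders.
ALTER DATABASE SCOPED CREDENTIAL my_credential
WITH IDENTITY = 'my_identity',
     SECRET = '<regenerated-storage-account-key>';
```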
