Commit e509b95

renaming SQL Analytics to Synapse SQL
1 parent cb980e6

3 files changed (+4 −4 lines)

articles/synapse-analytics/metadata/database.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -91,5 +91,5 @@ Verify the schema for the newly created database in the results.
 - [Learn more about Azure Synapse Analytics' shared metadata](overview.md)
 - [Learn more about Azure Synapse Analytics' shared metadata Tables](table.md)
 
-<!-- - [Learn more about the Synchronization with SQL Analytics on-demand](overview.md)
-- [Learn more about the Synchronization with SQL Analytics pools](overview.md)-->
+<!-- - [Learn more about the Synchronization with SQL on-demand](overview.md)
+- [Learn more about the Synchronization with SQL pools](overview.md)-->
```

articles/synapse-analytics/overview-cheat-sheet.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -26,7 +26,7 @@ The Azure Synapse Analytics cheat sheet will guide you through the basic concept
 | Nouns and verbs | What it does |
 |:--- |:--- |
 | **Synapse Workspace (preview)** | A securable collaboration boundary for doing cloud-based enterprise analytics in Azure. A workspace is deployed in a specific region and has an associated ADLS Gen2 account and file system (for storing temporary data). A workspace is under a resource group. |
-| **SQL Analytics** | Run analytics with pools or with on-demand capabilities. |
+| **Synapse SQL** | Run analytics with pools or with on-demand capabilities. |
 | **SQL pool** | 0-to-N SQL provisioned resources with their corresponding databases can be deployed in a workspace. Each SQL pool has an associated database. A SQL pool can be scaled, paused and resumed manually or automatically. A SQL pool can scale from 100 DWU up to 30,000 DWU. |
 | **SQL on-demand (preview)** | Distributed data processing system built for large-scale data that lets you run T-SQL queries over data in data lake. It is serverless so you don't need to manage infrastructure. |
 |**Apache Spark** | Spark run-time used in a Spark pool. The current version supported is Spark 2.4 with Python 3.6.1, Scala 2.11.12, .NET support for Apache Spark 0.5 and Delta Lake 0.3. |
```
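Not part of this commit, but as context for the **SQL pool** row in the diff above: manual scaling of a SQL pool can be done in T-SQL by changing its service objective. This is a minimal sketch; the pool name `mySqlPool` and the `DW1000c` target are illustrative assumptions.

```sql
-- Minimal sketch (illustrative names): scale an existing SQL pool by changing
-- its service objective (DWU level). Run against the master database of the
-- logical server that hosts the pool.
ALTER DATABASE mySqlPool
MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
```

Pause and resume are typically driven through the portal, PowerShell, or the REST API rather than T-SQL.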

articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -63,7 +63,7 @@ Tools and services you can use to move data to Azure Storage:
 
 - [Azure ExpressRoute](../../expressroute/expressroute-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) service enhances network throughput, performance, and predictability. ExpressRoute is a service that routes your data through a dedicated private connection to Azure. ExpressRoute connections do not route data through the public internet. The connections offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the public internet.
 - [AZCopy utility](../../storage/common/storage-choose-data-transfer-solution.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) moves data to Azure Storage over the public internet. This works if your data sizes are less than 10 TB. To perform loads on a regular basis with AZCopy, test the network speed to see if it is acceptable.
-- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with SQL Analytics, see [Loading data for SQL Analytics](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with SQL pool, see [Loading data for SQL pool](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
 
 ## 3. Prepare the data for loading
 
```
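Beyond the rename, the article changed above covers loading staged data from Azure Storage into a SQL pool. As a hedged sketch of that step (not part of the commit), the T-SQL below uses the COPY statement to load CSV files; the table name, storage URL, and SAS credential are illustrative assumptions.

```sql
-- Minimal sketch (illustrative names): load staged CSV files from Azure Storage
-- into a dedicated SQL pool table with the COPY statement.
COPY INTO dbo.StagingSales
FROM 'https://myaccount.blob.core.windows.net/staging/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    FIRSTROW = 2,   -- skip the header row
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
);
```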