Commit f4f46f1

Merge pull request #223398 from WilliamDAssafMSFT/patch-1
Update sql-data-warehouse-concept-recommendations.md
2 parents: 03bc366 + ee9dd3a

1 file changed: +2 −2 lines changed

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md

Lines changed: 2 additions & 2 deletions
@@ -56,7 +56,7 @@ The following section describes workload-based heuristics you may find in the Az
 Currently Advisor will only show at most four replicated table candidates at once with clustered columnstore indexes prioritizing the highest activity.
 
 > [!IMPORTANT]
-> The replicated table recommendation is not full proof and does not take into account data movement operations. We are working on adding this as a heuristic but in the meantime, you should always validate your workload after applying the recommendation. To learn more about replicated tables, visit the following [documentation](design-guidance-for-replicated-tables.md#what-is-a-replicated-table).
+> The replicated table recommendation is not fool proof and does not take into account data movement operations. We are working on adding this as a heuristic but in the meantime, you should always validate your workload after applying the recommendation. To learn more about replicated tables, visit the following [documentation](design-guidance-for-replicated-tables.md#what-is-a-replicated-table).
 
 
 ## Adaptive (Gen2) cache utilization
@@ -68,4 +68,4 @@ Query performance can degrade when there is high tempdb contention. Tempdb cont
 
 ## Data loading misconfiguration
 
-You should always load data from a storage account in the same region as your dedicated SQL pool to minimize latency. Use the [COPY statement for high throughput data ingestion](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) and split your staged files in your storage account to maximize throughput. If you can't use the COPY statement, you can use the SqlBulkCopy API or bcp with a high batch size for better throughput. See [Best practices for data loading](../sql/data-loading-best-practices.md) for additional data loading guidance.
+You should always load data from a storage account in the same region as your dedicated SQL pool to minimize latency. Use the [COPY statement for high throughput data ingestion](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) and split your staged files in your storage account to maximize throughput. If you can't use the COPY statement, you can use the SqlBulkCopy API or bcp with a high batch size for better throughput. See [Best practices for data loading](../sql/data-loading-best-practices.md) for additional data loading guidance.
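As a rough illustration of the loading guidance in the changed paragraph, a COPY statement against a dedicated SQL pool might look like the sketch below. The storage account, container, target table, and SAS credential are placeholders invented for this example, not names from the commit; the wildcard path is what lets the engine read split staged files in parallel.

```sql
-- Hypothetical example: bulk-load split CSV files staged in same-region storage.
-- <storageaccount> and <sas-token> are placeholders you would substitute.
COPY INTO dbo.StagingSales
FROM 'https://<storageaccount>.blob.core.windows.net/ingest/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>'),
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A',
    FIRSTROW = 2  -- skip the header row in each staged file
);
```

Splitting the staged data into multiple files (rather than one large file) lets the pool ingest them concurrently, which is where the throughput gain comes from.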
