articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md (1 addition, 1 deletion)
@@ -153,7 +153,7 @@ Netezza implements some database objects that aren't directly supported in Azure
 - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`.
 - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause.
 
-You can find out which columns have zone maps by using the nz_zonemap utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
+You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
 
 - Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
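
For illustration only: in a Synapse dedicated SQL pool, table partitioning combined with a clustered columnstore index gives a scan-pruning effect similar to Netezza zone maps and CBTs. This is a minimal sketch; the table, column names, and boundary values are hypothetical, not taken from the migrated articles.

```sql
-- Hypothetical Synapse dedicated SQL pool fact table. Partitioning on
-- the date key lets the engine skip whole partitions during a scan,
-- much as Netezza zone maps skip extents.
CREATE TABLE dbo.FactSales
(
    SaleDateKey int           NOT NULL,  -- e.g. 20240101
    StoreKey    int           NOT NULL,
    Amount      decimal(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (StoreKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (SaleDateKey RANGE RIGHT FOR VALUES (20230101, 20240101, 20250101))
);

-- A filter on the partitioning column scans only the matching
-- partitions (partition elimination):
SELECT SUM(Amount)
FROM dbo.FactSales
WHERE SaleDateKey >= 20240101 AND SaleDateKey < 20250101;
```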
articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md (1 addition, 1 deletion)
@@ -79,7 +79,7 @@ Netezza implements some database objects that aren't directly supported in Azure
 - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`.
 - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause.
 
-You can find out which columns have zone maps by using the nz_zonemap utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
+You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
 
 - Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
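
As a complementary sketch for the `ORDER BY` point above: Synapse dedicated SQL pools support ordered clustered columnstore indexes, which sort segments on a chosen column so scans can eliminate non-matching segments. The names below are hypothetical, and the analogy to Netezza zone maps on ordered `CHAR` columns is loose.

```sql
-- Hypothetical table with an ordered clustered columnstore index.
-- Sorting segments on CustomerName improves segment elimination for
-- predicates on that column, loosely mirroring a zone map on a CHAR
-- column ordered in a Netezza materialized view.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  int         NOT NULL,
    CustomerName char(40)    NOT NULL,
    Region       varchar(20) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (CustomerKey),
    CLUSTERED COLUMNSTORE INDEX ORDER (CustomerName)
);
```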
articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md (3 additions, 3 deletions)
@@ -39,7 +39,7 @@ It makes sense to only migrate tables that are in use in the existing system. Ta
 If enabled, Teradata system catalog tables and logs contain information that can determine when a given table was last accessed—which can in turn be used to decide whether a table is a candidate for migration.
 
-Here's an example query on dbc.tables that provides the date of last access and last modification:
+Here's an example query on `DBC.Tables` that provides the date of last access and last modification:
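
The diff doesn't show the query body itself; the following is a sketch of such a query, assuming the standard `DBC.Tables` catalog view columns and a placeholder database name. Per the note above, `LastAccessTimeStamp` is populated only when object use counts are enabled.

```sql
-- Last access and last modification dates for base tables in one database.
SELECT TableName,
       CreatorName,
       CreateTimeStamp,
       LastAlterTimeStamp,
       LastAccessTimeStamp
FROM DBC.Tables
WHERE DatabaseName = 'MyDatabase'  -- placeholder database name
  AND TableKind = 'T'              -- base tables only
ORDER BY LastAccessTimeStamp DESC;
```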
@@ -77,7 +77,7 @@ With this approach, standard Teradata utilities, such as Teradata Parallel Data
 - The migration process is orchestrated and controlled entirely within the Azure environment.
 
-#### Migrating data marts—stay physical or go virtual?
+#### Migrating data marts - stay physical or go virtual?
 
 > [!TIP]
 > Virtualizing data marts can save on storage and processing resources.
@@ -130,7 +130,7 @@ The first step is always to build an inventory of ETL/ELT processes that need to
 In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](/azure/data-factory/concepts-pipelines-activities?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
 
-In the Teradata environment, some or all ETL processing may be performed by custom scripts using Teradata-specific utilities like BTEQ and TPT. In this case, your approach should be to reengineer using Data Factory.
+In the Teradata environment, some or all ETL processing may be performed by custom scripts using Teradata-specific utilities like BTEQ and TPT. In this case, your approach should be to re-engineer using Data Factory.
 
 > [!TIP]
 > Leverage investment in existing third-party tools to reduce cost and risk.