Commit 4707740

20220524 1354 build review
1 parent 1ad379a commit 4707740

9 files changed (+55 −50 lines)

articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md

Lines changed: 2 additions & 2 deletions
@@ -251,7 +251,7 @@ You can edit existing Netezza CREATE TABLE and CREATE VIEW scripts to create the

 However, all the information that specifies the current definitions of tables and views within the existing Netezza environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.

-Access the information in these tables via utilities such as nz_ddl_table and generate the `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.
+Access the information in these tables via utilities such as `nz_ddl_table` and generate the `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.

 Third-party migration and ETL tools also use the catalog information to achieve the same result.

@@ -263,7 +263,7 @@ During a migration exercise, extract the data as efficiently as possible. Use th

 This is a simple example of an external table extract:

-```
+```sql
 CREATE EXTERNAL TABLE '/tmp/export_tab1.csv' USING (DELIM ',') AS SELECT * from <TABLENAME>;
 ```
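
As an illustrative sketch, the follow-on load of such an extract into Azure Synapse could use `COPY INTO`, assuming the CSV has been staged in Azure Blob Storage; the storage account, container, and target table names below are placeholders:

```sql
-- Load the staged CSV into the target dedicated SQL pool table (placeholder names).
COPY INTO dbo.tab1
FROM 'https://<storageaccount>.blob.core.windows.net/<container>/export_tab1.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ','
);
```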

articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md

Lines changed: 7 additions & 3 deletions
@@ -41,7 +41,7 @@ If enabled, Netezza query history tables contain information that can determine

 Here's an example query that looks for the usage of a specific table within a given time window:

-```
+```sql
 SELECT FORMAT_TABLE_ACCESS (usage),
 hq.submittime
 FROM "$v_hist_queries" hq
@@ -59,7 +59,9 @@ AND
 OR instr(FORMAT_TABLE_ACCESS(usage),'del') > 0
 )
 AND status=0;
+```

+```output
 | FORMAT_TABLE_ACCESS | SUBMITTIME
 ----------------------+---------------------------
 ins | 2015-06-16 18:32:25.728042
@@ -162,7 +164,7 @@ Use the metadata from the Netezza catalog tables to determine whether any of the

 For example, this Netezza SQL query shows columns and column types:

-```
+```sql
 SELECT
 tablename,
 attname AS COL_NAME,
@@ -174,7 +176,9 @@ FROM _v_table a
 WHERE a.tablename = 'ATT_TEST'
 AND a.schema = 'ADMIN'
 ORDER BY attnum;
+```

+```output
 TABLENAME | COL_NAME | COL_TYPE | COL_NUM
 ----------+-------------+----------------------+--------
 ATT_TEST | COL_INT | INTEGER | 1
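
Once the equivalent table exists in Azure Synapse, a comparable check on the target side can query the SQL metadata views. A minimal sketch, assuming the table was created as `dbo.ATT_TEST`:

```sql
-- Columns and data types of the migrated table in the dedicated SQL pool (assumed name dbo.ATT_TEST).
SELECT c.name AS col_name, t.name AS col_type, c.column_id AS col_num
FROM sys.columns c
JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.ATT_TEST')
ORDER BY c.column_id;
```
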
@@ -255,7 +259,7 @@ Once the database tables to be migrated have been created in Azure Synapse, you

 - **File Extract**&mdash;Extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example, creates a CSV file via an external table:

-```
+```sql
 CREATE EXTERNAL TABLE '/data/export.csv' USING (delimiter ',')
 AS SELECT col1, col2, expr1, expr2, col3, col1 || col2 FROM your table;
 ```

articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md

Lines changed: 8 additions & 8 deletions
@@ -31,7 +31,7 @@ For more information on the [Azure Synapse security](/azure/synapse-analytics/sq

 #### Netezza authorization options

-The IBM® Netezza® system offers several authentication methods for Netezza database users:
+The IBM&reg; Netezza&reg; system offers several authentication methods for Netezza database users:

 - **Local authentication**: Netezza administrators define database users and their passwords by using the `CREATE USER` command or through Netezza administrative interfaces. In local authentication, use the Netezza system to manage database accounts and passwords, and to add and remove database users from the system. This method is the default authentication method.

@@ -74,13 +74,13 @@ See the following sections for more details.
 > [!TIP]
 > Migration of a data warehouse requires more than just tables, views, and SQL statements.

-The information about current users and groups in a Netezza system is held in system catalog views `_v_users` and `_v_groupusers`. Use the nzsql utility or tools such as the Netezza® Performance, NzAdmin, or the Netezza Utility scripts to list user privileges. For example, use the dpu and dpgu commands in nzsql to display users or groups with their permissions.
+The information about current users and groups in a Netezza system is held in system catalog views `_v_users` and `_v_groupusers`. Use the nzsql utility or tools such as the Netezza&reg; Performance, NzAdmin, or the Netezza Utility scripts to list user privileges. For example, use the `dpu` and `dpgu` commands in nzsql to display users or groups with their permissions.

 Use or edit the utility scripts `nz_get_users` and `nz_get_user_groups` to retrieve the same information in the required format.

-Query system catalog views directly (if the user has `SELECT` access to those views) to obtain current lists of users and roles defined within the system. See examples:
+Query system catalog views directly (if the user has `SELECT` access to those views) to obtain current lists of users and roles defined within the system. See examples to list users, groups, or users and their associated groups:

-```
+```sql
 -- List of users
 SELECT USERNAME FROM _V_USER;
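
On the Azure Synapse side, a roughly equivalent listing of database users and roles can come from the SQL metadata views. A minimal sketch, assuming you're connected to the target dedicated SQL pool database:

```sql
-- Users and roles defined in the target Azure Synapse SQL database.
SELECT name, type_desc
FROM sys.database_principals
WHERE type IN ('S', 'E', 'X', 'R')   -- SQL users, Azure AD users/groups, and database roles
ORDER BY type_desc, name;
```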

@@ -106,7 +106,7 @@ In Netezza, the individual permissions are represented as individual bits within

 The simplest way to obtain a DDL script that contains the `GRANT` commands to replicate the current privileges for users and groups is to use the appropriate Netezza utility scripts:

-```
+```sql
 --List of group privileges
 nz_ddl_grant_group -usrobj dbname > output_file_dbname;
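
In Azure Synapse, the generated grants are typically recreated with standard T-SQL role and permission statements. A minimal sketch, with placeholder role, schema, and user names:

```sql
-- Recreate a Netezza group as a database role and grant it equivalent object privileges (placeholder names).
CREATE ROLE sales_readers;
GRANT SELECT ON SCHEMA::sales TO sales_readers;
ALTER ROLE sales_readers ADD MEMBER migrated_user1;
```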

@@ -120,7 +120,7 @@ Netezza supports two classes of access rights,&mdash;Admin and Object. See the f

 | Admin Privilege | Description | Azure Synapse Equivalent |
 |----------------------------|-------------|-----------------|
-| Backup | Allows user to create backups. The user can run backups. The user can run the command nzbackup. | \* |
+| Backup | Allows user to create backups. The user can run backups. The user can run the command `nzbackup`. | \* |
 | [Create] Aggregate | Allows the user to create user-defined aggregates (UDAs). Permission to operate on existing UDAs is controlled by object privileges. | CREATE FUNCTION \*\*\* |
 | [Create] Database | Allows the user to create databases. Permission to operate on existing databases is controlled by object privileges. | CREATE DATABASE |
 | [Create] External Table | Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
@@ -185,7 +185,7 @@ Netezza administration tasks typically fall into two categories:

 - Database administration, which is managing user databases and their content, loading data, backing up data, restoring data, and controlling access to data and permissions.

-IBM® Netezza® offers several ways or interfaces that you can use to perform the various system and database management tasks:
+IBM&reg; Netezza&reg; offers several ways or interfaces that you can use to perform the various system and database management tasks:

 - Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the Netezza host. For many of the nz* commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default nz user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you've permission to do the task.

@@ -251,7 +251,7 @@ For more information, see [Azure Synapse operations and management options](/azu

 Netezza appliances are redundant, fault-tolerant systems and there are diverse options in a Netezza system to enable high availability and disaster recovery.

-Adding IBM® Netezza Replication Services for disaster recovery improves fault tolerance by extending redundancy across local and wide area networks.
+Adding IBM&reg; Netezza Replication Services for disaster recovery improves fault tolerance by extending redundancy across local and wide area networks.

 IBM Netezza Replication Services protects against data loss by synchronizing data on a primary system (the primary node) with data on one or more target nodes (subordinates). These nodes make up a replication set.

articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md

Lines changed: 2 additions & 2 deletions
@@ -170,9 +170,9 @@ When it comes to migrating to Azure Synapse, there are several things that can i

 BI tool reports and dashboards, and other visualizations, are produced by issuing SQL queries that access physical tables and/or views in your data warehouse or data mart. When it comes to migrating your data warehouse or data mart schema to Azure Synapse, there may be incompatibilities that can impact reports and dashboards, such as:

-- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse (like the Teradata time-series tables)
+- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.

-- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse. For example, Teradata Geospatial or Interval data types.
+- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.

 In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be able to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
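
The workaround described in that closing paragraph, landing the data in a standard table distributed and partitioned on a date/time column, might look like this sketch in Azure Synapse (table and column names are hypothetical):

```sql
-- Hypothetical target table for data from an unsupported legacy table type,
-- hash-distributed and partitioned on the event timestamp.
CREATE TABLE dbo.sensor_readings
(
    reading_ts  DATETIME2      NOT NULL,
    sensor_id   INT            NOT NULL,
    reading_val DECIMAL(18, 4) NULL
)
WITH
(
    DISTRIBUTION = HASH (sensor_id),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (reading_ts RANGE RIGHT FOR VALUES ('2022-01-01', '2022-02-01', '2022-03-01'))
);
```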

articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ Edit existing Netezza `CREATE TABLE` and `CREATE VIEW` scripts to create the equ

 However, all the information that specifies the current definitions of tables and views within the existing Netezza environment is maintained within system catalog tables. This is the best source of this information as it's guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.

-Access this information by using utilities such as nz_ddl_table and generate the `CREATE TABLE DDL` statements. Edit these statements for the equivalent tables in Azure Synapse.
+Access this information by using utilities such as `nz_ddl_table` and generate the `CREATE TABLE DDL` statements. Edit these statements for the equivalent tables in Azure Synapse.

 > [!TIP]
 > Third-party tools and services can automate data mapping tasks.

articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md

Lines changed: 13 additions & 13 deletions
@@ -186,39 +186,39 @@ There are a few differences in SQL Data Manipulation Language (DML) syntax betwe

 - `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:

-```
+```sql
 SELECT col1
 FROM tab1
-WHERE col1=\'XYZ\'
+WHERE col1='XYZ'
 QUALIFY ROW_NUMBER () OVER (PARTITION by
 col1 ORDER BY col1) = 1;
 ```

 The equivalent Azure Synapse syntax is:

-```
-SELECT \* FROM (
+```sql
+SELECT * FROM (
 SELECT col1, ROW_NUMBER () OVER (PARTITION by col1 ORDER BY col1) rn
-FROM tab1 WHERE col1=\'XYZ\'
+FROM tab1 WHERE col1='XYZ'
 ) WHERE rn = 1;
 ```

-- Date Arithmetic&mdash;Azure Synapse has operators such as `DATEADD` and `DATEDIFF` which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates such as 'SELECT DATE1&mdash;DATE2 FROM...'
+- Date Arithmetic&mdash;Azure Synapse has operators such as `DATEADD` and `DATEDIFF` which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates such as 'SELECT DATE1-DATE2 FROM...'

 - In Group by ordinal, explicitly provide the T-SQL column name.

-- LIKE ANY&mdash;Teradata supports LIKE ANY syntax such as:
+- Teradata supports LIKE ANY syntax such as:

-```
-SELECT \* FROM CUSTOMER
+```sql
+SELECT * FROM CUSTOMER
 WHERE POSTCODE LIKE ANY
-('CV1%', 'CV2%', CV3%');
+('CV1%', 'CV2%', 'CV3%');
 ```

 The equivalent in Azure Synapse syntax is:

-```
-SELECT \* FROM CUSTOMER
+```sql
+SELECT * FROM CUSTOMER
 WHERE
 (POSTCODE LIKE 'CV1%') OR (POSTCODE LIKE 'CV2%') OR (POSTCODE LIKE 'CV3%');
 ```
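
For the date arithmetic point in that hunk, a minimal sketch of the conversion, assuming two `DATE` columns DATE1 and DATE2 in a placeholder table `tab1`:

```sql
-- Teradata allows direct subtraction: SELECT DATE1 - DATE2 FROM tab1;
-- The Azure Synapse (T-SQL) equivalent uses DATEDIFF to get the difference in days.
SELECT DATEDIFF(day, DATE2, DATE1) AS day_diff
FROM tab1;
```
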
@@ -271,7 +271,7 @@ You can edit existing Teradata CREATE TABLE and CREATE VIEW scripts to create th

 However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.

-Access the information in these tables via views into the catalog such as DBC.ColumnsV, and generate the equivalent CREATE TABLE DDL statements for the equivalent tables in Azure Synapse.
+Access the information in these tables via views into the catalog such as `DBC.ColumnsV`, and generate the equivalent CREATE TABLE DDL statements for the equivalent tables in Azure Synapse.

 Third-party migration and ETL tools also use the catalog information to achieve the same result.
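
A sketch of the kind of catalog query that can drive that DDL generation, assuming `SELECT` access to `DBC.ColumnsV` and treating 'databasename' as a placeholder:

```sql
-- Column metadata used to build the equivalent Azure Synapse CREATE TABLE statements.
SELECT DatabaseName, TableName, ColumnName, ColumnType, ColumnLength, Nullable
FROM DBC.ColumnsV
WHERE DatabaseName = 'databasename'
ORDER BY TableName, ColumnId;
```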

articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md

Lines changed: 5 additions & 4 deletions
@@ -41,10 +41,11 @@ If enabled, Teradata system catalog tables and logs contain information that can

 Here's an example query on dbc.tables that provides the date of last access and last modification:

-```
-Select TableName, CreatorName, CreateTimeStamp, LastAlterName,
-LastAlterTimeStamp, AccessCount, LastAccessTimeStamp from DBC.Tables t
-Where DataBaseName = 'databasename'
+```sql
+SELECT TableName, CreatorName, CreateTimeStamp, LastAlterName,
+LastAlterTimeStamp, AccessCount, LastAccessTimeStamp
+FROM DBC.Tables t
+WHERE DataBaseName = 'databasename'
 ```

 If logging is enabled and the log history is accessible, other information, such as SQL query text, is available in table DBQLogTbl and associated logging tables. For more information, see [Teradata log history](https://docs.teradata.com/reader/wada1XMYPkZVTqPKz2CNaw/PuQUxpyeCx4jvP8XCiEeGA).
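
For illustration, the kind of query you might run against that logging table, assuming query logging (DBQL) is enabled and you have `SELECT` access to `DBC.DBQLogTbl`; exact column availability can vary by Teradata release:

```sql
-- Recent logged queries, including (possibly truncated) query text.
SELECT QueryID, UserName, StartTime, QueryText
FROM DBC.DBQLogTbl
WHERE StartTime >= CURRENT_TIMESTAMP - INTERVAL '30' DAY
ORDER BY StartTime DESC;
```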

articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md

Lines changed: 4 additions & 4 deletions
@@ -80,8 +80,8 @@ After data extraction, use Teradata system catalog tables to generate equivalent

 The information about current users and roles in a Teradata system is found in the system catalog tables `DBC.USERS` (or `DBC.DATABASES`) and `DBC.ROLEMEMBERS`. Query these tables (if the user has `SELECT` access to those tables) to obtain current lists of users and roles defined within the system. The following are examples of queries to do this for individual users:

-```
-/\*\*\*SQL to find all users\*\*\*/
+```sql
+/***SQL to find all users***/
 SELECT
 DatabaseName AS UserName
 From dbc.databases
@@ -114,7 +114,7 @@ There's no way to retrieve existing passwords, so you need to implement a scheme

 In a Teradata system, the system tables `DBC.ALLRIGHTS` and `DBC.ALLROLERIGHTS` hold the access rights for users and roles. Query these tables (if the user has `SELECT` access to those tables) to obtain current lists of access rights defined within the system. The following are examples of queries for individual users:

-```
+```sql
 /**SQL for AccessRights held by a USER***/
 SELECT UserName, DatabaseName,TableName,ColumnName,
 CASE WHEN Abbv.AccessRight IS NOT NULL THEN Abbv.Description ELSE
@@ -255,7 +255,7 @@ Teradata Database contains many log tables in the Data Dictionary that accumulat

 #### Dictionary tables to maintain

-Reset accumulators and peak values using the DBC.AMPUsage view and the ClearPeakDisk macro provided with the software:
+Reset accumulators and peak values using the `DBC.AMPUsage` view and the `ClearPeakDisk` macro provided with the software:

 - `DBC.Acctg`: resource usage by account/user
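
The reset mentioned in that hunk typically comes down to two statements. This is a sketch of the commonly documented pattern; the exact invocation can vary by Teradata release:

```sql
-- Zero the accumulated CPU and I/O counters in DBC.AMPUsage (often run at the start of an accounting period).
UPDATE DBC.AMPUsage
SET CPUTime = 0, DiskIO = 0
ALL;

-- Run the ClearPeakDisk macro to reset the recorded peak space values.
EXEC DBC.ClearPeakDisk;
```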

articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md

Lines changed: 13 additions & 13 deletions
@@ -102,7 +102,7 @@ The Azure environment also includes specific features for complex analytics on t
 > [!TIP]
 > Assess the impact of unsupported data types as part of the preparation phase.

-Most Teradata data types have a direct equivalent in Azure Synapse. This table shows these data types together with the recommended approach for handling them. In the table, Teradata column type is the type that's stored within the system catalog&mdash;for example, in DBC.ColumnsV.
+Most Teradata data types have a direct equivalent in Azure Synapse. This table shows these data types together with the recommended approach for handling them. In the table, Teradata column type is the type that's stored within the system catalog&mdash;for example, in `DBC.ColumnsV`.

 | Teradata column type | Teradata data type | Azure Synapse data type |
 |----------------------|--------------------|----------------|
@@ -151,7 +151,7 @@ Most Teradata data types have a direct equivalent in Azure Synapse. This table s

 Use the metadata from the Teradata catalog tables to determine whether any of these data types are to be migrated and allow for this in the migration plan. For example, use a SQL query like this one to find any occurrences of unsupported data types that need attention.

-```
+```sql
 SELECT
 ColumnType, CASE
 WHEN ColumnType = '++' THEN 'TD_ANYTYPE'
@@ -203,7 +203,7 @@ Edit existing Teradata `CREATE TABLE` and `CREATE VIEW` scripts to create the eq

 However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. This is the best source of this information as it's guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.

-Access this information via views onto the catalog such as DBC.ColumnsV and generate the equivalent `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.
+Access this information via views onto the catalog such as `DBC.ColumnsV` and generate the equivalent `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.

 > [!TIP]
 > Third-party tools and services can automate data mapping tasks.
@@ -227,20 +227,20 @@ Be aware of these differences in SQL Data Manipulation Language (DML) syntax bet

 - `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:

-```
+```sql
 SELECT col1
 FROM tab1
-WHERE col1=\'XYZ\'
+WHERE col1='XYZ'
 QUALIFY ROW_NUMBER () OVER (PARTITION by
 col1 ORDER BY col1) = 1;
 ```

 The equivalent Azure Synapse syntax is:

-```
-SELECT \* FROM (
+```sql
+SELECT * FROM (
 SELECT col1, ROW_NUMBER () OVER (PARTITION by col1 ORDER BY col1) rn
-FROM tab1 WHERE col1=\'XYZ\'
+FROM tab1 WHERE col1='XYZ'
 ) WHERE rn = 1;
 ```

@@ -250,16 +250,16 @@ Be aware of these differences in SQL Data Manipulation Language (DML) syntax bet

 - `LIKE ANY`&mdash;Teradata supports `LIKE ANY` syntax such as:

-```
-SELECT \* FROM CUSTOMER
+```sql
+SELECT * FROM CUSTOMER
 WHERE POSTCODE LIKE ANY
-('CV1%', 'CV2%', CV3%');
+('CV1%', 'CV2%', 'CV3%');
 ```

 The equivalent in Azure Synapse syntax is:

-```
-SELECT \* FROM CUSTOMER
+```sql
+SELECT * FROM CUSTOMER
 WHERE
 (POSTCODE LIKE 'CV1%') OR (POSTCODE LIKE 'CV2%') OR (POSTCODE LIKE 'CV3%');
 ```
