This article outlines how to use the copy activity in Azure Data Factory and Synapse Analytics pipelines to copy data from Teradata Vantage. It builds on the [copy activity overview](copy-activity-overview.md).
### For version 2.0 (Preview)

You need to [install the .NET Data Provider for Teradata](https://downloads.teradata.com/download/connectivity/net-data-provider-teradata), version 20.00.03.00 or later, on your self-hosted integration runtime if you use one.

### For version 1.0
If you use Self-hosted Integration Runtime, note it provides a built-in Teradata driver starting from version 3.18. You don't need to manually install any driver. The driver requires "Visual C++ Redistributable 2012 Update 4" on the self-hosted integration runtime machine. If you don't yet have it installed, download it from [here](https://www.microsoft.com/en-sg/download/details.aspx?id=30679).
## Getting started
The following sections provide details about properties that are used to define entities specific to the Teradata connector.
## Linked service properties
The Teradata connector now supports version 2.0 (Preview). Refer to this [section](#upgrade-the-teradata-connector) to upgrade your Teradata connector version from version 1.0. For the property details, see the corresponding sections.
- [Version 2.0 (Preview)](#version-20-preview)
- [Version 1.0](#version-10)

### Version 2.0 (Preview)

The Teradata linked service supports the following properties when you apply version 2.0 (Preview):
| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to **Teradata**. | Yes |
| version | The version that you specify. The value is `2.0`. | Yes |
| server | The Teradata server name. | Yes |
| authenticationType | The authentication type used to connect to Teradata. Valid values include **Basic**, **Windows**, and **LDAP**. | Yes |
| username | Specify a user name to connect to Teradata. | Yes |
| password | Specify a password for the user account you specified for the user name. You can also choose to [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from the [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. | No |
You can set more connection properties in the connection string, depending on your case:
| Property | Description | Default value |
|:--- |:--- |:--- |
| sslMode | The SSL mode for connections to the database. Valid values include `Disable`, `Allow`, `Prefer`, `Require`, `Verify-CA`, and `Verify-Full`. |`Verify-Full`|
| portNumber | The port number used when connecting to the server through non-HTTPS/TLS connections. | 1025 |
| httpsPortNumber | The port number used when connecting to the server through HTTPS/TLS connections. | 443 |
| UseDataEncryption | Specifies whether to encrypt all communication with the Teradata database. Allowed values are 0 or 1.<br><br/>- **0 (disabled)**: Encrypts authentication information only.<br/>- **1 (enabled, default)**: Encrypts all data that is passed between the driver and the database. This setting is ignored for HTTPS/TLS connections. |`1`|
| CharacterSet | The character set to use for the session. For example, `CharacterSet=UTF16`.<br><br/>This value can be a user-defined character set, or one of the following predefined character sets: <br/>- ASCII<br/>- ARABIC1256_6A0<br/>- CYRILLIC1251_2A0<br/>- HANGUL949_7R0<br/>- HEBREW1255_5A0<br/>- KANJI932_1S0<br/>- KANJISJIS_0S<br/>- LATIN1250_1A0<br/>- LATIN1252_3A0<br/>- LATIN1254_7A0<br/>- LATIN1258_8A0<br/>- SCHINESE936_6R0<br/>- TCHINESE950_8R0<br/>- THAI874_4A0<br/>- UTF8<br/>- UTF16 |`ASCII`|
| MaxRespSize | The maximum size of the response buffer for SQL requests, in bytes. For example, `MaxRespSize=10485760`.<br/><br/>The range of permissible values is `4096` to `16775168`. |`524288`|
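For instance, several of the optional properties above could be combined in `key=value;` form (an illustrative fragment only; the values shown are the defaults from the table except `CharacterSet`):

```
sslMode=Verify-Full;UseDataEncryption=1;CharacterSet=UTF16;MaxRespSize=524288
```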
**Example**
```json
{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "version": "2.0",
        "typeProperties": {
            "server": "<server name>",
            "username": "<user name>",
            "password": "<password>",
            "authenticationType": "<authentication type>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```
### Version 1.0

The Teradata linked service supports the following properties when you apply version 1.0:
| Property | Description | Required |
|:--- |:--- |:--- |
| Full load from large table. |**Partition option**: Hash. <br><br/>During execution, the service automatically detects the primary index column, applies a hash against it, and copies data by partitions. |
| Load large amount of data by using a custom query. |**Partition option**: Hash.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHashPartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply hash partition. If not specified, the service automatically detects the PK column of the table you specified in the Teradata dataset.<br><br>During execution, the service replaces `?AdfHashPartitionCondition` with the hash partition logic, and sends it to Teradata. |
| Load large amount of data by using a custom query, having an integer column with evenly distributed value for range partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data. You can partition against the column with integer data type.<br>**Partition upper bound** and **partition lower bound**: Specify if you want to filter against the partition column to retrieve data only between the lower and upper range.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName`, `?AdfRangePartitionUpbound`, and `?AdfRangePartitionLowbound` with the actual column name and value ranges for each partition, and sends it to Teradata. <br>For example, if your partition column "ID" is set with the lower bound as 1 and the upper bound as 80, with parallel copy set as 4, the service retrieves data by 4 partitions. Their IDs are between [1,20], [21, 40], [41, 60], and [61, 80], respectively. |
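The dynamic range option described in the table above can be sketched as a copy activity source like the following (a sketch only; property names such as `partitionOption` and the `partitionSettings` bounds are assumed from the service's partitioned-copy settings, and the angle-bracket values are placeholders):

```json
"source": {
    "type": "TeradataSource",
    "query": "SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "<dynamic_range_partition_column_name>",
        "partitionUpperBound": "<upper_value_of_partition_column>",
        "partitionLowerBound": "<lower_value_of_partition_column>"
    }
}
```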
**Example: query with hash partition**
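A sketch of a copy activity source using the hash partition option from the table above (property names such as `partitionOption` and `partitionSettings.partitionColumnName` are assumed from the service's partitioned-copy settings; the angle-bracket values are placeholders):

```json
"source": {
    "type": "TeradataSource",
    "query": "SELECT * FROM <TABLENAME> WHERE ?AdfHashPartitionCondition AND <your_additional_where_clause>",
    "partitionOption": "Hash",
    "partitionSettings": {
        "partitionColumnName": "<hash_partition_column_name>"
    }
}
```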
## Data type mapping for Teradata
When you copy data from Teradata, the following mappings apply from Teradata's data types to the internal data types used by the service. To learn about how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
| Teradata data type | Interim service data type (version 2.0 (Preview)) | Interim service data type (version 1.0) |
|:--- |:--- |:--- |
| Xml | String |Not supported. Apply explicit cast in source query. |
## Lookup activity properties
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
## Upgrade the Teradata connector
Here are the steps to upgrade the Teradata connector:
1. On the **Edit linked service** page, select version 2.0 and configure the linked service by referring to the [linked service version 2.0 (Preview) properties](#version-20-preview).
2. The data type mapping for the Teradata linked service version 2.0 (Preview) is different from that for version 1.0. To learn the latest data type mapping, see [Data type mapping for Teradata](#data-type-mapping-for-teradata).
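After step 1, the linked service payload carries the new version. A minimal sketch of the result (mirroring the version 2.0 (Preview) example earlier in this article; the angle-bracket values are placeholders):

```json
{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "version": "2.0",
        "typeProperties": {
            "server": "<server name>",
            "username": "<user name>",
            "password": "<password>",
            "authenticationType": "<authentication type>"
        }
    }
}
```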
## Differences between Teradata connector version 2.0 (Preview) and version 1.0
The Teradata connector version 2.0 (Preview) offers new functionalities and is compatible with most features of version 1.0. The following table shows the feature differences between version 2.0 (Preview) and version 1.0.
| Version 2.0 (Preview) | Version 1.0 |
| :----------- | :------- |
| The following mappings are used from Teradata data types to interim service data types.<br><br>Date -> Date<br>Time With Time Zone -> String <br>Timestamp With Time Zone -> DateTimeOffset <br>Graphic -> String<br>Interval Day -> TimeSpan<br>Interval Day To Hour -> TimeSpan<br>Interval Day To Minute -> TimeSpan<br>Interval Day To Second -> TimeSpan<br>Interval Hour -> TimeSpan<br>Interval Hour To Minute -> TimeSpan<br>Interval Hour To Second -> TimeSpan<br>Interval Minute -> TimeSpan<br>Interval Minute To Second -> TimeSpan<br>Interval Month -> String<br>Interval Second -> TimeSpan<br>Interval Year -> String<br>Interval Year To Month -> String<br>Number -> Double<br>Period (Date) -> String<br>Period (Time) -> String<br>Period (Time With Time Zone) -> String<br>Period (Timestamp) -> String<br>Period (Timestamp With Time Zone) -> String<br>VarGraphic -> String<br>Xml -> String | The following mappings are used from Teradata data types to interim service data types.<br><br>Date -> DateTime<br>Time With Time Zone -> TimeSpan <br>Timestamp With Time Zone -> DateTime <br>The other mappings supported by version 2.0 (Preview), listed on the left, are not supported by version 1.0. Apply an explicit cast in the source query. |
## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).