In Azure Data Factory, the connector lifecycle ensures that customers always have access to the most reliable, secure, and feature-rich connectors. Each major connector upgrade moves through distinct lifecycle stages, from preview to general availability and end of support, setting clear expectations for stability, support, and future enhancements. This framework lets users adopt new connectors with confidence, benefit from regular performance and security updates, and prepare in advance for the phase-out of older versions. By using versioning within the connector lifecycle, the service delivers a predictable, transparent, and future-proof integration experience, reducing operational risk and improving overall workload reliability.
## Release rhythm
These auto-upgraded workloads are not affected by the announced removal date of the older connector version.
You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgrade information is recorded.
**Example:**
Copy activity output
```json
{
    "source": {
        "type": "AmazonS3",
        "autoUpgrade": "true"
    },
    "sink": {
        "type": "AmazonS3",
        "autoUpgrade": "true"
    }
}
```
> [!NOTE]
> While compatibility mode offers flexibility, we strongly encourage users to upgrade to the latest GA version as soon as possible to benefit from ongoing improvements, optimizations, and full support.
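
To upgrade manually, set the `version` property on the linked service to the target version. The following is a minimal, illustrative sketch (not the complete payload) that pins a hypothetical Teradata linked service to version 2.0; the server, credential, and type properties are placeholders, and the authoritative schema is in the [Teradata connector article](connector-teradata.md).

```json
{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "version": "2.0",
        "typeProperties": {
            "server": "<server name>",
            "authenticationType": "Basic",
            "username": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```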
The following table lists the connectors planned for automatic upgrade and the scenarios that qualify; a sample linked service payload follows the table.
| Connector | Scenario |
|------------------|----------|
|[Amazon Redshift](connector-amazon-redshift.md)| Scenarios that don't rely on the following capabilities in Amazon Redshift (version 1.0):<br><br>• A linked service that uses the Azure integration runtime.<br>• The [UNLOAD](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift) feature.<br><br>Automatic upgrade applies only when the driver is installed on the machine that hosts the self-hosted integration runtime (version 5.56 or above).<br><br>For more information, see [Install Amazon Redshift ODBC driver for the version 2.0](connector-amazon-redshift.md#install-amazon-redshift-odbc-driver-for-the-version-20).|
|[Google BigQuery](connector-google-bigquery.md)| Scenarios that don't rely on the following capabilities in Google BigQuery V1:<br><br>• The `trustedCertsPath`, `additionalProjects`, or `requestgoogledrivescope` connection properties.<br>• The `useSystemTrustStore` connection property set to `false`.<br>• The **STRUCT** and **ARRAY** data types.<br><br>If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above.|
|[Hive](connector-hive.md)| Scenarios that don't rely on the following capabilities in Hive (version 1.0):<br><br>• Authentication types:<br> • Username<br>• Server type:<br> • HiveServer1<br>• Service discovery mode: True<br>• Use native query: True|
|[Impala](connector-impala.md)| Scenarios that don't rely on the following capability in Impala (version 1.0):<br><br>• Authentication types:<br> • SASL Username<br><br>If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above.|
|[Spark](connector-spark.md)| Scenarios that don't rely on the following capabilities in Spark (version 1.0):<br><br>• Authentication types:<br> • Username<br>• Thrift transport protocol:<br> • SASL<br> • Binary<br>• Server type:<br> • SharkServer<br> • SharkServer2<br><br>If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above.|
|[Teradata](connector-teradata.md)| Scenarios that don't rely on the following capability in Teradata (version 1.0):<br><br>• Any of the following values for **CharacterSet**:<br> • BIG5 (TCHBIG5_1R0)<br> • EUC (Unix compatible, KANJIEC_0U)<br> • GB (SCHGB2312_1T0)<br> • IBM Mainframe (KANJIEBCDIC5035_0I)<br> • NetworkKorean (HANGULKSC5601_2R4)<br> • Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)<br><br>If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above.|
|[Vertica](connector-vertica.md)| Scenarios that don't rely on the following capability in Vertica (version 1.0):<br><br>• A linked service that uses the Azure integration runtime.<br><br>Automatic upgrade applies only when the driver is installed on the machine that hosts the self-hosted integration runtime (version 5.55 or above).<br><br>For more information, see [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20).|
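
For connectors where automatic upgrade requires the driver on the self-hosted integration runtime machine, the linked service points to that runtime through `connectVia`. The following is a minimal, illustrative sketch of an Amazon Redshift version 2.0 linked service bound to a self-hosted integration runtime; the name, connection values, and runtime reference are placeholders, and the authoritative schema is in the [Amazon Redshift connector article](connector-amazon-redshift.md).

```json
{
    "name": "AmazonRedshiftLinkedService",
    "properties": {
        "type": "AmazonRedshift",
        "version": "2.0",
        "typeProperties": {
            "server": "<server>",
            "port": 5439,
            "database": "<database>",
            "username": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```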
This article provides an overview of the release stages and timelines for each connector available in Azure Data Factory.
For comprehensive details on support levels and recommended usage at each stage, see the [connector lifecycle overview](connector-lifecycle-overview.md#release-rhythm).