Commit d434cc8

preexisting acrolinx
1 parent d5d29e5 commit d434cc8

1 file changed: +15 -15 lines changed

articles/data-factory/connector-lifecycle-overview.md

Lines changed: 15 additions & 15 deletions
@@ -28,7 +28,7 @@ Connector upgrades are essential to evolve innovation in a fast manner, maintain

 - **Protocol change introduced by external data source vendors leading to potential behavior changes**

-These changes are not always exhaustively predictable and arise due to incompatibility brought by individual data source vendor itself. Given these uncertainties, versioning ensures that users can adopt the updated connector (e.g. version 2.0) while maintaining a fallback option in a period. This empowers users to well plan for a version upgrade to accommodate potential differences while providing users with a clear transition path.
+These changes aren't always exhaustively predictable and arise due to incompatibility brought by individual data source vendor itself. Given these uncertainties, versioning ensures that users can adopt the updated connector (e.g. version 2.0) while maintaining a fallback option in a period. This empowers users to well plan for a version upgrade to accommodate potential differences while providing users with a clear transition path.

 - **Fixing unintended behaviors**

@@ -47,8 +47,8 @@ A connector lifecycle includes multiple stages with thorough and measurable asse
 | Private Preview | The private preview phase marks the initial release of a new connector version to limited users. During this phase, opt-in users can use the latest version of the connector and provide feedback. | 3 months or above |
 | Public Preview | This stage marks the initial release of a new connector version to all users publicly. During this phase, users are encouraged to try the latest connector version and provide feedback. For newly created connections, it defaults to the latest connector version. Users can switch back to the previous version. | 1 month or above* |
 | General Availability | Once a connector version meets the General Availability (GA) criteria, it's released to the public and is suitable for production workloads. To reach this stage, the new connector version must meet the requirements in terms of performance, reliability, and its capability to meet business needs. | 12 months or above* |
-| End-of-Support (EOS) announced | When a connector version reaches its EOS, it will not receive any further updates or support. A six-month notice is announced before the EOS date of this version. This is documented together with the removal date. | 6 months before the end-of-support date* |
-| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it will not receive any updates or bug fixes, and no official support will be provided. Users will not be able to create new workloads on a version that is under EOS stage. Using an unsupported connector version is at the user's own risk. The workload running on EOS version may not fail immediately, the service might expedite moving into the final stage at any time, at Microsoft's discretion due to outstanding security issues, or other factors. | / |
+| End-of-Support (EOS) announced | When a connector version reaches its EOS, it won't receive any further updates or support. A six-month notice is announced before the EOS date of this version. This is documented together with the removal date. | 6 months before the end-of-support date* |
+| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it won't receive any updates or bug fixes, and no official support will be provided. Users won't be able to create new workloads on a version that is under EOS stage. Using an unsupported connector version is at the user's own risk. The workload running on EOS version may not fail immediately, the service might expedite moving into the final stage at any time, at Microsoft's discretion due to outstanding security issues, or other factors. | / |
 | Version removed | Once the connector version passes its EOS date, the service will remove all related components associated with this connector version. This implies that pipelines using this connector version will discontinue to execute. | 1-12 months after the end of support date* |

 *\* These timelines are provided as an example and might vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*
@@ -62,23 +62,23 @@ To manage connection updates effectively, it's important to understand versionin

 ## How Data Factory handles connector version upgrade

-**Major and minor version** updates may include changes that can impact your pipeline output or related components. To help you prepare, we will notify you in advance, providing a window for testing and upgrading to the latest version. Specific examples of version changes can be found in the documentation for each individual connector. We recommend reviewing and upgrading to the latest version as early as possible to take advantage of the up-to-date enhancements and ensure your pipelines continue to run smoothly and reliably.
+**Major and minor version** updates may include changes that can impact your pipeline output or related components. To help you prepare, we'll notify you in advance, providing a window for testing and upgrading to the latest version. Specific examples of version changes can be found in the documentation for each individual connector. We recommend reviewing and upgrading to the latest version as early as possible to take advantage of the up-to-date enhancements and ensure your pipelines continue to run smoothly and reliably.

 When new versions are released, the service starts to always set to the latest new versions by default for all newly created linked service. At that time, users can fall back to the earlier version if needed.

 When a version reaches its end-of-support date, users are no longer allowed to create new linked service on that version.

-In addition to major and minor version updates, the service also delivers new features and bug fixes that are fully backward compatible with your existing setup. These changes do not require a version update to the connector. Depending on the nature of the change, users may either receive the improvements automatically or have the option to enable new features as needed. This approach ensures a seamless experience while maintaining stability and flexibility.
+In addition to major and minor version updates, the service also delivers new features and bug fixes that are fully backward compatible with your existing setup. These changes don't require a version update to the connector. Depending on the nature of the change, users may either receive the improvements automatically or have the option to enable new features as needed. This approach ensures a seamless experience while maintaining stability and flexibility.

 ## Automatic connector upgrade

 In addition to providing [tools](connector-upgrade-advisor.md) and [best practices](connector-upgrade-guidance.md) to help users manually upgrade their connectors, the service now also provides a more streamlined upgrade process for some cases where applicable. This is designed to help users adopt the most reliable and supported connector versions with minimal disruption.

-The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it is strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.
+The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it's strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.

 In cases where certain scenarios running on the latest GA connector version are fully backward compatible with the previous version, the service will automatically upgrade existing workloads (such as Copy, Lookup, and Script activities) to a compatibility mode that preserves the behavior of the earlier version.

-These auto-upgraded workloads are not affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.
+These auto-upgraded workloads aren't affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.

 You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgraded information is recorded.

@@ -105,13 +105,13 @@ You can find more details from the table below on the connector list that is pla

 | Connector | Scenario |
 |------------------|----------|
-| [Amazon Redshift](connector-amazon-redshift.md) | Scenario that does not rely on below capability in Amazon Redshift (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br>• Use [UNLOAD](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime.<br><br> For more information, go to [Install Amazon Redshift ODBC driver for the version 2.0](connector-amazon-redshift.md#install-amazon-redshift-odbc-driver-for-the-version-20).|
-| [Google BigQuery](connector-google-bigquery.md) | Scenario that does not rely on below capability in Google BigQuery V1:<br><br> • Use `trustedCertsPath`, `additionalProjects`, `requestgoogledrivescope` connection properties.<br> • Set `useSystemTrustStore` connection property as `false`.<br> • Use **STRUCT** and **ARRAY** data types. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Hive](connector-hive.md) | Scenario that does not rely on below capability in Hive (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Impala](connector-impala.md) | Scenario that does not rely on below capability in Impala (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• SASL Username<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Spark](connector-spark.md) | Scenario that does not rely on below capability in Spark (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Teradata](connector-teradata.md) | Scenario that does not rely on below capability in Teradata (version 1.0):<br><br> • Set below value for **CharacterSet**:<br>&nbsp;&nbsp;• BIG5 (TCHBIG5_1R0)<br>&nbsp;&nbsp;• EUC (Unix compatible, KANJIEC_0U)<br>&nbsp;&nbsp;• GB (SCHGB2312_1T0)<br>&nbsp;&nbsp;• IBM Mainframe (KANJIEBCDIC5035_0I)<br>&nbsp;&nbsp;• NetworkKorean (HANGULKSC5601_2R4)<br>&nbsp;&nbsp;• Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)|
-| [Vertica](connector-vertica.md) | Scenario that does not rely on below capability in Vertica (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime (version 5.55 or above).<br><br> For more information, go to [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20). |
+| [Amazon Redshift](connector-amazon-redshift.md) | Scenario that doesn't rely on below capability in Amazon Redshift (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br>• Use [UNLOAD](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime.<br><br> For more information, go to [Install Amazon Redshift ODBC driver for the version 2.0](connector-amazon-redshift.md#install-amazon-redshift-odbc-driver-for-the-version-20).|
+| [Google BigQuery](connector-google-bigquery.md) | Scenario that doesn't rely on below capability in Google BigQuery V1:<br><br> • Use `trustedCertsPath`, `additionalProjects`, `requestgoogledrivescope` connection properties.<br> • Set `useSystemTrustStore` connection property as `false`.<br> • Use **STRUCT** and **ARRAY** data types. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
+| [Hive](connector-hive.md) | Scenario that doesn't rely on below capability in Hive (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
+| [Impala](connector-impala.md) | Scenario that doesn't rely on below capability in Impala (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• SASL Username<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
+| [Spark](connector-spark.md) | Scenario that doesn't rely on below capability in Spark (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
+| [Teradata](connector-teradata.md) | Scenario that doesn't rely on below capability in Teradata (version 1.0):<br><br> • Set below value for **CharacterSet**:<br>&nbsp;&nbsp;• BIG5 (TCHBIG5_1R0)<br>&nbsp;&nbsp;• EUC (Unix compatible, KANJIEC_0U)<br>&nbsp;&nbsp;• GB (SCHGB2312_1T0)<br>&nbsp;&nbsp;• IBM Mainframe (KANJIEBCDIC5035_0I)<br>&nbsp;&nbsp;• NetworkKorean (HANGULKSC5601_2R4)<br>&nbsp;&nbsp;• Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)|
+| [Vertica](connector-vertica.md) | Scenario that doesn't rely on below capability in Vertica (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime (version 5.55 or above).<br><br> For more information, go to [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20). |


 ## Related content
@@ -120,4 +120,4 @@ You can find more details from the table below on the connector list that is pla
 - [Connector upgrade guidance](connector-upgrade-guidance.md)
 - [Connector upgrade advisor](connector-upgrade-advisor.md)
 - [Connector release stages and timelines](connector-release-stages-and-timelines.md)
-- [Connector upgrade FAQ](connector-deprecation-frequently-asked-questions.md)
+- [Connector upgrade FAQ](connector-deprecation-frequently-asked-questions.md)
