
Commit 6411eaa

Clare Zheng (Shanghai Wicresoft Co Ltd) authored and committed
Resolve blocking issue and move auto upgrade section
1 parent 47c20c1 · commit 6411eaa

3 files changed: +70 -55 lines changed

articles/data-factory/connector-lifecycle-overview.md

Lines changed: 5 additions & 51 deletions
@@ -24,11 +24,11 @@ Connector upgrades are essential to evolve innovation in a fast manner, maintain
 
 - **New feature enhancements such as security, performance, etc.**
 
-While the service actively evolves to provide the most secure and reliable features in the connector, leveraging the connector lifecycle is an efficient approach to ensure that users can take full advantage of the new enhancements at their manageable pace without business interruption.
+While the service actively evolves to provide the most secure and reliable features in the connector, applying the connector lifecycle is an efficient approach to ensure that users can take full advantage of the new enhancements at their manageable pace without business interruption.
 
 - **Protocol change introduced by external data source vendors leading to potential behavior changes**
 
-These changes aren't always exhaustively predictable and arise due to incompatibility brought by individual data source vendor itself. Given these uncertainties, versioning ensures that users can adopt the updated connector (e.g. version 2.0) while maintaining a fallback option in a period. This empowers users to well plan for a version upgrade to accommodate potential differences while providing users with a clear transition path.
+These changes aren't always exhaustively predictable and arise due to incompatibility brought by individual data source vendor itself. Given these uncertainties, versioning ensures that users can adopt the updated connector (for example, version 2.0) while maintaining a fallback option in a period. This empowers users to well plan for a version upgrade to accommodate potential differences while providing users with a clear transition path.
 
 - **Fixing unintended behaviors**
 

@@ -48,14 +48,14 @@ A connector lifecycle includes multiple stages with thorough and measurable asse
 | Public Preview | This stage marks the initial release of a new connector version to all users publicly. During this phase, users are encouraged to try the latest connector version and provide feedback. For newly created connections, it defaults to the latest connector version. Users can switch back to the previous version. | 1 month or above* |
 | General Availability | Once a connector version meets the General Availability (GA) criteria, it's released to the public and is suitable for production workloads. To reach this stage, the new connector version must meet the requirements in terms of performance, reliability, and its capability to meet business needs. | 12 months or above* |
 | End-of-Support (EOS) announced | When a connector version reaches its EOS, it won't receive any further updates or support. A six-month notice is announced before the EOS date of this version. This is documented together with the removal date. | 6 months before the end-of-support date* |
-| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it won't receive any updates or bug fixes, and no official support will be provided. Users won't be able to create new workloads on a version that is under EOS stage. Using an unsupported connector version is at the user's own risk. The workload running on EOS version may not fail immediately, the service might expedite moving into the final stage at any time, at Microsoft's discretion due to outstanding security issues, or other factors. | / |
-| Version removed | Once the connector version passes its EOS date, the service will remove all related components associated with this connector version. This implies that pipelines using this connector version will discontinue to execute. | 1-12 months after the end of support date* |
+| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it won't receive any updates or bug fixes, and no official support will be provided. Users won't be able to create new workloads on a version that is under EOS stage. Using an unsupported connector version is at the user's own risk. The workload running on EOS version may not fail immediately. The service might expedite moving into the final stage at any time, at Microsoft's discretion due to outstanding security issues, or other factors. | / |
+| Version removed | Once the connector version passes its EOS date, the service removes all related components associated with this connector version. This implies that pipelines using this connector version discontinues to execute. | 1-12 months after the end of support date* |
 
 *\* These timelines are provided as an example and might vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*
 
 ## Understanding connector versions
 
-To manage connection updates effectively, it's important to understand versioning and how to interpret the change. Connectors in Azure Data Factory generally follow versioning Major.Minor (e.g., 1.2):
+To manage connection updates effectively, it's important to understand versioning and how to interpret the change. Connectors in Azure Data Factory generally follow versioning Major.Minor (for example, 1.2):
 
 - **Major updates (x.0):** These are significant changes that require review on the changes before upgrade.
 - **Minor updates (1.x):** These might introduce new features or fixes, but with minor changes to the existing behavior.
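
For orientation on the Major.Minor scheme above, here's a minimal sketch of where a connector version typically surfaces in a linked service payload. This assumes the `version` property that versioned connectors expose in their JSON definitions; the connector type and `typeProperties` content are illustrative placeholders:

```json
{
    "name": "ExampleLinkedService",
    "properties": {
        "type": "Salesforce",
        "version": "2.0",
        "typeProperties": {
            "environmentUrl": "<environment URL>"
        }
    }
}
```
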
@@ -70,52 +70,6 @@ When a version reaches its end-of-support date, users are no longer allowed to c
 
 In addition to major and minor version updates, the service also delivers new features and bug fixes that are fully backward compatible with your existing setup. These changes don't require a version update to the connector. Depending on the nature of the change, users may either receive the improvements automatically or have the option to enable new features as needed. This approach ensures a seamless experience while maintaining stability and flexibility.
 
-## Automatic connector upgrade
-
-In addition to providing [tools](connector-upgrade-advisor.md) and [best practices](connector-upgrade-guidance.md) to help users manually upgrade their connectors, the service now also provides a more streamlined upgrade process for some cases where applicable. This is designed to help users adopt the most reliable and supported connector versions with minimal disruption.
-
-The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it's strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.
-
-In cases where certain scenarios running on the latest GA connector version are fully backward compatible with the previous version, the service will automatically upgrade existing workloads (such as Copy, Lookup, and Script activities) to a compatibility mode that preserves the behavior of the earlier version.
-
-These auto-upgraded workloads aren't affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.
-
-You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgraded information is recorded.
-
-**Example:**
-
-Copy activity output
-
-```json
-"source": {
-    "type": "AmazonS3",
-    "autoUpgrade": "true",
-}
-
-"sink": {
-    "type": "AmazonS3",
-    "autoUpgrade": "true",
-}
-```
-
-> [!NOTE]
-> While compatibility mode offers flexibility, we strongly encourage users to upgrade to the latest GA version as soon as possible to benefit from ongoing improvements, optimizations, and full support.
-
-You can find more details from the table below on the connector list that is planned for the automatic upgrade.
-
-| Connector | Scenario |
-|------------------|----------|
-| [Amazon Redshift](connector-amazon-redshift.md) | Scenario that doesn't rely on below capability in Amazon Redshift (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br>• Use [UNLOAD](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime.<br><br> For more information, go to [Install Amazon Redshift ODBC driver for the version 2.0](connector-amazon-redshift.md#install-amazon-redshift-odbc-driver-for-the-version-20).|
-| [Google BigQuery](connector-google-bigquery.md) | Scenario that doesn't rely on below capability in Google BigQuery V1:<br><br> • Use `trustedCertsPath`, `additionalProjects`, `requestgoogledrivescope` connection properties.<br> • Set `useSystemTrustStore` connection property as `false`.<br> • Use **STRUCT** and **ARRAY** data types. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Hive](connector-hive.md) | Scenario that doesn't rely on below capability in Hive (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Impala](connector-impala.md) | Scenario that doesn't rely on below capability in Impala (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• SASL Username<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Salesforce](connector-salesforce.md) | Scenario that does not rely on capability below in Salesforce V1:<br><br>• SOQL queries that use:<br>&nbsp;&nbsp;• TYPEOF clauses<br>&nbsp;&nbsp;• Compound address/geolocations fields<br>• All SQL-92 query<br>• Report query {call "\<report name>"}<br>• Use Self-hosted integration runtime (To be supported) |
-| [Salesforce Service Cloud](connector-salesforce-service-cloud.md) | Scenario that does not rely on capability below in Salesforce Service Cloud V1:<br><br>• SOQL queries that use:<br>&nbsp;&nbsp;• TYPEOF clauses<br>&nbsp;&nbsp;• Compound address/geolocations fields<br>• All SQL-92 query<br>• Report query {call "\<report name>"}<br>• Use Self-hosted integration runtime (To be supported) |
-| [Spark](connector-spark.md) | Scenario that doesn't rely on below capability in Spark (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Teradata](connector-teradata.md) | Scenario that doesn't rely on below capability in Teradata (version 1.0):<br><br> • Set below value for **CharacterSet**:<br>&nbsp;&nbsp;• BIG5 (TCHBIG5_1R0)<br>&nbsp;&nbsp;• EUC (Unix compatible, KANJIEC_0U)<br>&nbsp;&nbsp;• GB (SCHGB2312_1T0)<br>&nbsp;&nbsp;• IBM Mainframe (KANJIEBCDIC5035_0I)<br>&nbsp;&nbsp;• NetworkKorean (HANGULKSC5601_2R4)<br>&nbsp;&nbsp;• Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)|
-| [Vertica](connector-vertica.md) | Scenario that doesn't rely on below capability in Vertica (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime (version 5.55 or above).<br><br> For more information, go to [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20). |
-
-
 ## Related content
 
 - [Connector overview](connector-overview.md)
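
For completeness, a self-contained rendering of the activity-output fragment shown in the removed section above. Only `type` and `autoUpgrade` come from that text; the metric fields and the `AzureBlobFS` sink type are illustrative assumptions, and the stray trailing commas are dropped so the fragment parses as JSON:

```json
{
    "dataRead": 1048576,
    "dataWritten": 1048576,
    "filesRead": 10,
    "source": {
        "type": "AmazonS3",
        "autoUpgrade": "true"
    },
    "sink": {
        "type": "AzureBlobFS",
        "autoUpgrade": "true"
    }
}
```
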

articles/data-factory/connector-snowflake-legacy.md

Lines changed: 2 additions & 4 deletions
@@ -89,8 +89,6 @@ The following sections provide details about properties that define entities spe
 
 This Snowflake connector supports the following authentication types. See the corresponding sections for details.
 
-
-
 - [Basic authentication](#basic-authentication)
 
 ### Basic authentication
@@ -100,7 +98,7 @@ The following properties are supported for a Snowflake linked service when using
 | Property | Description | Required |
 | :--------------- | :----------------------------------------------------------- | :------- |
 | type | The type property must be set to **Snowflake**. | Yes |
-| connectionString | Specifies the information needed to connect to the Snowflake instance. You can choose to put password or entire connection string in Azure Key Vault. Refer to the examples below the table, and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article, for more details.<br><br>Some typical settings:<br>- **Account name:** The [full account name](https://docs.snowflake.net/manuals/user-guide/connecting.html#your-snowflake-account-name) of your Snowflake account (including additional segments that identify the region and cloud platform), e.g. xy12345.east-us-2.azure.<br/>- **User name:** The login name of the user for the connection.<br>- **Password:** The password for the user.<br>- **Database:** The default database to use once connected. It should be an existing database for which the specified role has privileges.<br>- **Warehouse:** The virtual warehouse to use once connected. It should be an existing warehouse for which the specified role has privileges.<br>- **Role:** The default access control role to use in the Snowflake session. The specified role should be an existing role that has already been assigned to the specified user. The default role is PUBLIC. | Yes |
+| connectionString | Specifies the information needed to connect to the Snowflake instance. You can choose to put password or entire connection string in Azure Key Vault. Refer to the examples below the table, and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article, for more details.<br><br>Some typical settings:<br>- **Account name:** The [full account name](https://docs.snowflake.net/manuals/user-guide/connecting.html#your-snowflake-account-name) of your Snowflake account (including additional segments that identify the region and cloud platform), for example, xy12345.east-us-2.azure.<br/>- **User name:** The login name of the user for the connection.<br>- **Password:** The password for the user.<br>- **Database:** The default database to use once connected. It should be an existing database for which the specified role has privileges.<br>- **Warehouse:** The virtual warehouse to use once connected. It should be an existing warehouse for which the specified role has privileges.<br>- **Role:** The default access control role to use in the Snowflake session. The specified role should be an existing role that has already been assigned to the specified user. The default role is PUBLIC. | Yes |
 | authenticationType | Set this property to **Basic**. | Yes |
 | connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No |
 
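
As a companion to the Basic authentication table above, here's a minimal hedged linked service sketch. The JDBC-style connection string follows the pattern the legacy connector documents, and every angle-bracketed value is a placeholder:

```json
{
    "name": "SnowflakeLinkedService",
    "properties": {
        "type": "Snowflake",
        "typeProperties": {
            "authenticationType": "Basic",
            "connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>&role=<myRole>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```
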

@@ -197,7 +195,7 @@ To copy data from Snowflake, the following properties are supported in the Copy
 | Property | Description | Required |
 | :--------------------------- | :----------------------------------------------------------- | :------- |
 | type | The type property of the Copy activity source must be set to **SnowflakeSource**. | Yes |
-| query | Specifies the SQL query to read data from Snowflake. If the names of the schema, table and columns contain lower case, quote the object identifier in query e.g. `select * from "schema"."myTable"`.<br>Executing stored procedure isn't supported. | No |
+| query | Specifies the SQL query to read data from Snowflake. If the names of the schema, table and columns contain lower case, quote the object identifier in query, for example, `select * from "schema"."myTable"`.<br>Executing stored procedure isn't supported. | No |
 | exportSettings | Advanced settings used to retrieve data from Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | Yes |
 | ***Under `exportSettings`:*** | | |
 | type | The type of export command, set to **SnowflakeExportCopyCommand**. | Yes |
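
A minimal Copy activity source sketch assembled from the properties in this table; the query value reuses the table's own quoting example, and `exportSettings` carries only its required `type`:

```json
"source": {
    "type": "SnowflakeSource",
    "query": "select * from \"schema\".\"myTable\"",
    "exportSettings": {
        "type": "SnowflakeExportCopyCommand"
    }
}
```
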
