diff --git a/README.md b/README.md
index 39fccb9..a31ce06 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 By using the IBM Connectivity Pack, Connectivity Pack Kafka connectors enable data streaming between external systems and Kafka.
 
-**Note:** 
+**Note:**
 
 - Connectivity Pack v2.0.0 and earlier are compatible with Event Streams 11.8 and earlier, but not compatible with Kafka Connect 4.0.0.
 - Connectivity Pack v3.0.0 is compatible with Event Streams 12.0.0 and later, and compatible with Kafka Connect 4.0.0.
 
@@ -142,25 +142,25 @@ Configure the Kafka Connect runtime and include the configuration, certificates,
 
 1. Apply the configured `KafkaConnect` custom resource by using the `kubectl apply` command to start the Kafka Connect runtime.
 
-1. When Kafka Connect is successfully created, verify that the connector is available for use by checking the `status.connectorPlugins` section in the `KafkaConnect` custom resource. 
-   - For the Connectivity Pack source connector to work, the following plug-in must be present: 
-
-     ```yaml
-     status:
-       connectorPlugins:
-       - class: com.ibm.eventstreams.connect.connectivitypack.source.ConnectivityPackSourceConnector
-         type: source
-         version: 
-     ```
-   - For the Connectivity Pack sink connector to work, the following plug-in must be present: 
-
-     ```yaml
-     status:
-       connectorPlugins:
-       - class: com.ibm.eventstreams.connect.connectivitypack.sink.ConnectivityPackSinkConnector
-         type: sink
-         version: 
-     ```
+1. When Kafka Connect is successfully created, verify that the connector is available for use by checking the `status.connectorPlugins` section in the `KafkaConnect` custom resource.
+   - For the Connectivity Pack source connector to work, the following plug-in must be present:
+
+     ```yaml
+     status:
+       connectorPlugins:
+       - class: com.ibm.eventstreams.connect.connectivitypack.source.ConnectivityPackSourceConnector
+         type: source
+         version: 
+     ```
+   - For the Connectivity Pack sink connector to work, the following plug-in must be present:
+
+     ```yaml
+     status:
+       connectorPlugins:
+       - class: com.ibm.eventstreams.connect.connectivitypack.sink.ConnectivityPackSinkConnector
+         type: sink
+         version: 
+     ```
 
 ## Running the Connectors
 
@@ -171,8 +171,8 @@ Configure your connector with information about your external system by followin
 
 1. Create a `KafkaConnector` custom resource to define your connector configuration. Example custom resources are available in the [`examples`](/examples) folder: [kafka-connector-source.yaml](/examples#kafka-connector-source.yaml) for a source connector and [kafka-connector-sink.yaml](/examples#kafka-connector-sink.yaml) for a sink connector. You can edit these files based on your requirements.
 
 1. Specify the appropriate connector class name:
-   - For a source connector: `com.ibm.eventstreams.connect.connectivitypack.source.ConnectivityPackSourceConnector` 
-   - For a sink connector: `com.ibm.eventstreams.connect.connectivitypack.sink.ConnectivityPackSinkConnector` 
+   - For a source connector: `com.ibm.eventstreams.connect.connectivitypack.source.ConnectivityPackSourceConnector`
+   - For a sink connector: `com.ibm.eventstreams.connect.connectivitypack.sink.ConnectivityPackSinkConnector`
 
 1. Configure the connector properties in the `config` section as described in the respective documentation. See the [source connector documentation](./connectors/source-connector.md#configuration) for source connectors and the [sink connector documentation](./connectors/sink-connector.md#configuration) for sink connectors.
diff --git a/ibm-connectivity-pack/templates/proxy.yaml b/ibm-connectivity-pack/templates/proxy.yaml
deleted file mode 100644
index db569aa..0000000
--- a/ibm-connectivity-pack/templates/proxy.yaml
+++ /dev/null
@@ -1,88 +0,0 @@
-{{ if .Values.certificate.enable -}}
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: {{ include "ibm-connectivity-pack.proxy" . }}
-  namespace: {{ include "ibm-connectivity-pack.namespace" . }}
-  labels:
-    {{- include "ibm-connectivity-pack.labels" . | nindent 4 }}
-  annotations:
-    {{- toYaml .Values.annotations | nindent 4 }}
-data:
-  {{ if .Values.certificate.MTLSenable -}}
-  stunnel.conf: |-
-    ; **************************************************************************
-    ; * Global options                                                         *
-    ; **************************************************************************
-    pid = /tmp/haproxy.pid
-    fips = yes
-    foreground = yes
-    ; **************************************************************************
-    ; * Service defaults                                                       *
-    ; **************************************************************************
-    cert =/etc/stunnel/secrets/server.cert.pem
-    key =/etc/stunnel/secrets/server.key.pem
-    CAfile =/etc/stunnel/secrets/server.ca.pem
-    ; Allow only TLS, thus avoiding SSL
-    sslVersion = TLSv1.2
-    socket = l:TCP_NODELAY=1
-    socket = r:TCP_NODELAY=1
-    verify = 2
-    TIMEOUTclose = 0
-    ; **************************************************************************
-    ; * Services                                                               *
-    ; **************************************************************************
-    [proxy]
-    accept = 3001
-    connect = /tmp/lcp.socket
-    [ecpproxy]
-    accept = 3004
-    connect = /tmp/ecp.socket
-    [fbcproxy]
-    accept = 3006
-    connect=localhost:3005
-    [webhookproxy]
-    verify = 1
-    accept = 3008
-    connect = localhost:3009
-    [wsproxy]
-    accept = 3042
-    connect = localhost:3022
-  {{ else }}
-  stunnel.conf: |-
-    ; **************************************************************************
-    ; * Global options                                                         *
-    ; **************************************************************************
-    pid = /tmp/haproxy.pid
-    fips = yes
-    foreground = yes
-    ; **************************************************************************
-    ; * Service defaults                                                       *
-    ; **************************************************************************
-    cert =/etc/stunnel/secrets/server.cert.pem
-    key =/etc/stunnel/secrets/server.key.pem
-    ; Allow only TLS, thus avoiding SSL
-    sslVersion = TLSv1.2
-    socket = l:TCP_NODELAY=1
-    socket = r:TCP_NODELAY=1
-    TIMEOUTclose = 0
-    ; **************************************************************************
-    ; * Services                                                               *
-    ; **************************************************************************
-    [proxy]
-    accept = 3001
-    connect = /tmp/lcp.socket
-    [ecpproxy]
-    accept = 3004
-    connect = /tmp/ecp.socket
-    [fbcproxy]
-    accept = 3006
-    connect=localhost:3005
-    [webhookproxy]
-    accept = 3008
-    connect = localhost:3009
-    [wsproxy]
-    accept = 3042
-    connect = localhost:3022
-  {{ end }}
-{{ end }}
\ No newline at end of file
diff --git a/systems/salesforce.md b/systems/salesforce.md
deleted file mode 100644
index d523588..0000000
--- a/systems/salesforce.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Salesforce
-
-The Salesforce connector enables streaming of Salesforce platform events and Change Data Capture (CDC) events by using the Faye client or Bauyex protocol. This connector also supports discovery of custom objects and properties.
-
-## Pre-requisites
-
-- Ensure streaming API is enabled for your Salesforce edition and organization.
-- Ensure you have the required permissions set up in Salesforce to use Change Data Capture objects.
-- Ensure you have the required permissions set up in Salesforce to access the specified objects and events.
-- Set the Session Security Level at login value to `None` instead of `High Assurance`.
-- To connect to Salesforce sandboxes or subdomains and use Salesforce as a source system to trigger events, enable the Salesforce Organization object in your Salesforce environment.
-- If using Change Data Capture (CDC) events, ensure that CDC is enabled for the specified objects in Salesforce.
-
-## Connecting to Salesforce
-
-The `connectivitypack.source` and `connectivitypack.source.url` configurations in the `KafkaConnector` custom resource provide the connector with the required information to connect to the data source.
-
-| **Name**                        | **Value or Description**                                                                                                     |
-| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
-| `connectivitypack.source`       | `salesforce`                                                                                                                 |
-| `connectivitypack.source.url`   | Specifies the URL of the source system. For example, for Salesforce, the base URL of your instance is `https://.salesforce.com`. |
-
-## Supported authentication mechanisms
-
-You can configure the following authentication mechanisms for Salesforce in the `KafkaConnector` custom resource depending on the authentication flow in Salesforce.
-
-### 1. Basic OAuth
-
-- **Use Case:** Recommended for most applications.
-- **Required Credentials:**
-  - **Client Identity:** Obtain this by creating a *Connected App* in Salesforce and locating the *Consumer Key* under the application's settings.
-  - **Client Secret:** Available in the *Connected App* configuration alongside the *Consumer Key*.
-  - **Access Token and Refresh Token:** Generated by performing an OAuth flow with the configured Connected App.
-
-For more information, see the [Salesforce OAuth 2.0 Documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_web_server_oauth_flow.htm).
-
-| **Name**                                                       | **Description**                                                                                      |
-| -------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
-| **connectivitypack.source.credentials.authType**               | `BASIC_OAUTH` - Specifies that the connector will use Basic OAuth for authentication.                |
-| **connectivitypack.source.credentials.clientSecret**           | The client secret of the Salesforce connected app used for Basic OAuth authentication.               |
-| **connectivitypack.source.credentials.clientIdentity**         | The client ID (or consumer key) of the Salesforce connected app used for Basic OAuth authentication. |
-| **connectivitypack.source.credentials.accessTokenBasicOauth**  | The access token used for Basic OAuth authentication with Salesforce.                                |
-| **connectivitypack.source.credentials.refreshTokenBasicOauth** | The refresh token used to renew the OAuth access token for Basic OAuth authentication.               |
-
-### 2. OAuth2 Password (Deprecated)
-
-- **Use Case:** Legacy applications where Basic OAuth is not applicable.
-- **Required Credentials:**
-  - **Username and Password:** Use the Salesforce account’s credentials.
-  - **Client Identity and Client Secret:** Same as Basic OAuth, obtained from the *Connected App* settings.
-- **Important Note:** Salesforce has deprecated the OAuth2 Password grant type. If you're using this method, plan to migrate to Basic OAuth to ensure future compatibility.
-
-| **Name**                                               | **Description**                                                                                              |
-| ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------ |
-| **connectivitypack.source.credentials.authType**       | `OAUTH2_PASSWORD` - Specifies that the connector will use OAuth 2.0 Password authentication.                 |
-| **connectivitypack.source.credentials.username**       | The Salesforce username required for OAuth2 Password authentication.                                         |
-| **connectivitypack.source.credentials.password**       | The Salesforce password associated with the username for OAuth2 Password authentication.                     |
-| **connectivitypack.source.credentials.clientSecret**   | The client secret of the Salesforce Connected App required for OAuth2 Password authentication.               |
-| **connectivitypack.source.credentials.clientIdentity** | The client ID (or consumer key) of the Salesforce Connected App required for OAuth2 Password authentication. |
-
-## Supported objects and events
-
-You can specify any of the following objects and associated events in the `connectivitypack.source.` and the `connectivitypack.source..events` sections of the `KafkaConnector` custom resource:
-
-### Platform Events
-
-[Salesforce platform events](https://www.ibm.com/links?url=https%3A%2F%2Fdeveloper.salesforce.com%2Fdocs%2Fatlas.en-us.platform_events.meta%2Fplatform_events%2Fplatform_events_intro.htm) deliver custom event notifications when something meaningful happens to objects that are defined in your Salesforce organization. Platform events are dynamic in nature and specific to the endpoint account connected, and as a result are not shown in the static list.
-
-| **Objects**                           | **Events**                |
-|:-------------------------------------:|:-------------------------:|
-| Platform Event objects                | CREATED                   |
-
-#### Replay ID
-Salesforce provides queues for recording platform events and each event notification has a unique replay ID. Salesforce retains platform events for 72 hours, and a user can store a replay ID value to use when subscribing again to retrieve events during the retention window, as described in the [Salesforce documentation](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/using_streaming_api_durability.htm).
-
-The Salesforce connector uses the replay ID to track Salesforce platform events it has received. If the connector is restarted for any reason, it resumes streaming from where it stopped by using the replay ID. If the replay ID is no longer valid (more than 72 hours old), the connector will not be able to resume. Instead, it will start a new subscription to receive events from the current time.
-
-### Change Data Capture Events
-
-Salesforce CDC events provide notifications of state changes to objects that you are interested in.
-
-**Note:** CDC must be enabled by customers, and it is only available for objects in the dynamic list.
-
-All custom objects and a subset of standard objects are supported for use with Change Data Capture in Salesforce. For the full list, see [Change Event Object Support](https://www.ibm.com/links?url=https%3A%2F%2Fdeveloper.salesforce.com%2Fdocs%2Fatlas.en-us.change_data_capture.meta%2Fchange_data_capture%2Fcdc_object_support.htm).
-
-| **Objects**                           | **Events**                |
-|:-------------------------------------:|:-------------------------:|
-| Change Data Capture objects           | CREATED, UPDATED, DELETED |
-
-## Example configuration
-
-The following is an example of a connector configuration for Salesforce:
-
-```yaml
-apiVersion: eventstreams.ibm.com/v1beta2
-kind: KafkaConnector
-metadata:
-  labels:
-    # The eventstreams.ibm.com/cluster label identifies the Kafka Connect instance
-    # in which to create this connector. That KafkaConnect instance
-    # must have the eventstreams.ibm.com/use-connector-resources annotation
-    # set to true.
-    eventstreams.ibm.com/cluster: cp-connect-cluster
-  name: 
-  namespace: 
-spec:
-  # Connector class name
-  class: com.ibm.eventstreams.connect.connectivitypacksource.ConnectivityPackSourceConnector
-
-  config:
-    # Which data source to connect to, for example, salesforce
-    connectivitypack.source: salesforce
-
-    # URL to access the data source, for example, `https://.salesforce.com`
-    connectivitypack.source.url: 
-
-    # Credentials to access the data source using OAUTH2_PASSWORD authentication.
-    connectivitypack.source.credentials.authType: OAUTH2_PASSWORD
-    connectivitypack.source.credentials.username: 
-    connectivitypack.source.credentials.password: 
-    connectivitypack.source.credentials.clientSecret: 
-    connectivitypack.source.credentials.clientIdentity: 
-
-    # Objects and event types to read from the data source
-    connectivitypack.source.objects: ','
-    connectivitypack.source..events: 'CREATED'
-    connectivitypack.source..events: 'CREATED,UPDATED'
-
-    # Optional, sets the format for Kafka topic names created by the connector.
-    # You can use placeholders such as '${object}' and '${eventType}', which the connector will replace automatically.
-    # Including '${object}' or '${eventType}' in the format is optional. For example, '${object}-topic-name' is a valid format.
-    # By default, the format is '${object}-${eventType}', but it's shown here for clarity.
-    connectivitypack.topic.name.format: '${object}-${eventType}'
-
-    # `tasksMax` must be equal to the number of object-eventType combinations
-    # In this example it is 3 (object1 - CREATED, object2 - CREATED, object2 - UPDATED)
-    tasksMax: 3
-
-    # Specifies the converter class used to deserialize the message value.
-    # Change this to a different converter (for example, AvroConverter) as applicable.
-    value.converter: org.apache.kafka.connect.json.JsonConverter
-
-    # Controls whether the schema is included in the message.
-    # Set this to false to disable schema support, or to true to enable schema inclusion (for example, for Avro).
-    value.converter.schemas.enable: false
-```
\ No newline at end of file
diff --git a/systems/sink systems/servicenow.md b/systems/sink systems/servicenow.md
index ef93b01..8399c57 100644
--- a/systems/sink systems/servicenow.md
+++ b/systems/sink systems/servicenow.md
@@ -9,7 +9,7 @@ To use the ServiceNow connector, ensure that you have the required credentials a
 
 ## Connecting to ServiceNow
 
-The `connectivitypack.sink` and the associated ServiceNow resource configurations in the `KafkaConnector` custom resource provide the required information to connect to ServiceNow. 
+The `connectivitypack.sink` and the associated ServiceNow resource configurations in the `KafkaConnector` custom resource provide the required information to connect to ServiceNow.
 
 | Name                      | Value or Description                                                                 |
 |---------------------------|--------------------------------------------------------------------------------------|
@@ -18,7 +18,7 @@ The `connectivitypack.sink` and the associated ServiceNow resource configuration
 
 ## Supported authentication mechanisms
 
-You can configure the following authentication mechanisms for ServiceNow in the `KafkaConnector` custom resource: 
+You can configure the following authentication mechanisms for ServiceNow in the `KafkaConnector` custom resource:
 
 | Authentication Type  | System Type | Use Case  | Required Credentials  | `KafkaConnector` configuration  |
 |----------------------|-------------|-----------|-----------------------|---------------------------------|
@@ -28,7 +28,7 @@ You can configure the following authentication mechanisms for ServiceNow in the
 
 ## Supported objects and actions
 
-The ServiceNow sink connector supports the following objects and their actions when processing data from Kafka topics: 
+The ServiceNow sink connector supports the following objects and their actions when processing data from Kafka topics:
 
 | Object              | Action | Description                                          | KafkaConnector configuration |
 |---------------------|--------|------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
@@ -66,7 +66,7 @@ The ServiceNow sink connector supports the following objects and their actions w
 | | DELETE | Deletes a comment record. | `connectivitypack.sink.object: sys_journal_field`<br>`connectivitypack.sink.action: DELETE`<br>`connectivitypack.sink.object.key: sys_id` |
 | sys_attachment | CREATE | Creates a new attachment record. | `connectivitypack.sink.object: sys_attachment`<br>`connectivitypack.sink.action: CREATE`<br>`connectivitypack.sink.resource.parentType: `<br>`connectivitypack.sink.resource.AttachmentOwnerId: `<br>**Note:** Parent type value must be `ticket`, `incident`, `problem`, `alm_asset`, `cmn_department`, or `sys_user`. |
 | | DELETE | Deletes an attachment record. | `connectivitypack.sink.object: sys_attachment`<br>`connectivitypack.sink.action: DELETE`<br>`connectivitypack.sink.object.key: sys_id` |
-   
+
 
 ## Example configuration
 
diff --git a/systems/source systems/hdfs.md b/systems/source systems/hdfs.md
index 0715930..65444f6 100644
--- a/systems/source systems/hdfs.md
+++ b/systems/source systems/hdfs.md
@@ -35,7 +35,7 @@ The connector works with the following types of files:
 
 ### CSV files
 
-- A CSV file can contain multiple records. 
+- A CSV file can contain multiple records.
 - The CSV file must only be in UTF-8 file encoding standard.
 - Each record in a CSV file must end with a line delimiter. The delimiter is configurable, with `\n` (newline) being the default.
 - The first line must contain a header with the field or column names. The connector treats this line as the header. Column names can be plain text, enclosed in double quotes, or escaped by using double quotes if they contain quote characters. For example:
@@ -43,7 +43,7 @@ The connector works with the following types of files:
   - `"Index","Customer Id","First Name","Last Name"`
   - `Index,"Customer Id",First Name,"Last Name"`
   - `"Index","My ""Customer"" Id", "First Name", "Last Name"`
-   
+
 - The value fields in a CSV can be plain text, enclosed in double quotes, or escaped by using double quotes if they contain quote characters.
 - Each CSV file must end with a line delimiter. If the last line does not include a delimiter, the last record will not be sent to the Kafka topic.
 - It is recommended to use a CSV file with a .csv extension.
@@ -70,7 +70,7 @@ This section describes the objects, associated events, and subscription paramete
 
 ### UnstructuredRecord events
 
-The HDFS connector monitors a specified folder for CSV or reference files and streams the data to Kafka topics. For more information about reference files and how to use them, see [reference files](#reference-files). 
+The HDFS connector monitors a specified folder for CSV or reference files and streams the data to Kafka topics. For more information about reference files and how to use them, see [reference files](#reference-files).
 
 The following table describes the supported object and event for HDFS.
 
@@ -126,7 +126,7 @@ You can configure how the connector sends HDFS records to Kafka topics:
 - Specify the name of the Kafka topic: You can explicitly specify a single topic name for all records fetched by the connector by using the `connectivitypack.topic.name.format` parameter.
 
   | **KafkaConnector configuration**   | **Description**                                           |
-  |:----------------------------------:|:---------------------------------------------------------:| 
+  |:----------------------------------:|:---------------------------------------------------------:|
   | `connectivitypack.topic.name.format` | Specifies the Kafka topic name. For example, `customer` |
 
 - Default behavior: If no topic name is configured, the connector automatically uses the file name (without extension) as the topic name. If a file has no extension but its name contains a period (.), the text after the last period will be treated as the file extension.
diff --git a/systems/source systems/jira.md b/systems/source systems/jira.md
index d735aa3..b0a388a 100644
--- a/systems/source systems/jira.md
+++ b/systems/source systems/jira.md
@@ -1,6 +1,6 @@
 # Jira
 
-The Jira connector uses Jira API to stream Jira issue events to Kafka Topics. You can use this connector for real-time tracking of issues in Jira projects. 
+The Jira connector uses the Jira API to stream Jira issue events to Kafka topics. You can use this connector for real-time tracking of issues in Jira projects.
 
 **Note:** The connector supports only the Jira Enterprise version.