`content/integrate/redis-data-integration/data-pipelines/deploy.md` (25 changes: 22 additions and 3 deletions)

Use
[`kubectl create secret generic`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/)
to set secrets for a K8s/Helm deployment. The general pattern of the commands is:

```bash
kubectl create secret generic <DB> \
--namespace=rdi \
--from-literal=<SECRET-NAME>=<SECRET-VALUE>
```

Where `<DB>` is either `source-db` for source secrets or `target-db` for target secrets.
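
For example, a minimal sketch that stores just the source database username and password (the values are placeholders; add further literals as your setup requires):

```bash
# Hypothetical minimal example: store only the source username and password.
# The --dry-run/apply pipeline makes the command safe to re-run.
kubectl create secret generic source-db --namespace=rdi \
--from-literal=SOURCE_DB_USERNAME=yourUsername \
--from-literal=SOURCE_DB_PASSWORD=yourPassword \
--save-config --dry-run=client -o yaml | kubectl apply -f -
```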

If you use TLS or mTLS for either the source or target database, you also need to create the `source-db-ssl` and/or `target-db-ssl` K8s secrets that contain the certificates used to establish secure connections. The general pattern of the commands is:

```bash
kubectl create secret generic <DB>-ssl \
--namespace=rdi \
--from-file=<FILE-NAME>=<FILE-PATH>
```
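
For instance, a minimal sketch of creating the `source-db-ssl` secret, assuming the files are keyed by the names RDI expects (`ca.crt`, `client.crt`, and `client.key`) and that the local paths are placeholders:

```bash
# Hypothetical example: local paths are placeholders; the key names mirror the
# certificate file names RDI mounts for the source connection.
kubectl create secret generic source-db-ssl --namespace=rdi \
--from-file=ca.crt=/path/to/ca.crt \
--from-file=client.crt=/path/to/client.crt \
--from-file=client.key=/path/to/client.key \
--save-config --dry-run=client -o yaml | kubectl apply -f -
```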

When you create these secrets, ensure that all certificates and keys are in `PEM` format. The only exception is PostgreSQL, where the private key in the `source-db-ssl` secret (the `client.key` file) must be in `DER` format. If your key is in `PEM` format, convert it to `DER` with the following command before you create the `source-db-ssl` secret:

```bash
openssl pkcs8 -topk8 -inform PEM -outform DER -in /path/to/myclient.key -out /path/to/myclient.pk8 -nocrypt
```

This command assumes that the private key is not encrypted. See the [`openssl` documentation](https://docs.openssl.org/master/) to learn how to convert an encrypted private key.
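
For an encrypted key, one possible approach (a sketch only; the paths are placeholders) is to let `openssl` re-encrypt the key while converting it:

```bash
# Hypothetical example for an encrypted key: openssl prompts for the existing
# passphrase and then for a new passphrase to protect the DER (PKCS#8) output.
openssl pkcs8 -topk8 -inform PEM -outform DER -v2 aes-256-cbc \
-in /path/to/myclient.key -out /path/to/myclient.pk8
```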

The specific command lines for source secrets are as follows:

```bash
kubectl create secret generic source-db --namespace=rdi \
--from-literal=SOURCE_DB_USERNAME=yourUsername \
--from-literal=SOURCE_DB_PASSWORD=yourPassword \
--from-literal=SOURCE_DB_CACERT=/etc/certificates/source_db/ca.crt \
--from-literal=SOURCE_DB_CERT=/etc/certificates/source_db/client.crt \
--from-literal=SOURCE_DB_KEY=/etc/certificates/source_db/client.key \
--from-literal=SOURCE_DB_KEY_PASSWORD=yourKeyPassword \ # add this only if SOURCE_DB_KEY is password-protected
--save-config --dry-run=client -o yaml | kubectl apply -f -
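```

The corresponding command lines for target secrets are:

```bash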
kubectl create secret generic target-db --namespace=rdi \
--from-literal=TARGET_DB_USERNAME=yourUsername \
--from-literal=TARGET_DB_PASSWORD=yourPassword \
--from-literal=TARGET_DB_CACERT=/etc/certificates/target_db/ca.crt \
--from-literal=TARGET_DB_CERT=/etc/certificates/target_db/client.crt \
--from-literal=TARGET_DB_KEY=/etc/certificates/target_db/client.key \
--from-literal=TARGET_DB_KEY_PASSWORD=yourKeyPassword \ # add this only if TARGET_DB_KEY is password-protected
--save-config --dry-run=client -o yaml | kubectl apply -f -
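```

If the target database connection uses TLS, also create the `target-db-ssl` secret, repeating `--from-file` for each certificate file as in the general pattern above:

```bash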
kubectl create secret generic target-db-ssl --namespace=rdi \
--from-file=<FILE-NAME>=<FILE-PATH> \
--save-config --dry-run=client -o yaml | kubectl apply -f -
```

Note that the certificate paths stored in the `SOURCE_DB_CACERT`, `SOURCE_DB_CERT`, and `SOURCE_DB_KEY` values of the `source-db` secret and in the `TARGET_DB_CACERT`, `TARGET_DB_CERT`, and `TARGET_DB_KEY` values of the `target-db` secret are internal to RDI, so you *must* use the values shown in the examples above. The only certificate paths you should change are the local file paths you supply when you create the `source-db-ssl` and `target-db-ssl` secrets.
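
To check what a secret contains, you can inspect it directly (the data values are shown base64-encoded):

```bash
# Inspect the stored keys and values of the source secret (values are base64-encoded).
kubectl get secret source-db --namespace=rdi -o yaml
```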

## Deploy a pipeline

When you have created your configuration, including the [jobs]({{< relref "/integrate/redis-data-integration/data-pipelines/data-pipelines#job-files" >}}), they are