
Commit 086352a

Merge branch 'release-rs-gilboa' into DOC-5680

2 parents 614b991 + 86ede92

File tree

13 files changed (+123, -23 lines)


config.toml

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ rdi_redis_gears_version = "1.2.6"
 rdi_debezium_server_version = "2.3.0.Final"
 rdi_db_types = "cassandra|mysql|oracle|postgresql|sqlserver"
 rdi_cli_latest = "latest"
-rdi_current_version = "1.14.0"
+rdi_current_version = "1.14.1"
 
 [params.clientsConfig]
 "Python"={quickstartSlug="redis-py"}

content/develop/ai/langcache/_index.md

Lines changed: 2 additions & 2 deletions

@@ -33,8 +33,8 @@ Using LangCache as a semantic caching service has the following benefits:
 
 - **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 - **Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
-- **Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
-- **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
+- **Simpler deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Advanced cache management**: Manage data access, privacy, and eviction protocols. Monitor usage and cache hit rates.
 
 LangCache works well for the following use cases:

content/develop/data-types/streams.md

Lines changed: 0 additions & 1 deletion

@@ -981,6 +981,5 @@ A few remarks:
 
 ## Learn more
 
-* The [Redis Streams Tutorial]({{< relref "/develop/data-types/streams" >}}) explains Redis streams with many examples.
 * [Redis Streams Explained](https://www.youtube.com/watch?v=Z8qcpXyMAiA) is an entertaining introduction to streams in Redis.
 * [Redis University's RU202](https://university.redis.com/courses/ru202/) is a free, online course dedicated to Redis Streams.

content/embeds/rdi-supported-source-versions.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 | Database | Versions | AWS RDS Versions | GCP SQL Versions |
 | :-- | :-- | :-- | :-- |
-| Oracle | 12c, 19c, 21c | 19c, 21c | - |
+| Oracle | 19c, 21c | 19c, 21c | - |
 | MariaDB | 10.5, 11.4.3 | 10.4 to 10.11, 11.4.3 | - |
 | MongoDB | 6.0, 7.0, 8.0 | - | - |
 | MySQL | 5.7, 8.0.x, 8.2 | 8.0.x | 8.0 |

content/integrate/redis-data-integration/data-pipelines/deploy.md

Lines changed: 3 additions & 1 deletion

@@ -52,7 +52,9 @@ secrets are only relevant for TLS/mTLS connections.
 {{< note >}}When creating secrets for TLS or mTLS, ensure that all certificates and keys are in `PEM` format. The only exception to this is that for PostgreSQL, the private key `SOURCE_DB_KEY` secret must be in `DER` format. If you have a key in `PEM` format, you must convert it to `DER` before creating the `SOURCE_DB_KEY` secret using the command:
 
 ```bash
-openssl pkcs8 -topk8 -inform PEM -outform DER -in /path/to/myclient.pem -out /path/to/myclient.pk8 -nocrypt
+openssl pkcs8 -topk8 -inform PEM -outform DER \
+  -in /path/to/myclient.pem \
+  -out /path/to/myclient.pk8 -nocrypt
 ```
 
 This command assumes that the private key is not encrypted. See the [`openssl` documentation](https://docs.openssl.org/master/) to learn how to convert an encrypted private key.
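As a quick follow-up, you can confirm that the converted file parses as a valid DER key before creating the secret. This is a minimal sanity check, assuming the unencrypted key and the paths from the command above:

```bash
# Parse the DER-encoded PKCS#8 key; a zero exit code means the file is
# valid DER and ready to be used for the SOURCE_DB_KEY secret.
openssl pkey -inform DER -in /path/to/myclient.pk8 -noout && echo "key parses as DER"
```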

content/integrate/redis-data-integration/data-pipelines/prepare-dbs/oracle.md

Lines changed: 1 addition & 1 deletion

@@ -233,7 +233,7 @@ exit;
 | SELECT ANY TABLE | Enables the connector to read any table. Optionally, rather than granting SELECT permission on all tables, you can grant the SELECT privilege for specific tables only. |
 | SELECT_CATALOG_ROLE | Enables the connector to read the data dictionary, which is needed by Oracle LogMiner sessions. |
 | EXECUTE_CATALOG_ROLE | Enables the connector to write the data dictionary into the Oracle redo logs, which is needed to track schema changes. |
-| SELECT ANY TRANSACTION | Enables the snapshot process to perform a Flashback snapshot query against any transaction so that the connector can read past changes from LogMiner. When FLASHBACK ANY TABLE is granted, this should also be granted. This grant is optional for Oracle 12c and later. In those later releases, the connector obtains the required privileges through the EXECUTE_CATALOG_ROLE and LOGMINING grants. |
+| SELECT ANY TRANSACTION | Enables the snapshot process to perform a Flashback snapshot query against any transaction so that the connector can read past changes from LogMiner. When FLASHBACK ANY TABLE is granted, this should also be granted. This grant is optional for Oracle 19c and later. In those later releases, the connector obtains the required privileges through the EXECUTE_CATALOG_ROLE and LOGMINING grants. |
 | LOGMINING | This role was added in newer versions of Oracle as a way to grant full access to Oracle LogMiner and its packages. On older versions of Oracle that don't have this role, you can ignore this grant. |
 | CREATE TABLE | Enables the connector to create its flush table in its default tablespace. The flush table allows the connector to explicitly control flushing of the LGWR internal buffers to disk. |
 | LOCK ANY TABLE | Enables the connector to lock tables during schema snapshot. If snapshot locks are explicitly disabled via configuration, this grant can be safely ignored. |
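Privileges like those in the table are typically issued from a `sqlplus` session (the hunk header's `exit;` suggests the page shows such a script just above this table). The following is a hedged sketch only: the `c##rdiuser` account, password, and connect string are illustrative placeholders, not values from this page:

```bash
# Hypothetical example: grant the LogMiner-related privileges described
# above to a common (c##) CDB user. Adjust the user, password, and
# connect string for your environment.
sqlplus sys/yourpassword@//localhost:1521/ORCLCDB as sysdba <<'EOF'
GRANT SELECT ANY TRANSACTION TO c##rdiuser CONTAINER=ALL;
GRANT LOGMINING TO c##rdiuser CONTAINER=ALL;
GRANT CREATE TABLE TO c##rdiuser CONTAINER=ALL;
GRANT LOCK ANY TABLE TO c##rdiuser CONTAINER=ALL;
exit;
EOF
```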

content/integrate/redis-data-integration/data-pipelines/prepare-dbs/spanner.md

Lines changed: 28 additions & 10 deletions

@@ -60,9 +60,9 @@ for more details, including additional configuration options and dialect-specifi
 ## 3. Create a service account
 
-To allow RDI to access the Spanner instance, you'll need to create a service account with the
-appropriate permissions. This service account will then be provided to RDI as a secret for
-authentication.
+To allow RDI to access the Spanner instance, you'll need to create a service account with the
+appropriate permissions. By default, RDI uses Google Cloud Workload Identity authentication; in this case, RDI assumes that the [service account is assigned to the GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_clusters_and_node_pools). Alternatively, you can provide the
+service account credentials as a Kubernetes secret (see step 4 for details).
 
 1. Create the service account
@@ -109,25 +109,42 @@ authentication.
     --project=YOUR_PROJECT_ID
   ```
 
-## 4. Set up secrets for Kubernetes deployment
+### Authentication methods
 
-Before deploying the RDI pipeline, you need to configure the necessary secrets for both the source
-and target databases. Instructions for setting up the target database secrets are available in the
+RDI supports two authentication methods for accessing Spanner:
+
+1. **Workload Identity (default)**: The service account is assigned to the GKE cluster, and RDI
+   automatically uses the cluster's identity to authenticate. This is the recommended approach
+   because it's more secure and doesn't require managing credential files.
+
+2. **Service account credentials file**: You provide the service account key file as a Kubernetes
+   secret. This method requires setting `use_credentials_file: true` in your RDI configuration.
+
+## 4. Set up secrets for Kubernetes deployment (optional)
+
+Before deploying the RDI pipeline, you need to configure the necessary secrets for the target
+database. Instructions for setting up the target database secrets are available in the
 [RDI deployment guide]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy#set-secrets-for-k8shelm-deployment-using-kubectl-command" >}}).
 
-In addition to the target database secrets, you'll also need to create a Spanner-specific secret
-named `source-db-credentials`. This secret should contain the service account key file generated
-during the Spanner setup phase. Use the command below to create it:
+**Optional**: If you prefer to use a service account credentials file instead of Workload Identity
+authentication, you'll need to create a Spanner-specific secret named `source-db-credentials`.
+This secret should contain the service account key file generated during the Spanner setup phase.
+Use the command below to create it:
 
 ```bash
 kubectl create secret generic source-db-credentials --namespace=rdi \
   --from-file=gcp-service-account.json=~/spanner-reader-account.json \
   --save-config --dry-run=client -o yaml | kubectl apply -f -
 ```
 
-Be sure to adjust the file path (`~/spanner-reader-account.json`) if your service account key is
+Be sure to adjust the file path (`~/spanner-reader-account.json`) if your service account key is
 stored elsewhere.
 
+{{< note >}}
+If you create the `source-db-credentials` secret, you must also set `use_credentials_file: true`
+in your RDI configuration to use the credentials file instead of Workload Identity authentication.
+{{< /note >}}
+
 ## 5. Configure RDI for Spanner
 
 When configuring your RDI pipeline for Spanner, use the following example configuration in your
@@ -142,6 +159,7 @@ sources:
     project_id: your-project-id
     instance_id: your-spanner-instance
     database_id: your-spanner-database
+    # use_credentials_file: false # Default: uses Workload Identity. Set to true to use service account credentials file instead
     change_streams:
       change_stream_all:
         {}
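If you opt for the credentials-file method, it can be worth confirming the secret's contents before deploying the pipeline. A quick check using standard `kubectl`, assuming the namespace and key name from the command above:

```bash
# Show the keys and byte sizes stored in the secret; you should see
# gcp-service-account.json listed with a non-zero size.
kubectl describe secret source-db-credentials --namespace=rdi
```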

content/integrate/redis-data-integration/installation/install-k8s.md

Lines changed: 42 additions & 0 deletions

@@ -213,6 +213,48 @@ also use mTLS, you must set the client certificate and private key contents in
 Please see [these docs]({{< relref "/integrate/redis-data-integration/data-pipelines/prepare-dbs/spanner#6-additional-kubernetes-configuration" >}}) if this RDI installation is for use with GCP Spanner.
 {{< /note >}}
 
+If you are deploying to [OpenShift](https://docs.openshift.com/), you must
+set `global.openShift` to `true`:
+
+```yaml
+global:
+  # Indicates whether the deployment is intended for an OpenShift environment.
+  openShift: true
+```
+
+You should also set `global.securityContext.runAsUser` and
+`global.securityContext.runAsGroup` to the appropriate values for your
+OpenShift environment:
+
+```yaml
+global:
+  # Container default security context.
+  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+  securityContext:
+    runAsNonRoot: true
+    # On OpenShift, user and group 1000 are usually not allowed.
+    # If using OpenShift, set runAsUser and runAsGroup to values in your project's user and group ranges.
+    # You can find these ranges via `oc get projects <rdi-project-name> -o yaml | grep "openshift.io/sa.scc"`.
+    runAsUser: 1000701234
+    runAsGroup: 1000701234
+    allowPrivilegeEscalation: false
+```
+
+{{< warning >}}The default OpenShift Security Context Constraints (SCCs)
+will not allow RDI to run if `global.securityContext.runAsUser`
+and `global.securityContext.runAsGroup` have their default values of `1000`.
+You must edit your `rdi-values.yaml` file to ensure these values are
+in the valid range for your OpenShift environment.
+
+Use the following [OpenShift CLI](https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/cli_tools/openshift-cli-oc) command
+to find the user and group ranges for your project:
+
+```bash
+oc get projects <rdi-project-name> -o yaml | grep "openshift.io/sa.scc"
+```
+{{< /warning >}}
+
 ## Check the installation
 
 To verify the status of the K8s deployment, run the following command:
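For reference, the `oc` command in the warning above greps the project's SCC annotations. A sketch of what it returns, with a hypothetical project name and purely illustrative range values:

```bash
oc get projects my-rdi-project -o yaml | grep "openshift.io/sa.scc"
# Example output (values vary per project; use your own ranges):
#   openshift.io/sa.scc.supplemental-groups: 1000700000/10000
#   openshift.io/sa.scc.uid-range: 1000700000/10000
# The first number in each start/size range (1000700000 here) is a valid
# choice for runAsUser and runAsGroup in rdi-values.yaml.
```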
Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
+---
+Title: Redis Data Integration release notes 1.14.1 (August 2025)
+alwaysopen: false
+categories:
+- docs
+- operate
+- rs
+description: |
+  New RDI API v2 with enhanced pipeline management.
+  Improved Oracle RAC support with configuration scaffolding.
+  Enhanced metrics and monitoring capabilities.
+  Better TLS/mTLS support across components.
+linkTitle: 1.14.1 (August 2025)
+toc: 'true'
+weight: 977
+---
+
+{{< note >}}This maintenance release replaces the 1.14.0 release.{{< /note >}}
+
+RDI's mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases to:
+
+- Meet the required speed and scale of read queries and provide an excellent and predictable user experience.
+- Save resources and time when building pipelines and coding data transformations.
+- Reduce the total cost of ownership by saving money on expensive database read replicas.
+
+RDI keeps the Redis cache up to date with changes in the primary database, using a [_Change Data Capture (CDC)_](https://en.wikipedia.org/wiki/Change_data_capture) mechanism.
+It also lets you _transform_ the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required.
+
+## What's new in 1.14.1
+
+- Support for Google Cloud Workload Identity authentication when a service account is assigned to the GKE cluster.
+- Fixed RDI API job validation, which incorrectly failed when schemas were not explicitly specified in the source configuration even though the configuration was valid.
+
+## Limitations
+
+RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity, as RDI is not synchronous with the source database commits.

content/operate/rs/7.4/databases/auto-tiering/storage-engine.md

Lines changed: 3 additions & 2 deletions

@@ -15,8 +15,9 @@ url: '/operate/rs/7.4/databases/auto-tiering/storage-engine/'
 Redis Enterprise Auto Tiering supports two storage engines:
 
-* [Speedb](https://www.speedb.io/) (default, recommended)
-* [RocksDB](https://rocksdb.org/)
+- Speedb: Redis's proprietary storage engine. The default and recommended storage engine as of Redis Enterprise Software version 7.2.4.
+
+- [RocksDB](https://rocksdb.org/): Used up to Redis version 6.2. Deprecated for later Redis versions.
 
 {{<warning>}}Switching between storage engines requires guidance from Redis Support or your Account Manager.{{</warning>}}