diff --git a/content/operate/kubernetes/7.4.6/_index.md b/content/operate/kubernetes/7.4.6/_index.md
new file mode 100644
index 0000000000..0c37a0422f
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/_index.md
@@ -0,0 +1,17 @@
+---
+Title: Redis Enterprise for Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: The Redis Enterprise operator allows you to use Redis Enterprise for
+ Kubernetes.
+hideListLinks: false
+linkTitle: 7.4.6
+weight: 50
+url: '/operate/kubernetes/7.4.6/'
+---
+
+Kubernetes provides enterprise orchestration of containers and has been widely adopted. Redis Enterprise for Kubernetes provides a simple way to get a Redis Enterprise cluster on Kubernetes and enables more complex deployment scenarios.
+
diff --git a/content/operate/kubernetes/7.4.6/active-active/_index.md b/content/operate/kubernetes/7.4.6/active-active/_index.md
new file mode 100644
index 0000000000..bbf2e5c1b6
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/_index.md
@@ -0,0 +1,99 @@
+---
+Title: Active-Active databases
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Content related to Active-Active Redis Enterprise databases for Kubernetes.
+hideListLinks: true
+linkTitle: Active-Active databases
+weight: 40
+url: '/operate/kubernetes/7.4.6/active-active/'
+---
+
+On Kubernetes, Redis Enterprise [Active-Active]({{< relref "/operate/rs/databases/active-active/" >}}) databases provide read and write access to the same dataset from different Kubernetes clusters.
+
+## Active-Active setup methods
+
+There are two methods for creating an Active-Active database with Redis Enterprise for Kubernetes:
+
+- The `RedisEnterpriseActiveActiveDatabase` (REAADB) custom resource is available for versions 6.4.2 and later.
+- The `crdb-cli` method is available for versions 6.4.2 or earlier.
+
+
+We recommend creating new Active-Active databases using the RedisEnterpriseActiveActiveDatabase (REAADB) custom resource. This allows you to manage your Active-Active database with the operator and ensures you have the latest features and functionality.
+
+### Active-Active controller method
+
+Versions 6.4.2-6 or later fully support the Active-Active controller. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.
+
+This setup method includes the following steps:
+
+1. Gather REC credentials and [prepare participating clusters]({{< relref "/operate/kubernetes/active-active/prepare-clusters.md" >}}).
+2. Create [`RedisEnterpriseRemoteCluster` (RERC)]({{< relref "/operate/kubernetes/active-active/create-reaadb#create-rerc" >}}) resources.
+3. Create [`RedisEnterpriseActiveActiveDatabase` (REAADB)]({{< relref "/operate/kubernetes/active-active/create-reaadb#create-reaadb" >}}) resource.
+
+### `crdb-cli` method
+
+For versions 6.4.2 or earlier, this Active-Active setup method includes the following steps:
+
+1. Install and configure an ingress.
+2. Gather configuration details.
+3. Add the `ActiveActive` field to the REC spec.
+4. Create the database with the `crdb-cli` tool.
+
+## Redis Enterprise Active-Active controller for Kubernetes
+
+{{< note >}}These features are supported for general availability in releases 6.4.2-6 and later.{{< /note >}}
+
+[Active-Active]({{< relref "/operate/rs/databases/active-active/" >}}) databases give you read-and-write access to Redis Enterprise clusters (REC) in different Kubernetes clusters or namespaces. Active-Active deployments managed by the Redis Enterprise operator require two additional custom resources: Redis Enterprise Active-Active database (REAADB) and Redis Enterprise remote cluster (RERC).
+
+To create an Active-Active Redis Enterprise deployment for Kubernetes with these new features, first [prepare participating clusters]({{< relref "/operate/kubernetes/active-active/prepare-clusters.md" >}}) then [create an Active-Active database]({{< relref "/operate/kubernetes/active-active/create-reaadb.md" >}}).
+
+### Preview versions
+
+If you are using a preview version of these features (operator version 6.4.2-4 or 6.4.2-5), you'll need to enable the Active-Active controller with the following steps. You need to do this only once per cluster. We recommend using the fully supported 6.4.2-6 version.
+
+1. Download the custom resource definitions (CRDs) for the most recent release (6.4.2-4) from [redis-enterprise-k8s-docs Github](https://github.com/RedisLabs/redis-enterprise-k8s-docs/tree/master/crds).
+
+1. Apply the new CRDs for the Redis Enterprise Active-Active database (REAADB) and Redis Enterprise remote cluster (RERC) to install those controllers.
+
+ ```sh
+ kubectl apply -f crds/reaadb_crd.yaml
+ kubectl apply -f crds/rerc_crd.yaml
+ ```
+
+1. Enable the Active-Active and remote cluster controllers on the operator ConfigMap.
+
+ ```sh
+ kubectl patch cm operator-environment-config --type merge --patch "{\"data\": \
+ {\"ACTIVE_ACTIVE_DATABASE_CONTROLLER_ENABLED\":\"true\", \
+ \"REMOTE_CLUSTER_CONTROLLER_ENABLED\":\"true\"}}"
+ ```
+
+### REAADB custom resource
+
+The Redis Enterprise Active-Active database (REAADB) custom resource contains a link to the RERC for each participating cluster, and provides configuration and status to the management plane.
+
+For a full list of fields and options, see the [REAADB API reference]({{}}).
+
+### RERC custom resource
+
+The Redis Enterprise remote cluster (RERC) custom resource contains configuration details for all the participating clusters.
+
+For a full list of fields and options, see the [RERC API reference]({{}}).
+
+### Limitations
+
+* Existing Redis databases cannot be migrated to a REAADB. (DOC-3594)
+* The admission controller does not block an REAADB whose `shardCount` exceeds the license quota. (RED-96301)
+  Workaround: Fix the problems with the REAADB and reapply.
+* The `<REC name>/<REC namespace>` value must be unique for each RERC resource. (RED-96302)
+* Only global database options are supported; there is no support for specifying configuration per location.
+* No support for migration from old (`crdb-cli`) Active-Active database method to new Active-Active controller.
+* No support for REAADB with participating clusters co-located within the same Kubernetes cluster, except for a single designated local participating cluster.
+
+## More info
+
+For more general information about Active-Active, see the [Redis Enterprise Software docs]({{< relref "/operate/rs/databases/active-active/" >}}).
diff --git a/content/operate/kubernetes/7.4.6/active-active/create-aa-crdb-cli.md b/content/operate/kubernetes/7.4.6/active-active/create-aa-crdb-cli.md
new file mode 100644
index 0000000000..404cb4e17f
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/create-aa-crdb-cli.md
@@ -0,0 +1,217 @@
+---
+Title: Create Active-Active databases with crdb-cli
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section shows how to set up an Active-Active Redis Enterprise database
+ on Kubernetes using the Redis Enterprise Software operator.
+linkTitle: Create Active-Active with crdb-cli
+weight: 99
+url: '/operate/kubernetes/7.4.6/active-active/create-aa-crdb-cli/'
+---
+{{< note >}}Versions 6.4.2 and later support the Active-Active database controller. This controller allows you to create Redis Enterprise Active-Active databases (REAADB) and Redis Enterprise remote clusters (RERC) with custom resources. We recommend using the [REAADB method for creating Active-Active databases]({{< relref "/operate/kubernetes/active-active/create-reaadb.md" >}}).{{< /note >}}
+
+On Kubernetes, Redis Enterprise [Active-Active]({{< relref "/operate/rs/databases/active-active/" >}}) databases provide read-and-write access to the same dataset from different Kubernetes clusters. For more general information about Active-Active, see the [Redis Enterprise Software docs]({{< relref "/operate/rs/databases/active-active/" >}}).
+
+Creating an Active-Active database requires routing [network access]({{< relref "/operate/kubernetes/networking/" >}}) between two Redis Enterprise clusters residing in different Kubernetes clusters. Without the proper access configured for each cluster, syncing between the database instances will fail.
+
+This process consists of:
+
+1. Documenting values to be used in later steps. It's important these values are correct and consistent.
+1. Editing the Redis Enterprise cluster (REC) spec file to include the `ActiveActive` section. This will be slightly different depending on the K8s distribution you are using.
+1. Creating the database with the `crdb-cli` command. These values must match up with values in the REC resource spec.
+
+## Prerequisites
+
+Before creating Active-Active databases, you'll need admin access to two or more working Kubernetes clusters that each have:
+
+- Routing for external access with an [ingress resource]({{< relref "/operate/kubernetes/networking/ingress.md" >}}) (or a [route resource]({{< relref "/operate/kubernetes/networking/routes.md" >}}) on OpenShift).
+- A working [Redis Enterprise cluster (REC)]({{< relref "/operate/kubernetes/reference/redis_enterprise_cluster_api" >}}) with a unique name.
+- Enough memory resources available for the database (see [hardware requirements]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}})).
+
+{{< note >}}The `activeActive` field and the `ingressOrRouteSpec` field cannot coexist in the same REC. If you configured your ingress via the `ingressOrRouteSpec` field in the REC, create your Active-Active database with the RedisEnterpriseActiveActiveDatabase (REAADB) custom resource.{{< /note >}}
+
+## Document required parameters
+
+The most common mistake when setting up Active-Active databases is incorrect or inconsistent parameter values. The values listed in the resource file must match those used in the crdb-cli command.
+
+- **Database name** `<db-name>`:
+ - Description: Combined with ingress suffix to create the Active-Active database hostname
+ - Format: string
+ - Example value: `myaadb`
+ - How you get it: you choose
+ - The database name requirements are:
+ - Maximum of 63 characters
+ - Only letter, number, or hyphen (-) characters
+ - Starts with a letter; ends with a letter or digit.
+ - Database name is not case-sensitive
+
+You'll need the following information for each participating Redis Enterprise cluster (REC):
+
+{{< note >}}
+You'll need to create DNS aliases that resolve your API hostname (`<api-hostname>`) and each database's replication hostname (`<db-name><ingress-suffix>`) to the IP address for the ingress controller's LoadBalancer (or routes in OpenShift). To avoid entering multiple DNS records, you can use a wildcard in your alias (such as `*.ijk.example.com`).
+{{< /note >}}
+
+- **REC hostname** `<rec-hostname>`:
+ - Description: Hostname used to identify your Redis Enterprise cluster in the `crdb-cli` command. This MUST be different from other participating clusters.
+ - Format: `<rec-name>.<namespace>.svc.cluster.local`
+ - Example value: `rec01.ns01.svc.cluster.local`
+ - How to get it: List all your Redis Enterprise clusters
+ ```bash
+ kubectl get rec
+ ```
+- **API hostname** `<api-hostname>`:
+ - Description: Hostname used to access the Redis Enterprise cluster API from outside the K8s cluster
+ - Format: string
+ - Example value: `api.ijk.example.com`
+- **Ingress suffix** `<ingress-suffix>`:
+ - Description: Combined with database name to create the Active-Active database hostname
+ - Format: string
+ - Example value: `-cluster.ijk.example.com`
+- [**REC admin credentials**]({{< relref "/operate/kubernetes/security/manage-rec-credentials.md" >}}) `<username>` `<password>`:
+ - Description: Admin username and password for the REC stored in a secret
+ - Format: string
+ - Example value: username: `user@example.com`, password: `something`
+ - How to get them:
+ ```sh
+ kubectl get secret <rec-name> \
+ -o jsonpath='{.data.username}' | base64 --decode
+ kubectl get secret <rec-name> \
+ -o jsonpath='{.data.password}' | base64 --decode
+ ```
+- **Replication hostname** `<replication-hostname>`:
+ - Description: Hostname used inside the ingress for the database
+ - Format: `<db-name><ingress-suffix>`
+ - Example value: `myaadb-cluster.ijk.example.com`
+ - How to get it: Combine the `<db-name>` and `<ingress-suffix>` values you documented above.
+- **Replication endpoint** `<replication-endpoint>`:
+ - Description: Endpoint used externally to contact the database
+ - Format: `<replication-hostname>:443`
+ - Example value: `myaadb-cluster.ijk.example.com:443`
+ - How to get it: `<replication-hostname>:443`
+
+## Add `activeActive` section to the REC resource file
+
+From inside your K8s cluster, edit your Redis Enterprise cluster (REC) resource to add the following to the `spec` section. Do this for each participating cluster.
+
+ The operator uses the API hostname (`<api-hostname>`) to create an ingress to the Redis Enterprise cluster's API; this only happens once per cluster. Every time a new Active-Active database instance is created on this cluster, the operator creates a new ingress route to the database with the ingress suffix (`<ingress-suffix>`). The hostname for each new database will be in the format `<db-name><ingress-suffix>`.
+
+### Using ingress controller
+
+1. If your cluster uses an [ingress controller]({{< relref "/operate/kubernetes/networking/ingress.md" >}}), add the following to the `spec` section of your REC resource file.
+
+ Nginx:
+
+ ```sh
+ activeActive:
+ apiIngressUrl: <api-hostname>
+ dbIngressSuffix: <ingress-suffix>
+ ingressAnnotations:
+ kubernetes.io/ingress.class: nginx
+ nginx.ingress.kubernetes.io/backend-protocol: HTTPS
+ nginx.ingress.kubernetes.io/ssl-passthrough: "true"
+ method: ingress
+ ```
+
+HAproxy:
+
+ ```sh
+ activeActive:
+ apiIngressUrl: <api-hostname>
+ dbIngressSuffix: <ingress-suffix>
+ ingressAnnotations:
+ kubernetes.io/ingress.class: haproxy
+ ingress.kubernetes.io/ssl-passthrough: "true"
+ method: ingress
+ ```
+
+2. After the changes are saved and applied, you can verify a new ingress was created for the API.
+
+ ```sh
+ $ kubectl get ingress
+ NAME HOSTS ADDRESS PORTS AGE
+ rec01 api.abc.cde.example.com 225161f845b278-111450635.us.cloud.com 80 24h
+ ```
+
+3. Verify you can access the API from outside the K8s cluster.
+
+ ```sh
+ curl -k -L -i -u <username>:<password> https://<api-hostname>/v1/cluster
+ ```
+
+ If the API call fails, create a DNS alias that resolves your API hostname (`<api-hostname>`) to the IP address for the ingress controller's LoadBalancer.
+
+4. Make sure you have DNS aliases for each database that resolve your API hostname (`<api-hostname>`) and replication hostname (`<db-name><ingress-suffix>`) to the IP address of the ingress controller's LoadBalancer. To avoid entering multiple DNS records, you can use a wildcard in your alias (such as `*.ijk.example.com`).
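+
+ As a quick check, a DNS lookup for the example hostnames used above should return the LoadBalancer IP address:
+
+ ```sh
+ # Both lookups should return the ingress controller's LoadBalancer IP
+ dig +short api.ijk.example.com
+ dig +short myaadb-cluster.ijk.example.com
+ ```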
+
+#### If using Istio Gateway and VirtualService
+
+No changes are required to the REC spec if you are using [Istio]({{< relref "/operate/kubernetes/networking/istio-ingress.md" >}}) in place of an ingress controller. The `activeActive` section added above creates ingress resources. The two custom resources used to configure Istio (Gateway and VirtualService) replace the need for ingress resources.
+
+{{}}
+These custom resources are not controlled by the operator and will need to be configured and maintained manually.
+{{}}
+
+For each cluster, verify the VirtualService resource has two `- match:` blocks in the `tls` section. The hostnames under `sniHosts:` should match your `<api-hostname>` and your database hostnames (`<db-name><ingress-suffix>`).
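+
+The following is a hypothetical sketch of how the relevant part of such a VirtualService might look. The resource names, destination services, and database port are placeholders, not values created by the operator; the point here is only the `tls` structure with two `- match:` blocks.
+
+```yaml
+apiVersion: networking.istio.io/v1beta1
+kind: VirtualService
+metadata:
+  name: rec01-virtual-service
+spec:
+  hosts:
+  - api.ijk.example.com
+  - "*.ijk.example.com"
+  gateways:
+  - rec01-gateway
+  tls:
+  - match:
+    - port: 443
+      sniHosts:
+      - api.ijk.example.com          # your <api-hostname>
+    route:
+    - destination:
+        host: <rec-api-service>      # placeholder: service exposing the REC API
+        port:
+          number: 9443
+  - match:
+    - port: 443
+      sniHosts:
+      - "*.ijk.example.com"          # covers <db-name><ingress-suffix> hostnames
+    route:
+    - destination:
+        host: <db-service>           # placeholder: service exposing the database
+        port:
+          number: <db-port>
+```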
+
+### Using OpenShift routes
+
+1. Make sure your Redis Enterprise cluster (REC) has a different name (`<rec-name>`) than any other participating clusters. If not, you'll need to manually rename the REC or move it to a different namespace.
+ You can check your new REC name with:
+ ```sh
+ oc get rec -o jsonpath='{.items[0].metadata.name}'
+ ```
+
+ If the rec name was modified, reapply [scc.yaml](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/scc.yaml) to the namespace to reestablish security privileges.
+
+ ```sh
+ oc apply -f scc.yaml
+ oc adm policy add-scc-to-group redis-enterprise-scc-v2 system:serviceaccounts:<namespace>
+ ```
+
+ Releases before 6.4.2-6 use the earlier version of the SCC, named `redis-enterprise-scc`.
+
+1. Make sure you have DNS aliases for each database that resolve your API hostname (`<api-hostname>`) and replication hostname (`<db-name><ingress-suffix>`) to the route IP address. To avoid entering multiple DNS records, you can use a wildcard in your alias (such as `*.ijk.example.com`).
+
+1. If your cluster uses [OpenShift routes]({{< relref "/operate/kubernetes/networking/routes.md" >}}), add the following to the `spec` section of your Redis Enterprise cluster (REC) resource file.
+
+ ```sh
+ activeActive:
+ apiIngressUrl: <api-hostname>
+ dbIngressSuffix: <ingress-suffix>
+ method: openShiftRoute
+ ```
+
+1. Make sure you have DNS aliases that resolve to the route's IP address for both the API hostname (`<api-hostname>`) and the replication hostname (`<replication-hostname>`) for each database. To avoid entering each database individually, you can use a wildcard in your alias (such as `*.ijk.example.com`).
+
+1. After the changes are saved and applied, you can see that a new route was created for the API.
+
+ ```sh
+ $ oc get route
+ NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+ rec01 api-openshift.apps.abc.example.com rec01 api passthrough None
+ ```
+
+## Create an Active-Active database with `crdb-cli`
+
+The `crdb-cli` command can be run from any Redis Enterprise pod hosted on any participating K8s cluster. You'll need the values for the [required parameters]({{< relref "/operate/kubernetes/active-active/create-aa-crdb-cli#document-required-parameters" >}}) for each Redis Enterprise cluster.
+
+```sh
+crdb-cli crdb create \
+ --name <db-name> \
+ --memory-size <memory-size> \
+ --encryption yes \
+ --instance fqdn=<cluster1-rec-hostname>,url=https://<cluster1-api-hostname>,username=<cluster1-username>,password=<cluster1-password>,replication_endpoint=<cluster1-replication-endpoint>,replication_tls_sni=<cluster1-replication-hostname> \
+ --instance fqdn=<cluster2-rec-hostname>,url=https://<cluster2-api-hostname>,username=<cluster2-username>,password=<cluster2-password>,replication_endpoint=<cluster2-replication-endpoint>,replication_tls_sni=<cluster2-replication-hostname>
+```
+
+To create a database that syncs between more than two instances, add additional `--instance` arguments.
+
+See the [`crdb-cli` reference]({{< relref "/operate/rs/references/cli-utilities/crdb-cli" >}}) for more options.
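+
+For example, using the sample values documented in [Document required parameters](#document-required-parameters) for the first cluster, and illustrative values for a hypothetical second cluster (`rec02` in namespace `ns02` behind the domain `xyz.example.com`), the command might look like the following sketch:
+
+```sh
+crdb-cli crdb create \
+  --name myaadb \
+  --memory-size 1GB \
+  --encryption yes \
+  --instance fqdn=rec01.ns01.svc.cluster.local,url=https://api.ijk.example.com,username=user@example.com,password=something,replication_endpoint=myaadb-cluster.ijk.example.com:443,replication_tls_sni=myaadb-cluster.ijk.example.com \
+  --instance fqdn=rec02.ns02.svc.cluster.local,url=https://api.xyz.example.com,username=user@example.com,password=something,replication_endpoint=myaadb-cluster.xyz.example.com:443,replication_tls_sni=myaadb-cluster.xyz.example.com
+```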
+
+## Test your database
+
+The easiest way to test your Active-Active database is to set a key-value pair in one database and retrieve it from the other.
+
+You can connect to your databases with the instructions in [Manage databases]({{< relref "/operate/kubernetes/re-databases/db-controller#connect-to-a-database" >}}). Set a test key with `SET foo bar` in the first database. If your Active-Active deployment is working properly, when connected to your second database, `GET foo` should output `bar`.
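+
+A minimal sketch with `redis-cli` (the hostnames, port, and password are placeholders for your own values):
+
+```sh
+# Connect to the database instance on the first cluster and set a key
+redis-cli -h <first-db-hostname> -p <db-port> -a <db-password> SET foo bar
+
+# Connect to the instance on the second cluster and read the key back
+redis-cli -h <second-db-hostname> -p <db-port> -a <db-password> GET foo
+# Expected reply: "bar"
+```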
diff --git a/content/operate/kubernetes/7.4.6/active-active/create-reaadb.md b/content/operate/kubernetes/7.4.6/active-active/create-reaadb.md
new file mode 100644
index 0000000000..a47886b761
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/create-reaadb.md
@@ -0,0 +1,174 @@
+---
+Title: Create Active-Active database (REAADB)
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+linkTitle: Create database
+weight: 30
+url: '/operate/kubernetes/7.4.6/active-active/create-reaadb/'
+---
+
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+## Prerequisites
+
+To create an Active-Active database, make sure you've completed all the following steps and have gathered the information listed below each step.
+
+1. Configure the [admission controller and ValidatingWebhook]({{< relref "/operate/kubernetes/deployment/quick-start.md#enable-the-admission-controller/" >}}).
+ {{< note >}}These are installed and enabled by default on clusters created via the OpenShift OperatorHub.{{< /note >}}
+
+2. Create two or more [RedisEnterpriseCluster (REC) custom resources]({{< relref "/operate/kubernetes/deployment/quick-start#create-a-redis-enterprise-cluster-rec" >}}) with enough [memory resources]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}).
+ * Name of each REC (`<rec-name>`)
+ * Namespace for each REC (`<namespace>`)
+
+3. Configure the REC [`ingressOrRouteSpec` field]({{< relref "/operate/kubernetes/networking/ingressorroutespec.md" >}}) and [create DNS records]({{< relref "/operate/kubernetes/networking/ingressorroutespec#configure-dns/" >}}).
+ * REC API hostname (`api-<rec-name>-<namespace>.<domain>`)
+ * Database hostname suffix (`-db-<rec-name>-<namespace>.<domain>`)
+
+4. [Prepare participating clusters]({{< relref "/operate/kubernetes/active-active/prepare-clusters.md" >}})
+ * RERC name (`<rerc-name>`)
+ * RERC secret name (`redis-enterprise-<rerc-name>`)
+
+For a list of example values used throughout this article, see the [Example values](#example-values) section.
+
+## Create `RedisEnterpriseRemoteCluster` resources {#create-rerc}
+
+1. Create a `RedisEnterpriseRemoteCluster` (RERC) custom resource file for each participating Redis Enterprise cluster (REC).
+
+ Below are examples of RERC resources for two participating clusters. Substitute your own values to create your own resource.
+
+ Example RERC (`rerc-ohare`) for the REC named `rec-chicago` in the namespace `ns-illinois`:
+
+ ```yaml
+ apiVersion: app.redislabs.com/v1alpha1
+ kind: RedisEnterpriseRemoteCluster
+ metadata:
+ name: rerc-ohare
+ spec:
+ recName: rec-chicago
+ recNamespace: ns-illinois
+ apiFqdnUrl: api-rec-chicago-ns-illinois.example.com
+ dbFqdnSuffix: -db-rec-chicago-ns-illinois.example.com
+ secretName: redis-enterprise-rerc-ohare
+ ```
+
+ Example RERC (`rerc-reagan`) for the REC named `rec-arlington` in the namespace `ns-virginia`:
+
+ ```yaml
+ apiVersion: app.redislabs.com/v1alpha1
+ kind: RedisEnterpriseRemoteCluster
+ metadata:
+ name: rerc-reagan
+ spec:
+ recName: rec-arlington
+ recNamespace: ns-virginia
+ apiFqdnUrl: test-example-api-rec-arlington-ns-virginia.example.com
+ dbFqdnSuffix: -example-cluster-rec-arlington-ns-virginia.example.com
+ secretName: redis-enterprise-rerc-reagan
+ ```
+
+ For more details on RERC fields, see the [RERC API reference]({{}}).
+
+1. Create a Redis Enterprise remote cluster from each RERC custom resource file.
+
+ ```sh
+ kubectl create -f <rerc-file>
+ ```
+
+1. Check the status of your RERC. If `STATUS` is `Active` and `SPEC STATUS` is `Valid`, then your configurations are correct.
+
+ ```sh
+ kubectl get rerc
+ ```
+
+ The output should look similar to:
+
+ ```sh
+ kubectl get rerc rerc-ohare
+
+ NAME STATUS SPEC STATUS LOCAL
+ rerc-ohare Active Valid true
+ ```
+
+ In case of errors, review the RERC custom resource events and the Redis Enterprise operator logs.
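+
+ To view these, you can use commands similar to the following (`redis-enterprise-operator` is the typical operator deployment name and may differ in your installation):
+
+ ```sh
+ # Show events recorded for the remote cluster resource
+ kubectl describe rerc <rerc-name>
+
+ # Show the operator logs
+ kubectl logs deployment/redis-enterprise-operator
+ ```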
+
+## Create `RedisEnterpriseActiveActiveDatabase` resource {#create-reaadb}
+
+1. Create a `RedisEnterpriseActiveActiveDatabase` (REAADB) custom resource file meeting the naming requirements and listing the names of the RERC custom resources created in the last step.
+
+ Naming requirements:
+ * less than 63 characters
+ * contains only lowercase letters, numbers, or hyphens
+ * starts with a letter
+ * ends with a letter or digit
+
+ Example REAADB named `reaadb-boeing` linked to the REC named `rec-chicago` with two participating clusters and a global database configuration with shard count set to 3:
+
+ ```yaml
+ apiVersion: app.redislabs.com/v1alpha1
+ kind: RedisEnterpriseActiveActiveDatabase
+ metadata:
+ name: reaadb-boeing
+ spec:
+ globalConfigurations:
+ databaseSecretName:
+ memorySize: 200MB
+ shardCount: 3
+ participatingClusters:
+ - name: rerc-ohare
+ - name: rerc-reagan
+ ```
+
+ {{< note >}}Sharding is disabled on Active-Active databases created with a `shardCount` of 1. Sharding cannot be enabled after database creation.{{< /note >}}
+
+ For more details on REAADB fields, see the [REAADB API reference]({{}}).
+
+1. Create a Redis Enterprise Active-Active database from the REAADB custom resource file.
+
+ ```sh
+ kubectl create -f <reaadb-file>
+ ```
+
+1. Check the status of your REAADB. If `STATUS` is `active` and `SPEC STATUS` is `Valid`, your configurations are correct.
+
+ ```sh
+ kubectl get reaadb
+ ```
+
+ The output should look similar to:
+
+ ```sh
+ kubectl get reaadb reaadb-boeing
+
+ NAME STATUS SPEC STATUS LINKED REDBS REPLICATION STATUS
+ reaadb-boeing active Valid up
+ ```
+
+
+ In case of errors, review the REAADB custom resource events and the Redis Enterprise operator logs.
+
+## Example values
+
+This article uses the following example values:
+
+#### Example cluster 1
+
+* REC name: `rec-chicago`
+* REC namespace: `ns-illinois`
+* RERC name: `rerc-ohare`
+* RERC secret name: `redis-enterprise-rerc-ohare`
+* API FQDN: `api-rec-chicago-ns-illinois.example.com`
+* DB FQDN suffix: `-db-rec-chicago-ns-illinois.example.com`
+
+#### Example cluster 2
+
+* REC name: `rec-arlington`
+* REC namespace: `ns-virginia`
+* RERC name: `rerc-reagan`
+* RERC secret name: `redis-enterprise-rerc-reagan`
+* API FQDN: `api-rec-arlington-ns-virginia.example.com`
+* DB FQDN suffix: `-db-rec-arlington-ns-virginia.example.com`
+
diff --git a/content/operate/kubernetes/7.4.6/active-active/edit-clusters.md b/content/operate/kubernetes/7.4.6/active-active/edit-clusters.md
new file mode 100644
index 0000000000..2907ec069a
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/edit-clusters.md
@@ -0,0 +1,158 @@
+---
+Title: Edit participating clusters for Active-Active database
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Steps to add or remove a participating cluster to an existing Active-Active
+ database with Redis Enterprise for Kubernetes.
+linkTitle: Edit participating clusters
+weight: 40
+url: '/operate/kubernetes/7.4.6/active-active/edit-clusters/'
+---
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+## Add a participating cluster
+
+Use the following steps to add a participating cluster to an existing Redis Enterprise Active-Active database (REAADB) for Kubernetes.
+
+### Prerequisites
+
+To prepare the Redis Enterprise cluster (REC) to participate in an Active-Active database, perform the following tasks from [Prepare participating clusters]({{< relref "/operate/kubernetes/active-active/prepare-clusters.md" >}}):
+
+- Make sure the cluster meets the hardware and naming requirements.
+- Enable the Active-Active controllers.
+- Configure external routing.
+- Configure `ValidatingWebhookConfiguration`.
+
+### Collect REC credentials
+
+To communicate with other clusters, all participating clusters need access to the admin credentials for all other clusters.
+
+1. Get the REC credentials secret for the new participating cluster.
+
+ ```sh
+ kubectl get secret <rec-name> -o yaml
+ ```
+
+ This example shows an admin credentials secret for an REC named `rec-boston`:
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: ABcdef12345
+ username: GHij56789
+ kind: Secret
+ metadata:
+ name: rec-boston
+ type: Opaque
+ ```
+
+1. Create a secret for the new participating cluster named `redis-enterprise-<rerc-name>` and add the username and password.
+
+ The example below shows a secret file for a remote cluster named `rerc-logan`.
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: ABcdef12345
+ username: GHij56789
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-logan
+ type: Opaque
+ ```
+
+1. Apply the file of collected secrets to every participating REC.
+
+ ```sh
+ kubectl apply -f <secrets-file>
+ ```
+
+ If the admin credentials for any of the clusters change, update and reapply the file to all clusters.
+
+### Create RERC
+
+1. From one of the existing participating clusters, create a `RedisEnterpriseRemoteCluster` (RERC) custom resource for the new participating cluster.
+
+ This example shows an RERC custom resource for an REC named `rec-boston` in the namespace `ns-massachusetts`.
+
+ ```yaml
+ apiVersion: app.redislabs.com/v1alpha1
+ kind: RedisEnterpriseRemoteCluster
+ metadata:
+ name: rerc-logan
+ spec:
+ recName: rec-boston
+ recNamespace: ns-massachusetts
+ apiFqdnUrl: test-example-api-rec-boston-ns-massachusetts.example.com
+ dbFqdnSuffix: -example-cluster-rec-boston-ns-massachusetts.example.com
+ secretName: redis-enterprise-rerc-logan
+ ```
+
+1. Create the RERC custom resource.
+
+ ```sh
+ kubectl create -f <rerc-file>
+ ```
+
+1. Check the status of the newly created RERC custom resource.
+
+ ```sh
+ kubectl get rerc
+ ```
+
+ The output should look like this:
+
+ ```sh
+ NAME STATUS SPEC STATUS LOCAL
+ rerc-logan Active Valid true
+ ```
+
+### Edit REAADB spec
+
+1. Patch the REAADB spec to add the new RERC name to the `participatingClusters` list, replacing `<reaadb-name>` and `<new-rerc-name>` with your own values.
+
+ ```sh
+ kubectl patch reaadb <reaadb-name> --type merge --patch '{"spec": {"participatingClusters": [{"name": "<new-rerc-name>"}]}}'
+ ```
+
+1. View the REAADB `participatingClusters` status to verify the cluster was added.
+
+ ```sh
+ kubectl get reaadb <reaadb-name> -o=jsonpath='{.status.participatingClusters}'
+ ```
+
+ The output should look like this:
+
+ ```sh
+ [{"id":1,"name":"rerc-ohare"},{"id":2,"name":"rerc-reagan"},{"id":3,"name":"rerc-logan"}]
+ ```
+
+## Remove a participating cluster
+
+1. On an existing participating cluster, remove the desired cluster from the `participatingClusters` section of the REAADB spec.
+
+ ```sh
+ kubectl edit reaadb <reaadb-name>
+ ```
+
+1. On each of the other participating clusters, verify that the status is `active`, the spec status is `Valid`, and the cluster was removed.
+
+ ```sh
+ kubectl get reaadb <reaadb-name>
+ ```
+
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+Before a RedisEnterpriseCluster (REC) can participate in an Active-Active database, it needs an accompanying RedisEnterpriseRemoteCluster (RERC) custom resource. The RERC contains details allowing the REC to link to the RedisEnterpriseActiveActiveDatabase (REAADB). The RERC resource is listed in the REAADB resource to become a participating cluster for the Active-Active database.
+
+The RERC controller periodically connects to the local REC endpoint via its external address, to ensure it's set up correctly. For this to work, the external load balancer must support [NAT hairpinning](https://en.wikipedia.org/wiki/Network_address_translation#NAT_loopback). In some cloud environments, this may involve disabling IP preservation for the load balancer target groups.
+
+For more details, see the [RERC API reference]({{}}).
+
+## Edit RERC
+
+Use the `kubectl patch rerc <rerc-name> --type merge --patch` command to patch the local RERC custom resource with your changes. For a full list of available fields, see the [RERC API reference]({{}}).
+
+The following example edits the `dbFqdnSuffix` field for the RERC named `rerc-ohare`.
+
+```sh
+kubectl patch rerc rerc-ohare --type merge --patch \
+'{"spec":{"dbFqdnSuffix": "-example2-cluster-rec-chicago-ns-illinois.example.com"}}'
+```
+
+## Update RERC secret
+
+If the credentials are changed or updated for a REC participating cluster, you need to manually edit the RERC secret and apply it to all participating clusters.
+
+1. On the local cluster, update the secret with new credentials and name it with the following convention: `redis-enterprise-<rerc-name>`.
+
+ A secret for a remote cluster named `rerc-ohare` would be similar to the following:
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: PHNvbWUgcGFzc3dvcmQ+
+ username: PHNvbWUgdXNlcj4
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-ohare
+ type: Opaque
+ ```
+
+1. Apply the file.
+
+ ```sh
+ kubectl apply -f <rerc-secret-file>
+ ```
+
+1. Watch the RERC to verify the status is "Active" and the spec status is "Valid."
+
+ ```sh
+ kubectl get rerc
+ ```
+
+ The output should look like this:
+
+ ```sh
+ NAME STATUS SPEC STATUS LOCAL
+ rerc-ohare Active Valid true
+ ```
+
+ To troubleshoot invalid configurations, view the RERC custom resource events and the [Redis Enterprise operator logs]({{< relref "/operate/kubernetes/logs/" >}}).
+
+1. Verify the status of each REAADB using that RERC is "Active" and the spec status is "Valid."
+
+ ```sh
+ kubectl get reaadb reaadb-boeing
+
+ NAME STATUS SPEC STATUS LINKED REDBS REPLICATION STATUS
+ reaadb-boeing active Valid up
+ ```
+
+ To troubleshoot invalid configurations, view the RERC custom resource events and the [Redis Enterprise operator logs]({{< relref "/operate/kubernetes/logs/" >}}).
+
+1. Repeat the above steps on all other participating clusters.
diff --git a/content/operate/kubernetes/7.4.6/active-active/global-config.md b/content/operate/kubernetes/7.4.6/active-active/global-config.md
new file mode 100644
index 0000000000..012825995c
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/global-config.md
@@ -0,0 +1,107 @@
+---
+Title: Set global database configurations
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: The REAADB custom resource contains the field '.spec.globalConfigurations',
+ which sets the database configurations.
+linkTitle: Global configuration
+weight: 50
+url: '/operate/kubernetes/7.4.6/active-active/global-config/'
+---
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+The Redis Enterprise Active-Active database (REAADB) custom resource contains the field `.spec.globalConfigurations`. This field sets configurations for the Active-Active database across all participating clusters, such as memory size, shard count, and the global database secrets.
+
+The [REAADB API reference]({{}}) contains a full list of available fields.
+
+## Edit global configurations
+
+1. Edit or patch the REAADB custom resource with your global configuration changes.
+
+ The example command below patches the REAADB named `reaadb-boeing` to set the global memory size to 200MB:
+
+ ```sh
+ kubectl patch reaadb reaadb-boeing --type merge --patch \
+ '{"spec": {"globalConfigurations": {"memorySize": "200mb"}}}'
+ ```
+
+1. Verify the status is `active` and the spec status is `Valid`.
+
+ This example shows the status for the `reaadb-boeing` database.
+
+ ```sh
+ kubectl get reaadb reaadb-boeing
+
+ NAME STATUS SPEC STATUS GLOBAL CONFIGURATIONS REDB LINKED REDBS
+ reaadb-boeing active Valid
+ ```
+
+1. View the global configurations on each participating cluster to verify they are synced.
+
+ ```sh
+ kubectl get reaadb <reaadb-name> -o yaml
+ ```
+
+## Edit global configuration secrets
+
+This section edits the secrets under the REAADB `.spec.globalConfigurations` section. For more information and all available fields, see the [REAADB API reference]({{}}).
+
+
+1. On an existing participating cluster, generate a YAML file containing the database secret with the relevant data.
+
+ This example shows a secret named `my-db-secret` with a base64-encoded password.
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: bXktcGFzcw
+ kind: Secret
+ metadata:
+ name: my-db-secret
+ type: Opaque
+ ```
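+
+ If needed, you can generate the base64 value for the `password` field with a command like the following (the password here is illustrative):
+
+ ```sh
+ echo -n '<your-password>' | base64
+ ```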
+
+1. Apply the secret file from the previous step, substituting your own value for `<db-secret-file>`.
+
+ ```sh
+ kubectl apply -f <db-secret-file>
+ ```
+
+1. Patch the REAADB custom resource to specify the database secret, substituting your own values for `<reaadb-name>` and `<secret-name>`.
+
+ ```sh
+ kubectl patch reaadb <reaadb-name> --type merge --patch \
+ '{"spec": {"globalConfigurations": {"databaseSecretName": "<secret-name>"}}}'
+ ```
+
+1. Check the REAADB status for an `active` status and `Valid` spec status.
+
+ ```sh
+ kubectl get reaadb <reaadb-name>
+
+ NAME STATUS SPEC STATUS GLOBAL CONFIGURATIONS REDB LINKED REDBS
+ reaadb-boeing active Valid
+ ```
+
+1. On each other participating cluster, check the secret status.
+
+ ```sh
+ kubectl get reaadb <reaadb-name> -o=jsonpath='{.status.secretsStatus}'
+ ```
+
+ The output should show the status as `Invalid`.
+
+ ```sh
+ [{"name":"my-db-secret","status":"Invalid"}]
+ ```
+
+1. Sync the secret on each participating cluster.
+
+ ```sh
+ kubectl apply -f <db-secret-file>
+ ```
+
+1. Repeat the previous two steps on every participating cluster.
diff --git a/content/operate/kubernetes/7.4.6/active-active/global-db-secret.md b/content/operate/kubernetes/7.4.6/active-active/global-db-secret.md
new file mode 100644
index 0000000000..5039795eab
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/global-db-secret.md
@@ -0,0 +1,76 @@
+---
+Title: Set global database secret
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: The REAADB contains the field '.spec.globalConfigurations' to set the
+ global database secret.
+linkTitle: Global database secret
+weight: 50
+url: '/operate/kubernetes/7.4.6/active-active/global-db-secret/'
+---
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+## Set global database secret
+
+One of the fields available for `globalConfigurations` is `databaseSecretName`, which can point to a secret containing the database password. To set the database secret name and sync the data to all participating clusters, follow the steps below.
+
+To edit other global configurations, see [global configuration]({{< relref "/operate/kubernetes/active-active/global-config.md" >}}).
+
+1. On an existing participating cluster, generate a YAML file containing the database secret with the database password.
+
+ This example shows a secret named `my-db-secret` with a base64-encoded password.
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: bXktcGFzcw
+ kind: Secret
+ metadata:
+ name: my-db-secret
+ type: Opaque
+ ```
+
+1. Apply the secret file from the previous step, substituting your own value for `<db-secret-file>`.
+
+ ```sh
+ kubectl apply -f <db-secret-file>
+ ```
+
+1. Patch the REAADB custom resource to specify the database secret, substituting your own values for `<reaadb-name>` and `<secret-name>`.
+
+ ```sh
+ kubectl patch reaadb <reaadb-name> --type merge --patch \
+ '{"spec": {"globalConfigurations": {"databaseSecretName": "<secret-name>"}}}'
+ ```
+
+1. Check the REAADB status for an `active` status and `Valid` spec status.
+
+ ```sh
+ kubectl get reaadb <reaadb-name>
+
+ NAME STATUS SPEC STATUS GLOBAL CONFIGURATIONS REDB LINKED REDBS
+ example-aadb-1 active Valid
+ ```
+
+1. On each other participating cluster, check the secret status.
+
+ ```sh
+ kubectl get reaadb <reaadb-name> -o=jsonpath='{.status.secretsStatus}'
+ ```
+
+ The output should show the status as `Invalid`.
+
+ ```sh
+ [{"name":"my-db-secret","status":"Invalid"}]
+ ```
+
+1. Sync the secret on each participating cluster.
+
+ ```sh
+ kubectl apply -f <db-secret-file>
+ ```
+
+1. Repeat the previous two steps on every participating cluster.
diff --git a/content/operate/kubernetes/7.4.6/active-active/prepare-clusters.md b/content/operate/kubernetes/7.4.6/active-active/prepare-clusters.md
new file mode 100644
index 0000000000..795f38c1a9
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/active-active/prepare-clusters.md
@@ -0,0 +1,166 @@
+---
+Title: Prepare participating clusters
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Prepare your participating RECs to be part of an Active-Active database
+ deployment.
+linkTitle: Prepare clusters
+weight: 10
+url: '/operate/kubernetes/7.4.6/active-active/prepare-clusters/'
+---
+
+{{< note >}}This feature is supported for general availability in releases 6.4.2-6 and later. Some of these features were available as a preview in 6.4.2-4 and 6.4.2-5. Please upgrade to 6.4.2-6 for the full set of general availability features and bug fixes.{{< /note >}}
+
+## Prepare participating clusters
+
+Before you prepare your clusters to participate in an Active-Active database, make sure you've completed all the following steps and have gathered the information listed below each step.
+
+1. Configure the [admission controller and ValidatingWebhook]({{< relref "/operate/kubernetes/deployment/quick-start.md#enable-the-admission-controller/" >}}).
+
+2. Create two or more [RedisEnterpriseCluster (REC) custom resources]({{< relref "/operate/kubernetes/deployment/quick-start#create-a-redis-enterprise-cluster-rec" >}}) with enough [memory resources]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}).
+ * Name of each REC (`<rec-name>`)
+ * Namespace for each REC (`<namespace>`)
+
+3. Configure the REC [`ingressOrRouteSpec` field]({{< relref "/operate/kubernetes/networking/ingressorroutespec.md" >}}) and [create DNS records]({{< relref "/operate/kubernetes/networking/ingressorroutespec#configure-dns/" >}}).
+ * REC API hostname (`api-<rec-name>-<namespace>.<domain>`)
+ * Database hostname suffix (`-db-<rec-name>-<namespace>.<domain>`)
+
+Next you'll [collect credentials](#collect-rec-credentials) for your participating clusters and create secrets for the RedisEnterpriseRemoteCluster (RERC) to use.
+
+For a list of example values used throughout this article, see the [Example values](#example-values) section.
+
+### Preview versions
+
+If you are using a preview version of these features (operator version 6.4.2-4 or 6.4.2-5), you'll need to enable the Active-Active controller with the following steps. You need to do this only once per cluster. We recommend using the fully supported 6.4.2-6 version.
+
+1. Download the custom resource definitions (CRDs) for the most recent release (6.4.2-4) from [redis-enterprise-k8s-docs Github](https://github.com/RedisLabs/redis-enterprise-k8s-docs/tree/master/crds).
+
+1. Apply the new CRDs for the Redis Enterprise Active-Active database (REAADB) and Redis Enterprise remote cluster (RERC) to install those controllers.
+
+ ```sh
+ kubectl apply -f crds/reaadb_crd.yaml
+ kubectl apply -f crds/rerc_crd.yaml
+ ```
+
+1. Enable the Active-Active and remote cluster controllers on the operator ConfigMap.
+
+ ```sh
+ kubectl patch cm operator-environment-config --type merge --patch "{\"data\": \
+ {\"ACTIVE_ACTIVE_DATABASE_CONTROLLER_ENABLED\":\"true\", \
+ \"REMOTE_CLUSTER_CONTROLLER_ENABLED\":\"true\"}}"
+ ```
+
+## Collect REC credentials
+
+To communicate with other clusters, all participating clusters will need access to the admin credentials for all other clusters.
+
+1. Create a file to hold the admin credentials for all participating RECs (such as `all-rec-secrets.yaml`).
+
+1. Within that file, create a new secret for each participating cluster named `redis-enterprise-<rerc-name>`.
+
+ The example below shows a file (`all-rec-secrets.yaml`) holding secrets for two participating clusters:
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: <base64-encoded password>
+ username: <base64-encoded username>
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-ohare
+ type: Opaque
+
+ ---
+
+ apiVersion: v1
+ data:
+ password: <base64-encoded password>
+ username: <base64-encoded username>
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-reagan
+ type: Opaque
+
+ ```
+
+1. Get the REC credentials secret for each participating cluster.
+
+ ```sh
+ kubectl get secret <rec-name> -o yaml
+ ```
+
+ The admin credentials secret for an REC named `rec-chicago` would be similar to this:
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: ABcdef12345
+ username: GHij56789
+ kind: Secret
+ metadata:
+ name: rec-chicago
+ type: Opaque
+ ```
+
+1. Add the username and password to the new secret for that REC and namespace.
+
+ This example shows the collected secrets file (`all-rec-secrets.yaml`) for `rerc-ohare` (representing `rec-chicago` in namespace `ns-illinois`) and `rerc-reagan` (representing `rec-arlington` in namespace `ns-virginia`).
+
+ ```yaml
+ apiVersion: v1
+ data:
+ password: ABcdef12345
+ username: GHij56789
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-ohare
+ type: Opaque
+
+ ---
+
+ apiVersion: v1
+ data:
+ password: KLmndo123456
+ username: PQrst789010
+ kind: Secret
+ metadata:
+ name: redis-enterprise-rerc-reagan
+ type: Opaque
+
+ ```
+
+1. Apply the file of collected secrets to every participating REC.
+
+ ```sh
+ kubectl apply -f <secrets-file>
+ ```
+
+ If the admin credentials for any of the clusters change, update the file and reapply it to all clusters.
+
+## Next steps
+
+Now you are ready to [create your Redis Enterprise Active-Active database]({{< relref "/operate/kubernetes/active-active/create-reaadb.md" >}}).
+
+## Example values
+
+This article uses the following example values:
+
+#### Example cluster 1
+
+* REC name: `rec-chicago`
+* REC namespace: `ns-illinois`
+* RERC name: `rerc-ohare`
+* RERC secret name: `redis-enterprise-rerc-ohare`
+* API FQDN: `api-rec-chicago-ns-illinois.example.com`
+* DB FQDN suffix: `-db-rec-chicago-ns-illinois.example.com`
+
+#### Example cluster 2
+
+* REC name: `rec-arlington`
+* REC namespace: `ns-virginia`
+* RERC name: `rerc-reagan`
+* RERC secret name: `redis-enterprise-rerc-reagan`
+* API FQDN: `api-rec-arlington-ns-virginia.example.com`
+* DB FQDN suffix: `-db-rec-arlington-ns-virginia.example.com`
diff --git a/content/operate/kubernetes/7.4.6/architecture/_index.md b/content/operate/kubernetes/7.4.6/architecture/_index.md
new file mode 100644
index 0000000000..18fc4f323d
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/architecture/_index.md
@@ -0,0 +1,49 @@
+---
+Title: Redis Enterprise for Kubernetes architecture
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section provides an overview of the architecture and considerations
+ for Redis Enterprise for Kubernetes.
+hideListLinks: true
+linkTitle: Architecture
+weight: 11
+url: '/operate/kubernetes/7.4.6/architecture/'
+---
+Redis bases its Kubernetes architecture on several vital concepts.
+
+## Layered architecture
+
+Kubernetes is an excellent orchestration tool, but it was not designed to deal with all the nuances associated with operating Redis Enterprise. Therefore, it can fail to react accurately to internal Redis Enterprise edge cases or failure conditions. Also, Kubernetes orchestration runs outside the Redis Cluster deployment and may fail to trigger failover events, for example, in split network scenarios.
+
+To overcome these issues, Redis created a layered architecture approach that splits responsibilities between operations Kubernetes does well, procedures Redis Enterprise Cluster excels at, and the processes both can orchestrate together. The figure below illustrates this layered orchestration architecture:
+
+{{< image filename="/images/k8s/kubernetes-overview-layered-orchestration.png" >}}
+
+## Operator based deployment
+
+The operator allows Redis to maintain a unified deployment solution across various Kubernetes environments, i.e., RedHat OpenShift, VMware Tanzu (Tanzu Kubernetes Grid, and Tanzu Kubernetes Grid Integrated Edition, formerly known as PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and vanilla (upstream) Kubernetes. A StatefulSet and anti-affinity rules guarantee that each Redis Enterprise node resides on a pod that is hosted on a different VM or physical server, as shown in the figure below:
+
+{{< image filename="/images/k8s/kubernetes-overview-unified-deployment.png" >}}
+
+## Network-attached persistent storage {#networkattached-persistent-storage}
+
+Kubernetes and cloud-native environments require that storage volumes be network-attached to the compute instances, to guarantee data durability. Otherwise, if using local storage, data may be lost in a Pod failure event. See the figure below:
+
+{{< image filename="/images/k8s/kubernetes-overview-network-attached-persistent-storage.png" >}}
+
+On the left-hand side (marked #1), Redis Enterprise uses local ephemeral storage for durability. When a Pod fails, Kubernetes launches another Pod as a replacement, but this Pod comes up with empty local ephemeral storage, and the data from the original Pod is now lost.
+
+On the right-hand side of the figure (marked #2), Redis Enterprise uses network-attached storage for data durability. In this case, when a Pod fails, Kubernetes launches another Pod and automatically connects it to the storage device used by the failed Pod. Redis Enterprise then instructs the Redis Enterprise database instance/s running on the newly created node to load the data from the network-attached storage, which guarantees a durable setup.
+
+Redis Enterprise is not only great as an in-memory database but also extremely efficient in the way it uses persistent storage, even when the user chooses to configure Redis Enterprise to write every change to the disk. Compared to a disk-based database that requires multiple interactions (in most cases) with a storage device for every read or write operation, Redis Enterprise uses a single IOPS, in most cases, for a write operation and zero IOPS for a read operation. As a result, significant performance improvements are seen in typical Kubernetes environments, as illustrated in the figures below:
+
+{{< image filename="/images/k8s/kubernetes-overview-performance-improvements-read.png" >}}{{< image filename="/images/k8s/kubernetes-overview-performance-improvements-write.png" >}}
+
+## Multiple services on each pod
+
+Each Pod includes multiple Redis Enterprise instances (multiple services). We found that the traditional method of deploying a Redis Enterprise database over Kubernetes, in which each Pod includes only a single Redis Enterprise instance while preserving a dedicated CPU, is notably inefficient. Redis Enterprise is exceptionally fast and in many cases can use just a fraction of the CPU resources to deliver the requested throughput. Furthermore, when running a Redis Enterprise Cluster with multiple Redis Enterprise instances across multiple Pods, the Kubernetes network, with its multiple vSwitches, can quickly become the deployment’s bottleneck. Therefore, Redis took a different approach to managing Redis Enterprise over the Kubernetes environment. Deploying multiple Redis Enterprise database instances on a single Pod allows us to better utilize the hardware resources used by the Pod such as CPU, memory, and network while keeping the same level of isolation. See the figure below:
+
+{{< image filename="/images/k8s/kubernetes-overview-multiple-services-per-pod.png" >}}
diff --git a/content/operate/kubernetes/7.4.6/architecture/operator.md b/content/operate/kubernetes/7.4.6/architecture/operator.md
new file mode 100644
index 0000000000..8f9ddb3a21
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/architecture/operator.md
@@ -0,0 +1,72 @@
+---
+Title: Redis Enterprise for Kubernetes operator-based architecture
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section provides a description of the design of Redis Enterprise
+ for Kubernetes.
+linkTitle: What is an operator?
+weight: 30
+url: '/operate/kubernetes/7.4.6/architecture/operator/'
+---
+The Redis Enterprise operator is the fastest, most efficient way to
+deploy and maintain a Redis Enterprise cluster in Kubernetes.
+
+## What is an operator?
+
+An operator is a [Kubernetes custom controller](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers) which extends the native K8s API.
+
+Operators were developed to handle sophisticated, stateful applications
+that the default K8s controllers aren’t able to handle. While stock
+Kubernetes controllers—for example,
+[StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)—are
+ideal for deploying, maintaining and scaling simple stateless
+applications, they are not equipped to handle access to stateful
+resources, upgrade, resize and backup of more elaborate, clustered
+applications such as databases.
+
+## What does an operator do?
+
+In abstract terms, Operators encode human operational knowledge into
+software that can reliably manage an application in an extensible,
+modular way and do not hinder the basic primitives that comprise the K8s
+architecture.
+
+Redis created an operator that deploys and manages the lifecycle of a Redis Enterprise Cluster.
+
+The Redis Enterprise operator acts as a custom controller for the custom
+resource RedisEnterpriseCluster, or ‘rec’, which is defined through K8s
+CRD ([custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources))
+and deployed with a yaml file.
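+
+A minimal sketch of such a custom resource might look like the following (the name and node count here are illustrative):
+
+```yaml
+apiVersion: app.redislabs.com/v1
+kind: RedisEnterpriseCluster
+metadata:
+  name: rec
+spec:
+  # The operator requires an odd number of nodes
+  nodes: 3
+```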
+
+The operator functions include:
+
+- Validating the deployed Cluster spec (for example, requiring the
+deployment of an odd number of nodes)
+- Implementing a reconciliation loop to monitor all the applicable
+resources
+- Logging events
+- Enabling a simple mechanism for editing the Cluster spec
+
+The Redis Enterprise operator functions as the logic “glue” between the
+K8s infrastructure and the Redis Enterprise Cluster.
+
+The operator creates the following resources:
+
+- Service account
+- Service account role
+- Service account role binding
+- Secret – holds the cluster username, password, and license
+- Statefulset – holds Redis Enterprise nodes
+- The Services Manager deployment – exposes databases and tags nodes
+- The Redis UI service
+- The service that runs the REST API + Sentinel
+- Pod Disruption Budget
+- Optionally: a deployment for the Service Broker, including services and a PVC
+
+The following diagram shows the high-level architecture of the Redis
+Enterprise operator:
+
+{{< image filename="/images/k8s/k8-high-level-architecture-diagram-of-redis-enterprise.png" >}}
diff --git a/content/operate/kubernetes/delete-custom-resources.md b/content/operate/kubernetes/7.4.6/delete-custom-resources.md
similarity index 99%
rename from content/operate/kubernetes/delete-custom-resources.md
rename to content/operate/kubernetes/7.4.6/delete-custom-resources.md
index 6ec48ffd3d..04e8113700 100644
--- a/content/operate/kubernetes/delete-custom-resources.md
+++ b/content/operate/kubernetes/7.4.6/delete-custom-resources.md
@@ -9,6 +9,7 @@ description: This article explains how to delete Redis Enterprise clusters and R
Enterprise databases from your Kubernetes environment.
linkTitle: Delete custom resources
weight: 70
+url: '/operate/kubernetes/7.4.6/delete-custom-resources/'
---
## Multi-namespace management
diff --git a/content/operate/kubernetes/7.4.6/deployment/_index.md b/content/operate/kubernetes/7.4.6/deployment/_index.md
new file mode 100644
index 0000000000..06408c25a7
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/_index.md
@@ -0,0 +1,40 @@
+---
+Title: Deployment
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section lists the different ways to set up and run Redis Enterprise
+ for Kubernetes. You can deploy on a variety of Kubernetes distributions both on-prem
+ and in the cloud via our Redis Enterprise operator for Kubernetes.
+hideListLinks: false
+linkTitle: Deployment
+weight: 11
+url: '/operate/kubernetes/7.4.6/deployment/'
+---
+
+This section lists the different ways to set up and run Redis Enterprise for Kubernetes. You can deploy on a variety of Kubernetes distributions both on-prem and in the cloud via our Redis Enterprise operator for Kubernetes.
+
+## Operator overview {#overview}
+
+Redis Enterprise for Kubernetes uses [custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) (CRDs) to create and manage Redis Enterprise clusters (REC) and Redis Enterprise databases (REDB).
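+
+For example, a minimal REDB sketch might look like the following (the database name, memory size, and cluster reference are illustrative, and available fields can vary by operator version):
+
+```yaml
+apiVersion: app.redislabs.com/v1alpha1
+kind: RedisEnterpriseDatabase
+metadata:
+  name: mydb
+spec:
+  # The REC that should host this database
+  redisEnterpriseCluster:
+    name: rec
+  memorySize: 100MB
+```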
+
+The operator is a deployment that runs within a given namespace. These operator pods must run with sufficient privileges to create the Redis Enterprise cluster resources within that namespace.
+
+When the operator is installed, the following resources are created:
+
+* a service account under which the operator will run
+* a set of roles to define the privileges necessary for the operator to perform its tasks
+* a set of role bindings to authorize the service account for the correct roles (see above)
+* the CRD for a Redis Enterprise cluster (REC)
+* the CRD for a Redis Enterprise database (REDB)
+* the operator itself (a deployment)
+
+The operator currently runs within a single namespace and is scoped to operate only on the Redis Enterprise cluster in that namespace.
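+
+To confirm these resources after installation, you can run commands like the following (the deployment and CRD names shown are typical defaults and may differ in your installation):
+
+```sh
+# The operator deployment in the namespace where it was installed
+kubectl get deployment redis-enterprise-operator -n <namespace>
+
+# The Redis Enterprise custom resource definitions
+kubectl get crd redisenterpriseclusters.app.redislabs.com redisenterprisedatabases.app.redislabs.com
+```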
+
+## Compatibility
+
+Before installing, check [Supported Kubernetes distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions" >}}) to see which Redis Enterprise operator version supports your Kubernetes distribution.
+
+
diff --git a/content/operate/kubernetes/7.4.6/deployment/container-images.md b/content/operate/kubernetes/7.4.6/deployment/container-images.md
new file mode 100644
index 0000000000..2c1f5ec10a
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/container-images.md
@@ -0,0 +1,254 @@
+---
+Title: Use a private registry for container images
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section details how the Redis Enterprise Software and Kubernetes
+ operator images can be configured to be pulled from a variety of sources. This page
+ describes how to configure alternate private repositories for images, plus some
+ techniques for handling public repositories with rate limiting.
+linktitle: Use a private container registry
+weight: 92
+url: '/operate/kubernetes/7.4.6/deployment/container-images/'
+---
+
+Redis Enterprise Software, its Kubernetes operator, and the Service Rigger
+are all distributed as separate container images.
+Your Kubernetes deployment pulls these images as needed.
+You can control where these images are pulled from in both the operator
+deployment and the Redis Enterprise custom resources.
+
+In general, image references that do not include a registry domain
+name (e.g., `gcr.io` or `localhost:5000`) are pulled from the default registry associated
+with the Kubernetes cluster. A plain reference such as `redislabs/redis` will likely pull from DockerHub
+(except on OpenShift, where it pulls from the Red Hat registry).
+
+For security reasons (e.g., in air-gapped environments), you may want to pull the images
+from a public registry once and then push them to a private registry under
+your control.
+
+{{}}It is very important that the images you are pushing to the private registry have the same exact version tag as the original images. {{}}
+
+Furthermore, because [Docker rate limits public pulls](https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/),
+you may want to consider pulling images from a
+private registry to avoid deployment failures when you hit your DockerHub rate limit.
+
+The information below will help you track and configure where your deployments pull container images.
+
+{{< note >}}
+**IMPORTANT**
+* Each version of the Redis Enterprise operator is mapped to a specific version of Redis Enterprise Software. The semantic versions always match (for example, 7.2.4), although the specific release numbers may be different (for example, 7.2.4-7 is the operator version for Redis Enterprise Software 7.2.4-64).
+* A specific operator version only supports a specific Redis Enterprise version. Other combinations of operator and Redis Enterprise versions are **not supported**.
+{{< /note >}}
+
+
+## Find container sources
+
+Every pod in your deployed application has a source registry. Any image not prefixed by a registry domain name (e.g., "gcr.io") is pulled from the default registry for the Kubernetes cluster (typically DockerHub). You can use the commands below to discover the pull sources for the images on your cluster.
+
+To list all the images used by your cluster:
+
+```sh
+kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
+```
+
+To specifically determine the pull source for the Redis Enterprise operator itself, run the following command:
+
+```sh
+kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c | grep redislabs
+```
+
+You can limit this command to a specific namespace by replacing the `--all-namespaces` parameter with
+`-n {namespace}`, where `{namespace}` is a namespace of interest on your cluster.
+Repeat the command for each namespace you want to inspect.
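+
+For example, here is a small sketch that inspects two hypothetical namespaces (`ns1` and `ns2` are placeholders for your own namespace names):
+
+```sh
+# Placeholder namespaces; substitute the namespaces you want to inspect.
+for ns in ns1 ns2; do
+  echo "--- ${ns} ---"
+  kubectl get pods -n "${ns}" -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
+done
+```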
+
+## Create a private container registry
+
+You can set up a private container registry in a couple of ways:
+
+* On-premises, via [Docker registry](https://docs.docker.com/registry/deploying/), [Red Hat Quay](https://www.redhat.com/en/technologies/cloud-computing/quay), or other providers
+* Cloud provider registries, such as Azure Container Registry or Google Container Registry
+
+Once you have set up a private container registry, you identify it using:
+
+* A domain name
+* A port (optional)
+* A repository path (optional)
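+
+For example, a fully qualified image reference combines these parts with an image name and tag. The values below are hypothetical placeholders:
+
+```sh
+# <domain>:<port>/<repository path>/<image>:<tag>
+IMAGE=registry.example.com:5000/redislabs/redis:7.2.4-64
+```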
+
+## Push images to a private container registry
+
+Important images for a Redis Enterprise Software deployment include:
+
+* Redis Enterprise Software
+* The bootstrapper for Redis Enterprise cluster nodes (packaged in the operator image)
+* The Service Rigger
+* The Redis Enterprise Software operator
+
+You will need to push all these images to your private container registry. In general,
+to push the images you must:
+
+ 1. [Pull](https://docs.docker.com/engine/reference/commandline/pull/) the various images locally for the Redis Enterprise Software, the Service Rigger, and the operator.
+ 2. Tag the local images with the private container registry, repository, and version tag.
+ 3. [Push](https://docs.docker.com/engine/reference/commandline/push/) the newly tagged images.
+
+The example below shows the commands for pushing the images for Redis Enterprise Software and its operator to a private container registry:
+
+```sh
+PRIVATE_REPO=...your repo...
+RS_VERSION=7.2.4-64
+OPERATOR_VERSION=7.2.4-7
+docker pull redislabs/redis:${RS_VERSION}
+docker pull redislabs/operator:${OPERATOR_VERSION}
+docker pull redislabs/k8s-controller:${OPERATOR_VERSION}
+docker tag redislabs/redis:${RS_VERSION} ${PRIVATE_REPO}/redislabs/redis:${RS_VERSION}
+docker tag redislabs/operator:${OPERATOR_VERSION} ${PRIVATE_REPO}/redislabs/operator:${OPERATOR_VERSION}
+docker tag redislabs/k8s-controller:${OPERATOR_VERSION} ${PRIVATE_REPO}/redislabs/k8s-controller:${OPERATOR_VERSION}
+docker push ${PRIVATE_REPO}/redislabs/redis:${RS_VERSION}
+docker push ${PRIVATE_REPO}/redislabs/operator:${OPERATOR_VERSION}
+docker push ${PRIVATE_REPO}/redislabs/k8s-controller:${OPERATOR_VERSION}
+```
+
+## Configure deployments to use a private container registry
+
+Once you push your images to your private container registry, you need to
+configure your deployments to use that registry for Redis Enterprise Software and operator
+deployments. The operator container image is configured directly by the operator deployment
+bundle. The Redis Enterprise cluster pod (RS and bootstrapper) and Service Rigger
+images are configured in the Redis Enterprise custom resource.
+
+Depending on your Kubernetes platform, your private container registry may
+require authentication. If you do need authentication, add a [pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to your namespace. Then you'll need to configure Kubernetes and the operator to use the pull secret. The two following sections have examples of adding the `imagePullSecrets` to the operator deployment and `pullSecrets` to the cluster custom resource.
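+
+For example, here is a minimal sketch for creating such a pull secret with `kubectl`. The secret name `regcred` matches the examples in the following sections; the registry server and credential variables are placeholders you would set yourself:
+
+```sh
+kubectl create secret docker-registry regcred \
+  --docker-server="${PRIVATE_REPO}" \
+  --docker-username="${REGISTRY_USERNAME}" \
+  --docker-password="${REGISTRY_PASSWORD}" \
+  -n "${REC_NAMESPACE}"
+```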
+
+### Specify the operator image source
+
+The operator bundle contains the operator deployment and the reference to the operator image (`redislabs/operator`). To use a private container registry, you must
+change this image reference in your operator deployment file **before** you deploy the operator. If you apply this change to modify an existing operator deployment, the operator's pod will restart.
+
+In the operator deployment file, the container `image` field should point to the same repository and tag you used when [pushing]({{< relref "/operate/kubernetes/deployment/container-images.md#push-images-to-a-private-container-registry" >}}) to the private container registry:
+
+```sh
+${PRIVATE_REPO}/redislabs/operator:${OPERATOR_VERSION}
+```
+
+The example below specifies a 7.2.4-7 operator image in a Google Container Registry:
+
+```YAML
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: redis-enterprise-operator
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ name: redis-enterprise-operator
+ template:
+ metadata:
+ labels:
+ name: redis-enterprise-operator
+ spec:
+ serviceAccountName: redis-enterprise-operator
+ containers:
+ - name: redis-enterprise-operator
+ image: gcr.io/yourproject/redislabs/operator:7.2.4-7
+...
+```
+
+If your container registry requires a pull secret, configure `imagePullSecrets` on the operator deployment:
+
+```YAML
+spec:
+ template:
+ spec:
+ imagePullSecrets:
+ - name: regcred
+```
+
+### Specify the Redis Enterprise cluster images source
+
+A Redis Enterprise cluster managed by the operator consists of three
+container images:
+
+* **`redislabs/redis`**: the Redis Enterprise Software container image
+* **`redislabs/operator`**: the bootstrapper is packaged within the operator container image
+* **`redislabs/k8s-controller`**: the Service Rigger container image
+
+By default, a new Redis Enterprise cluster is created using the
+container images listed above. These container images are pulled from the K8s cluster's default
+container registry.
+
+To [pull](https://docs.docker.com/engine/reference/commandline/pull/) the Redis Enterprise container images from
+a private container registry, you must specify them in the
+Redis Enterprise cluster custom resource.
+
+Add the following sections to the `spec` section of your RedisEnterpriseCluster resource file:
+
+ * **`redisEnterpriseImageSpec`**: controls the Redis Enterprise Software container image. *The version should match the RS version associated with the operator version*.
+ * **`bootstrapperImageSpec`**: controls the bootstrapper container image. *The version must match the operator version*.
+ * **`redisEnterpriseServicesRiggerImageSpec`**: controls the Service Rigger container image. *The version must match the operator version*.
+
+The REC custom resource example below pulls all three container images from a GCR private registry:
+
+```YAML
+apiVersion: app.redislabs.com/v1
+kind: RedisEnterpriseCluster
+metadata:
+ name: rec
+spec:
+ nodes: 3
+ redisEnterpriseImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/redis
+ versionTag: 7.2.4-64
+ bootstrapperImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/operator
+ versionTag: 7.2.4-7
+ redisEnterpriseServicesRiggerImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/k8s-controller
+ versionTag: 7.2.4-7
+```
+
+If your private container registry requires pull secrets, you must add `pullSecrets`
+to the `spec` section:
+
+```YAML
+apiVersion: app.redislabs.com/v1
+kind: RedisEnterpriseCluster
+metadata:
+ name: rec
+spec:
+ nodes: 3
+ pullSecrets:
+ - name: regcred
+ redisEnterpriseImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/redis
+ versionTag: 7.2.4-64
+ bootstrapperImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/operator
+ versionTag: 7.2.4-7
+ redisEnterpriseServicesRiggerImageSpec:
+ imagePullPolicy: IfNotPresent
+ repository: gcr.io/yourproject/redislabs/k8s-controller
+ versionTag: 7.2.4-7
+```
+
+## Rate limiting with DockerHub
+
+Docker has [rate limits for image pulls](https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/).
+Anonymous users are allowed a certain number of pulls every 6 hours. For authenticated users, the limit is larger.
+These rate limits may affect your Kubernetes cluster in a number of ways:
+
+* The cluster nodes will likely be treated as a single anonymous user.
+* The number of pulls during a deployment might exceed the rate limit for other deployment dependencies, including the operator, Redis Enterprise Software, or other non-Redis pods.
+* Pull failures may prevent your deployment from downloading the required images in a timely manner. Delays here can affect the stability of deployments used by the Redis Enterprise operator.
+
+For these reasons, you should seriously consider where your images
+are pulled from to **avoid failures caused by rate limiting**. The easiest solution
+is to push the required images to a private container registry under your control.
diff --git a/content/operate/kubernetes/7.4.6/deployment/deployment-options.md b/content/operate/kubernetes/7.4.6/deployment/deployment-options.md
new file mode 100644
index 0000000000..faa5028675
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/deployment-options.md
@@ -0,0 +1,59 @@
+---
+Title: Flexible deployment options
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Redis Enterprise for Kubernetes allows you to deploy to multiple namespaces.
+ This article describes flexible deployment options you can use to meet your specific
+ needs.
+linkTitle: Deployment options
+weight: 12
+url: '/operate/kubernetes/7.4.6/deployment/deployment-options/'
+---
+You can deploy Redis Enterprise for Kubernetes in several different ways depending on your database needs.
+
+Multiple Redis Enterprise database resources (REDB) can be associated with a single Redis Enterprise cluster resource (REC), even if they reside in different namespaces.
+
+The Redis Enterprise cluster (REC) custom resource must reside in the same namespace as the Redis Enterprise operator.
+
+{{}} Multi-namespace installations don't support Active-Active databases (REAADB). Only databases created with the REDB resource are supported in multi-namespace deployments at this time.{{}}
+
+
+## Single REC and single namespace (one-to-one)
+
+The standard and simplest deployment creates your Redis Enterprise databases (REDB) in the same namespace as the Redis Enterprise cluster (REC). No additional configuration is required, because no communication needs to cross namespace boundaries. See [Deploy Redis Enterprise for Kubernetes]({{< relref "/operate/kubernetes/deployment/quick-start.md" >}}).
+
+{{< image filename="/images/k8s/k8s-deploy-one-to-one.png" >}}
+
+## Single REC and multiple namespaces (one-to-many)
+
+Multiple Redis Enterprise databases (REDB) spread across multiple namespaces within the same K8s cluster can be associated with the same Redis Enterprise cluster (REC). See [Manage databases in multiple namespaces]({{< relref "/operate/kubernetes/re-clusters/multi-namespace.md" >}}) for more information.
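+
+As a rough sketch (assuming the application namespace has already been prepared for multi-namespace management as described in the linked article), an REDB in one namespace points to the REC by name; the namespace and database names below are placeholders:
+
+```yaml
+apiVersion: app.redislabs.com/v1alpha1
+kind: RedisEnterpriseDatabase
+metadata:
+  name: example-redb          # hypothetical database name
+  namespace: app-namespace    # hypothetical application namespace
+spec:
+  redisEnterpriseCluster:
+    name: rec                 # the REC, which lives in the operator's namespace
+  memorySize: 100MB
+```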
+
+{{< image filename="/images/k8s/k8s-deploy-one-to-many.png" >}}
+
+## Multiple RECs and multiple namespaces (many-to-many)
+
+A single Kubernetes cluster can contain multiple Redis Enterprise clusters (REC), as long as they reside in different namespaces. Each namespace can host only one REC and each operator can only manage one REC.
+
+You have the flexibility to create databases in separate namespaces, or in the same namespace as the REC, or combine any of the supported deployment options above. This configuration is geared towards use cases that require multiple Redis Enterprise clusters with greater isolation or different cluster configurations.
+
+See [Manage databases in multiple namespaces]({{< relref "/operate/kubernetes/re-clusters/multi-namespace.md" >}}) for more information.
+
+
+{{< image filename="/images/k8s/k8s-deploy-many-to-many.png" >}}
+
+## Unsupported deployment patterns
+
+### Cross-cluster operations
+
+Redis Enterprise for Kubernetes does not support operations that cross Kubernetes clusters. Redis Enterprise clusters (REC) work inside a single K8s cluster. Crossing clusters could result in functional and security issues.
+
+{{< image filename="/images/k8s/k8s-deploy-cross-namespaces.png" >}}
+
+### Multiple RECs in one namespace
+
+Redis Enterprise for Kubernetes does not support multiple Redis Enterprise clusters (REC) in the same namespace. Creating more than one REC in the same namespace will result in errors.
+
+{{< image filename="/images/k8s/k8s-deploy-multicluster-antipattern.png" >}}
diff --git a/content/operate/kubernetes/7.4.6/deployment/openshift/_index.md b/content/operate/kubernetes/7.4.6/deployment/openshift/_index.md
new file mode 100644
index 0000000000..a89f46e026
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/openshift/_index.md
@@ -0,0 +1,43 @@
+---
+Title: Deploy Redis Enterprise for Kubernetes with OpenShift
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: A quick introduction to the steps necessary to get a Redis Enterprise
+ cluster installed in your OpenShift Kubernetes cluster
+hideListLinks: true
+linkTitle: OpenShift
+weight: 11
+url: '/operate/kubernetes/7.4.6/deployment/openshift/'
+---
+The deployment of Redis Enterprise clusters is managed with the Redis Enterprise operator that you deploy in the namespace for your project.
+To create a database that your application
+workloads can use:
+
+1. Install the Redis Enterprise operator.
+
+1. Create a Redis Enterprise cluster (REC) custom resource to describe your desired cluster.
+
+1. The operator reads this cluster description and deploys the various components on your K8s cluster.
+
+1. Once running, use the Redis Enterprise cluster to create a database.
+
+1. The operator automatically exposes the new database as a K8s service.
+
+## For OpenShift via the OperatorHub
+
+To [create a database on an OpenShift 4.x cluster via the OperatorHub]({{< relref "/operate/kubernetes/deployment/openshift/openshift-operatorhub" >}}) you only need to have the [OpenShift 4.x cluster installed](https://docs.openshift.com/container-platform/4.3/welcome/index.html) with at least three nodes that each meet the [minimum requirements for a development installation]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}).
+
+## For OpenShift via the CLI
+
+To [create a database on an OpenShift cluster via the CLI]({{< relref "/operate/kubernetes/deployment/openshift/openshift-cli.md" >}}), you need:
+
+1. An [OpenShift cluster installed](https://docs.openshift.com/container-platform/4.3/welcome/index.html) with at least three nodes that each meet the [minimum requirements for a development installation]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}).
+1. The [kubectl package installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/) at version 1.9 or higher.
+1. The [OpenShift CLI installed](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html).
+
+## Version compatibility
+
+To see which version of Redis Enterprise for Kubernetes supports your OpenShift version, see [Supported Kubernetes distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions" >}}).
diff --git a/content/operate/kubernetes/7.4.6/deployment/openshift/old-getting-started-openshift-crdb.md b/content/operate/kubernetes/7.4.6/deployment/openshift/old-getting-started-openshift-crdb.md
new file mode 100644
index 0000000000..90cea9a85b
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/openshift/old-getting-started-openshift-crdb.md
@@ -0,0 +1,225 @@
+---
+Title: Getting Started with Active-Active (CRDB) on OpenShift with Route-Based Ingress
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+draft: true
+hidden: true
+weight: $weight
+url: '/operate/kubernetes/7.4.6/deployment/openshift/old-getting-started-openshift-crdb/'
+---
+In this guide, we'll set up an [Active-Active database]({{< relref "/operate/rs/databases/active-active/_index.md" >}})
+(formerly known as CRDB) deployment with Active-Active replication
+spanning across two Redis Enterprise clusters over OpenShift, using Redis Enterprise Operator
+and OpenShift Route.
+
+## Overview
+
+An Active-Active deployment requires connectivity between different Kubernetes clusters.
+A router is the most common way to allow such external access. A
+[router](https://docs.openshift.com/container-platform/3.5/architecture/core_concepts/routes.html#architecture-core-concepts-routes)
+is configured to accept requests external to the cluster and proxy them into the
+cluster based on how the route is configured. Routes are limited to HTTP/HTTPS(SNI)/TLS(SNI),
+which covers web applications.
+
+Typically, a Kubernetes cluster administrator configures a
+[DNS wildcard entry](https://docs.openshift.com/container-platform/3.9/install_config/install/prerequisites.html#prereq-dns)
+that resolves to an OpenShift Container Platform node that is running
+the OpenShift router.
+
+The default router in OpenShift is HAProxy, which is a free, fast, and reliable solution
+offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
+
+The Redis Enterprise Operator uses the routes mechanism to expose 2 inter-cluster services:
+the Redis Enterprise Cluster API service and the DB service that exposes the Active-Active database.
+Both services are used during the creation and management of an Active-Active deployment.
+The routes are configured with TLS passthrough.
+
+{{< note >}}
+Routes should have unique hostnames across a Kubernetes cluster.
+{{< /note >}}
+
+## Steps for creating an Active-Active deployment with Service Broker
+
+Before you create an Active-Active deployment with Service Broker, you must create a cluster
+using the REC custom resource, with a Service Broker deployment as covered in
+[Getting Started with Kubernetes and Openshift]({{< relref "/operate/platforms/openshift/_index.md" >}}), while noting the following:
+
+1. Make sure you use the latest versions of the deployment files available on GitHub.
+1. Deploy nodes with at least 6GB of RAM in order to accommodate the Active-Active database plan's 5GB database size.
+1. Make sure you follow the instructions to deploy the Redis Enterprise Service Broker.
+
+The `peerClusters` section in the spec is used for creating an Active-Active database with the Service Broker.
+
+{{< note >}}
+This is only relevant for OpenShift deployments, which support Service Brokers natively.
+{{< /note >}}
+
+Copy this section of the REC spec and modify it for your environment. To apply it
+to every cluster that will participate in the Active-Active database deployment, edit the cluster yaml file
+and apply it using `kubectl apply -f `:
+
+```yaml
+ activeActive:
+ apiIngressUrl: api1.cluster1.
+ dbIngressSuffix: -cluster1.
+ method: openShiftRoute
+
+ peerClusters:
+ - apiIngressUrl: api2.cluster2.
+ authSecret: cluster2_secret
+ dbIngressSuffix: -cluster2.
+ fqdn: ..svc.cluster.local
+ - apiIngressUrl: api3.cluster3.
+ authSecret: cluster2_secret
+ dbIngressSuffix: -cluster3.
+ fqdn: ..svc.cluster.local
+```
+
+This block is added to the Service Broker config map when the REC spec changes, and
+it triggers a restart of the Service Broker pod to pass the peer clusters configuration
+to the service broker. Once the Service Broker pod restarts, you can select the
+Active-Active database plan from the OS service catalog UI.
+
+The elements of the section are:
+
+- **apiIngressUrl** - The OpenShift hostname that is created using OpenShift route.
+
+- **dbIngressSuffix** - The suffix of the db route name. The resulting host is
+``. This is used by the Redis Enterprise Syncer to
+sync data between the databases.
+
+- **fqdn** - The FQDN of the Kubernetes cluster where the pattern is `.
+.svc.cluster.local`. (Remember that the RS cluster name is set in the REC spec).
+
+- **authSecret** - The Kubernetes secret name that contains the username and password
+to access this cluster.
+
+We need to create a secret to reference from authSecret based on the cluster admin credentials
+that were automatically created when the clusters were created. To do this,
+repeat the following process for each of the clusters involved:
+
+1. Log in to the OpenShift cluster where your Redis Enterprise Cluster (REC) resides.
+1. To find the secret that holds the REC credentials, run: `kubectl get secrets`
+
+ From the secrets listed, you’ll find one that is named after your REC and
+ of type Opaque, like this:
+
+ ```sh
+ redis-enterprise-cluster Opaque 3 1d
+ ```
+
+1. Copy the base64-encoded password and username from the secret and create a YAML file
+with that information in the following format:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: crdb1-cred
+ type: Opaque
+ data:
+ password: NWhYRWU2OWQ=
+ username: YWRtaW5AYWNtZS5jb20=
+ ```
+
+1. Deploy the newly created secret yaml file in the other clusters:
+
+ ```sh
+ $ kubectl create -f crdb1-secret.yaml
+ ```
+
+ A typical response looks like:
+
+ ```
+ secret/crdb1-cred created
+ ```
+
+1. Repeat the process for the other clusters until each cluster has a secret
+with the credentials of the other clusters.
+
+After applying the updated cluster deployment file, the Service Broker is redeployed
+to apply the changes to the config map.
+
+Now, proceed to the OpenShift web console.
+
+1. From the left menu, select a project that holds one of your configured clusters and
+then select **Add to Project > Browse Catalog**.
+
+ {{< image filename="/images/rs/openshift-crdb-catalog.png" >}}
+
+1. Find the **Redis Enterprise [Project Name:Cluster Name]** tile and double-click it to start the wizard.
+
+ {{< image filename="/images/rs/openshift-crdb-information.png" >}}
+
+1. Click **Next** in the Information step.
+
+ {{< image filename="/images/rs/openshift-crdb-plan.png" >}}
+
+1. Then, to deploy an Active-Active database on the clusters you’ve previously configured,
+select the **geo-distributed-redis** plan radio button and click **Next**.
+
+ {{< image filename="/images/rs/openshift-crdb-configuration.png" >}}
+
+1. Click **Next** on the Configuration step, choose a binding option in the Binding step,
+and click **Create**.
+
+ {{< image filename="/images/rs/openshift-crdb-binding.png" >}}
+
+The Active-Active database and its connected local databases are now created, with the specified binding if you selected one.
+
+{{< image filename="/images/rs/openshift-crdb-results.png" >}}
+
+You can view the binding by following the link to the secret.
+
+{{< image filename="/images/rs/openshift-crdb-secret.png" >}}
+
+## Validating Active-Active database deployment
+
+To do a basic validation test of database replication:
+
+1. Connect to one of the cluster pods using the following command:
+
+ ```sh
+ oc exec -it -0 bash
+ ```
+
+1. At the prompt, launch the redis CLI:
+
+ ```sh
+ $ redis-cli -h -p -a
+ ```
+
+1. Set some values and verify they have been set:
+
+ ```sh
+ > set keymaster Vinz
+ OK
+ > set gatekeeper Zuul
+ OK
+ > get keymaster
+ "Vinz"
+ > get gatekeeper
+ "Zuul"
+ ```
+
+1. Now, exit the CLI and the pod execution environment and log in to the synced database
+on the other cluster.
+
+ ```sh
+ oc exec -it -0 bash
+ $redis-cli -h -p -a
+ ```
+
+1. Retrieve the values you previously set or continue manipulating key:value pairs
+and observe the 2-way synchronization, for example:
+
+ ```sh
+ > get keymaster
+ "Vinz"
+ > get gatekeeper
+ "Zuul"
+ ```
diff --git a/content/operate/kubernetes/7.4.6/deployment/openshift/old-index.md b/content/operate/kubernetes/7.4.6/deployment/openshift/old-index.md
new file mode 100644
index 0000000000..ad4cf67e12
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/openshift/old-index.md
@@ -0,0 +1,345 @@
+---
+Title: Redis Enterprise Software deployment for Kubernetes with OpenShift
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Redis Enterprise is supported on OpenShift Kubernetes cluster deployments
+ via an operator.
+draft: true
+hidden: true
+hideListLinks: false
+weight: 60
+url: '/operate/kubernetes/7.4.6/deployment/openshift/old-index/'
+---
+
+Redis Enterprise is supported on OpenShift Kubernetes cluster deployments via
+an operator. The operator is a software component that runs in your
+deployment namespace and facilitates deploying and managing
+Redis Enterprise clusters.
+
+
+
+{{% comment %}}
+These are the steps required to set up a Redis Enterprise Software
+Cluster with OpenShift.
+
+Prerequisites:
+
+1. An [OpenShift cluster installed](https://docs.openshift.com/container-platform/4.8/installing/index.html) at version 4.6 or higher, with at least three nodes (each meeting the [minimum requirements for a development installation]({{< relref "/operate/rs/installing-upgrading/hardware-requirements.md" >}})
+1. The [kubectl package installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/) at version 1.9 or higher
+1. The [OpenShift cli installed](https://docs.openshift.com/container-platform/4.8/cli_reference/openshift_cli/getting-started-cli.html)
+
+## Step 1: Login
+
+- Log in to your OpenShift account as a super admin (so you have access to all the default projects).
+- Create a new project, fill in the name and other details for the project, and click **Create**.
+
+ {{< image filename="/images/rs/getting-started-kubernetes-openshift-image1.png" >}}
+
+- Click on “admin” (upper right corner) and then “Copy Login.”
+
+ {{< image filename="/images/rs/getting-started-kubernetes-openshift-image4.png" >}}
+
+- Paste the *login* command into your shell; it should look something like this:
+
+ ```sh
+ oc login https://your-cluster.acme.com --token=your$login$token
+ ```
+
+- Next, verify that you are using the newly created project. Type:
+
+ ```sh
+ oc project
+ ```
+
+This will shift to your project rather than the default project (you can verify the project you’re currently using with the *oc project* command).
+
+## Step 2: Get deployment files
+
+Clone this repository, which contains the deployment files:
+
+```sh
+git clone https://github.com/RedisLabs/redis-enterprise-k8s-docs
+```
+
+
+
+Specifically for the custom resource (cr) yaml file, you may also download and edit one of the files in the [example folder.](https://github.com/RedisLabs/redis-enterprise-k8s-docs/tree/master/examples)
+
+## Step 3: Prepare your yaml files
+
+Let’s look at each yaml file to see what requires editing:
+
+- [scc.yaml](https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/openshift/scc.yaml)
+
+ The scc ([Security Context Constraint](https://docs.openshift.com/container-platform/4.8/authentication/managing-security-context-constraints.html)) yaml defines the cluster’s security context constraints, which we will apply to our project later on. We strongly recommend **not** changing anything in this yaml file.
+
+ Apply the file:
+
+ ```sh
+ oc apply -f scc.yaml
+ ```
+
+ You should receive the following response:
+
+ ```sh
+ securitycontextconstraints.security.openshift.io "redis-enterprise-scc" configured
+ ```
+
+ Now you need to bind the scc to your project by typing:
+
+ ```sh
+ oc adm policy add-scc-to-group redis-enterprise-scc system:serviceaccounts:your_project_name
+ ```
+
+ (If you do not remember your project name, run `oc project`.)
+
+- [openshift.bundle.yaml](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift.bundle.yaml) -
+
+ The bundle file includes several declarations:
+
+ 1. rbac (Role-Based Access Control) defines who can access which resources. The Operator application requires these definitions to deploy and manage the entire Redis Enterprise deployment (all cluster resources within a namespace). These include declaration of rules, role and rolebinding.
+ 1. crd declaration, creating a [CustomResourceDefinition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) for your Redis Enterprise Cluster resource. This provides another API resource to be handled by the k8s API server and managed by the operator we will deploy next
+ 1. operator deployment declaration, creates the operator deployment, which is responsible for managing the k8s deployment and lifecycle of a Redis Enterprise Cluster. Among many other responsibilities, it creates a [stateful set](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) that runs the Redis Enterprise nodes, as pods. The yaml contains the latest image tag representing the latest Operator version available.
+
+ This yaml should be applied as-is, without changes. To apply it:
+
+ ```sh
+ kubectl apply -f openshift.bundle.yaml
+ ```
+
+ You should receive the following response:
+
+ ```sh
+ role.rbac.authorization.k8s.io/redis-enterprise-operator created
+ serviceaccount/redis-enterprise-operator created
+ rolebinding.rbac.authorization.k8s.io/redis-enterprise-operator created
+ customresourcedefinition.apiextensions.k8s.io/redisenterpriseclusters.app.redislabs.com configured
+ deployment.apps/redis-enterprise-operator created
+ ```
+
+1. Now, verify that your redis-enterprise-operator deployment is running:
+
+ ```sh
+ kubectl get deployment -l name=redis-enterprise-operator
+ ```
+
+ A typical response will look like this:
+
+ ```sh
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ redis-enterprise-operator 1/1 1 1 0m36s
+ ```
+
+
+
+ [sb_rbac.yaml](https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/openshift/sb_rbac.yaml)
+
+ If you’re deploying a service broker, also apply the sb_rbac.yaml file. The sb_rbac (Service Broker Role-Based Access Control) yaml defines the access permissions of the Redis Enterprise Service Broker.
+
+ We strongly recommend **not** changing anything in this yaml file.
+
+ To apply it, run:
+
+ ```sh
+ kubectl apply -f sb_rbac.yaml
+ ```
+
+ You should receive the following response:
+
+ ```sh
+ clusterrole.rbac.authorization.k8s.io/redis-enterprise-operator-sb configured
+ clusterrolebinding.rbac.authorization.k8s.io/redis-enterprise-operator configured
+ ```
+
+
+
+- The [rec_rhel.yaml](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/rec_rhel.yaml) defines the configuration of the newly created resource: Redis Enterprise Cluster. This yaml could be renamed your_cluster_name.yaml to keep things tidy, but this isn’t a mandatory step.
+
+ This yaml can be edited to the required use case, however, the sample provided can be used for test/dev and quick start purposes. Here are the main fields you may review and edit:
+
+ - name: “your_cluster_name” (e.g. “demo-cluster”)
+ - nodes: number_of_nodes_in_the_cluster (Must be an uneven number of at least 3 or greater—[here’s why](https://redislabs.com/redis-enterprise/technology/highly-available-redis/))
+ - uiServiceType: service_type
+
+ Service type value can be either ClusterIP or LoadBalancer. This is an optional configuration based on [k8s service types](https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/). The default is ClusterIP.
+
+ - storageClassName: “gp2“
+
+ This specifies the [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) used for your nodes’ persistent disks. For example, AWS uses “gp2” as a default, GKE uses “standard” and Azure uses "default").
+
+ - redisEnterpriseNodeResources: The [compute resources](https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html#dev-compute-resources) required for each node.
+ - limits – specifies the max resources for a Redis node
+ - requests – specifies the minimum resources for a Redis node
+
+ For example:
+
+    ```yaml
+    limits:
+      cpu: "4000m"
+      memory: 4Gi
+    requests:
+      cpu: "4000m"
+      memory: 4Gi
+    ```
+
+ The default (if unspecified) is 4 cores (4000m) and 4GB (4Gi).
+
+ {{< note >}}
+Resource limits should equal requests ([Learn why](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/topics.md#guaranteed-quality-of-service)).
+ {{< /note >}}
+
+ - serviceBrokerSpec –
+ - enabled: \
+
+ This specifies [persistence](https://redislabs.com/redis-features/persistence) for the Service Broker with an “enabled/disabled” flag. The default is “false.”
+
+ persistentSpec:
+ storageClassName: “gp2“
+
+ - redisEnterpriseImageSpec: This configuration controls the Redis Enterprise version used, and where it is fetched from. This is an optional field. The Operator will automatically use the matching RHEL image version for the release.
+
+ [imagePullPolicy](https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/builds_and_image_streams.html#image-pull-policy):
+ IfNotPresent
+ Repository: redislabs/redis
+ versionTag: 5.2.10-22
+
+ The version tag, as it appears on your repository (e.g. on [DockerHub](https://hub.docker.com/r/redislabs/redis/)).
+
+## Step 4: Create your cluster
+
+Once you have your_cluster_name yaml set, you need to apply it to create your Redis Enterprise Cluster:
+
+```sh
+kubectl apply -f your_cluster_name.yaml
+```
+
+Run `kubectl get rec` and verify that creation was successful (`rec` is a shortcut for `RedisEnterpriseCluster`).
+
+You should receive a response similar to the following:
+
+```sh
+NAME                  AGE
+your_cluster_name     17s
+```
+
+Your Cluster will be ready shortly, typically within a few minutes.
+
+To check the cluster status, type the following:
+
+```sh
+kubectl get pod
+```
+
+You should receive a response similar to the following:
+
+| NAME                             | READY | STATUS  | RESTARTS | AGE |
+| -------------------------------- | ----- | ------- | -------- | --- |
+| your_cluster_name-0              | 2/2   | Running | 0        | 1m  |
+| your_cluster_name-1              | 2/2   | Running | 0        | 1m  |
+| your_cluster_name-2              | 2/2   | Running | 0        | 1m  |
+| your_cluster_name-controller-x-x | 1/1   | Running | 0        | 1m  |
+| redis-enterprise-operator-x-x    | 1/1   | Running | 0        | 5m  |
+
+Next, create your databases.
+
+## Step 5: Create a database
+
+In order to create your database, we will log in to the Redis Enterprise UI.
+
+- First, apply port forwarding to your Cluster:
+
+ ```sh
+ kubectl port-forward your_cluster_name-0 8443:8443
+ ```
+
+ {{< note >}}
+- your_cluster_name-0 is one of your cluster pods. You may consider running the port-forward command in the background.
+- The Openshift UI provides tools for creating additional routing options, including external routes. These are covered in [RedHat Openshift documentation](https://docs.openshift.com/container-platform/3.11/dev_guide/routes.html).
+ {{< /note >}}
+
+ Next, create your database.
+
+- Open a browser window and navigate to localhost:8443
+
+ {{< image filename="/images/rs/getting-started-kubernetes-openshift-image5.png" >}}
+
+- In order to retrieve your password, navigate to the OpenShift management console, select your project name, go to Resources-\>Secrets-\>your_cluster_name
+- Retrieve your password by selecting “Reveal Secret.”
+
+ {{< warning >}}
+Do not change the default admin user password in the Redis Enterprise admin console.
+Changing the admin password impacts the proper operation of the K8s deployment.
+ {{< /warning >}}
+
+ {{< image filename="/images/rs/getting-started-kubernetes-openshift-image3.png" >}}
+
+- Follow the interface’s [instructions to create your database]({{< relref "/operate/rs/administering/creating-databases/_index.md" >}}).
+
+{{< note >}}
+In order to conduct the Ping test through Telnet, you can create a new route to the newly created database port in the same way as described above for the UI port. After you create your database, go to the Openshift management console, select your project name and go to Applications-\>Services. You will see two newly created services representing the database along with their IP and port information, similar to the screenshot below.
+{{< /note >}}
+
+{{< image filename="/images/rs/getting-started-kubernetes-openshift-image6.png" >}}
+
+{{% /comment %}}
diff --git a/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-cli.md b/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-cli.md
new file mode 100644
index 0000000000..efab1ab1a9
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-cli.md
@@ -0,0 +1,248 @@
+---
+Title: Deployment with OpenShift CLI for Redis Enterprise for Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Redis Enterprise for Kubernetes and a Redis Enterprise cluster can be
+ installed with the OpenShift CLI tools.
+linkTitle: OpenShift CLI
+weight: 60
+url: '/operate/kubernetes/7.4.6/deployment/openshift/openshift-cli/'
+---
+Use these steps to set up a Redis Enterprise Software cluster with OpenShift.
+
+## Prerequisites
+
+- [OpenShift cluster](https://docs.openshift.com/container-platform/4.8/installing/index.html) with at least 3 nodes (each meeting the [minimum requirements for a development installation]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements" >}}))
+- [OpenShift CLI](https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html)
+
+To see which version of Redis Enterprise for Kubernetes supports your OpenShift version, see [Supported Kubernetes distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions" >}}).
+
+## Deploy the operator
+
+1. Create a new project.
+
+ ```sh
+ oc new-project
+ ```
+
+1. Verify the newly created project.
+
+ ```sh
+ oc project
+ ```
+
+1. Get the deployment files.
+
+ ```sh
+ git clone https://github.com/RedisLabs/redis-enterprise-k8s-docs
+ ```
+
+1. Deploy the OpenShift operator bundle.
+
+ If you are using version 6.2.18-41 or earlier, you must [apply the security context constraint](#install-security-context-constraint) before applying the operator bundle.
+
+ ```sh
+ oc apply -f openshift.bundle.yaml
+ ```
+
+ {{< warning >}}
+Changes to the `openshift.bundle.yaml` file can cause unexpected results.
+ {{< /warning >}}
+
+1. Verify that your `redis-enterprise-operator` deployment is running.
+
+ ```sh
+ oc get deployment
+ ```
+
+ A typical response looks like this:
+
+ ```sh
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ redis-enterprise-operator 1/1 1 1 0m36s
+ ```
+
+ {{}}
+DO NOT modify or delete the StatefulSet created during the deployment process. Doing so could destroy your Redis Enterprise cluster (REC).
+ {{}}
+
+## Install security context constraint
+
+The Redis Enterprise pods must run in OpenShift with privileges set in a [Security Context Constraint](https://docs.openshift.com/container-platform/4.4/authentication/managing-security-context-constraints.html#security-context-constraints-about_configuring-internal-oauth). This grants the pod various rights, such as the ability to change system limits or run as a particular user.
+
+1. Apply the `scc.yaml` file.
+
+ {{}}
+Do not edit this file.
+ {{}}
+
+ ```sh
+ oc apply -f openshift/scc.yaml
+ ```
+
+ You should receive the following response:
+
+ ```sh
+ securitycontextconstraints.security.openshift.io "redis-enterprise-scc-v2" configured
+ ```
+
+ Releases before 6.4.2-6 use the earlier version of the SCC, named `redis-enterprise-scc`.
+
+1. Provide the operator permissions for the pods.
+
+ ```sh
+ oc adm policy add-scc-to-user redis-enterprise-scc-v2 \
+ system:serviceaccount::
+ ```
+
+ {{}}
+If you are using version 6.2.18-41 or earlier, add additional permissions for your cluster.
+
+```sh
+oc adm policy add-scc-to-user redis-enterprise-scc \
+system:serviceaccount::redis-enterprise-operator
+```
+{{}}
+
+ You can check the name of your current project with the `oc project` command. In the command above, substitute your project name and replace `rec` with the name of your Redis Enterprise cluster, if different.
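+
+    For example, with a hypothetical project named `my-project` and the default Redis Enterprise cluster name `rec`, the binding command would be:
+
+    ```sh
+    oc adm policy add-scc-to-user redis-enterprise-scc-v2 \
+      system:serviceaccount:my-project:rec
+    ```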
+
+## Create a Redis Enterprise cluster custom resource
+
+1. Apply the `RedisEnterpriseCluster` resource file ([rec_rhel.yaml](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/rec_rhel.yaml)).
+
+ You can rename the file to `.yaml`, but it is not required. Examples below use `.yaml`. [Options for Redis Enterprise clusters]({{< relref "/operate/kubernetes/reference/redis_enterprise_cluster_api" >}}) has more info about the Redis Enterprise cluster (REC) custom resource, or see the [Redis Enterprise cluster API]({{}}) for a full list of options.
+
+ The REC name cannot be changed after cluster creation.
+
+ {{}}
+Each Redis Enterprise cluster requires at least 3 nodes. Single-node RECs are not supported.
+ {{}}
+
+2. Apply the custom resource file to create your Redis Enterprise cluster.
+
+ ```sh
+ oc apply -f .yaml
+ ```
+
+ The operator typically creates the REC within a few minutes.
+
+1. Check the cluster status.
+
+ ```sh
+ oc get pod
+ ```
+
+ You should receive a response similar to the following:
+
+ ```sh
+    NAME                            READY   STATUS    RESTARTS   AGE
+    rec-name-0                      2/2     Running   0          1m
+    rec-name-1                      2/2     Running   0          1m
+    rec-name-2                      2/2     Running   0          1m
+    rec-name-controller-x-x         1/1     Running   0          1m
+    redis-enterprise-operator-x-x   1/1     Running   0          5m
+ ```
+
+## Configure the admission controller
+
+{{< embed-md "k8s-admission-webhook-cert.md" >}}
+
+### Limit the webhook to relevant namespaces
+
+If not limited, the webhook intercepts requests from all namespaces. If you have several REC objects in your Kubernetes cluster, limit the webhook to the relevant namespaces. If you aren't using multiple namespaces, skip this step.
+
+1. Verify your namespace is labeled and the label is unique to this namespace, as shown in the next example.
+
+ ```sh
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ labels:
+ namespace-name: staging
+ name: staging
+ ```
+
+1. Patch the webhook spec with the `namespaceSelector` field.
+
+ ```sh
+    cat > modified-webhook.yaml <<EOF
+    webhooks:
+    - name: redisenterprise.admission.redislabs
+      namespaceSelector:
+        matchLabels:
+          namespace-name: staging
+    EOF
+    oc patch ValidatingWebhookConfiguration \
+      redis-enterprise-admission --patch "$(cat modified-webhook.yaml)"
+    ```
+
+    {{< note >}}
+For releases before 6.4.2-4, use this command instead:
+
+```sh
+oc patch ValidatingWebhookConfiguration \
+ redb-admission --patch "$(cat modified-webhook.yaml)"
+```
+
+The 6.4.2-4 release introduces a new `ValidatingWebhookConfiguration` to replace `redb-admission`. See the [6.4.2-4 release notes]({{< relref "/operate/kubernetes/release-notes/6-4-2-releases/" >}}).
+    {{< /note >}}
+
+### Verify admission controller installation
+
+Apply an invalid resource as shown below to force the admission controller to reject it. If it applies successfully, the admission controller is not installed correctly.
+
+```sh
+oc apply -f - << EOF
+apiVersion: app.redislabs.com/v1alpha1
+kind: RedisEnterpriseDatabase
+metadata:
+ name: redis-enterprise-database
+spec:
+ evictionPolicy: illegal
+EOF
+```
+
+You should see this error from the admission controller webhook `redisenterprise.admission.redislabs`.
+
+```sh
+Error from server: error when creating "STDIN": admission webhook "redisenterprise.admission.redislabs" denied the request: eviction_policy: u'illegal' is not one of [u'volatile-lru', u'volatile-ttl', u'volatile-random', u'allkeys-lru', u'allkeys-random', u'noeviction', u'volatile-lfu', u'allkeys-lfu']
+```
+
+## Create a Redis Enterprise database custom resource
+
+The operator uses the instructions in the Redis Enterprise database (REDB) custom resources to manage databases on the Redis Enterprise cluster.
+
+1. Create a `RedisEnterpriseDatabase` custom resource.
+
+ This example creates a test database. For production databases, see [create a database]({{< relref "/operate/kubernetes/re-databases/db-controller.md#create-a-database" >}}) and [RedisEnterpriseDatabase API reference]({{< relref "/operate/kubernetes/reference/redis_enterprise_database_api" >}}).
+
+ ```sh
+ cat << EOF > /tmp/redis-enterprise-database.yml
+ apiVersion: app.redislabs.com/v1alpha1
+ kind: RedisEnterpriseDatabase
+ metadata:
+ name: redis-enterprise-database
+ spec:
+ memorySize: 100MB
+ EOF
+ ```
+
+1. Apply the newly created REDB resource.
+
+ ```sh
+ oc apply -f /tmp/redis-enterprise-database.yml
+ ```
+
+## More info
+
+- [Redis Enterprise cluster API]({{}})
+- [Redis Enterprise database API]({{}})
diff --git a/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-operatorhub.md b/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-operatorhub.md
new file mode 100644
index 0000000000..71e54ebb84
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/openshift/openshift-operatorhub.md
@@ -0,0 +1,92 @@
+---
+Title: Deploy Redis Enterprise with OpenShift OperatorHub
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: OpenShift provides the OperatorHub where you can install the Redis Enterprise
+ operator from the administrator user interface.
+linkTitle: OpenShift OperatorHub
+weight: 70
+url: '/operate/kubernetes/7.4.6/deployment/openshift/openshift-operatorhub/'
+---
+
+You can deploy Redis Enterprise for Kubernetes from the Red Hat OpenShift CLI. You can also use the Red Hat [OperatorHub](https://docs.openshift.com/container-platform/4.11/operators/index.html) UI to install operators and create custom resources.
+
+To see which version of Redis Enterprise for Kubernetes supports your OpenShift version, see [Supported Kubernetes distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions" >}}).
+
+## Install the Redis Enterprise operator
+
+{{}} If using version 6.2.18-41 or earlier, [Install the security context constraint](#install-security-context-constraint) before installing the operator. {{}}
+
+1. Select **Operators > OperatorHub**.
+
+2. Search for _Redis Enterprise_ in the search dialog and select the **Redis Enterprise Operator provided by Redis** marked as **Certified**.
+
+ By default, the image is pulled from Red Hat's registry.
+
+3. On the **Install Operator** page, specify the namespace for the operator.
+
+ Only one namespace per operator is supported.
+
+4. Update the **channel** with the version you're installing.
+
+ For more information about specific versions, see the [release notes]({{< relref "/operate/kubernetes/release-notes/" >}}).
+
+5. Choose an approval strategy.
+
+ Use **Manual** for production systems to ensure the operator is only upgraded by approval.
+
+6. Select **Install** and approve the install plan.
+
+ You can monitor the subscription status in **Operators > Installed Operators**.
+
+{{}}DO NOT modify or delete the StatefulSet created during the deployment process. Doing so could destroy your Redis Enterprise cluster (REC).{{}}
+
+## Install security context constraint
+
+The Redis Enterprise pods must run in OpenShift with privileges set in a [Security Context Constraint](https://docs.openshift.com/container-platform/4.4/authentication/managing-security-context-constraints.html#security-context-constraints-about_configuring-internal-oauth). This grants the pod various rights, such as the ability to change system limits or run as a particular user.
+
+{{}}
+ Before creating any clusters, install the security context constraint (SCC) for the operator in [scc.yaml](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/scc.yaml).
+{{}}
+
+You only need to install the SCC once, but you must not delete it.
+
+1. Select the project you'll be using or create a new project.
+
+1. Download [`scc.yaml`](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/scc.yaml).
+
+1. Apply the file to install the security context constraint.
+
+ ```sh
+ oc apply -f scc.yaml
+ ```
+
+After the install, the OperatorHub automatically uses the constraint for Redis Enterprise node pods.
+
+{{< note >}}
+If you are using the recommended RedisEnterpriseCluster name of `rec`, the SCC is automatically bound to the RedisEnterpriseCluster after install.
+
+If you choose a different name for the RedisEnterpriseCluster, or override the default service account name, you must manually bind the SCC to the RedisEnterpriseCluster’s service account:
+
+ ```sh
+ oc adm policy add-scc-to-user redis-enterprise-scc-v2 \
+ system:serviceaccount::
+ ```
+
+{{< /note >}}
+
+## Create Redis Enterprise custom resources
+
+The **Installed Operators**->**Operator details** page shows the provided APIs: **RedisEnterpriseCluster** and **RedisEnterpriseDatabase**. You can select **Create instance** to create custom resources using the OperatorHub interface.
+
+Use the YAML view to create a custom resource file or let OperatorHub generate the YAML file for you by specifying your configuration options in the form view.
+
+ The REC name cannot be changed after cluster creation.
+
+{{}} In versions 6.4.2-4 and 6.4.2-5, REC creation might fail when using the form view due to an error related to the cluster level LDAP. To avoid this, use the YAML view.
+{{}}
+
+For more information on creating and maintaining Redis Enterprise custom resources, see [Redis Enterprise clusters (REC)]({{< relref "/operate/kubernetes/re-clusters/" >}}) and [Redis Enterprise databases (REDB)]({{< relref "/operate/kubernetes/re-databases/" >}}).
diff --git a/content/operate/kubernetes/7.4.6/deployment/quick-start.md b/content/operate/kubernetes/7.4.6/deployment/quick-start.md
new file mode 100644
index 0000000000..aadf1b8be5
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/deployment/quick-start.md
@@ -0,0 +1,260 @@
+---
+Title: Deploy Redis Enterprise Software for Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: How to install Redis Enterprise Software for Kubernetes.
+linkTitle: Kubernetes
+weight: 10
+url: '/operate/kubernetes/7.4.6/deployment/quick-start/'
+---
+
+To deploy Redis Enterprise Software for Kubernetes and start your Redis Enterprise cluster (REC), you need to do the following:
+
+- Create a new namespace in your Kubernetes cluster.
+- Download the operator bundle.
+- Apply the operator bundle and verify it's running.
+- Create a Redis Enterprise cluster (REC).
+
+This guide works with most supported Kubernetes distributions. If you're using OpenShift, see [Redis Enterprise on OpenShift]({{< relref "/operate/kubernetes/deployment/openshift" >}}). For details on what is currently supported, see [supported distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions.md" >}}).
+
+## Prerequisites
+
+To deploy Redis Enterprise for Kubernetes, you'll need:
+
+- Kubernetes cluster in a [supported distribution]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions.md" >}})
+- minimum of three worker nodes
+- Kubernetes client (kubectl)
+- access to DockerHub, Red Hat Container Catalog, or a private container registry that can hold the required images.
+
+### Create a new namespace
+
+**Important:** Each namespace can only contain one Redis Enterprise cluster. Multiple RECs with different operator versions can coexist on the same Kubernetes cluster, as long as they are in separate namespaces.
+
+Throughout this guide, each command is applied to the namespace in which the Redis Enterprise cluster operates.
+
+1. Create a new namespace.
+
+ ```sh
+ kubectl create namespace
+ ```
+
+1. Change the namespace context to make the newly created namespace default for future commands.
+
+ ```sh
+ kubectl config set-context --current --namespace=
+ ```
+
+You can use an existing namespace as long as it does not contain any existing Redis Enterprise cluster resources. It's best practice to create a new namespace to make sure there are no Redis Enterprise resources that could interfere with the deployment.
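+
+For example, here is a quick sketch to confirm that a namespace you plan to reuse has no Redis Enterprise resources. `existing-namespace` is a placeholder, and `rec` and `redb` are the short names of the Redis Enterprise custom resources:
+
+```sh
+# If the Redis Enterprise CRDs are not installed yet, these resource types will not
+# exist and the command reports an error, which also means the namespace holds no
+# Redis Enterprise resources.
+kubectl get rec,redb -n existing-namespace
+```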
+
+## Install the operator
+
+The Redis Enterprise for Kubernetes bundle is published as a container image. A list of required images is available in the [release notes]({{< relref "/operate/kubernetes/release-notes/_index.md" >}}) for each version.
+
+The operator [definition and reference materials](https://github.com/RedisLabs/redis-enterprise-k8s-docs) are available on GitHub. The operator definitions are [packaged as a single generic YAML file](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/bundle.yaml).
+
+{{}}
+If you do not pull images from DockerHub or another public registry, you need to use a [private container registry]({{< relref "/operate/kubernetes/deployment/container-images#manage-image-sources" >}}).
+{{}}
+
+### Download the operator bundle
+
+Get the version tag of the latest operator release:
+
+```sh
+VERSION=`curl --silent https://api.github.com/repos/RedisLabs/redis-enterprise-k8s-docs/releases/latest | grep tag_name | awk -F'"' '{print $4}'`
+```
+
+ If you need a different release, replace `VERSION` with a specific release tag.
+
+ Check version tags listed with the [operator releases on GitHub](https://github.com/RedisLabs/redis-enterprise-k8s-docs/releases) or by [using the GitHub API](https://docs.github.com/en/rest/reference/repos#releases) to ensure the version of the bundle is correct.
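+
+ For example, to pin a specific release instead of the latest, set `VERSION` to a tag from the releases page. The tag below is only a hypothetical illustration:
+
+ ```sh
+ # Hypothetical release tag; replace with a tag listed on the releases page.
+ VERSION=v7.4.6-2
+ ```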
+
+### Deploy the operator bundle
+
+Apply the operator bundle in your REC namespace:
+
+```sh
+kubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/$VERSION/bundle.yaml
+```
+
+ You should see a result similar to this:
+
+ ```sh
+ role.rbac.authorization.k8s.io/redis-enterprise-operator created
+ serviceaccount/redis-enterprise-operator created
+ rolebinding.rbac.authorization.k8s.io/redis-enterprise-operator created
+ customresourcedefinition.apiextensions.k8s.io/redisenterpriseclusters.app.redislabs.com configured
+ customresourcedefinition.apiextensions.k8s.io/redisenterprisedatabases.app.redislabs.com configured
+ deployment.apps/redis-enterprise-operator created
+ ```
+
+{{}}DO NOT modify or delete the StatefulSet created during the deployment process. Doing so could destroy your Redis Enterprise cluster (REC).{{}}
+
+#### Verify the operator is running
+
+Check the operator deployment to verify it's running in your namespace:
+
+```sh
+kubectl get deployment redis-enterprise-operator
+```
+
+You should see a result similar to this:
+
+```sh
+NAME READY UP-TO-DATE AVAILABLE AGE
+redis-enterprise-operator 1/1 1 1 0m36s
+```
+
+## Create a Redis Enterprise cluster (REC)
+
+A Redis Enterprise cluster (REC) is created from a `RedisEnterpriseCluster` custom resource
+that contains cluster specifications.
+
+The following example creates a minimal Redis Enterprise cluster. See the [RedisEnterpriseCluster API reference]({{}}) for more information on the various options available.
+
+1. Create a file that defines a Redis Enterprise cluster with three nodes.
+
+ {{< note >}}
+The REC name (`my-rec` in this example) cannot be changed after cluster creation.
+ {{< /note >}}
+
+ ```sh
+ cat <<EOF > my-rec.yaml
+ apiVersion: "app.redislabs.com/v1"
+ kind: "RedisEnterpriseCluster"
+ metadata:
+   name: my-rec
+ spec:
+   nodes: 3
+ EOF
+ ```
+
+ This will request a cluster with three Redis Enterprise nodes using the default requests (i.e., 2 CPUs and 4GB of memory per node).
+
+ To test with a larger configuration, use the example below to add node resources to the `spec` section of your test cluster (`my-rec.yaml`).
+
+ ```yaml
+ redisEnterpriseNodeResources:
+   limits:
+     cpu: 2000m
+     memory: 16Gi
+   requests:
+     cpu: 2000m
+     memory: 16Gi
+ ```
+
+ {{< note >}}
+Each cluster must have at least 3 nodes. Single-node RECs are not supported.
+ {{< /note >}}
+
+ See the [Redis Enterprise hardware requirements]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}) for more information on sizing Redis Enterprise node resource requests.
+
+1. Apply the custom resource file (`my-rec.yaml`) in your REC namespace to create the cluster.
+
+ ```sh
+ kubectl apply -f my-rec.yaml
+ ```
+
+ You should see a result similar to this:
+
+ ```sh
+ redisenterprisecluster.app.redislabs.com/my-rec created
+ ```
+
+1. You can verify the creation of the cluster with:
+
+ ```sh
+ kubectl get rec
+ ```
+
+ You should see a result similar to this:
+
+ ```sh
+ NAME AGE
+ my-rec 1m
+ ```
+
+ At this point, the operator will go through the process of creating various services and pod deployments.
+
+ You can track the progress by examining the StatefulSet associated with the cluster:
+
+ ```sh
+ kubectl rollout status sts/my-rec
+ ```
+
+ or by looking at the status of all of the resources in your namespace:
+
+ ```sh
+ kubectl get all
+ ```
+
+## Enable the admission controller
+
+The admission controller dynamically validates REDB resources configured by the operator. It is strongly recommended that you use the admission controller on your Redis Enterprise Cluster (REC). The admission controller only needs to be configured once per operator deployment.
+
+As part of the REC creation process, the operator stores the admission controller certificate in a Kubernetes secret called `admission-tls`. You may have to wait a few minutes after creating your REC to see the secret has been created.
+
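+To confirm the secret exists before continuing, you can list it in the REC namespace (a quick check; `<rec-namespace>` is a placeholder for your own namespace):
+
+```sh
+kubectl get secret admission-tls -n <rec-namespace>
+```
+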
+{{< embed-md "k8s-admission-webhook-cert.md" >}}
+
+### Limit the webhook to the relevant namespaces {#webhook}
+
+The operator bundle includes a webhook file. The webhook will intercept requests from all namespaces unless you edit it to target a specific namespace. You can do this by adding the `namespaceSelector` section to the webhook spec to target a label on the namespace.
+
+1. Make sure the namespace has a unique `namespace-name` label.
+
+ ```yaml
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+   labels:
+     namespace-name: example-ns
+   name: example-ns
+ ```
+
+1. Patch the webhook to add the `namespaceSelector` section.
+
+ ```sh
+ cat > modified-webhook.yaml <<EOF
+ webhooks:
+ - name: redisenterprise.admission.redislabs.com
+   namespaceSelector:
+     matchLabels:
+       namespace-name: example-ns
+ EOF
+ kubectl patch ValidatingWebhookConfiguration \
+   redis-enterprise-admission --patch "$(cat modified-webhook.yaml)"
+ ```
+
+After the admission controller is configured, you can create Redis Enterprise databases (REDB) in the namespaces the operator watches.
diff --git a/content/operate/kubernetes/7.4.6/faqs/_index.md b/content/operate/kubernetes/7.4.6/faqs/_index.md
new file mode 100644
index 0000000000..a88240d714
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/faqs/_index.md
@@ -0,0 +1,171 @@
+---
+Title: Redis Enterprise for Kubernetes FAQs
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+hideListLinks: true
+linkTitle: FAQs
+weight: 100
+url: '/operate/kubernetes/7.4.6/faqs/'
+---
+Here are some frequently asked questions about Redis Enterprise for Kubernetes.
+
+## What is an Operator?
+
+An operator is a [Kubernetes custom controller](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources#custom-controllers) which extends the native K8s API. Refer to the article [Redis Enterprise K8s Operator-based deployments – Overview]({{< relref "/operate/kubernetes/architecture/operator.md" >}}).
+
+## Does Redis Enterprise operator support multiple RECs per namespace?
+
+Redis Enterprise for Kubernetes may only deploy a single Redis Enterprise cluster (REC) per [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). Each REC can run multiple databases while maintaining high capacity and performance.
+
+## Do I need to deploy a Redis Enterprise operator per namespace?
+
+Yes, one operator per namespace, each managing a single Redis Enterprise cluster.
+Each REC can run multiple databases while maintaining high capacity and performance.
+
+## How can I see the custom resource definitions (CRDs) created for my Redis Enterprise cluster?
+
+Run the following:
+
+```sh
+kubectl get rec
+kubectl describe rec
+```
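+
+To list the custom resource definitions themselves (rather than the custom resources), you can filter CRDs by the operator's API group, a quick sketch using the `app.redislabs.com` group shown in the operator bundle output:
+
+```sh
+kubectl get crds | grep app.redislabs.com
+```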
+
+## How can I change the Redis Enterprise cluster admin user password?
+
+The cluster admin user password is created by the operator during the deployment of the Redis Enterprise cluster (REC) and is stored in a Kubernetes [secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+See [Manage REC credentials]({{< relref "/operate/kubernetes/security/manage-rec-credentials" >}}) for instructions on changing the admin password.
+
+## How is using Redis Enterprise operator superior to using Helm charts?
+
+While [Helm charts](https://helm.sh/docs/topics/charts/) help automate multi-resource deployments, they do not provide lifecycle management and lack many of the benefits provided by the operator:
+
+- Operators are a K8s standard, while Helm is a proprietary tool
+  - Using operators means better packaging for different Kubernetes deployments and distributions, as Helm is not supported in a straightforward way everywhere
+- Operators allow full control over the Redis Enterprise cluster lifecycle
+  - We’ve experienced difficulties managing the state and lifecycle of the application through Helm, as it essentially only lets you determine which resources are deployed, which is a problem when upgrading or evolving Redis Enterprise cluster settings
+- Operators support advanced flows that would otherwise require an additional third-party product
+
+## How to connect to the Redis Enterprise cluster user interface
+
+Create a [port forwarding](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward) rule to expose the cluster user interface (UI) port. For example, when the default port 8443 is used, run:
+
+```sh
+kubectl port-forward --namespace <rec-namespace> service/<rec-name>-cluster-ui 8443:8443
+```
+
+Connect to the UI by pointing your browser to `https://localhost:8443`
+
+## How should I size Redis Enterprise cluster nodes?
+
+For nodes hosting the Redis Enterprise cluster [statefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) [pods](https://kubernetes.io/docs/concepts/workloads/pods/), follow the guidelines provided for Redis Enterprise in the [hardware requirements]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/hardware-requirements.md" >}}).
+
+For additional information please also refer to [Kubernetes operator deployment – persistent volumes]({{< relref "/operate/kubernetes/recommendations/persistent-volumes.md" >}}).
+
+## How to retrieve the username/password for a Redis Enterprise Cluster?
+
+The Redis Enterprise cluster stores the username/password of the UI in a K8s [secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+Find the secret by retrieving secrets and locating one of type [Opaque](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets) with a name identical to or containing your Redis Enterprise cluster name.
+
+For example, run:
+
+```sh
+kubectl get secrets
+```
+
+A possible response may look like this:
+
+| NAME | TYPE | DATA | AGE |
+|------|------|------|-----|
+| redis-enterprise-cluster | Opaque | 2 | 5d |
+
+To retrieve the secret run:
+
+```sh
+kubectl get secret redis-enterprise-cluster -o yaml
+```
+
+A possible response may look like this:
+
+```yaml
+apiVersion: v1
+data:
+  password: Q2h5N1BBY28=
+  username: cmVkaXNsYWJzLnNi
+kind: Secret
+metadata:
+  creationTimestamp: 2018-09-03T14:06:39Z
+  labels:
+    app: redis-enterprise
+    redis.io/cluster: test
+  name: redis-enterprise-cluster
+  namespace: redis
+  ownerReferences:
+  - apiVersion: app.redislabs.com/v1alpha1
+    blockOwnerDeletion: true
+    controller: true
+    kind: RedisEnterpriseCluster
+    name: test
+    uid: 8b247469-c715-11e8-a5d5-0a778671fc2e
+  resourceVersion: "911969"
+  selfLink: /api/v1/namespaces/redis/secrets/redis-enterprise-cluster
+  uid: 8c4ff52e-c715-11e8-80f5-02cc4fca9682
+type: Opaque
+```
+
+Next, decode, for example, the password field. Run:
+
+```sh
+echo "Q2h5N1BBY28=" | base64 --decode
+```
+
+
+## How to retrieve the username/password for a Redis Enterprise Cluster through the OpenShift Console?
+
+To retrieve your password, navigate to the OpenShift management console, select your project name, go to Resources->Secrets->your_cluster_name
+
+Retrieve your password by selecting “Reveal Secret.”
+{{< image filename="/images/rs/openshift-password-retrieval.png" >}}
+
+
+## What capabilities, privileges and permissions are defined by the Security Context Constraint (SCC) yaml?
+
+The `scc.yaml` file is defined like this:
+
+```yaml
+kind: SecurityContextConstraints
+apiVersion: security.openshift.io/v1
+metadata:
+  name: redis-enterprise-scc
+allowPrivilegedContainer: false
+allowedCapabilities:
+  - SYS_RESOURCE
+runAsUser:
+  type: MustRunAs
+  uid: 1001
+fsGroup:
+  type: MustRunAs
+  ranges:
+    - min: 1001
+      max: 1001
+seLinuxContext:
+  type: RunAsAny
+```
+
+([latest version on GitHub](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/openshift/scc.yaml))
+
+([version tags on GitHub](https://github.com/RedisLabs/redis-enterprise-k8s-docs/tags))
+
+The SYS_RESOURCE capability is required by the Redis Enterprise cluster (REC) container so that REC can set correct out of memory (OOM) scores to its processes inside the container.
+Also, some of the REC services must be able to increase default resource limits, especially the number of open file descriptors.
+
+{{< note >}}
+- Removing NET_RAW blocks 'ping' from being used on the solution containers.
+- These changes were made as of release 5.4.6-1183 to better align the deployment with container and Kubernetes security best practices:
+ - The NET_RAW capability requirement in PSP was removed.
+ - The allowPrivilegeEscalation is set to 'false' by default.
+{{< /note >}}
diff --git a/content/operate/kubernetes/7.4.6/kubernetes-archive.md b/content/operate/kubernetes/7.4.6/kubernetes-archive.md
new file mode 100644
index 0000000000..f6e1a41019
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/kubernetes-archive.md
@@ -0,0 +1,21 @@
+---
+Title: Archive
+alwaysopen: false
+categories:
+- docs
+- operate
+- rc
+description: Describes where to view the archive of Redis Enterprise for Kubernetes
+ documentation.
+linkTitle: Archive
+weight: 99999999999
+url: '/operate/kubernetes/7.4.6/kubernetes-archive/'
+---
+
+Previous versions of Redis Enterprise for Kubernetes documentation are available on the archived web site:
+
+- [Redis Enterprise for Kubernetes v7.4 documentation archive](https://docs.redis.com/7.4/kubernetes/)
+
+- [Redis Enterprise for Kubernetes v7.2 documentation archive](https://docs.redis.com/7.2/kubernetes/)
+
+- [Redis Enterprise for Kubernetes v6.x documentation archive](https://docs.redis.com/6.4/kubernetes/)
diff --git a/content/operate/kubernetes/7.4.6/logs/_index.md b/content/operate/kubernetes/7.4.6/logs/_index.md
new file mode 100644
index 0000000000..cfd0851860
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/logs/_index.md
@@ -0,0 +1,43 @@
+---
+Title: Redis Enterprise Software logs on Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This section provides information about how logs are stored and accessed.
+hideListLinks: true
+linkTitle: Logs
+weight: 60
+url: '/operate/kubernetes/7.4.6/logs/'
+---
+
+## Logs
+
+Each redis-enterprise container stores its logs under `/var/opt/redislabs/log`.
+When using persistent storage this path is automatically mounted to the
+`redis-enterprise-storage` volume.
+This volume can easily be accessed by a sidecar container, that is, a container running in the same pod.
+
+For example, in the REC (Redis Enterprise Cluster) spec you can add a sidecar container, such as a busybox, and mount the logs to there:
+
+```yaml
+sideContainersSpec:
+  - name: busybox
+    image: busybox
+    args:
+      - /bin/sh
+      - -c
+      - while true; do echo "hello"; sleep 1; done
+    volumeMounts:
+      - name: redis-enterprise-storage
+        mountPath: /home/logs
+        subPath: logs
+```
+
+Now the logs can be accessed from the sidecar. For example, run:
+
+```sh
+kubectl exec -it <rec-pod-name> -c busybox -- tail /home/logs/supervisord.log
+```
+
+The sidecar container is user determined and can be used to format, process and share logs in a specified format and protocol.
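+
+If you need to copy a log file off the pod for further processing, a `kubectl cp` sketch like the following can pull it from the sidecar to your local machine (the pod name is a placeholder):
+
+```sh
+kubectl cp <rec-pod-name>:/home/logs/supervisord.log ./supervisord.log -c busybox
+```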
diff --git a/content/operate/kubernetes/7.4.6/logs/collect-logs.md b/content/operate/kubernetes/7.4.6/logs/collect-logs.md
new file mode 100644
index 0000000000..12bce83a15
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/logs/collect-logs.md
@@ -0,0 +1,42 @@
+---
+Title: Collect logs
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Run the log collector script to package relevant logs into a tar.gz file
+ to send to Redis Support for help troubleshooting your Kubernetes environment.
+linkTitle: Collect logs
+weight: 89
+url: '/operate/kubernetes/7.4.6/logs/collect-logs/'
+---
+
+The Redis Enterprise cluster (REC) log collector script ([`log_collector.py`](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/log_collector/log_collector.py)) creates and fills a directory with the relevant logs for your environment. These logs will help the support team with troubleshooting.
+
+As of version 6.2.18-3, the log collector tool has two modes:
+
+- **restricted** collects only resources and logs created by the operator and Redis Enterprise deployments
+ - This is the default for versions 6.2.18-3 and later
+- **all** collects everything from your environment
+ - This is the default mode for versions 6.2.12-1 and earlier
+
+{{< note >}}This script requires Python 3.6 or later.{{< /note >}}
+
+1. Download the latest [`log_collector.py`](https://github.com/RedisLabs/redis-enterprise-k8s-docs/blob/master/log_collector/log_collector.py) file.
+
+1. Have a K8s administrator run the script on the system that runs your `kubectl` or `oc` commands.
+ - Pass `-n` parameter to run on a different namespace than the one you are currently on
+ - Pass `-m` parameter to change the log collector mode (`all` or `restricted`)
+ - Run with `-h` to see more options
+
+ ```bash
+ python log_collector.py
+ ```
+
+ {{< note >}} If you get an error because the yaml module is not found, install the pyYAML module with `pip install pyyaml`.
+ {{< /note >}}
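+
+ For example, to collect restricted-mode logs from a specific namespace, the invocation might look like this (the namespace value is a placeholder):
+
+ ```bash
+ python log_collector.py -n <rec-namespace> -m restricted
+ ```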
+
+
+
+1. Upload the resulting `tar.gz` file containing all the logs to [Redis Support](https://support.redislabs.com/).
diff --git a/content/operate/kubernetes/7.4.6/networking/_index.md b/content/operate/kubernetes/7.4.6/networking/_index.md
new file mode 100644
index 0000000000..1aac680e94
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/networking/_index.md
@@ -0,0 +1,42 @@
+---
+Title: Networking
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+hideListLinks: true
+linkTitle: Networking
+weight: 40
+url: '/operate/kubernetes/7.4.6/networking/'
+---
+
+Redis Enterprise for Kubernetes supports several ways to route external traffic to your RedisEnterpriseCluster:
+
+- Ingress controllers [HAProxy](https://haproxy-ingress.github.io/) and [NGINX](https://kubernetes.github.io/ingress-nginx/) require an `ingress` API resource.
+- [Istio](https://istio.io/latest/docs/setup/getting-started/) requires `Gateway` and `VirtualService` API resources.
+- OpenShift uses [routes]({{< relref "/operate/kubernetes/networking/routes.md" >}}) to route external traffic.
+- The RedisEnterpriseActiveActiveDatabase (REAADB) requires any of the above routing methods to be configured in the RedisEnterpriseCluster (REC) with the `ingressOrRouteSpec` field.
+
+## External routing using Redis Enterprise for Kubernetes
+
+Every time a RedisEnterpriseDatabase (REDB), Redis Enterprise Active-Active database (REAADB), or Redis Enterprise cluster (REC) is created, the Redis Enterprise operator automatically creates a [service](https://kubernetes.io/docs/concepts/services-networking/service/) to allow requests to be routed to that resource.
+
+Redis Enterprise supports three [types of services](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for accessing databases: `ClusterIP`, `headless`, or `LoadBalancer`.
+
+By default, the operator creates a `ClusterIP` type service, which exposes a cluster-internal IP and can only be accessed from within the K8s cluster. For requests to be routed from outside the K8s cluster, you need an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) (or [route](https://docs.openshift.com/container-platform/4.12/networking/routes/route-configuration.html) if you are using OpenShift). See [kubernetes.io](https://kubernetes.io/docs/) for more details on [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) and [Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
+
+* To use NGINX or HAProxy Ingress controllers, see [Ingress routing]({{< relref "/operate/kubernetes/networking/ingress.md" >}}).
+* To use OpenShift routes, see [OpenShift routes]({{< relref "/operate/kubernetes/networking/routes.md" >}}).
+* To use Istio as an Ingress controller, see [Istio Ingress routing]({{< relref "/operate/kubernetes/networking/istio-ingress.md" >}})
+
+## `ingressOrRouteSpec` for Active-Active databases
+
+Versions 6.4.2 or later of Redis Enterprise for Kubernetes include a feature for ingress configuration. The `ingressOrRouteSpec` field is available in the RedisEnterpriseCluster spec to automatically create an Ingress (or route) for the API service and databases (REAADB) on that REC. See [REC external routing]({{< relref "/operate/kubernetes/networking/ingressorroutespec.md" >}}) for more details.
+
+This feature only supports automatic Ingress creation for Active-Active databases created and managed with the RedisEnterpriseActiveActiveDatabase (REAADB) custom resource. Use with the standard Redis Enterprise database (REDB) is not currently supported.
+
+## REC domain name
+
+The RedisEnterpriseCluster does not support custom domain names. Domain names for the REC are in the following format: `<rec-name>.<rec-namespace>.svc.cluster.local`.
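+
+For example, from a pod inside the same Kubernetes cluster you could reach the REC API through this service DNS name, a sketch with placeholder name, namespace, and credentials, using the default API port 9443:
+
+```sh
+curl -k -u <username>:<password> https://<rec-name>.<rec-namespace>.svc.cluster.local:9443/v1/cluster
+```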
diff --git a/content/operate/kubernetes/7.4.6/networking/ingress.md b/content/operate/kubernetes/7.4.6/networking/ingress.md
new file mode 100644
index 0000000000..f95eee6a02
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/networking/ingress.md
@@ -0,0 +1,225 @@
+---
+Title: Configure Ingress for external routing
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Configure an ingress controller to access your Redis Enterprise databases
+ from outside the Kubernetes cluster.
+linkTitle: Ingress routing
+weight: 5
+url: '/operate/kubernetes/7.4.6/networking/ingress/'
+---
+
+## Prerequisites
+
+Before creating an Ingress, you'll need:
+
+ - A RedisEnterpriseDatabase (REDB) with TLS enabled for client connections
+ - A supported Ingress controller with `ssl-passthrough` enabled
+ - [Ingress-NGINX Controller](https://kubernetes.github.io/ingress-nginx/deploy/)
+ - Be sure to use the `kubernetes/ingress-nginx` controller and NOT the `nginxinc/kubernetes-ingress` controller.
+ - [HAProxy Ingress](https://haproxy-ingress.github.io/docs/getting-started/)
+ - To use Istio for your Ingress resources, see [Configure Istio for external routing]({{< relref "/operate/kubernetes/networking/istio-ingress.md" >}})
+
+{{< note >}}Make sure your Ingress controller has `ssl-passthrough` enabled. This is enabled by default for HAProxy, but disabled by default for NGINX. See the [NGINX User Guide](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough) for details.{{< /note >}}
+
+## Create an Ingress resource
+
+1. Retrieve the hostname of your Ingress controller's `LoadBalancer` service.
+
+ ``` sh
+ $ kubectl get svc \
+     -n <ingress-controller-namespace>
+ ```
+
+ Below is example output for an HAProxy running on a K8s cluster hosted by AWS.
+
+ ``` sh
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ haproxy-ingress LoadBalancer 10.43.62.53 a56e24df8c6173b79a63d5da54fd9cff-676486416.us-east-1.elb.amazonaws.com 80:30610/TCP,443:31597/TCP 21m
+ ```
+
+1. Choose the hostname you will use to access your database (this value is represented in this article as `<db-hostname>`).
+
+1. Create a DNS entry that resolves your chosen database hostname to the IP address for the Ingress controller's LoadBalancer.
+
+1. Create the Ingress resource YAML file.
+
+ ``` YAML
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+   name: rec-ingress
+   annotations:
+
+ spec:
+   rules:
+   - host: <db-hostname>
+     http:
+       paths:
+       - path: /
+         pathType: ImplementationSpecific
+         backend:
+           service:
+             name: <database-service-name>
+             port:
+               name: redis
+ ```
+
+ For HAProxy, insert the following into the `annotations` section:
+
+ ``` YAML
+ kubernetes.io/ingress.class: haproxy
+ ingress.kubernetes.io/ssl-passthrough: "true"
+ ```
+
+ For NGINX, insert the following into the `annotations` section:
+
+ ``` YAML
+ kubernetes.io/ingress.class: nginx
+ nginx.ingress.kubernetes.io/ssl-passthrough: "true"
+ ```
+
+ The `ssl-passthrough` annotation is required to allow access to the database. The specific format changes depending on your Ingress controller and any additional customizations. See [NGINX Configuration annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) and [HAProxy Ingress Options](https://www.haproxy.com/documentation/kubernetes/latest/configuration/ingress/) for updated annotation formats.
+
+## Test your external access
+
+To test your external access to the database, you need a client that supports [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) and [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication).
+
+#### Test your access with Openssl
+
+1. Get the default CA certificate from the `redis-enterprise-node` container on any of the Redis Enterprise pods.
+
+ ``` sh
+ $ kubectl exec -it <rec-pod-name> -c redis-enterprise-node \
+ -- cat /etc/opt/redislabs/proxy_cert.pem
+ ```
+
+1. Run the following `openssl` command, substituting your own values for `<db-hostname>`.
+
+ ``` sh
+ $ openssl s_client \
+     -connect <db-hostname>:443 \
+     -crlf -CAfile ./proxy_cert.pem \
+     -servername <db-hostname>
+ ```
+
+ If you are connected to the database, you will receive `PONG` back, as shown below:
+
+ ``` sh
+ ...
+ Verify return code: 0 (ok)
+ ---
+
+ PING
+ +PONG
+ ```
+
+#### Test your access with Python
+
+You can use the code below to test your access with Python, substituting your own values for `<db-hostname>` and `<path-to-cert>`.
+
+``` python
+import redis
+
+r = redis.StrictRedis(host='<db-hostname>',
+                      port=443, db=0, ssl=True,
+                      ssl_ca_certs='<path-to-cert>/proxy_cert.pem')
+
+
+print(r.info())
+```
+
+Your output should look something like this:
+
+``` sh
+$ /Users/example-user/Documents/Projects/test_client/venv3.7/bin/python \
+ /Users/example-user/Documents/Projects/test_client/test_ssl.py
+{
+ 'redis_version': '5.0.5',
+ 'redis_git_sha1': 0,
+ 'redis_git_dirty': 0,
+ 'redis_build_id': 0,
+ 'redis_mode': 'standalone',
+ 'os': 'Linux 4.14.154-128.181.amzn2.x86_64 x86_64',
+ 'arch_bits': 64,
+ 'multiplexing_api': 'epoll',
+ 'gcc_version': '7.4.0',
+ 'process_id': 1,
+ 'run_id': '3ce7721b096517057d28791aab555ed8ac02e1de',
+ 'tcp_port': 10811,
+ 'uptime_in_seconds': 316467,
+ 'uptime_in_days': 3,
+ 'hz': 10,
+ 'lru_clock': 0,
+ 'config_file': '',
+ 'connected_clients': 1,
+ 'client_longest_output_list': 0,
+ 'client_biggest_input_buf': 0,
+ 'blocked_clients': 0,
+ 'used_memory': 12680016,
+ 'used_memory_human': '12.9M',
+ 'used_memory_rss': 12680016,
+ 'used_memory_peak': 13452496,
+ 'used_memory_peak_human': '12.82M',
+ 'used_memory_lua': 151552,
+ 'mem_fragmentation_ratio': 1,
+ 'mem_allocator': 'jemalloc-5.1.0',
+ 'loading': 0,
+ 'rdb_changes_since_last_save': 0,
+ 'rdb_bgsave_in_progress': 0,
+ 'rdb_last_save_time': 1577753916,
+ 'rdb_last_bgsave_status': 'ok',
+ 'rdb_last_bgsave_time_sec': 0,
+ 'rdb_current_bgsave_time_sec': -1,
+ 'aof_enabled': 0,
+ 'aof_rewrite_in_progress': 0,
+ 'aof_rewrite_scheduled': 0,
+ 'aof_last_rewrite_time_sec': -1,
+ 'aof_current_rewrite_time_sec': -1,
+ 'aof_last_bgrewrite_status': 'ok',
+ 'aof_last_write_status': 'ok',
+ 'total_connections_received': 4,
+ 'total_commands_processed': 6,
+ 'instantaneous_ops_per_sec': 14,
+ 'total_net_input_bytes': 0,
+ 'total_net_output_bytes': 0,
+ 'instantaneous_input_kbps': 0.0,
+ 'instantaneous_output_kbps': 0.0,
+ 'rejected_connections': 0,
+ 'sync_full': 1,
+ 'sync_partial_ok': 0,
+ 'sync_partial_err': 0,
+ 'expired_keys': 0,
+ 'evicted_keys': 0,
+ 'keyspace_hits': 0,
+ 'keyspace_misses': 0,
+ 'pubsub_channels': 0,
+ 'pubsub_patterns': 0,
+ 'latest_fork_usec': 0,
+ 'migrate_cached_sockets': 0,
+ 'role': 'master',
+ 'connected_slaves': 1,
+ 'slave0': {
+ 'ip': '0.0.0.0',
+ 'port': 0,
+ 'state': 'online',
+ 'offset': 0,
+ 'lag': 0
+ },
+ 'master_repl_offset': 0,
+ 'repl_backlog_active': 0,
+ 'repl_backlog_size': 1048576,
+ 'repl_backlog_first_byte_offset': 0,
+ 'repl_backlog_histlen': 0,
+ 'used_cpu_sys': 0.0,
+ 'used_cpu_user': 0.0,
+ 'used_cpu_sys_children': 0.0,
+ 'used_cpu_user_children': 0.0,
+ 'cluster_enabled': 0
+}
+
+Process finished with exit code 0
+```
diff --git a/content/operate/kubernetes/7.4.6/networking/ingressorroutespec.md b/content/operate/kubernetes/7.4.6/networking/ingressorroutespec.md
new file mode 100644
index 0000000000..739fbe4954
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/networking/ingressorroutespec.md
@@ -0,0 +1,87 @@
+---
+Title: Establish external routing on the RedisEnterpriseCluster
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+linkTitle: REC external routing
+weight: 30
+url: '/operate/kubernetes/7.4.6/networking/ingressorroutespec/'
+---
+An Ingress is an API resource that provides a standardized and flexible way to manage external access to services running within a Kubernetes cluster.
+
+## Install Ingress controller
+
+Redis Enterprise for Kubernetes supports the Ingress controllers below:
+* [HAProxy](https://haproxy-ingress.github.io/)
+* [NGINX](https://kubernetes.github.io/ingress-nginx/)
+* [Istio](https://istio.io/latest/docs/setup/getting-started/)
+
+OpenShift users can use [routes]({{< relref "/operate/kubernetes/networking/routes.md" >}}) instead of an Ingress.
+
+Install your chosen Ingress controller, making sure `ssl-passthrough` is enabled. `ssl-passthrough` is turned off by default for NGINX but enabled by default for HAProxy.
+
+## Configure DNS
+
+1. Choose the hostname (FQDN) you will use to access your database according to the recommended naming conventions below, replacing `<rec-name>`, `<rec-namespace>`, and `<domain>` with your own values. A worked example follows at the end of this section.
+
+ REC API hostname: `api-<rec-name>-<rec-namespace>.<domain>`
+ REAADB hostname: `<db-name>-db-<rec-name>-<rec-namespace>.<domain>`
+
+ We recommend using a wildcard (`*`) in place of the database name, followed by the hostname suffix.
+
+1. Retrieve the `EXTERNAL-IP` of your Ingress controller's `LoadBalancer` service.
+
+ ``` sh
+ $ kubectl get svc \
+     -n <ingress-controller-namespace>
+ ```
+
+ Below is example output for an HAProxy ingress controller running on a K8s cluster hosted by AWS.
+
+ ``` sh
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ haproxy-ingress LoadBalancer 10.43.62.53 a56e24df8c6173b79a63d5da54fd9cff-676486416.us-east-1.elb.amazonaws.com 80:30610/TCP,443:31597/TCP 21m
+ ```
+
+1. Create DNS records to resolve your chosen REC API hostname and database hostname to the `EXTERNAL-IP` found in the previous step.
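+
+As a worked example of the naming conventions above, assume an REC named `rec1` in namespace `ns1` and the domain `example.com` (all hypothetical values); the resulting hostnames, which should both resolve to the Ingress controller's `EXTERNAL-IP`, would look like this:
+
+```sh
+# REC API hostname
+api-rec1-ns1.example.com
+# Hostname for a database (REAADB) named mydb
+mydb-db-rec1-ns1.example.com
+```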
+
+## Edit the REC spec
+
+Edit the RedisEnterpriseCluster (REC) spec to add the `ingressOrRouteSpec` field, replacing the `<bracketed>` values below with your own.
+
+### NGINX or HAProxy Ingress controllers
+
+* Define the REC API hostname (`apiFqdnUrl`) and database hostname suffix (`dbFqdnSuffix`) you chose when configuring DNS.
+* Set `method` to `ingress`.
+* Set `ssl-passthrough` to "true".
+* Add any additional annotations required for your Ingress controller. See the [NGINX docs](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) or [HAProxy docs](https://haproxy-ingress.github.io/docs/configuration/keys/) for more information.
+
+```sh
+kubectl patch rec <rec-name> --type merge --patch "{\"spec\": \
+  {\"ingressOrRouteSpec\": \
+    {\"apiFqdnUrl\": \"api-<rec-name>-<rec-namespace>.example.com\", \
+    \"dbFqdnSuffix\": \"-db-<rec-name>-<rec-namespace>.example.com\", \
+    \"ingressAnnotations\": \
+      {\"kubernetes.io/ingress.class\": \"<nginx or haproxy>\", \
+      \"<annotation prefix for your controller>/ssl-passthrough\": \"true\"}, \
+    \"method\": \"ingress\"}}}"
+```
+
+### OpenShift routes
+
+* Define the REC API hostname (`apiFqdnUrl`) and database hostname suffix (`dbFqdnSuffix`) you chose when configuring DNS.
+* Set `method` to `openShiftRoute`.
+
+```sh
+kubectl patch rec <rec-name> --type merge --patch "{\"spec\": \
+  {\"ingressOrRouteSpec\": \
+    {\"apiFqdnUrl\": \"api-<rec-name>-<rec-namespace>.example.com\", \
+    \"dbFqdnSuffix\": \"-db-<rec-name>-<rec-namespace>.example.com\", \
+    \"method\": \"openShiftRoute\"}}}"
+```
+
+OpenShift routes do not require any `ingressAnnotations` in the `ingressOrRouteSpec`.
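+
+After the patch is applied, you can check that the operator created the expected routing objects in the REC namespace (a quick sanity check; the resource type depends on the method you chose):
+
+```sh
+# method: ingress
+kubectl get ingress -n <rec-namespace>
+# method: openShiftRoute
+kubectl get route -n <rec-namespace>
+```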
diff --git a/content/operate/kubernetes/7.4.6/networking/istio-ingress.md b/content/operate/kubernetes/7.4.6/networking/istio-ingress.md
new file mode 100644
index 0000000000..fbd88d87b7
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/networking/istio-ingress.md
@@ -0,0 +1,158 @@
+---
+Title: Configure Istio for external routing
+alwaysOpen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Configure Istio as an ingress controller for access to your Redis Enterprise
+ databases from outside the Kubernetes cluster.
+linkTitle: Istio ingress routing
+weight: 20
+url: '/operate/kubernetes/7.4.6/networking/istio-ingress/'
+---
+
+Redis Enterprise for Kubernetes has the ability to use an Istio Ingress gateway as an alternative to NGINX or HAProxy Ingress controllers.
+
+Istio can also understand Ingress resources, but using that mechanism takes away the advantages and options that the native Istio resources provide. Istio offers its own configuration methods using custom resources.
+
+To configure Istio to work with the Redis Kubernetes operator, we will use two custom resources: a `Gateway` and a `VirtualService`. Then you'll be able to establish external access to your database.
+
+## Install and configure Istio for Redis Enterprise
+
+1. Download and install Istio, following the instructions in Istio's [Getting Started](https://istio.io/latest/docs/setup/getting-started/) guide.
+
+ Once the installation is complete, all the deployments, pods, and services will be deployed in a namespace called `istio-system`. This namespace contains a `LoadBalancer` type service called `service/istio-ingressgateway` that exposes the external IP address.
+
+1. Find the `EXTERNAL-IP` for the `istio-ingressgateway` service.
+
+ ```sh
+ kubectl get svc istio-ingressgateway -n istio-system
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ istio-ingressgateway LoadBalancer 10.34.67.89 10.145.78.91 15021:12345/TCP,80:67891/TCP,443:23456/TCP,31400:78901/TCP,15443:10112/TCP 3h8m
+ ```
+
+1. Create a DNS entry that resolves your chosen database hostname (or a wildcard `*` followed by your domain) to the Istio `EXTERNAL-IP`. Use this hostname to access your database from outside the cluster.
+
+ In this example, any hostname that ends with `.istio.k8s.my.example.com` will resolve to the Istio LoadBalancer's external IP of `10.145.78.91`. Substitute your own values accordingly.
+
+1. Verify the record was created successfully.
+
+ ```sh
+ dig api.istio.k8s.my.example.com
+ ```
+
+ Look in the `ANSWER SECTION` for the record you just created.
+
+ ```sh
+ ;; ANSWER SECTION:
+ api.istio.k8s.my.example.com 0 IN A 10.145.78.91
+ ```
+
+## Create custom resources
+
+### `Gateway` custom resource
+
+1. In a namespace other than `istio-system`, create a `Gateway` custom resource file (`redis-gateway.yaml` in this example).
+
+ - Replace `.istio.k8s.my.example.com` with the domain that matches your DNS record.
+ - The `istio: ingress` selector must match the label set on your Istio ingress gateway pod; adjust it if your gateway pod uses a different label.
+ - TLS passthrough mode is required to allow secure access to the database.
+
+ ```yaml
+ apiVersion: networking.istio.io/v1beta1
+ kind: Gateway
+ metadata:
+   name: redis-gateway
+ spec:
+   selector:
+     istio: ingress
+   servers:
+   - hosts:
+     - '*.istio.k8s.my.example.com'
+     port:
+       name: https
+       number: 443
+       protocol: HTTPS
+     tls:
+       mode: PASSTHROUGH
+ ```
+
+
+
+1. Apply the `Gateway` custom resource file to create the Ingress gateway.
+
+ ```sh
+ kubectl apply -f redis-gateway.yaml
+ ```
+
+1. Verify the gateway was created successfully.
+
+ ```sh
+ kubectl get gateway
+
+ NAME AGE
+ redis-gateway 3h33m
+ ```
+
+### `VirtualService` custom resource
+
+1. In a namespace other than `istio-system`, create the `VirtualService` custom resource file (`redis-vs.yaml` in this example).
+
+ ```yaml
+ apiVersion: networking.istio.io/v1beta1
+ kind: VirtualService
+ metadata:
+   name: redis-vs
+ spec:
+   gateways:
+   - redis-gateway
+   hosts:
+   - "*.istio.k8s.my.example.com"
+   tls:
+   - match:
+     - port: 443
+       sniHosts:
+       - api.istio.k8s.my.example.com
+     route:
+     - destination:
+         host: rec1
+         port:
+           number: 9443
+   - match:
+     - port: 443
+       sniHosts:
+       - db1.istio.k8s.my.example.com
+     route:
+     - destination:
+         host: db1
+ ```
+
+ This creates both a route to contact the API server on the REC (`rec1`) and a route to contact one of the databases (`db1`).
+
+ - Replace `.istio.k8s.my.example.com` with the domain that matches your DNS record.
+ - The name listed under `gateways` must match the `metadata.name` of the `Gateway` resource you created (`redis-gateway` in this example).
+
+1. Apply the `VirtualService` custom resource file to create the virtual service.
+
+ ```sh
+ kubectl apply -f redis-vs.yaml
+ ```
+
+1. Verify the virtual service was created successfully.
+
+ ```sh
+ kubectl get vs
+
+ NAME GATEWAYS HOSTS AGE
+ redis-vs ["redis-gateway"] ["*.istio.k8s.my.example.com"] 3h33m
+ ```
+
+1. [Deploy the operator]({{< relref "/operate/kubernetes/deployment/quick-start.md" >}}), Redis Enterprise Cluster (REC), and Redis Enterprise Database (REDB) on the same namespace as the gateway and virtual service.
+
+## Test your external access to the database
+
+To [test your external access]({{< relref "/operate/kubernetes/networking/ingress.md" >}}) to the database, you need a client that supports [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) and [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication).
+
+See [Test your access with Openssl]({{< relref "/operate/kubernetes/networking/ingress#test-your-access-with-openssl" >}}) or [Test your access with Python]({{< relref "/operate/kubernetes/networking/ingress#test-your-access-with-python" >}}) for more info.
diff --git a/content/operate/kubernetes/7.4.6/networking/routes.md b/content/operate/kubernetes/7.4.6/networking/routes.md
new file mode 100644
index 0000000000..6830be4628
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/networking/routes.md
@@ -0,0 +1,59 @@
+---
+Title: Use OpenShift routes for external database access
+alwaysOpen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: null
+linkTitle: OpenShift routes
+weight: 15
+url: '/operate/kubernetes/7.4.6/networking/routes/'
+---
+
+OpenShift routes allow requests to be routed to the database or cluster API from outside the cluster. For more information about routes, see [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/networking/routes/route-configuration.html).
+
+## Prerequisites
+
+* Before you can connect to your database from outside the cluster, you'll need the root CA certificate of the DMC Proxy server to validate the server certificate.
+
+ By default, the DMC Proxy uses a self-signed certificate. You can retrieve it from the Redis Enterprise admin console and save it as a file (for example, named "ca.pem") on the client machine.
+
+* Your database also needs TLS encryption enabled.
+
+## Create OpenShift route
+
+1. Select the **Networking/Routes** section of the OpenShift web console.
+
+1. Select **Create route** and fill out the following fields:
+
+ * **Name**: Choose any name you want as the first part of your generated hostname
+ * **Hostname**: Leave blank
+ * **Path**: Leave as is ("/")
+ * **Service**: Select the service for the database you want to access
+ * **TLS Termination**: Choose "passthrough"
+ * **Insecure Traffic**: Select "None"
+
+1. Select **Create**.
+
+1. Find the hostname for your new route. After route creation, it appears in the "Host" field.
+
+1. Verify you have a DNS entry to resolve the hostname for your new route to the cluster's load balancer.
+
+## Access database
+
+Access the database from outside the cluster using `redis-cli` or `openssl`.
+
+To connect with `redis-cli`:
+
+ ```sh
+ redis-cli -h <route-hostname> -p 443 --tls --cacert ./ca.pem --sni <route-hostname>
+ ```
+
+Replace the `<route-hostname>` value with the hostname for your new route.
+
+To connect with `openssl`:
+
+ ```sh
+ openssl s_client -connect <route-hostname>:443 -crlf -CAfile ./ca.pem -servername <route-hostname>
+ ```
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/_index.md b/content/operate/kubernetes/7.4.6/re-clusters/_index.md
new file mode 100644
index 0000000000..880fa58317
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/_index.md
@@ -0,0 +1,18 @@
+---
+Title: Redis Enterprise clusters (REC)
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Articles to help you manage your Redis Enterprise clusters (REC).
+hideListLinks: false
+linkTitle: Redis Enterprise clusters (REC)
+weight: 30
+url: '/operate/kubernetes/7.4.6/re-clusters/'
+---
+
+This section contains articles to help you manage your Redis Enterprise clusters (REC).
+
+
+
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/auto-tiering.md b/content/operate/kubernetes/7.4.6/re-clusters/auto-tiering.md
new file mode 100644
index 0000000000..e2adbef309
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/auto-tiering.md
@@ -0,0 +1,88 @@
+---
+Title: Use Auto Tiering on Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Deploy a cluster with Auto Tiering on Kubernetes.
+linkTitle: Auto Tiering
+weight: 16
+url: '/operate/kubernetes/7.4.6/re-clusters/auto-tiering/'
+---
+
+Redis Enterprise Software for Kubernetes supports Auto Tiering (previously known as Redis on Flash), which extends your node memory to use both RAM and flash storage. SSDs (solid state drives) can store infrequently used (warm) values while your keys and frequently used (hot) values are still stored in RAM. This improves performance and lowers costs for large datasets.
+
+{{< note >}}
+NVMe (non-volatile memory express) SSDs are strongly recommended to achieve the best performance.
+{{< /note >}}
+
+## Prerequisites
+
+Before creating your Redis clusters or databases, these SSDs must be:
+
+- [locally attached to worker nodes in your Kubernetes cluster](https://kubernetes.io/docs/concepts/storage/volumes/#local)
+- formatted and mounted on the nodes that will run Redis Enterprise pods
+- dedicated to Auto Tiering and not shared with other parts of the database (for example, durability or binaries)
+- [provisioned as local persistent volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local)
+ - You can use a [local volume provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/README.md) to do this [dynamically](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic)
+- a [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#local) resource with a unique name
+
+For more information on node storage, see [Node persistent and ephemeral storage]({{< relref "/operate/rs/installing-upgrading/install/plan-deployment/persistent-ephemeral-storage" >}}).
+
+## Create a Redis Enterprise cluster
+
+To deploy a Redis Enterprise cluster (REC) with Auto Tiering, you'll need to specify the following in the `redisOnFlashSpec` section of your [REC custom resource]({{< relref "/operate/kubernetes/reference/redis_enterprise_cluster_api" >}}):
+
+- enable Auto Tiering (`enabled: true`)
+- flash storage driver (`bigStoreDriver`)
+ - `rocksdb` or `speedb` (default)
+- storage class name (`storageClassName`)
+- minimal flash disk size (`flashDiskSize`)
+
+{{< warning >}}Clusters upgraded to version 7.2.4-2 from an earlier version will change the `bigStoreDriver` (previously called `flashStorageEngine`) to the new default `speedb`, regardless of previous configuration.{{< /warning >}}
+
+{{< warning >}}Switching between storage engines (`speedb` and `rocksdb`) requires guidance by Redis Support or your Account Manager.{{< /warning >}}
+
+Here is an example of an REC custom resource with these attributes:
+
+```YAML
+apiVersion: app.redislabs.com/v1
+kind: RedisEnterpriseCluster
+metadata:
+  name: "rec"
+spec:
+  nodes: 3
+  redisOnFlashSpec:
+    enabled: true
+    bigStoreDriver: speedb
+    storageClassName: local-scsi
+    flashDiskSize: 100G
+```
+
+### Create a Redis Enterprise database
+
+By default, any new database will use RAM only. To create a Redis Enterprise database (REDB) that can use flash storage, specify the following in the REDB custom resource spec:
+
+- `isRof: true` enables Auto Tiering
+- `rofRamSize` defines the RAM capacity for the database
+
+Below is an example REDB custom resource:
+
+```YAML
+apiVersion: app.redislabs.com/v1alpha1
+kind: RedisEnterpriseDatabase
+metadata:
+  name: autotiering-redb
+spec:
+  redisEnterpriseCluster:
+    name: rec
+  isRof: true
+  memorySize: 2GB
+  rofRamSize: 0.5GB
+```
+
+{{< note >}}
+This example defines both `memorySize` and `rofRamSize`. When using Auto Tiering, `memorySize` refers to the total combined memory size (RAM + flash) allocated for the database. `rofRamSize` specifies only the RAM capacity for the database. `rofRamSize` must be at least 10% of `memorySize`.
+{{< /note >}}
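+
+To create the database, you could apply the resource and then check it, a sketch assuming the file above is saved as `autotiering-redb.yaml` (`redb` is the short name for the RedisEnterpriseDatabase resource):
+
+```sh
+kubectl apply -f autotiering-redb.yaml
+kubectl get redb autotiering-redb
+```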
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/cluster-recovery.md b/content/operate/kubernetes/7.4.6/re-clusters/cluster-recovery.md
new file mode 100644
index 0000000000..974d8227db
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/cluster-recovery.md
@@ -0,0 +1,49 @@
+---
+Title: Recover a Redis Enterprise cluster on Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This task describes how to recover a Redis Enterprise cluster on Kubernetes.
+linkTitle: Recover a Redis cluster
+weight: 20
+url: '/operate/kubernetes/7.4.6/re-clusters/cluster-recovery/'
+---
+When a Redis Enterprise cluster loses contact with more than half of its nodes either because of failed nodes or network split,
+the cluster stops responding to client connections.
+When this happens, you must recover the cluster to restore the connections.
+
+You can also perform cluster recovery to reset cluster nodes, to troubleshoot issues, or in a case of active/passive failover.
+
+The Redis Enterprise for Kubernetes automates these recovery steps:
+
+1. Recreates a fresh Redis Enterprise cluster
+1. Mounts the persistent storage with the recovery files from the original cluster to the nodes of the new cluster
+1. Recovers the cluster configuration on the first node in the new cluster
+1. Joins the remaining nodes to the new cluster.
+
+{{< warning >}}Redis Enterprise for Kubernetes 7.2.4-2 introduces a new limitation. You cannot recover or upgrade your cluster if there are databases with old module versions or manually uploaded modules. See the [Redis Enterprise Software 7.2.4 known limitations]({{< relref "/operate/rs/release-notes/rs-7-2-4-releases/rs-7-2-4-52#cluster-recovery-with-manually-uploaded-modules" >}}) for more details.{{< /warning >}}
+
+## Prerequisites
+
+- For cluster recovery, the cluster must be [deployed with persistence]({{< relref "/operate/kubernetes/recommendations/persistent-volumes.md" >}}).
+
+## Recover a cluster
+
+1. Edit the REC resource to set the `clusterRecovery` flag to `true`.
+
+ ```sh
+ kubectl patch rec <rec-name> --type merge --patch '{"spec":{"clusterRecovery":true}}'
+ ```
+
+
+1. Wait for the cluster to recover until it is in the "Running" state.
+
+ To see the state of the cluster, run:
+
+ ```sh
+ watch "kubectl describe rec | grep State"
+ ```
+
+1. To recover the database, see [Recover a failed database]({{< relref "/operate/rs/databases/recover.md" >}}).
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/connect-prometheus-operator.md b/content/operate/kubernetes/7.4.6/re-clusters/connect-prometheus-operator.md
new file mode 100644
index 0000000000..16f28048aa
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/connect-prometheus-operator.md
@@ -0,0 +1,72 @@
+---
+Title: Connect the Prometheus operator to Redis Enterprise for Kubernetes
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: This article describes how to configure a Prometheus operator custom
+ resource to allow it to export metrics from Redis Enterprise for Kubernetes.
+linkTitle: Export metrics to Prometheus
+weight: 92
+url: '/operate/kubernetes/7.4.6/re-clusters/connect-prometheus-operator/'
+---
+
+To collect metrics data from your databases and Redis Enterprise cluster (REC), you can connect your [Prometheus](https://prometheus.io/) server to an endpoint exposed on your REC. Redis Enterprise for Kubernetes creates a dedicated service to expose the `prometheus` port (8070) for data collection. A custom resource called `ServiceMonitor` allows the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator/tree/main/Documentation) to connect to this port and collect data from Redis Enterprise.
+
+## Prerequisites
+
+Before connecting Redis Enterprise to Prometheus on your Kubernetes cluster, make sure you've done the following:
+
+- [Deploy Redis Enterprise for Kubernetes]({{< relref "/operate/kubernetes/deployment/quick-start.md" >}}) (version 6.2.10-4 or newer)
+- [Deploy the Prometheus operator](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md) (version 0.19.0 or newer)
+- [Create a Redis Enterprise cluster]({{< relref "/operate/kubernetes/deployment/quick-start#create-a-redis-enterprise-cluster-rec" >}})
+
+## Create a `ServiceMonitor` custom resource
+
+Below is an example `ServiceMonitor` custom resource file. By specifying the service label (`redis.io/service: prom-metrics`) in the `selector.matchLabels` section, you can point the Prometheus operator to the correct Redis Enterprise service (`<rec-name>-prom`).
+
+You'll need to configure the following fields to connect Prometheus to Redis Enterprise:
+
+| Section | Field | Value |
+|---|---|---|
+| `spec.endpoints` | `port` | Name of exposed port (`prometheus`) |
+| `spec.namespaceSelector` | `matchNames` | Namespace for your REC |
+| `spec.selector` | `matchLabels` | REC service label (`redis.io/service: prom-metrics`) |
+
+Apply the file in the same namespace as your Redis Enterprise cluster (REC).
+
+{{< note >}}If Redis Enterprise and Prometheus are deployed in different namespaces, you'll also need to add the [`serviceMonitorNamespaceSelector`](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#namespaceselector) field to your Prometheus resource. See the [Prometheus operator documentation](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md#related-resources) for more details on cross-namespace `ServiceMonitor` configuration.{{< /note >}}
+
+
+```YAML
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: redis-enterprise
+spec:
+  endpoints:
+  - interval: 15s
+    port: prometheus
+    scheme: https
+    tlsConfig:
+      insecureSkipVerify: true
+  namespaceSelector:
+    matchNames:
+    - <rec-namespace>
+  selector:
+    matchLabels:
+      redis.io/service: prom-metrics
+```
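+
+For example, assuming you saved the resource above as `service-monitor.yaml` (a hypothetical file name), you could apply it in the REC namespace like this:
+
+```sh
+kubectl apply -f service-monitor.yaml -n <rec-namespace>
+```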
+
+For more info about configuring the `ServiceMonitor` resource, see the [`ServiceMonitorSpec` API documentation](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec).
+
+## More info
+
+- github.com/prometheus-operator
+ - [Getting started](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md)
+ - [Running exporters](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/running-exporters.md)
+ - [Related resources](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md#related-resources)
+ - [Troubleshooting ServiceMonitor changes](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md)
+- redis.io/docs
+ - [Metrics in Prometheus]({{< relref "/integrate/prometheus-with-redis-enterprise/prometheus-metrics-definitions" >}})
+ - [Monitoring and metrics]({{< relref "/operate/rs/clusters/monitoring/" >}})
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/connect-to-admin-console.md b/content/operate/kubernetes/7.4.6/re-clusters/connect-to-admin-console.md
new file mode 100644
index 0000000000..e11a29fbca
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/connect-to-admin-console.md
@@ -0,0 +1,59 @@
+---
+Title: Connect to the admin console
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Connect to the Redis Enterprise admin console to manage your Redis Enterprise
+ cluster.
+linkTitle: Connect to the admin console
+weight: 10
+url: '/operate/kubernetes/7.4.6/re-clusters/connect-to-admin-console/'
+---
+
+The username and password for the Redis Enterprise Software [admin console]({{< relref "/operate/rs/" >}}) are stored in a Kubernetes [secret](https://kubernetes.io/docs/concepts/configuration/secret/). After retrieving your credentials, you can use port forwarding to connect to the admin console.
+
+{{< note >}}
+There are several methods for accessing the admin console. Port forwarding is the simplest, but not the most efficient method for long-term use. You could also use a load balancer service or Ingress.
+{{< /note >}}
+
+1. Switch to the namespace with your Redis Enterprise cluster (REC).
+
+ ```sh
+ kubectl config set-context --current --namespace=<rec-namespace>
+ ```
+
+1. Find your cluster name from your list of secrets.
+
+ ```sh
+ kubectl get secret
+ ```
+
+ In this example, the cluster name is `rec`.
+
+1. Extract and decode your credentials from the secret.
+
+ ```sh
+ kubectl get secret <rec-name> -o jsonpath='{.data.username}' | base64 --decode
+ kubectl get secret <rec-name> -o jsonpath='{.data.password}' | base64 --decode
+ ```
+
+1. Find the port for the REC UI service in the `spec:ports` section of the service definition file.
+
+ ```sh
+ kubectl get service/<rec-name>-ui -o yaml
+ ```
+
+ {{< note >}}
+ The default port is 8443.
+ {{< /note >}}
+
+1. Use `kubectl port-forward` to forward your local port to the service port.
+
+ ```sh
+ kubectl port-forward service/<rec-name>-ui <local-port>:<service-port>
+ ```
+
+1. View the admin console from a web browser on your local machine at `https://localhost:8443`.
+
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/expand-pvc.md b/content/operate/kubernetes/7.4.6/re-clusters/expand-pvc.md
new file mode 100644
index 0000000000..3eff33289b
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/expand-pvc.md
@@ -0,0 +1,104 @@
+---
+Title: Expand PersistentVolumeClaim (PVC)
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Expand your persistent volume claim by editing the REC.
+linkTitle: Expand PVC
+weight: 82
+url: '/operate/kubernetes/7.4.6/re-clusters/expand-pvc/'
+---
+
+This article outlines steps to increase the size of the persistent volume claim for your Redis Enterprise cluster (REC).
+
+[PersistentVolumeClaims (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) are created by the Redis Enterprise operator and used by the RedisEnterpriseCluster (REC). PVCs are created with a specific size and [can be expanded](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) with the following steps, if the underlying [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) supports it.
+
+This process involves deleting and recreating the REC StatefulSet with a larger persistent volume size. The pods owned by the StatefulSet are not restarted or otherwise affected by this process, apart from briefly being left without an owner while the StatefulSet is recreated.
+
+{{< warning >}}Shrinking (reducing the size) of your PVC is not allowed. This process only allows you to expand (size up) your PVC.{{< /warning >}}
+
+## Prerequisites
+
+{{< warning >}}Do not change any other REC fields related to the StatefulSet while resizing is in progress.{{< /warning >}}
+
+- PVC expansion must be supported and enabled by the StorageClass and underlying storage driver of the REC PVCs.
+ - The relevant StorageClass is the one associated with the REC PVCs. The StorageClass for existing PVCs cannot be changed.
+- The StorageClass must be configured with `allowVolumeExpansion: true`.
+- Your storage driver must support online expansion.
+- We highly recommend you back up your databases before beginning this PVC expansion process.
+
+## Expand REC PVC
+
+1. Enable the REC persistent volume resize flag.
+
+ ```YAML
+ spec:
+   persistentSpec:
+     enablePersistentVolumeResize: true
+ ```
+
+1. Set the value of `volumeSize` to your desired size.
+
+ ```YAML
+ spec:
+   persistentSpec:
+     enablePersistentVolumeResize: true
+     volumeSize: <new-size>Gi
+ ```
+
+1. Apply the changes to the REC, replacing `<rec-file>.yaml` with the name of your REC resource file.
+
+ ```sh
+ kubectl apply -f <rec-file>.yaml
+ ```
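+
+ Alternatively, if you manage the REC directly on the cluster rather than from a file, a merge patch along these lines can set both fields in one step (a sketch; the REC name and size are placeholders):
+
+ ```sh
+ kubectl patch rec <rec-name> --type merge --patch \
+   '{"spec":{"persistentSpec":{"enablePersistentVolumeResize":true,"volumeSize":"<new-size>Gi"}}}'
+ ```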
+
+After applying the REC changes, the PVCs will begin to expand to the new size.
+
+Once all the PVCs finish the resizing process, the operator will delete and recreate the StatefulSet with the new volume size.
+
+### Track progress
+
+You can track the progress by monitoring the status of the REC and PersistentVolumeClaim objects.
+
+The REC status will correspond to the status of one or more PVCs, and will reflect if the resizing is successful or failed.
+
+While the resizing is in progress, the status will be:
+
+```yaml
+status:
+  persistenceStatus:
+    status: Resizing
+    succeeded: 2/3
+```
+
+When the resizing is complete, the status becomes Provisioned and the new volume size is available for use by the REC pods.
+
+```yaml
+status:
+  persistenceStatus:
+    status: Provisioned
+    succeeded: 3/3
+```
+
+### Troubleshooting
+
+If an error occurs during this process:
+
+- Examine the status and events of the REC and PVC objects.
+
+ ```sh
+ kubectl describe pvc
+ ```
+
+ ```sh
+ kubectl get events
+ ```
+
+- Examine the logs of the operator pods.
+
+ ```sh
+ kubectl logs <operator-pod-name>
+ ```
diff --git a/content/operate/kubernetes/7.4.6/re-clusters/multi-namespace.md b/content/operate/kubernetes/7.4.6/re-clusters/multi-namespace.md
new file mode 100644
index 0000000000..9e5b55a5d0
--- /dev/null
+++ b/content/operate/kubernetes/7.4.6/re-clusters/multi-namespace.md
@@ -0,0 +1,191 @@
+---
+Title: Manage databases in multiple namespaces
+alwaysopen: false
+categories:
+- docs
+- operate
+- kubernetes
+description: Redis Enterprise for Kubernetes allows you to deploy to multiple namespaces
+ within your Kubernetes cluster. This article shows you how to configure your Redis
+ Enterprise cluster to connect to databases in multiple namespaces
+linktitle: Manage multiple namespaces
+weight: 17
+url: '/operate/kubernetes/7.4.6/re-clusters/multi-namespace/'
+---
+
+Multiple Redis Enterprise database resources (REDBs) can be associated with a single Redis Enterprise cluster resource (REC) even if they reside in different namespaces.
+
+To learn more about designing a multi-namespace Redis Enterprise cluster, see [flexible deployment options]({{< relref "/operate/kubernetes/deployment/deployment-options.md" >}}).
+
+{{< note >}}Multi-namespace installations don't support Active-Active databases (REAADB). Only databases created with the REDB resource are supported in multi-namespace deployments at this time.{{< /note >}}
+
+## Prerequisites
+
+Before configuring a multi-namespace deployment, you must have a running [Redis Enterprise cluster (REC)]({{< relref "/operate/kubernetes/deployment/quick-start.md" >}}). See more information in the [deployment]({{< relref "/operate/kubernetes/deployment/" >}}) section.
+
+## Create role and role binding for managed namespaces
+
+Both the operator and the RedisEnterpriseCluster (REC) resource need access to each namespace the REC will manage. For each **managed** namespace, create a `role.yaml` and `role_binding.yaml` file within the managed namespace, as shown in the examples below.
+
+{{< note >}}These roles and role bindings need to be reapplied each time you [upgrade]({{< relref "/operate/kubernetes/upgrade/upgrade-redis-cluster.md" >}}).{{< /note >}}
+
+Replace `<rec-namespace>` with the namespace the REC resides in.
+Replace `<service-account-name>` with your own value (defaults to the REC name).
+
+`role.yaml` example:
+
+```yaml
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: redb-role
+  labels:
+    app: redis-enterprise
+rules:
+  - apiGroups:
+      - app.redislabs.com
+    resources: ["redisenterpriseclusters", "redisenterpriseclusters/status", "redisenterpriseclusters/finalizers",
+      "redisenterprisedatabases", "redisenterprisedatabases/status", "redisenterprisedatabases/finalizers",
+      "redisenterpriseremoteclusters", "redisenterpriseremoteclusters/status",
+      "redisenterpriseremoteclusters/finalizers",
+      "redisenterpriseactiveactivedatabases", "redisenterpriseactiveactivedatabases/status",
+      "redisenterpriseactiveactivedatabases/finalizers"]
+    verbs: ["delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["update", "get", "read", "list", "listallnamespaces", "watch", "watchlist",
+      "watchlistallnamespaces", "create", "patch", "replace", "delete", "deletecollection"]
+  - apiGroups: [""]
+    resources: ["endpoints"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["events"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "watch", "list", "update", "patch", "create", "delete"]
+```
+
+`role_binding.yaml` example:
+
+```yaml
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: redb-role
+  labels:
+    app: redis-enterprise
+subjects:
+- kind: ServiceAccount
+  name: redis-enterprise-operator
+  namespace: <rec-namespace>
+- kind: ServiceAccount
+  name: <service-account-name>
+  namespace: <rec-namespace>
+roleRef:
+  kind: Role
+  name: redb-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+Apply the files, replacing `<managed-namespace>` with your own values:
+
+```sh
+kubectl apply -f role.yaml -n <managed-namespace>
+kubectl apply -f role_binding.yaml -n <managed-namespace>
+```
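+
+To confirm the permissions were created in each managed namespace, you can query the new objects by the `app: redis-enterprise` label they carry. This is an optional check; `<managed-namespace>` is a placeholder.
+
+```sh
+# List the role and role binding created above
+kubectl get role,rolebinding -l app=redis-enterprise -n <managed-namespace>
+```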
+
+{{< warning >}}
+If the REC is configured to watch a namespace without setting the role and role binding permissions, or a namespace that does not yet exist, the operator will fail and halt normal operations.
+{{< /warning >}}
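+
+If you are unsure whether the permissions are in place, you can impersonate the operator's service account and check its access before updating the ConfigMap. This is a suggested sanity check, not part of the official procedure, and it requires permission to impersonate service accounts; `<rec-namespace>` and `<managed-namespace>` are placeholders.
+
+```sh
+# Returns "yes" if the operator's service account can read REDBs in the managed namespace
+kubectl auth can-i get redisenterprisedatabases.app.redislabs.com \
+  --as=system:serviceaccount:<rec-namespace>:redis-enterprise-operator \
+  -n <managed-namespace>
+```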
+
+
+## Update Redis Enterprise operator ConfigMap
+
+There are two methods of updating the operator ConfigMap (`operator-environment-config`) to specify which namespaces to manage.
+
+- Method 1: Configure the operator to watch for a namespace label and add this label to managed namespaces (available in versions 6.4.2-4 or later).
+- Method 2: Configure the operator with an explicit list of namespaces to manage.
+
+You can create this ConfigMap manually before deployment, or it will be created automatically after the operator is deployed.
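+
+To see what the operator is currently configured to manage, you can print the existing ConfigMap. `<rec-namespace>` is a placeholder for the namespace the REC and operator run in.
+
+```sh
+# Show the operator environment ConfigMap and any namespace settings it contains
+kubectl get configmap operator-environment-config -n <rec-namespace> -o yaml
+```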
+
+
+### Method 1: Namespace label (available in versions 6.4.2-4 or later)
+
+1. Create the `operator_cluster_role.yaml` and `operator_cluster_role_binding.yaml` files. Replace `<rec-namespace>` with the namespace the Redis Enterprise cluster (REC) resides in.
+
+    `operator_cluster_role.yaml` example:
+
+    ```yaml
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRole
+    metadata:
+      name: redis-enterprise-operator-consumer-ns
+      labels:
+        app: redis-enterprise
+    rules:
+      - apiGroups: [""]
+        resources: ["namespaces"]
+        verbs: ["list", "watch"]
+    ```
+
+    `operator_cluster_role_binding.yaml` example:
+
+    ```yaml
+    kind: ClusterRoleBinding
+    apiVersion: rbac.authorization.k8s.io/v1
+    metadata:
+      name: redis-enterprise-operator-consumer-ns
+      labels:
+        app: redis-enterprise
+    subjects:
+    - kind: ServiceAccount
+      name: redis-enterprise-operator
+      namespace: <rec-namespace>
+    roleRef:
+      kind: ClusterRole
+      name: redis-enterprise-operator-consumer-ns
+      apiGroup: rbac.authorization.k8s.io
+    ```
+
+2. Apply the files.
+
+    ```sh
+    kubectl apply -f operator_cluster_role.yaml
+    kubectl apply -f operator_cluster_role_binding.yaml
+    ```
+
+3. Patch the ConfigMap in the REC namespace (`<rec-namespace>`) to identify the managed namespaces with your label (`<label-name>`).
+
+    ```sh
+    kubectl patch ConfigMap/operator-environment-config \
+      -n <rec-namespace> \
+      --type merge \
+      -p '{"data": {"REDB_NAMESPACES_LABEL": "<label-name>"}}'
+    ```
+
+4. For each managed namespace, apply the same label. Replace `<managed-namespace>` with the namespace the REC will manage. Replace `<label-name>` with the value used in the previous step. If you specify a value for `<label-value>`, both the label name and value in managed namespaces must match to be detected by the operator. If `<label-value>` is empty, only the label name needs to match on managed namespaces and the value is disregarded.
+
+
+    ```sh
+    kubectl label namespace <managed-namespace> <label-name>=<label-value>
+    ```
+
+{{< note >}}
+The operator restarts when it detects that a namespace label was added or removed.
+{{< /note >}}
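+
+To verify which namespaces carry the label and will be picked up by the operator, you can filter on the label name. This is an optional check; `<label-name>` is the same placeholder used above.
+
+```sh
+# List namespaces that have the managed-namespace label set
+kubectl get namespaces -l <label-name> --show-labels
+```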
+
+### Method 2: Explicit namespace list
+
+Patch the `operator-environment-config` ConfigMap in the REC namespace with a new environment variable (`REDB_NAMESPACES`).
+
+```sh
+kubectl patch ConfigMap/operator-environment-config \
+-n <rec-namespace> \
+--type merge \
+-p '{"data":{"REDB_NAMESPACES": "