# OpenShift Regional DR

## Context

As more and more institutions and mission-critical organizations move to the
cloud, the potential impact of a provider failure, even one affecting only a
single region, is very high.

This pattern is designed to demonstrate the resiliency capabilities of Red Hat
OpenShift in such a scenario.

The Regional Disaster Recovery Pattern sets up multiple OpenShift Container
Platform clusters connected to one another, proving multi-region resiliency by
keeping the application running in the event of a regional failure.

In this scenario we will be working with a Regional Disaster Recovery setup,
and the synchronization parameters can be specified in the values file.

NOTE: please consider using longer synchronization intervals if you have a
large dataset or very long distances between the clusters.

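As a purely illustrative sketch of what such a knob could look like, the
replication interval might be exposed through the pattern's values file along
these lines; the key names below are assumptions made for this example, not the
pattern's actual schema:

```yaml
# Hypothetical values-file excerpt -- key names are illustrative only.
# ODF Regional DR replicates the protected volumes asynchronously on a
# schedule, so a longer interval is safer for large datasets or for
# clusters that are far apart.
regionalDR:
  schedulingInterval: "5m"   # e.g. raise to "30m" for a large dataset or distant regions
```
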
## Background

The _Regional DR Validated Pattern for [Red Hat OpenShift][ocp]_ increases the resiliency
of your applications by connecting multiple clusters across different regions. This pattern
uses [Red Hat Advanced Cluster Management][acm] to offer a
[Red Hat OpenShift Data Foundation][odf]-based multi-region disaster recovery plan if an
entire region fails.

[Red Hat OpenShift Data Foundation][odf] offers two solutions for disaster
recovery: [Metro DR][mdr] and [Regional DR][rdr]. As their names suggest, _Metro
DR_ refers to metropolitan area disasters, which occur when the disaster

The _Regional DR Pattern_ leverages [Red Hat OpenShift Data Foundation][odf]'s
[Regional DR][rdr] solution, automating application failover between
[Red Hat Advanced Cluster Management][acm] managed clusters in different regions.

- The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process
- The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF
- We have developed a DR trigger that is used to start the DR process
- The end user needs to configure which PVs need synchronization and the synchronization intervals (see the sketch after this list)
- ACS can be used to enforce any additional policies
- The clusters are connected by Submariner and, for a faster recovery time, we suggest keeping
  hibernated clusters ready to be used

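As a sketch of how the volumes to synchronize are typically selected with ODF
Regional DR, a `DRPlacementControl` resource matches the PVCs to protect by
label; the resource names, namespace and labels below are hypothetical, and the
pattern may well create an equivalent object for you:

```yaml
# Illustrative only -- names, namespace and labels are hypothetical.
# The pvcSelector picks which PVCs (for example the MongoDB data volume)
# are replicated to the peer region under the referenced DRPolicy.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: mongo-drpc
  namespace: mongo-app
spec:
  drPolicyRef:
    name: regional-dr-policy      # DRPolicy carrying the scheduling interval
  placementRef:
    kind: PlacementRule
    name: mongo-placement
  pvcSelector:
    matchLabels:
      appname: mongo              # label carried by the PVCs to protect
  preferredCluster: region-one    # cluster where the app normally runs
```

During a failover the same resource is updated with a `failoverCluster` and an
`action: Failover`, which is the kind of change the DR trigger mentioned above
can automate.
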
### Red Hat Technologies
- [Red Hat OpenShift Container Platform][ocp]
- [Red Hat OpenShift Data Foundation][odf]

## Installation
This pattern is designed to be installed in an OpenShift cluster which will
work as the orchestrator for the other clusters involved. The Advanced Cluster
Management instance installed on it will neither run the applications nor store
any data from them, but it will take care of the plumbing of the various
clusters involved, coordinating their communication and orchestrating when and
where an application is going to be deployed.

As part of the pattern configuration, the administrator needs to define both
clusters' installation details as would be done using the OpenShift installer
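
Purely as an illustration of the idea, the two regional clusters could be
declared on the hub roughly as follows; the structure and key names are
assumptions made for this sketch, so refer to the pattern's own values files
for the real schema:

```yaml
# Hypothetical sketch of declaring the two regional clusters on the hub --
# group names, platform keys and regions are illustrative only.
managedClusterGroups:
  region-one:
    name: region-one
    platform:
      aws:
        region: eu-west-1        # primary region running the workload
  region-two:
    name: region-two
    platform:
      aws:
        region: us-east-2        # passive region kept ready for failover
```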