articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
24 additions & 15 deletions
@@ -62,7 +62,7 @@ To remove a cache instance from an active geo-replication group, you just delete
In case one of the caches in your replication group is unavailable due to a region outage, you can forcefully remove the unavailable cache from the replication group. After you apply **Force-unlink** to a cache, any data written to that cache can't be synced back to the replication group.
- You should remove the unavailable cache because the remaining caches in the replication group start storing the metadata that hasn’t been shared to the unavailable cache. When this happens, the available caches in your replication group might run out of memory.
+ You should remove the unavailable cache because the remaining caches in the replication group start storing the metadata that wasn't shared to the unavailable cache. When this happens, the available caches in your replication group might run out of memory.
1. Go to the Azure portal and select one of the caches in the replication group that is still available.
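Beyond the portal steps, the Enterprise tiers also expose force-unlink through the Azure CLI. A minimal sketch, assuming the `az redisenterprise` commands are available in your CLI version; the resource group, cluster names, and subscription ID are placeholders:

```azurecli
# Force-unlink the unreachable geo-replica (Redis02) from the group, issued against
# a cache that is still available (Redis01). Names and IDs below are placeholders.
az redisenterprise database force-unlink \
    --resource-group MyResourceGroup \
    --cluster-name Redis01 \
    --unlink-ids "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Cache/redisEnterprise/Redis02/databases/default"
```

This maps to the **Force-unlink** action described above; verify the exact parameter names against your installed CLI before relying on it.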
@@ -154,7 +154,7 @@ Let's say you want to scale up each instance in this geo-replication group to an
At this point, the `Redis01` and `Redis02` instances can only scale up to an Enterprise E20 instance. All other scaling operations are blocked.
> [!NOTE]
- > The `Redis00` instance is not blocked from scaling further at this point. But it will be blocked once either `Redis01` or `Redis02` is scaled to be an Enterprise E20.
+ > The `Redis00` instance isn't blocked from scaling further at this point. But it's blocked once either `Redis01` or `Redis02` is scaled to be an Enterprise E20.
>
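As a hedged illustration of the scale-up operation itself (not part of the original article), the Enterprise SKU can also be changed from the CLI. This sketch assumes your version of `az redisenterprise update` accepts a `--sku` parameter; the cluster and resource group names are placeholders:

```azurecli
# Scale Redis01 up to Enterprise E20 so it matches the rest of the geo-replication group.
# --sku availability depends on the CLI/extension version; names are placeholders.
az redisenterprise update \
    --resource-group MyResourceGroup \
    --cluster-name Redis01 \
    --sku Enterprise_E20
```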
Once each instance is scaled to the same tier and size, all scaling locks are removed:
@@ -171,29 +171,38 @@ Due to the potential for inadvertent data loss, you can't use the `FLUSHALL` and
## Geo-replication Metric
- The _Geo Replication Healthy_ metric in Azure Cache for Redis Enterprise/Azure Managed Redis helps monitor the health of geo-replicated clusters. You can use this metric to monitor the sync status among geo-replicas.
+ The _Geo Replication Healthy_ metric in the Enterprise tier of Azure Cache for Redis helps monitor the health of geo-replicated clusters. You use this metric to monitor the sync status among geo-replicas.
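The same signal can also be read outside the portal. A minimal CLI sketch, assuming the metric ID is `geoReplicationHealthy` (confirm the exact name on your cache's **Metrics** blade) and using placeholder resource identifiers:

```azurecli
# Pull recent values of the geo-replication health metric for one cache in the group.
# The resource ID and metric name are assumptions; adjust them for your environment.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Cache/redisEnterprise/Redis01" \
    --metric geoReplicationHealthy \
    --aggregation Maximum \
    --interval PT5M \
    --output table
```

Per the guidance below, a value of 0 is the unhealthy reading to watch for.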
To monitor the _Geo Replication Healthy_ metric in the Azure portal:
- 1. Navigate to Your Redis Resource: Open the Azure portal and select your Azure Cache for Redis instance.
- 1. Go to Metrics: In the left-hand menu, click Metrics under the Monitoring section.
- 1. Add Metric: Click on + Add Metric and select the "Geo Replication Healthy" metric.
- 1. Set Filters: If needed, apply filters for specific geo-replicas.
- 1. Create Alerts (optional): Configure an alert to notify you if the "Geo Replication Healthy" metric emits an unhealthy value (0) continuously for over 60 minutes.
- 1. Click New Alert Rule.
- 1. Define the condition to trigger if the metric value is 0 for at least 60 minutes. (recommended time)
- 1. Add action groups for notifications (email, SMS, etc.).
+ 1. Open the Azure portal and select your Azure Cache for Redis instance.
+ 1. On the Resource menu, select **Metrics** under the **Monitoring** section.
+ 1. Select **Add Metric**, and then select the **Geo Replication Healthy** metric.
+ 1. If needed, apply filters for specific geo-replicas.
+ 1. You can configure an alert to notify you if the **Geo Replication Healthy** metric emits an unhealthy value (0) continuously for over 60 minutes. (A hedged CLI sketch of an equivalent alert rule follows this list.)
+ 1. Select **New Alert Rule**.
+ 1. Define the condition to trigger if the metric value is 0 for at least 60 minutes, the recommended time.
+ 1. Add action groups for notifications (for example, email or SMS).
1. Save the alert.
- 1. For more information on how to setup alerts for you Redis Enterprise/AMR cache follow this documentation - Monitor Azure Cache for Redis - Azure Cache for Redis | Microsoft Learn
+ 1. For more information on how to set up alerts for your Redis Enterprise cache, see [Monitor Azure Cache for Redis](monitor-cache.md).
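The alert described in the portal steps above can also be scripted. A hedged sketch with `az monitor metrics alert create`, again assuming the metric ID is `geoReplicationHealthy` and using placeholder resource names; the one-hour window mirrors the 60-minute recommendation in this article:

```azurecli
# Alert when the geo-replication health metric stays at 0 (unhealthy) for a full hour.
# Metric name, resource IDs, and the action group are placeholders/assumptions.
az monitor metrics alert create \
    --name geo-replication-unhealthy \
    --resource-group MyResourceGroup \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Cache/redisEnterprise/Redis01" \
    --condition "max geoReplicationHealthy < 1" \
    --window-size 1h \
    --evaluation-frequency 15m \
    --action "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/microsoft.insights/actionGroups/MyActionGroup" \
    --description "Geo Replication Healthy reported 0 for 60 minutes"
```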
> [!IMPORTANT]
- > This metric may temporarily show as unhealthy due to routine operations like maintenance events or scaling, initiated either by Azure or the customer. To avoid false alarms, we strongly recommend setting up an observation window of 60 minutes where the metric continues to stay unhealthy as the appropriate time for generating an alert as it may indicate a problem that requires intervention.
+ > This metric might temporarily show as unhealthy due to routine operations like maintenance events or scaling, initiated either by Azure or the customer. To avoid false alarms, we recommend alerting only after the metric stays unhealthy for a 60-minute observation window, because a sustained unhealthy value might indicate a problem that requires intervention.
## Common client-side issues that can cause sync problems among geo-replicas
- - Use of custom Hash tags – Using custom hashtags in Redis can lead to uneven distribution of data across shards, which may cause performance issues and synchronization problems in geo-replicas therefore avoid using custom hashtags unless the database needs to perform multiple key operations.
+ - Use of custom hash tags – Using custom hash tags in Redis can lead to uneven distribution of data across shards, which might cause performance issues and synchronization problems in geo-replicas. Avoid using custom hash tags unless the database needs to perform multi-key operations (see the sketch after this list).
- - Large Key Size - Large keys can create synchronization issues among geo-replicas. To maintain smooth performance and reliable replication, we recommend keeping key sizes under 500MB when using geo-replication. If individual key size gets close to 2GB the cache will face geo=replication health issues.
+ - Large key size – Large keys can create synchronization issues among geo-replicas. To maintain smooth performance and reliable replication, we recommend keeping key sizes under 500 MB when using geo-replication. If an individual key approaches 2 GB, the cache faces geo-replication health issues.
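To make the hash-tag point concrete, here's a small sketch (not from the original article) using `redis-cli`; the hostname, port, and key names are placeholders, and authentication is omitted. Keys that share a hash tag such as `{user:1000}` hash to the same slot, so all of that tag's data lands on one shard:

```bash
# Both keys share the {user:1000} hash tag, so they map to the same hash slot/shard.
# Placeholder host and keys; authentication (-a <access-key>) omitted for brevity.
redis-cli -h mycache.region.redisenterprise.cache.azure.net -p 10000 --tls \
    SET "{user:1000}:profile" "alice"
redis-cli -h mycache.region.redisenterprise.cache.azure.net -p 10000 --tls \
    SET "{user:1000}:cart" "item42"

# Without a custom tag, each key hashes independently and spreads across shards.
redis-cli -h mycache.region.redisenterprise.cache.azure.net -p 10000 --tls \
    SET "user:1000:profile" "alice"
```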
0 commit comments