articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
18 additions & 18 deletions
@@ -12,16 +12,16 @@ ms.date: 02/21/2025
> [!IMPORTANT]
>
- > The data persistence functionality provides resilience for unexpected Redis node failures. Data persistence is not a data backup or point in time recovery (PITR) feature. If corrupted data is written to the Redis instance, th corrupted data is also persisted. To make backups of your Redis instance, use the [export feature](cache--how-to-import-export-data.md).
+ > The data persistence functionality provides resilience for unexpected Redis node failures. Data persistence isn't a data backup or point in time recovery (PITR) feature. If corrupted data is written to the Redis instance, the corrupted data is also persisted. To make backups of your Redis instance, use the [export feature](cache--how-to-import-export-data.md).
>
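To illustrate the export option mentioned in this note, here's a minimal, hedged Azure CLI sketch for taking a point-in-time copy of a Premium cache; the cache name, resource group, blob prefix, and SAS URL are hypothetical placeholders.

```azurecli
# Hypothetical names and SAS URL; exports a snapshot of the cache to a blob container you control.
az redis export \
  --name my-premium-cache \
  --resource-group MyResourceGroup \
  --prefix my-cache-backup \
  --container "<SAS-URL-of-destination-blob-container>"
```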
> [!WARNING]
>
- > If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
+ > If you're using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
>

>[!WARNING]
- > The _always write_ option for AOF persistence on the Enterprise and Enterprise Flash tiers is set to be retired on April 1, 2025. This option has significant performance limitations is no longer recommended. Using the _write every second_ option or using RDB persistence is recommended instead.
+ > The _always write_ option for AOF persistence on the Enterprise and Enterprise Flash tiers is set to retire on April 1, 2025. This option has significant performance limitations and is no longer recommended. Use the _write every second_ option or RDB persistence instead.
>
## Scope of availability
@@ -53,11 +53,11 @@ Persistence features are intended to be used to restore data to the same cache a
## Differences between persistence in the Premium and Enterprise tiers

- On the **Premium** tier, data is persisted directly to an [Azure Storage](../storage/common/storage-introduction.md) account that you own and manage. Azure Storage automatically encrypts data when it's persisted, but you can also use your own keys for the encryption. For more information, see [Customer-managed keys for Azure Storage encryption](../storage/common/customer-managed-keys-overview.md).
+ On the **Premium** tier, data is persisted directly to an [Azure Storage](../storage/common/storage-introduction.md) account that you own and manage. Azure Storage automatically encrypts data when persisting it, but you can also use your own keys for the encryption. For more information, see [Customer-managed keys for Azure Storage encryption](../storage/common/customer-managed-keys-overview.md).

> [!WARNING]
>
- > If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
+ > If you're using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
>
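Before enabling persistence on a Premium cache, a quick way to confirm whether blob soft delete is on for the target storage account is a minimal Azure CLI check like the following sketch; the storage account and resource group names are hypothetical.

```azurecli
# Inspect the blob soft delete (delete retention) policy of the storage account.
az storage account blob-service-properties show \
  --account-name mystorageaccount \
  --resource-group MyResourceGroup \
  --query "deleteRetentionPolicy"
# If "enabled" is true in the output, soft delete is on and persisted blobs accrue extra storage cost.
```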
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a managed disk attached directly to the cache instance. The location isn't configurable nor accessible to the user. Using a managed disk increases the performance of persistence. The disk is encrypted using Microsoft managed keys (MMK) by default, but customer managed keys (CMK) can also be used. For more information, see [managing data encryption](#managing-data-encryption).
@@ -70,7 +70,7 @@ On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
:::image type="content" source="media/cache-how-to-premium-persistence/create-resource.png" alt-text="Screenshot that shows a form to create an Azure Cache for Redis resource.":::

- 2. On the **Create a resource** page, select **Databases** and then select **Azure Cache for Redis**.
+ 2. On the **Create a resource** page, select **Databases**, and then select **Azure Cache for Redis**.

:::image type="content" source="media/cache-how-to-premium-persistence/select-cache.png" alt-text="Screenshot showing Azure Cache for Redis selected as a new database type.":::
@@ -105,7 +105,7 @@ On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
The first backup starts once the backup frequency interval elapses.

> [!NOTE]
- > When RDB files are backed up to storage, they are stored in the form of page blobs. If you're using a storage account with HNS enabled, persistence will tend to fail because page blobs aren't supported in storage accounts with HNS enabled (ADLS Gen2).
+ > When RDB files are backed up to storage, they're stored in the form of page blobs. If you're using a storage account with HNS enabled, persistence tends to fail because page blobs aren't supported in storage accounts with HNS enabled (ADLS Gen2).

9. To enable AOF persistence, select **AOF** and configure the settings.
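As a rough equivalent of the portal steps above, the following hedged Azure CLI sketch creates a Premium cache with RDB persistence enabled; the resource names are hypothetical, and the JSON keys follow the `rdb-*`/`aof-*` Redis configuration settings.

```azurecli
# Hypothetical names; rdb-backup-frequency is in minutes (15, 30, 60, 360, 720, or 1440).
cat > persistence-config.json <<'EOF'
{
  "rdb-backup-enabled": "true",
  "rdb-backup-frequency": "60",
  "rdb-storage-connection-string": "<primary-storage-connection-string>"
}
EOF

az redis create \
  --name my-premium-cache \
  --resource-group MyResourceGroup \
  --location westus2 \
  --sku Premium \
  --vm-size p1 \
  --redis-configuration @persistence-config.json
# For AOF instead, use the "aof-backup-enabled" and "aof-storage-connection-string-0" (optionally "-1") keys.
```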
@@ -153,9 +153,9 @@ It takes a while for the cache to create. You can monitor progress on the Azure
1. Finish creating the cache by following the rest of the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md).

>[!WARNING]
- > The _always write_ option for AOF persistence is set to be retired on April 1, 2025. This option has significant performance limitations is no longer recommended. Using the _write every second_ option or using RDB persistence is recommended instead.
+ > The _always write_ option for AOF persistence is set to retire on April 1, 2025. This option has significant performance limitations and is no longer recommended. Use the _write every second_ option or RDB persistence instead.
>
-
+
> [!NOTE]
> You can add persistence to a previously created Enterprise tier cache at any time by navigating to the **Advanced settings** in the Resource menu.
>
@@ -281,15 +281,15 @@ For more information on performance when using AOF persistence, see [Does AOF pe
AOF persistence does affect throughput. AOF runs on both the primary and replica process, therefore you see higher CPU and Server Load for a cache with AOF persistence than an identical cache without AOF persistence. AOF offers the best consistency with the data in memory because each write and delete is persisted with only a few seconds of delay. The trade-off is that AOF is more compute intensive.

- As long as CPU and Server Load are both less than 90%, there's a penalty on throughput, but the cache operates normally, otherwise. Above 90% CPU and Server Load, the throughput penalty can get much higher, and the latency of all commands processed by the cache increases. Latency increases because AOF persistence runs on both the primary and replica process, increasing the load on the node in use, and putting persistence on the critical path of data.
+ As long as CPU and Server Load are both less than 90%, there's a penalty on throughput, but the cache operates normally otherwise. Above 90% CPU and Server Load, the throughput penalty can get higher, and the latency of all commands processed by the cache increases. Latency increases because AOF persistence runs on both the primary and replica process, increasing the load on the node in use, and putting persistence on the critical path of data.
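To watch for the 90% threshold described above, a minimal monitoring sketch with the Azure CLI might look like the following; the resource names are hypothetical, and `serverLoad` is assumed to be the Server Load metric emitted by the cache.

```azurecli
# Fetch the cache resource ID, then pull the maximum Server Load over 5-minute intervals.
CACHE_ID=$(az redis show --name my-premium-cache --resource-group MyResourceGroup --query id --output tsv)
az monitor metrics list \
  --resource "$CACHE_ID" \
  --metric serverLoad \
  --aggregation Maximum \
  --interval PT5M
```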
### What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?

For both RDB and AOF persistence:

- - If you've scaled to a larger size, there's no effect.
- - If you've scaled to a smaller size, and you have a custom [databases](cache-configure.md#databases) setting that is greater than the [databases limit](cache-configure.md#databases) for your new size, data in those databases isn't restored. For more information, see [Is my custom databases setting affected during scaling?](cache-how-to-scale.md#is-my-custom-databases-setting-affected-during-scaling)
- - If you've scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+ - If you scaled to a larger size, there's no effect.
+ - If you scaled to a smaller size, and you have a custom [databases](cache-configure.md#databases) setting that is greater than the [databases limit](cache-configure.md#databases) for your new size, data in those databases isn't restored. For more information, see [Is my custom databases setting affected during scaling?](cache-how-to-scale.md#is-my-custom-databases-setting-affected-during-scaling)
+ - If you scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Can I use the same storage account for persistence across two different caches?
@@ -315,15 +315,15 @@ Yes, you can change the backup frequency for RDB persistence using the Azure por
### Why is there more than 60 minutes between backups when I have an RDB backup frequency of 60 minutes?

- The RDB persistence backup frequency interval doesn't start until the previous backup process has completed successfully. If the backup frequency is 60 minutes and it takes a backup process 15 minutes to complete, the next backup won't start until 75 minutes after the start time of the previous backup.
+ The RDB persistence backup frequency interval doesn't start until the previous backup process completes successfully. If the backup frequency is 60 minutes and it takes a backup process 15 minutes to complete, the next backup won't start until 75 minutes after the start time of the previous backup.

### What happens to the old RDB backups when a new backup is made?

All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If you're using the Premium tier for persistence, and soft delete is turned on for your storage account, the soft delete setting applies, and existing backups continue to reside in the soft delete state.

### When should I use a second storage account?

- Use a second storage account for AOF persistence when you think you've higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches.
+ Use a second storage account for AOF persistence when you think you have higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches.
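For reference, adding a second storage account to an existing Premium cache can be sketched with the Azure CLI as shown below; this assumes the `aof-storage-connection-string-1` setting can be changed in place like other `redisConfiguration` keys, and all names are hypothetical.

```azurecli
# Hypothetical names; points the second half of the AOF page blobs at a secondary storage account.
az redis update \
  --name my-premium-cache \
  --resource-group MyResourceGroup \
  --set "redisConfiguration.aof-storage-connection-string-1"="<secondary-storage-connection-string>"
```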
### How can I remove the second storage account?
@@ -335,15 +335,15 @@ When the AOF file becomes large enough, a rewrite is automatically queued on the
### What should I expect when scaling a cache with AOF enabled?

- If the AOF file at the time of scaling is large, then expect the scale operation to take longer than expected because it reloads the file after scaling has finished.
+ If the AOF file at the time of scaling is large, expect the scale operation to take longer because the cache reloads the file after scaling finishes.

For more information on scaling, see [What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?](#what-happens-if-ive-scaled-to-a-different-size-and-a-backup-is-restored-that-was-made-before-the-scaling-operation)

### How is my AOF data organized in storage?

When you use the Premium tier, data stored in AOF files is divided into multiple page blobs per shard. By default, half of the blobs are saved in the primary storage account and half are saved in the secondary storage account. Splitting the data across multiple page blobs and two different storage accounts increases the performance.

- If the peak rate of writes to the cache isn't very high, then this extra performance might not be needed. In that case, the secondary storage account configuration can be removed. All of the AOF files are instead stored in just the single primary storage account. The following table displays how many total page blobs are used for each pricing tier:
+ If the peak rate of writes to the cache isn't high, then this extra performance might not be needed. In that case, the secondary storage account configuration can be removed. All of the AOF files are instead stored in just the single primary storage account. The following table displays how many total page blobs are used for each pricing tier:

| Premium tier | Blobs |
|:------------:|---------------:|
@@ -352,7 +352,7 @@ If the peak rate of writes to the cache isn't very high, then this extra perform
| P3 | 32 per shard |
| P4 | 40 per shard |

- When clustering is enabled, each shard in the cache has its own set of page blobs, as indicated in the previous table. For example, a P2 cache with three shards distributes its AOF file across 48 page blobs: sixteen blobs per shard, with three shards.
+ When clustering is enabled, each shard in the cache has its own set of page blobs, as indicated in the previous table. For example, a P2 cache with three shards distributes its AOF file across 48 page blobs: Sixteen blobs per shard, with three shards.
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state.