articles/redis/architecture.md: 10 additions & 10 deletions
@@ -11,13 +11,13 @@ appliesto:
- ✅ Azure Managed Redis
---
-# Azure Managed Redis Architecture
+# Azure Managed Redis architecture
Azure Managed Redis runs on the [Redis Enterprise](https://redis.io/technology/advantages/) stack, which offers significant advantages over the community edition of Redis. The following information provides greater detail about how Azure Managed Redis is architected, including information that can be useful to power users.
## Comparison with Azure Cache for Redis
-The Basic, Standard, and Premium tiers of Azure Cache for Redis were built on the community edition of Redis. This version of Redis has several significant limitations, including being single-threaded by design. This reduces performance significantly and makes scaling less efficient as more vCPUs aren't fully utilized by the service. A typical Azure Cache for Redis instance uses an architecture like this:
+The Basic, Standard, and Premium tiers of Azure Cache for Redis were built on the community edition of Redis, which has several significant limitations, including being single-threaded. Being single-threaded reduces performance significantly and makes scaling less efficient, because additional vCPUs aren't fully utilized by the service. A typical Azure Cache for Redis instance uses an architecture like this:
:::image type="content" source="media/architecture/cache-architecture.png" alt-text="Diagram showing the architecture of the Azure Cache for Redis offering.":::
@@ -39,7 +39,7 @@ This architecture enables both higher performance and also advanced features lik
## Clustering
-Each Azure Managed Redis instance is internally configured to use clustering, across all tiers and SKUs, because it is based on Redis Enterprise, which is able to use multiple shards per node. That includes smaller instances that are only set up to use a single shard. Clustering is a way to divide the data in the Redis instance across the multiple Redis processes, also called "sharding." Azure Managed Redis offers three [cluster policies](#cluster-policies) that determine which protocol is available to Redis clients for connecting to the cache instance.
+Each Azure Managed Redis instance is internally configured to use clustering, across all tiers and SKUs. Azure Managed Redis is based on Redis Enterprise, which can use multiple shards per node. That includes smaller instances that are set up to use only a single shard. Clustering divides the data in the Redis instance across multiple Redis processes, an approach also called "sharding." Azure Managed Redis offers three [cluster policies](#cluster-policies) that determine which protocol is available to Redis clients for connecting to the cache instance.
### Cluster policies
@@ -51,22 +51,22 @@ OSS clustering policy can't be used with the [RediSearch module](redis-modules.m
The OSS clustering protocol requires the client to make the correct shard connections. The initial connection is through port 10000. Connections to individual nodes use ports in the 85xx range. The 85xx ports can change over time and shouldn't be hardcoded into your application. Redis clients that support clustering use the [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) command to determine the exact ports used for the primary and replica shards and make the shard connections for you.
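As an illustrative sketch, a cluster-aware client handles this discovery automatically. The following Python `redis-py` example is a minimal sketch; the host name and access key are placeholders, and port 10000 is assumed to be the initial connection port described above:

```python
from redis.cluster import RedisCluster

# Placeholder host name and access key. The cluster-aware client connects on
# port 10000 and then discovers the per-shard (85xx) ports for you.
client = RedisCluster(
    host="my-cache.region.redis.azure.net",
    port=10000,
    password="<access-key>",
    ssl=True,
    decode_responses=True,
)

client.set("greeting", "hello")
print(client.get("greeting"))
```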
-The **Enterprise clustering policy** is a simpler configuration that utilizes a single endpoint for all client connections. Using the Enterprise clustering policy routes all requests to a single Redis node that is then used as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that it makes Azure Managed Redis look non-clustered to users. That means that Redis client libraries don’t need to support Redis Clustering to gain some of the performance advantages of Redis Enterprise, boosting backwards compatibility and making the connection simpler. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput.
+The **Enterprise clustering policy** is a simpler configuration that uses a single endpoint for all client connections. The Enterprise clustering policy routes all requests to a single Redis node that acts as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that it makes Azure Managed Redis look non-clustered to users. That means Redis client libraries don’t need to support Redis Clustering to gain some of the performance advantages of Redis Enterprise. Using a single endpoint boosts backwards compatibility and makes connecting simpler. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput.
The Enterprise clustering policy is the only one that can be used with the [RediSearch module](redis-modules.md). While the Enterprise cluster policy makes an Azure Managed Redis instance appear to be non-clustered to users, it still has some limitations with [Multi-key commands](#multi-key-commands).
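For comparison, here's a minimal sketch of connecting under the Enterprise cluster policy with a standard, non-cluster-aware `redis-py` client; the endpoint, port, and access key are placeholder assumptions:

```python
import redis

# Placeholder endpoint and key. Under the Enterprise cluster policy the single
# endpoint proxies requests to the correct shard, so no cluster support is
# needed in the client library.
client = redis.Redis(
    host="my-cache.region.redis.azure.net",
    port=10000,
    password="<access-key>",
    ssl=True,
    decode_responses=True,
)
client.ping()
```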
The **Non-Clustered (preview)** clustering policy stores data on each node without sharding. It applies only to caches sized 25 GB and smaller. Scenarios for using the Non-Clustered (preview) cluster policy include:
--If you are migrating from a Redis environment that’s non-sharded. For exampleBasic, Standard, and Premium non-sharded topologies of Azure Cache for Redis.
--If you need to run cross slot commands extensively and dividing data into shards would cause failures. For example, the MULTI commands.
--If you use Redis as message broker and does not need sharding.
+- When migrating from a Redis environment that’s non-sharded. For example, the non-sharded topologies of the Basic, Standard, and Premium SKUs of Azure Cache for Redis.
+- When running cross-slot commands extensively, where dividing data into shards would cause failures. For example, `MULTI` transactions that operate on multiple keys (see the sketch after this list).
+- When using Redis as a message broker that doesn't need sharding.
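To make the cross-slot point concrete, here's a hypothetical sketch of a `MULTI`/`EXEC` transaction that touches two unrelated keys. It runs as-is when data isn't sharded, but under a sharded policy the keys must hash to the same slot (for example, by sharing a hash tag). Connection values are placeholders:

```python
import redis

# Placeholder connection values.
client = redis.Redis(host="my-cache.region.redis.azure.net", port=10000,
                     password="<access-key>", ssl=True)

# MULTI/EXEC via a transactional pipeline. With the Non-Clustered policy the
# two keys can live anywhere; under OSS clustering they would need to hash to
# the same slot, for example by sharing a hash tag such as {order:42}.
pipe = client.pipeline(transaction=True)
pipe.set("order:42:status", "paid")
pipe.incr("orders:paid:count")
pipe.execute()
```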
The considerations for using the Non-Clustered (preview) policy are:
- It only applies to Azure Managed Redis tiers less than or equal to 25 GB.
- It’s not as performant as the other cluster policies, because Redis Enterprise software can only use multiple vCPUs when the data is sharded.
- If you want to scale up your Azure Managed Redis cache, you must first change the cluster policy.
-- If you are moving from Basic, Standard, or Premium non-clustered topology, consider to using OSS clusters to accelerate performance. Non-clustered should only be considered if the application program can't work with either OSS or Enterprise topologies.
+- If you're moving from a Basic, Standard, or Premium non-clustered topology, consider using the OSS cluster policy to accelerate performance. Consider Non-Clustered only if the application can't work with either the OSS or Enterprise topology.
### Scaling out or adding nodes
@@ -85,7 +85,7 @@ In Active-Active databases, multi-key write commands (`DEL`, `MSET`, `UNLINK`) c
Each SKU of Azure Managed Redis is configured to run a specific number of Redis server processes, _shards_, in parallel. The relationship between throughput performance, the number of shards, and the number of vCPUs available on each instance is complicated. Adding shards generally increases performance because Redis operations can run in parallel. However, if shards aren't able to run commands because no vCPUs are available to execute them, performance can actually drop. The following table shows the sharding configuration for each Azure Managed Redis SKU. These shards are mapped to optimize the usage of each vCPU while reserving vCPU cycles for the Redis Enterprise proxy, management agent, and OS system tasks, which also affect performance.
>[!NOTE]
-> The number of shards and vCPUs used on each SKU can change over time as performance is optimized by the Azure Managed Redis team.
+> Azure Managed Redis optimizes performance over time by changing the number of shards and vCPUs used on each SKU.
@@ -123,7 +123,7 @@ On each Azure Managed Redis Instance, approximately 20% of the available memory
## Scaling down
-Scaling down is not currently supported on Azure Managed redis. For more information, see [Limitations of scaling Azure Managed Redis](how-to-scale.md#limitations-of-scaling-azure-managed-redis).
+Scaling down isn't currently supported on Azure Managed Redis. For more information, see [Limitations of scaling Azure Managed Redis](how-to-scale.md#limitations-of-scaling-azure-managed-redis).
Configure the attributes in the first column with the values from your cache in the Microsoft Azure portal. Also, configure the other values you want. For instructions on accessing your cache properties, see [Configure Azure Managed Redis settings](configure.md#configure-azure-managed-redis-settings).
articles/redis/best-practices-connection.md: 4 additions & 2 deletions
@@ -13,6 +13,8 @@ appliesto:
# Connection resilience with Azure Managed Redis
+In this article, we discuss how to make resilient connections to your cache.
+
## Retry commands
Configure your client connections to retry commands with exponential backoff. For more information, see [retry guidelines](/azure/architecture/best-practices/retry-service-specific#azure-cache-for-redis).
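As a sketch of this guidance in Python with the `redis-py` client (the article doesn't prescribe a specific library), retries with exponential backoff can be configured on the client itself. The endpoint, access key, and retry settings shown are placeholder assumptions, and a recent `redis-py` release is assumed:

```python
import redis
from redis.backoff import ExponentialBackoff
from redis.retry import Retry
from redis.exceptions import ConnectionError, TimeoutError

# Placeholder endpoint and access key. Failed commands are retried up to
# 3 times, with exponentially growing waits capped at 10 seconds.
client = redis.Redis(
    host="my-cache.region.redis.azure.net",
    port=10000,
    password="<access-key>",
    ssl=True,
    retry=Retry(ExponentialBackoff(cap=10, base=1), 3),
    retry_on_error=[ConnectionError, TimeoutError],
)
```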
@@ -29,7 +31,7 @@ We recommend these TCP settings:
|---------|---------|
|`net.ipv4.tcp_retries2`| 5 |
-For more information about the scenario, see [Connection does not re-establish for 15 minutes when running on Linux](https://github.com/StackExchange/StackExchange.Redis/issues/1848#issuecomment-913064646). While this discussion is about the _StackExchange.Redis_ library, other client libraries running on Linux are affected as well. The explanation is still useful and you can generalize to other libraries.
+For more information about the scenario, see [Connection doesn't re-establish for 15 minutes when running on Linux](https://github.com/StackExchange/StackExchange.Redis/issues/1848#issuecomment-913064646). While this discussion is about the _StackExchange.Redis_ library, other client libraries running on Linux are affected as well. The explanation is still useful, and you can generalize it to other libraries.
## Using ForceReconnect with StackExchange.Redis
@@ -66,7 +68,7 @@ Avoid creating many connections at the same time when reconnecting after a conne
If you're reconnecting many client instances, consider staggering the new connections so they aren't throttled.
> [!NOTE]
-> When you use the _StackExchange.Redis_ client library, set `abortConnect` to `false` in your connection string. We recommend letting the `ConnectionMultiplexer` handle reconnection. For more information, see [_StackExchange.Redis_ best practices](management-faq.yml#stackexchangeredis-best-practices).
+> When you use the _StackExchange.Redis_ client library, set `abortConnect` to `false` in your connection string. We recommend letting the `ConnectionMultiplexer` handle reconnection. For more information, see [_StackExchange.Redis_ best practices](management-faq.yml#stackexchangeredis-best-practices).
articles/redis/best-practices-development.md: 8 additions & 6 deletions
@@ -13,6 +13,8 @@ appliesto:
# Development with Azure Managed Redis
+In this article, we discuss how to develop code for Azure Managed Redis.
+
## Connection resilience and server load
When developing client applications, be sure to consider the relevant best practices for [connection resilience](best-practices-connection.md) and [managing server load](best-practices-server-load.md).
@@ -59,18 +61,18 @@ Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command,
## Choose an appropriate tier
-Azure Managed Redis offers Memory Optimized, Balanced, Compute Optimized and Flash Optimized tiers. See more information on how to choose a tier[here](how-to-scale.md#performance-tiers).
+Azure Managed Redis offers Memory Optimized, Balanced, Compute Optimized, and Flash Optimized tiers. For more information on how to choose a tier, see [How to scale](how-to-scale.md#performance-tiers).
We recommend performance testing to choose the right tier and validate connection settings. For more information, see [Performance testing](best-practices-performance.md).
## Choose an appropriate availability mode
-Azure Managed Redis offers the option to enable or disable high availability configuration. When high availability mode is disabled, the data your AMR instance will not be replicated and hence your Redis instance will be unavailable during maintenance. All data in the AMR instance will also be lost in the event of planned or unplanned maintenance. We recommend disabling the high availability only for your development or test workloads. Performance of Redis instances with high availability could also be lower due to the lack of data replication which is crucial distribute load between primary and replica data shard.
+Azure Managed Redis offers the option to enable or disable the high availability configuration. When high availability mode is disabled, the data in your AMR instance isn't replicated, and your Redis instance is unavailable during maintenance. All data in the AMR instance is also lost during planned or unplanned maintenance. We recommend disabling high availability only for development or test workloads. Performance of Redis instances without high availability can also be lower, due to the lack of data replication, which helps distribute load between the primary and replica data shards.
## Client in same region as Redis instance
Locate your Redis instance and your application in the same region. Connecting to a Redis instance in a different region can significantly increase latency and reduce reliability.
-While you can connect from outside of Azure, it isn't recommended, especially when using Redis for accelerating your application or database performance.. If you're using Redis server as just a key/value store, latency might not be the primary concern.
+While you can connect from outside of Azure, it isn't recommended, especially when using Redis to accelerate your application or database performance. If you're using the Redis server as just a key/value store, latency might not be the primary concern.
## Rely on hostname not public IP address
@@ -80,15 +82,15 @@ The public IP address assigned to your AMR instance can change as a result of a
Azure Managed Redis requires TLS-encrypted communications by default. TLS versions 1.2 and 1.3 are currently supported. If your client library or tool doesn't support TLS, you can enable unencrypted connections.
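For example, here's a minimal TLS connection sketch using the `rediss://` scheme with the Python `redis-py` client; the endpoint and access key are placeholders:

```python
import redis

# Placeholder endpoint and access key. The rediss:// scheme (note the extra
# "s") tells redis-py to wrap the connection in TLS.
client = redis.from_url(
    "rediss://:<access-key>@my-cache.region.redis.azure.net:10000"
)
client.ping()
```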
-## Monitor memory usage, CPU usage metrics, client connections and network bandwidth
+## Monitor memory usage, CPU usage metrics, client connections, and network bandwidth

-When using Azure Managed Redis instance in production, we recommend setting alerts for "Used Memory Percentage", "CPU" metrics, "Connected Clients". If these metrics are consistently above 75%, consider scaling your instance to a bigger memory or better throughput tier. See [when to scale](how-to-scale.md#when-to-scale) for more details.
+When using an Azure Managed Redis instance in production, we recommend setting alerts on the **Used Memory Percentage**, **CPU**, and **Connected Clients** metrics. If these metrics are consistently above 75%, consider scaling your instance to a tier with more memory or better throughput. For more details, see [when to scale](how-to-scale.md#when-to-scale).
## Consider enabling Data Persistence or Data Backup
Redis is designed for ephemeral data by default, which means that in rare cases, your data can be lost due to various circumstances like maintenance or outages. If your application is sensitive to data loss, we recommend enabling data persistence or periodic data backup using the data export operation.
-The [data persistence](how-to-persistence.md) feature is designed to automatically provide a quick recovery point for data when a cache goes down. The quick recovery is made possible by storing the RDB or AOF file in a managed disk that is mounted to the cache instance. Persistence files on the disk aren't accessible to users or cannot be used by any other AMR instance.
+The [data persistence](how-to-persistence.md) feature is designed to automatically provide a quick recovery point for data when a cache goes down. The quick recovery is made possible by storing the RDB or AOF file in a managed disk that is mounted to the cache instance. Persistence files on the disk aren't accessible to users and can't be used by any other AMR instance.
Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](how-to-import-export-data.md) feature. You can export copies of data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. This exported data can then be imported to any Redis instance. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools.
articles/redis/best-practices-kubernetes.md: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ Ensure that the pod running your client application is given enough CPU and memo
## Sufficient node resources
-A pod running the client application can be affected by other pods running on the same node and throttle Redis connections or IO operations. So always ensure that the node on which your client application pods run have enough memory, CPU, and network bandwidth. Running low on any of these resources could result in connectivity issues.
+Other pods running on the same node can affect the pod running the client application and throttle its Redis connections or IO operations. Always ensure that the node on which your client application pods run has enough memory, CPU, and network bandwidth. Running low on any of these resources could result in connectivity issues.
## Linux-hosted client applications and TCP settings
articles/redis/best-practices-memory-management.md: 4 additions & 2 deletions
@@ -13,13 +13,15 @@ appliesto:
# Memory management for Azure Managed Redis
+In this article, we discuss effective memory management of an Azure Managed Redis cache.
+
## Eviction policy
-Choose an [eviction policy](https://redis.io/topics/lru-cache)that works for your application. The default policy for Azure Managed Redis is `volatile-lru`, which means that only keys that have a TTL value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, then you may want to consider the `allkeys-lru` policy.
+Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Managed Redis is `volatile-lru`, which means that only keys that have a TTL value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, then the system doesn't evict any keys. If you want the system to allow any key to be evicted under memory pressure, then consider the `allkeys-lru` policy.
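As a brief sketch of what `volatile-lru` implies in practice (connection values are placeholders): only the key written with a TTL is a candidate for eviction under memory pressure.

```python
import redis

# Placeholder connection values.
r = redis.Redis(host="my-cache.region.redis.azure.net", port=10000,
                password="<access-key>", ssl=True)

# Under the default volatile-lru policy, only keys with a TTL can be evicted.
r.set("session:123", "cached-value", ex=3600)  # TTL of one hour: eviction candidate
r.set("config:app", "permanent-value")         # no TTL: never evicted by volatile-lru
```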
## Keys expiration
-Set an expiration value on your keys. An expiration removes keys proactively instead of waiting until there's memory pressure. When eviction happens because of memory pressure, it can cause more load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.
+Set an expiration value on your keys. An expiration removes keys proactively instead of waiting until there's memory pressure. When eviction happens because of memory pressure, it can cause more load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.
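A minimal sketch of both commands with the Python `redis-py` client; the key names and connection values are placeholders:

```python
import redis
from datetime import datetime, timedelta, timezone

# Placeholder connection values.
r = redis.Redis(host="my-cache.region.redis.azure.net", port=10000,
                password="<access-key>", ssl=True)

r.set("report:daily", "payload")
r.expire("report:daily", 86400)  # EXPIRE: remove the key after 24 hours

# EXPIREAT: remove the key at an absolute point in time (next UTC midnight).
midnight = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0,
                                              microsecond=0) + timedelta(days=1)
r.expireat("report:daily", int(midnight.timestamp()))
```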