Commit 9708dd3

acrolinx
1 parent d51663c commit 9708dd3

1 file changed

articles/redis/overview.md

Lines changed: 10 additions & 10 deletions
@@ -24,7 +24,7 @@ Azure Managed Redis can improve the performance and scalability of an applicatio

 Redis brings a critical low-latency and high-throughput data storage solution to modern applications. Additionally, Redis is increasingly used for noncaching applications, including data ingestion, deduplication, messaging, [leaderboards](web-app-cache-aside-leaderboard.md), [semantic caching](tutorial-semantic-cache.md), and as a [vector database](overview-vector-similarity.md).

-Azure Managed Redis can be deployed standalone, or it can be deployed along with other Azure app or database services, such as Azure Container Apps, Azure App Service, Azure Functions, Azure SQL, or Azure Cosmos DB.
+Azure Managed Redis can be deployed standalone, or deployed along with other Azure app or database services, such as Azure Container Apps, Azure App Service, Azure Functions, Azure SQL, or Azure Cosmos DB.

 ## Key scenarios

@@ -36,8 +36,8 @@ Azure Managed Redis improves application performance by supporting common applic
 | [Content cache](aspnet-output-cache-provider.md) | Many web pages are generated from templates that use static content such as headers, footers, banners. These static items shouldn't change often. Using an in-memory cache provides quick access to static content compared to backend datastores. This pattern reduces processing time and server load, allowing web servers to be more responsive. It can allow you to reduce the number of servers needed to handle loads. Azure Managed Redis provides the Redis Output Cache Provider to support this pattern with ASP.NET.|
 | [Session store](aspnet-session-state-provider.md) | This pattern is commonly used with shopping carts and other user history data that a web application might associate with user cookies. Storing too much in a cookie can have a negative effect on performance as the cookie size grows and is passed and validated with every request. A typical solution uses the cookie as a key to query the data in a database. When you use an in-memory cache, like Azure Managed Redis, to associate information with a user is faster than interacting with a full relational database. |
 | [Vector similarity search](overview-vector-similarity.md) | A common AI use-case is to generate vector embeddings using a large language model (LLM). These vector embeddings need to be stored in a vector database and then compared to determine similarity. Azure Managed Redis has built-in functionality to both store and compare vector embeddings at high throughputs.|
-| [Semantic caching](tutorial-semantic-cache.md) | Using LLMs often introduces a high amount of latency (due to generation time) and cost (due to per token pricing) to an application. Caching can help solve these problems by storing the past output of an LLM so that it can quickly be retrieved again. However, because LLMs use natural language, this can be difficult for typical caches to handle. Semantic caches like Azure Managed Redis are capable of caching not just a specific query, but the semantic meaning of a query, allowing it to be used much more naturally with LLMs.|
-| [Deduplication](https://redis.io/solutions/deduplication/) | Often, you need to determine if an action already happened in a system, such as determining if a username is taken or if a customer was already sent an email. In Azure Managed Redis, bloom filters can be used to rapidly determine duplicates and prevent problems. |
+| [Semantic caching](tutorial-semantic-cache.md) | Using LLMs often introduces a high amount of latency (due to generation time) and cost (due to per token pricing) to an application. Caching can help solve these problems by storing the past output of an LLM so that it can quickly be retrieved again. However, because LLMs use natural language, storage can be difficult for typical caches to handle. Semantic caches like Azure Managed Redis are capable of caching not just a specific query, but the semantic meaning of a query, allowing it to be used much more naturally with LLMs.|
+| [Deduplication](https://redis.io/solutions/deduplication/) | Often, you need to determine if an action already happened in a system, such as determining if a username is taken or if a customer was already sent an email. In Azure Managed Redis, bloom filters can rapidly determine duplicates, and prevent problems. |
 | [Leaderboards](web-app-cache-aside-leaderboard.md) | Redis offers simple and powerful support for developing leaderboards of all kinds using the [sorted set](https://redis.io/solutions/leaderboards/) data structure. Additionally, using [active geo-replication](how-to-active-geo-replication.md) can allow one leaderboard to be shared globally. |
 | Job and message queuing | Applications often add tasks to a queue when the operations associated with the request take time to execute. Longer running operations are queued to be processed in sequence, often by another server. This method of deferring work is called task queuing. Azure Managed Redis provides a distributed queue to enable this pattern in your application.|
 | [PowerBI/Analytics Acceleration](https://techcommunity.microsoft.com/blog/analyticsonazure/how-to-use-redis-as-a-data-source-for-power-bi-with-redis-sql-odbc/3799471) | You can use the Redis ODBC driver to utilize Redis for BI, reporting, and analytics use-cases. Because Redis is typically much faster than relational databases, using Redis in this way can dramatically increase query responsiveness. |
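To make the leaderboard and deduplication rows in the table above concrete, here is a minimal redis-py sketch. The host name, port, and access key are placeholders, and the `BF.*` commands assume the RedisBloom module capability is available on the instance; none of this comes from the article itself.

```python
# Minimal sketch: leaderboard (sorted set) and deduplication (Bloom filter)
# against an Azure Managed Redis endpoint. Endpoint details are placeholders.
import redis

r = redis.Redis(
    host="contoso.eastus.redis.azure.net",  # placeholder host name
    port=10000,                             # placeholder port
    password="<access-key>",                # placeholder access key
    ssl=True,
)

# Leaderboard: a sorted set keeps members ordered by score.
r.zadd("game:leaderboard", {"alice": 4200, "bob": 3900, "carol": 4650})
r.zincrby("game:leaderboard", 150, "bob")  # bump bob's score
top3 = r.zrevrange("game:leaderboard", 0, 2, withscores=True)
print(top3)  # highest scores first

# Deduplication: a Bloom filter answers "probably seen" / "definitely not seen".
# BF.* commands assume the RedisBloom module capability is enabled.
r.execute_command("BF.ADD", "usernames:taken", "alice")
is_taken = r.execute_command("BF.EXISTS", "usernames:taken", "alice")
print(bool(is_taken))
```

Because a sorted set keeps members ordered by score on every write, reading the top N is a single `ZREVRANGE` call rather than a sort at query time.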
@@ -67,8 +67,8 @@ One tier stores data both in-memory and on-disk:
 >

 >[!IMPORTANT]
-> You can also use the [data persistence](how-to-persistence.md) feature to store data on-disk for the in-memory tiers. Data persistence stores a backup copy of data on-disk for quick recovery in case of a unexpected outage. This is different than the Flash Optimized tier, which is designed to store data on-disk for typical operations.
-> Storing some data on-disk using the Flash Optimized tier does not increase data resiliency. You may use data persistence on the Flash Optimized tier as well.
+> You can also use the [data persistence](how-to-persistence.md) feature to store data on-disk for the in-memory tiers. Data persistence stores a backup copy of data on-disk for quick recovery if you experience an unexpected outage. This is different than the Flash Optimized tier, which is designed to store data on-disk for typical operations.
+> Storing some data on-disk using the Flash Optimized tier doesn't increase data resiliency. You can use data persistence on the Flash Optimized tier as well.
 >

 For instructions on how to scale between tiers and SKUs, see [Scale an Azure Managed Redis instance](how-to-scale.md).
@@ -105,11 +105,11 @@ The following table helps describe some of the features supported by tier:
 | [Update channel and Schedule updates](administration.md) | No | No | No | No |

 > [!IMPORTANT]
-> The Balanced B0 and B1 SKU options do not support active geo-replication.
+> The Balanced B0 and B1 SKU options don't support active geo-replication.
 >

 > [!IMPORTANT]
-> SLA is only available at GA, and is not available during preview.
+> SLA is only available at GA, and isn't available during preview.
 >

 > [!NOTE]
@@ -118,18 +118,18 @@ The following table helps describe some of the features supported by tier:

 ### Other considerations when picking a tier

-- **Network performance**: If you have a workload that requires high throughput, you might be bottlenecked by network bandwidth. You can increase bandwidth by moving up to a higher performance tier or by moving to a large instance size. Larger size instances have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that cause timeouts in your application. For more information on bandwidth performance, see [Performance testing](best-practices-performance.md)
+- **Network performance**: If you have a workload that requires high throughput, network bandwidth might cause a bottleneck. You can increase bandwidth by moving up to a higher performance tier or by moving to a large instance size. Larger size instances have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that cause time-outs in your application. For more information on bandwidth performance, see [Performance testing](best-practices-performance.md)
 - **Maximum number of client connections**: Each SKU has a maximum number of client connections. This limit increases with higher performance tiers and larger instances sizes. For more information on the limit for each SKU, see [Azure Managed Redis Pricing](https://aka.ms/amrpricing).
 - **High availability**: Azure Managed Redis provides multiple [high availability](high-availability.md) options. The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. For more information on the SLA, see the [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). It's possible to disable high availability in an Azure Managed Redis instance. This lowers the price but results in data loss and downtime. We only recommend disabling high availability for dev/test scenarios.

 ### Other pricing considerations

 > [!IMPORTANT]
-> Azure Managed Redis Enterprise requires an IP address for each cache instance. Currently, the IP address charge is absorbed by Azure Managed Redis and not passed on to customers. This may change in the future. For more information, see [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
+> Azure Managed Redis Enterprise requires an IP address for each cache instance. Currently, the IP address charge is absorbed by Azure Managed Redis and not passed on to customers. This can change in the future. For more information, see [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
 >

 > [!IMPORTANT]
-> Using active geo-replication will produce data transfer between Azure regions. These bandwidth charges are currently absorbed by Azure Managed Redis and not passed on to customers. This may change in the future. For more information, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
+> Using active geo-replication produces data transfer between Azure regions. These bandwidth charges are currently absorbed by Azure Managed Redis and not passed on to customers. This might change in the future. For more information, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
 >

 ### Availability by region
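Because each SKU caps the number of client connections (see the considerations in the hunk above), sharing one connection pool across an application is a simple way to stay under the limit. A minimal redis-py sketch, assuming a placeholder `rediss://` endpoint, access key, and pool size not taken from the article:

```python
# Minimal sketch: cap client connections with a shared redis-py connection pool
# so an application stays under the SKU's connection limit.
# The rediss:// URL and max_connections value below are placeholders.
import redis

pool = redis.ConnectionPool.from_url(
    "rediss://:<access-key>@contoso.eastus.redis.azure.net:10000",
    max_connections=50,  # keep well below the SKU's documented limit
)
r = redis.Redis(connection_pool=pool)

r.set("healthcheck", "ok", ex=30)  # short-lived key with a 30-second expiry
print(r.get("healthcheck"))
```

With a plain `ConnectionPool`, exceeding `max_connections` raises an error; `redis.BlockingConnectionPool` can be used instead if callers should wait for a free connection rather than fail.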
