
Commit 70f9658

Replaced absolute links with relrefs
1 parent 59e5338 commit 70f9658

1 file changed (+8, -8 lines)

content/integrate/prometheus-with-redis-enterprise/observability.md

Lines changed: 8 additions & 8 deletions
@@ -129,7 +129,7 @@ This can cause thrashing on the application side, a scenario where the cache is
 
 This means that when your Redis database is using 100% of available memory, you need
 to measure the rate of
-[key evictions](https://redis.io/docs/latest/operate/rs/references/metrics/database-operations/#evicted-objectssec).
+[key evictions]({{< relref "/operate/rs/references/metrics/database-operations#evicted-objectssec" >}}).
 
 An acceptable rate of key evictions depends on the total number of keys in the database
 and the measure of application-level latency. If application latency is high,
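One way to keep an eye on this eviction rate is to pull it straight from Prometheus over its HTTP API. The sketch below is illustrative only: the Prometheus address, the `bdb_evicted_objects` metric name, and the `bdb` label are assumptions that may not match your deployment, so check the metric and label names actually scraped in your environment.

```python
# Sketch: read the per-database eviction rate from a Prometheus server.
# Assumptions: Prometheus is reachable at PROM_URL and the Redis Enterprise
# scrape exposes an eviction metric named "bdb_evicted_objects" with a "bdb"
# label. Verify the names actually present in your environment.
import requests

PROM_URL = "http://localhost:9090"                      # hypothetical address
QUERY = "avg_over_time(bdb_evicted_objects[5m])"        # assumed metric name

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    db = series["metric"].get("bdb", "unknown")
    _, value = series["value"]
    print(f"database {db}: {float(value):.1f} evicted objects/sec")
```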
@@ -171,8 +171,8 @@ After your database reaches this 80% threshold, you should closely review the ra
 
 |Issue | Possible causes | Remediation |
 | ------ | ------ | :------ |
-|Redis memory usage has reached 100% |This may indicate an insufficient Redis memory limit for your application's workload | For non-caching workloads (where eviction is unacceptable), immediately increase the memory limit for the database. You can accomplish this through the Redis Enterprise console or its API. Alternatively, you can contact Redis support for assistance. For caching workloads, you need to monitor performance closely. Confirm that you have an [eviction policy](https://redis.io/docs/latest/operate/rs/databases/memory-performance/eviction-policy/) in place. If your application's performance starts to degrade, you may need to increase the memory limit, as described above. |
-|Redis has stopped accepting writes | Memory is at 100% and no eviction policy is in place | Increase the database's total amount of memory. If this is for a caching workload, consider enabling an [eviction policy](https://redis.io/docs/latest/operate/rs/databases/memory-performance/eviction-policy/). In addition, you may want to determine whether the application can set a reasonable TTL (time-to-live) on some or all of the data being written to Redis. |
+|Redis memory usage has reached 100% |This may indicate an insufficient Redis memory limit for your application's workload | For non-caching workloads (where eviction is unacceptable), immediately increase the memory limit for the database. You can accomplish this through the Redis Enterprise console or its API. Alternatively, you can contact Redis support for assistance. For caching workloads, you need to monitor performance closely. Confirm that you have an [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}) in place. If your application's performance starts to degrade, you may need to increase the memory limit, as described above. |
+|Redis has stopped accepting writes | Memory is at 100% and no eviction policy is in place | Increase the database's total amount of memory. If this is for a caching workload, consider enabling an [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}). In addition, you may want to determine whether the application can set a reasonable TTL (time-to-live) on some or all of the data being written to Redis. |
 |Cache hit ratio is steadily decreasing | The application's working set size may be steadily increasing. Alternatively, the application may be misconfigured (for example, generating more than one unique cache key per cached item). | If the working set size is increasing, consider increasing the memory limit for the database. If the application is misconfigured, review the application's cache key generation logic. |
 
 
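On the TTL remediation in the last rows of that table, here is a minimal redis-py sketch of writing cache entries with an expiration. The connection details, key names, and one-hour TTL are placeholders, not values from the documentation.

```python
# Sketch: attach a TTL to cache writes so memory is reclaimed automatically
# rather than relying only on the eviction policy. All values are examples.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# SET with EX applies the TTL (in seconds) atomically with the write.
r.set("cache:user:42", '{"name": "example"}', ex=3600)

# An existing key can be given a TTL after the fact.
r.expire("cache:session:abc", 3600)

print(r.ttl("cache:user:42"))   # remaining lifetime in seconds
```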
@@ -231,7 +231,7 @@ excess inefficient Redis operations, and hot master shards.
 | ------ | ------ | :------ |
 |High CPU utilization across all shards of a database | This usually indicates that the database is under-provisioned in terms of number of shards. A secondary cause may be that the application is running too many inefficient Redis operations. | You can detect slow Redis operations by enabling the slow log in the Redis Enterprise UI. First, rule out inefficient Redis operations as the cause of the high CPU utilization. The Latency section below includes a broader discussion of this metric in the context of your application. If inefficient Redis operations are not the cause, then increase the number of shards in the database. |
 |High CPU utilization on a single shard, with the remaining shards having low CPU utilization | This usually indicates a master shard with at least one hot key. Hot keys are keys that are accessed extremely frequently (for example, more than 1000 times per second). | Hot key issues generally cannot be resolved by increasing the number of shards. To resolve this issue, see the section on Hot keys below. |
-| High Proxy CPU | There are several possible causes of high proxy CPU. First, review the behavior of connections to the database. Frequent cycling of connections, especially when TLS is enabled, can cause high proxy CPU utilization. This is especially true when you see more than 100 connections per second per thread. Such behavior is almost always a sign of a misbehaving application. Review the total number of operations per second against the cluster. If you see more than 50k operations per second per thread, you may need to increase the number of proxy threads. | In the case of high connection cycling, review the application's connection behavior. In the case of high operations per second, [increase the number of proxy threads](https://redis.io/docs/latest/operate/rs/references/cli-utilities/rladmin/tune/#tune-proxy). |
+| High Proxy CPU | There are several possible causes of high proxy CPU. First, review the behavior of connections to the database. Frequent cycling of connections, especially when TLS is enabled, can cause high proxy CPU utilization. This is especially true when you see more than 100 connections per second per thread. Such behavior is almost always a sign of a misbehaving application. Review the total number of operations per second against the cluster. If you see more than 50k operations per second per thread, you may need to increase the number of proxy threads. | In the case of high connection cycling, review the application's connection behavior. In the case of high operations per second, [increase the number of proxy threads]({{< relref "/operate/rs/references/cli-utilities/rladmin/tune#tune-proxy" >}}). |
 |High Node CPU | You will typically detect high shard or proxy CPU utilization before you detect high node CPU utilization. Use the remediation steps above to address high shard and proxy CPU utilization. In spite of this, if you see high node CPU utilization, you may need to increase the number of nodes in the cluster. | Consider increasing the number of nodes in the cluster and rebalancing the shards across the new nodes. This is a complex operation and you should do it with the help of Redis support. |
 |High System CPU | Most of the issues above will reflect user-space CPU utilization. However, if you see high system CPU utilization, this may indicate a problem at the network or storage level. | Review network bytes in and network bytes out to rule out any unexpected spikes in network traffic. You may need to perform some deeper network diagnostics to identify the cause of the high system CPU utilization. For example, with high rates of packet loss, you may need to review network configurations or even the network hardware. |
 
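On the connection-cycling cause called out in the High Proxy CPU row, the usual application-side fix is to reuse connections through a pool rather than opening a new (possibly TLS) connection per request. A rough redis-py sketch, with placeholder connection settings:

```python
# Sketch: build one shared connection pool at startup instead of creating a
# new client, and a new TCP/TLS handshake, on every request.
import redis

POOL = redis.ConnectionPool(host="localhost", port=6379,
                            max_connections=50, decode_responses=True)

def handle_request(key):
    # Borrows an already-open connection from the pool and returns it when done.
    client = redis.Redis(connection_pool=POOL)
    return client.get(key)
```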
@@ -375,7 +375,7 @@ See the [Cache hit ratio and eviction](#cache-hit-ratio-and-eviction) section fo
 ## Key eviction rate
 
 The **key eviction rate** is the rate at which objects are being evicted from the database.
-See [eviction policy](https://redis.io/docs/latest/operate/rs/databases/memory-performance/eviction-policy/) for a discussion of key eviction and its relationship with memory usage.
+See [eviction policy]({{< relref "/operate/rs/databases/memory-performance/eviction-policy" >}}) for a discussion of key eviction and its relationship with memory usage.
 
 Dashboard displaying object evictions - [Database Dashboard](https://github.com/redis-field-engineering/redis-enterprise-observability/blob/main/grafana/dashboards/grafana_v9-11/software/classic/database_dashboard_v9-11.json)
 {{< image filename="/images/playbook_eviction-expiration.png" alt="Dashboard displaying object evictions">}}
@@ -463,8 +463,8 @@ and block other operations. If you need to scan the keyspace, especially in a pr
 
 The best way to discover slow operations is to view the slow log.
 The slow log is available in the Redis Enterprise and Redis Cloud consoles:
-* [Redis Enterprise slow log docs](https://redis.io/docs/latest/operate/rs/clusters/logging/redis-slow-log/)
-* [Redis Cloud slow log docs](https://redis.io/docs/latest/operate/rc/databases/view-edit-database/#other-actions-and-info)
+* [Redis Enterprise slow log docs]({{< relref "/operate/rs/clusters/logging/redis-slow-log" >}})
+* [Redis Cloud slow log docs]({{< relref "/operate/rc/databases/view-edit-database#other-actions-and-info" >}})
 
 Redis Cloud dashboard showing slow database operations
 {{< image filename="/images/slow_log.png" alt="Redis Cloud dashboard showing slow database operations" >}}
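Beyond the console views linked in this hunk, the same kind of data can be read programmatically with the SLOWLOG command, which redis-py wraps. Whether SLOWLOG is available depends on your deployment and ACLs, so treat this as a sketch under that assumption.

```python
# Sketch: print the most recent slow log entries recorded by the database.
# Entry fields follow redis-py's parsed SLOWLOG GET output; durations are
# reported in microseconds.
import redis

r = redis.Redis(host="localhost", port=6379)

for entry in r.slowlog_get(10):          # ten most recent slow entries
    print(entry["duration"], "microseconds:", entry["command"])
```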
@@ -496,7 +496,7 @@ To use the Redis CLI to identify hot keys:
 3. Finally, run `redis-cli --hotkeys`
 
 You may also identify hot keys by sampling the operations against Redis.
-You can do this by running the [MONITOR](https://redis.io/docs/latest/commands/monitor/) command
+You can do this by running the [MONITOR]({{< relref "/commands/monitor" >}}) command
 against the high CPU shard. Because this is a potentially high-impact operation, you should only
 use this technique as a secondary option. For mission-critical databases, consider
 contacting Redis support for assistance.
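If you do fall back to MONITOR for hot-key sampling, redis-py exposes it as a context manager. The rough sketch below samples a short burst of commands and tallies the keys it sees; the sample size is arbitrary, and the assumption that the key is the second token holds for common commands such as GET and SET but not for every command.

```python
# Sketch: briefly sample traffic with MONITOR and count key frequencies.
# MONITOR adds measurable load, so keep the sample small and short-lived.
from collections import Counter
import redis

r = redis.Redis(host="localhost", port=6379)   # point this at the hot shard
counts = Counter()

with r.monitor() as m:
    for i, event in enumerate(m.listen()):
        parts = event["command"].split()       # e.g. "GET user:42"
        if len(parts) > 1:
            counts[parts[1]] += 1              # second token is usually the key
        if i >= 1000:                          # stop after ~1000 sampled commands
            break

print(counts.most_common(10))
```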
