`content/operate/rc/langcache/_index.md` (1 addition & 1 deletion)
```diff
@@ -32,7 +32,7 @@ Using LangCache as a semantic caching service in Redis Cloud has the following b
 - **Simpler Deployments**: Access our managed service via a REST API with automated embedding generation, configurable controls.
 - **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
 
-### LLM Cost reduction with LangCache
+### LLM cost reduction with LangCache
 
 
 LangCache reduces your LLM costs by caching responses and avoiding repeated API calls. When a response is served from cache, you don’t pay for output tokens. Input token costs are typically offset by embedding and storage costs.
```
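As a rough illustration of the trade-off the changed section describes, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the per-token prices, the request sizes, and the 30% hit rate) is an invented assumption for illustration, not a Redis or LangCache figure, and per-entry storage costs are left out for simplicity:

```python
# Hypothetical cost model for a semantic cache in front of an LLM.
# All prices and rates below are made-up assumptions, not real pricing.
IN_PRICE = 3.00 / 1_000_000     # $ per input token (assumed LLM pricing)
OUT_PRICE = 15.00 / 1_000_000   # $ per output token (assumed LLM pricing)
EMBED_PRICE = 0.10 / 1_000_000  # $ per token to embed a prompt for cache lookup

def expected_cost(in_toks: int, out_toks: int, hit_rate: float) -> float:
    """Expected $ cost of one request when a fraction `hit_rate` of
    requests is served from the semantic cache instead of the LLM."""
    llm_call = in_toks * IN_PRICE + out_toks * OUT_PRICE  # paid only on a miss
    lookup = in_toks * EMBED_PRICE                        # paid on every request
    return lookup + (1.0 - hit_rate) * llm_call

no_cache = 500 * IN_PRICE + 700 * OUT_PRICE          # LLM-only baseline
with_cache = expected_cost(500, 700, hit_rate=0.30)  # assumed 30% hit rate
print(f"LLM only:   ${no_cache:.6f} per request")
print(f"With cache: ${with_cache:.6f} per request")
```

Under these made-up numbers, savings scale with the hit rate: the embedding lookup adds a small cost to every request, but each cache hit avoids the much larger output-token charge, which is why serving responses from cache reduces overall spend.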