`content/develop/ai/langcache/_index.md` (+2 −2 lines: 2 additions, 2 deletions)
```diff
@@ -33,8 +33,8 @@ Using LangCache as a semantic caching service has the following benefits:

 - **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 - **Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
-- **Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
-- **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
+- **Simpler deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Advanced cache management**: Manage data access, privacy, and eviction protocols. Monitor usage and cache hit rates.
```
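The bullets above describe semantic caching: a cache that returns a stored LLM response when a new prompt is *similar* to a previously cached one, not only when it matches exactly. The following is a minimal, self-contained sketch of that lookup flow. It is illustrative only: the toy bigram `embed` function, the `SemanticCache` class, and the `0.8` threshold are assumptions for demonstration, not the LangCache implementation (LangCache handles embedding generation and storage behind its REST API).

```python
import math

def embed(text):
    # Toy embedding: character-bigram counts. A real service would use a
    # learned embedding model; this stand-in only illustrates the idea.
    vec = {}
    for a, b in zip(text.lower(), text.lower()[1:]):
        vec[a + b] = vec.get(a + b, 0) + 1
    return vec

def cosine(u, v):
    # Cosine similarity between two sparse vectors (dicts).
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

class SemanticCache:
    """Hypothetical in-memory semantic cache for LLM responses."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # similarity required for a cache hit
        self.entries = []           # list of (embedding, response) pairs

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

    def get(self, prompt):
        # Return the cached response most similar to the prompt,
        # or None if nothing clears the similarity threshold.
        q = embed(prompt)
        best, best_score = None, 0.0
        for vec, response in self.entries:
            score = cosine(q, vec)
            if score > best_score:
                best, best_score = response, score
        return best if best_score >= self.threshold else None

cache = SemanticCache()
cache.put("What is Redis?", "Redis is an in-memory data store.")
print(cache.get("what is redis"))         # near-duplicate prompt: cache hit
print(cache.get("How do I bake bread?"))  # unrelated prompt: cache miss (None)
```

A hit skips the LLM call entirely, which is where the cost and latency savings in the bullets come from; the threshold trades hit rate against the risk of returning a response for a prompt that only looks similar.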