Merged
4 changes: 2 additions & 2 deletions content/develop/ai/langcache/_index.md
@@ -33,8 +33,8 @@ Using LangCache as a semantic caching service has the following benefits:
 
 - **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 - **Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
-- **Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
-- **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
+- **Simpler deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Advanced cache management**: Manage data access, privacy, and eviction protocols. Monitor usage and cache hit rates.
 
 LangCache works well for the following use cases:
