Checked
- [x] I searched existing ideas and did not find a similar one
- [x] I added a very descriptive title
- [x] I've clearly described the feature request and motivation for it
Feature request
There is no `langchain_community.caches.DiskCache`, and I think there is a need for a simple LLM cache that persists across runs and doesn't require a database. Most LLM caching for ad hoc applications would use this if it existed.
Motivation
I need my cache to persist across runs for long-running data augmentation pipelines.
Proposal (If applicable)
I am going to contribute an implementation I wrote for a `langchain_community.caches.DiskCache` that mirrors `langchain_community.cache.InMemoryCache` but uses an instance of `diskcache.Cache` in place of a `Dict` (a minimal sketch follows). The implementation isn't difficult, and I think a lot of people will use it, particularly for long batch workflows such as my use case: data augmentation with an LLM.
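For concreteness, here is a minimal sketch of what such a cache could look like. It assumes the `BaseCache` interface and `RETURN_VAL_TYPE` alias from `langchain_core.caches` plus the `diskcache` package; the `DiskCache` class name and `directory` parameter are illustrative, not part of any released langchain_community API:

```python
from typing import Any, Optional

from diskcache import Cache
from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DiskCache(BaseCache):
    """Sketch of an LLM cache persisted to disk via diskcache (hypothetical API)."""

    def __init__(self, directory: str = ".langchain_cache") -> None:
        # diskcache.Cache stores pickled values in a SQLite-backed store
        # under `directory`, so entries survive across process restarts.
        self._cache = Cache(directory)

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        # Same (prompt, llm_string) keying as InMemoryCache, but the tuple
        # key is serialized by diskcache rather than hashed by a Dict.
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._cache[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._cache.clear()
```

Registered the usual way, repeated runs of the same pipeline would then hit the on-disk cache instead of re-querying the model:

```python
from langchain_core.globals import set_llm_cache

set_llm_cache(DiskCache(".langchain_cache"))
```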