Replies: 5 comments 6 replies
Not sure why this hasn't been discussed further; we came across the need for better caching this month as well. We're generating images and data asynchronously when a page loads. This isn't as much of an issue when statically building the site, but it does make the build process longer, and it slows down local development too. Having a way to say "based on the input, tell me whether we need to recalculate; otherwise, reuse the output from last time" would be very helpful and save tons of time and resources (our website now takes multiple minutes to build, even though the image cache is reused).
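As an illustration of that pattern, here is a minimal sketch of an input-keyed filesystem cache; the cache directory and the `generate` callback are placeholders, not an existing Astro API:

```ts
import { createHash } from "node:crypto";
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";

const CACHE_DIR = "./.cache/generated"; // placeholder location

// "Based on the input, recalculate only if needed; otherwise reuse last time's output."
async function cached(input: string, generate: (input: string) => Promise<Buffer>): Promise<Buffer> {
  const key = createHash("sha256").update(input).digest("hex");
  const file = join(CACHE_DIR, `${key}.bin`);
  try {
    return await readFile(file); // hit: the input is unchanged, reuse the output
  } catch {
    const output = await generate(input); // miss: do the expensive work once
    await mkdir(CACHE_DIR, { recursive: true });
    await writeFile(file, output);
    return output;
  }
}
```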
This is definitely a valuable thing to have, and a great proposal in terms of its interface. I would suggest looking into integrating a well-known, standardized cache library instead. For example, NestJS uses Cacheable, a very mature layer over the (already mature) keyv cache adapters for different stores. It not only gives us all the adapters for different stores out of the box, but also has built-in stampede prevention, L1/L2 caching, cache nesting, revalidation, and many other niceties. NestJS relies on it heavily and has wrapped it in different ways to offer composite features like queues, remote procedure calls, blob caching, and more. It is also becoming a popular replacement for unstable_cache in the Next.js ecosystem, whose built-in cache is poor; I use it with an in-memory LRU as the L1 cache and Upstash Redis as the L2 cache, for both data and component/page caching. My suggestion isn't to limit our interfaces in any way, but to take what is already there as a solid base for writing our own wrappers, much like Nest did.
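For reference, a minimal sketch of the keyv layer the comment points at; the Redis URL and namespace are placeholders, and the Cacheable wrapper (stampede prevention, L1/L2 layering) would sit on top of something like this:

```ts
import Keyv from "keyv";
import KeyvRedis from "@keyv/redis";

// Redis-backed keyv store; swapping the store swaps the backend without
// touching the calling code. URL and namespace are placeholders.
const cache = new Keyv({ store: new KeyvRedis("redis://localhost:6379"), namespace: "astro" });

async function getOrCompute<T>(key: string, compute: () => Promise<T>, ttlMs: number): Promise<T> {
  const hit = await cache.get(key);
  if (hit !== undefined) return hit as T;
  const value = await compute();
  await cache.set(key, value, ttlMs); // third argument is the TTL in milliseconds
  return value;
}
```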
It would be nice to have this cache available to remark/rehype plugins as well. I'm using a remark plugin to generate GitHub cards via the GitHub API, and I'd like to cache the JSON responses to avoid being rate limited. This is especially problematic with the development hot-reloading server, because editing markdown that contains a GitHub card sends many requests in a short amount of time.
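A rough sketch of the fetch-and-cache helper such a plugin could call; the cache directory and file naming are hypothetical, and only the GitHub repository endpoint is the real API:

```ts
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";

const CACHE_DIR = "./node_modules/.cache/github-cards"; // placeholder location

// Called from the remark plugin for each GitHub card: fetch repository metadata
// once, then reuse the JSON across rebuilds and hot reloads.
async function fetchRepoCached(owner: string, repo: string): Promise<unknown> {
  const file = join(CACHE_DIR, `${owner}__${repo}.json`);
  try {
    return JSON.parse(await readFile(file, "utf8"));
  } catch {
    const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
    const json = await res.json();
    await mkdir(CACHE_DIR, { recursive: true });
    await writeFile(file, JSON.stringify(json));
    return json;
  }
}
```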
I also need this feature a lot. Right now I have a build time of 100 seconds, most of which is just fetching images for the many pages, even though they are already cached.
To clear up some confusion here: I'm the author of this proposal. The reason I suggested it is that, in my current SSG scenario, I needed a cache to speed up building my website, and I ended up implementing my own. I have various use cases on my own website where I use it.
Without caching those, builds can take upwards of 5 minutes; with caching, subsequent builds finish in less than 20 seconds. The point of this proposal is to have a standard interface for this kind of work, with a default implementation that uses the filesystem. If you want to swap in an implementation backed by a key-value store, you are free to do so. However, in my SSG scenario I do not want to rely on external infrastructure: the whole reason I'm using Astro is that I can build my website and deploy it on cheap blob-based hosting with minimal infrastructure, without worrying about scalability.
Summary
Create a cache interface within Astro to simplify the development of integrations and user-space components that need to cache binary blobs or JSON values.
Background & Motivation
At the moment, there is no way in user space to effectively cache content that is costly to generate. One common use case for this is creating Astro endpoints that generate OpenGraph share images.
Integrations are also encouraged to use `config.cacheDir` to cache the results of expensive operations and transformations, but there is no guidance or standard way of doing so.

`astro:cache` would be a new core component available to both integrations and user space components. This integration would provide two adapters by default:

- `NodeFsCacheAdapter`: the default configured provider, which interacts with `config.cacheDir`. Used at build time, and in user space for `static` targets.
- `InMemoryCacheAdapter`: a cache provider that stores values in memory. Compatible with all targets and runtimes.

Third-party integrations would be able to provide and expose their own cache adapters (for example, an AWS S3 cache adapter or a Redis cache adapter).
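For illustration, a hypothetical sketch of what such an adapter contract and the two default adapters could look like; the method names and signatures are assumptions, not the RFC's actual API:

```ts
import { mkdir, readFile, rm, writeFile } from "node:fs/promises";
import { join } from "node:path";

// Hypothetical adapter contract; the proposal does not pin these signatures down.
interface CacheAdapter {
  get(namespace: string, key: string): Promise<Buffer | undefined>;
  set(namespace: string, key: string, value: Buffer): Promise<void>;
  delete(namespace: string, key: string): Promise<void>;
}

// Filesystem-backed adapter writing under config.cacheDir (build time / static targets).
class NodeFsCacheAdapter implements CacheAdapter {
  constructor(private cacheDir: string) {}
  private path(ns: string, key: string) {
    return join(this.cacheDir, ns, key);
  }
  async get(ns: string, key: string) {
    try {
      return await readFile(this.path(ns, key));
    } catch {
      return undefined; // a missing file is a cache miss
    }
  }
  async set(ns: string, key: string, value: Buffer) {
    await mkdir(join(this.cacheDir, ns), { recursive: true });
    await writeFile(this.path(ns, key), value);
  }
  async delete(ns: string, key: string) {
    await rm(this.path(ns, key), { force: true });
  }
}

// In-memory adapter, usable in any runtime.
class InMemoryCacheAdapter implements CacheAdapter {
  private store = new Map<string, Buffer>();
  async get(ns: string, key: string) {
    return this.store.get(`${ns}:${key}`);
  }
  async set(ns: string, key: string, value: Buffer) {
    this.store.set(`${ns}:${key}`, value);
  }
  async delete(ns: string, key: string) {
    this.store.delete(`${ns}:${key}`);
  }
}
```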
Possible Use Cases
- `astro:assets`: unifying its caching mechanism on top of `astro:cache`.
- Components using `canvaskit-wasm` could now cache their results for faster build times.

Goals
Example
`astro:cache` would export the following for use in user space and in integrations:
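A hypothetical sketch of that surface, reusing only the `getCache` and `getValueCache` names mentioned in this proposal; the signatures are assumptions:

```ts
// Hypothetical surface for the astro:cache module; the getCache / getValueCache
// names come from the proposal, everything else is an assumed sketch.
declare module "astro:cache" {
  export interface BlobCache {
    get(key: string): Promise<Buffer | undefined>;
    set(key: string, value: Buffer): Promise<void>;
  }
  export interface ValueCache<T = unknown> {
    get(key: string): Promise<T | undefined>;
    set(key: string, value: T): Promise<void>;
  }
  // Returns the cache facade for a namespace, backed by the configured adapter.
  export function getCache(namespace: string): BlobCache;
  export function getValueCache<T = unknown>(namespace: string): ValueCache<T>;
}
```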
Here is an example of using the cache in user space. Imagine an endpoint that generates OpenGraph share images from content collection entries; automatically caching the result of the generation during a build becomes trivial.

`[slug]-share.png.ts`:
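A minimal sketch of such an endpoint under the assumed surface above; the collection name, cache key, and rendering step are placeholders, not part of the proposal:

```ts
// src/pages/[slug]-share.png.ts — sketch only; getCache's real signature may differ.
import type { APIRoute } from "astro";
import { getCollection } from "astro:content";
import { getCache } from "astro:cache";

const cache = getCache("og-images");

export async function getStaticPaths() {
  const posts = await getCollection("blog");
  return posts.map((post) => ({ params: { slug: post.slug }, props: { post } }));
}

export const GET: APIRoute = async ({ props }) => {
  const { post } = props;
  // Reuse the image rendered in a previous build when the entry is unchanged.
  const key = `${post.slug}:${post.data.title}`;
  let image = await cache.get(key);
  if (!image) {
    image = await renderShareImage(post.data.title); // cache miss: render once
    await cache.set(key, image);
  }
  return new Response(image, { headers: { "Content-Type": "image/png" } });
};

// Placeholder: a real implementation would rasterize the card (e.g. with canvaskit-wasm).
async function renderShareImage(title: string): Promise<Buffer> {
  return Buffer.from(title); // stand-in so the sketch is self-contained
}
```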
Astro would provide two adapters by default: a `nodeFsCacheAdapter` and a `memoryCacheAdapter`. In `astro.config.mjs`, adapters could be configured per namespace. `getCache` and `getValueCache` would return the proper cache provider facade with the right adapter based on the configuration.
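A hypothetical sketch of that configuration; only the `nodeFsCacheAdapter` and `memoryCacheAdapter` names come from the proposal, while the import path and option shape are assumptions:

```ts
// astro.config.mjs — option names and import path are illustrative, not a finalized API.
import { defineConfig } from "astro/config";
import { nodeFsCacheAdapter, memoryCacheAdapter } from "astro/cache"; // assumed path

export default defineConfig({
  cache: {
    // Default adapter for every namespace.
    default: nodeFsCacheAdapter(),
    // Per-namespace overrides, e.g. keep API responses in memory only.
    namespaces: {
      "og-images": nodeFsCacheAdapter({ dir: "./node_modules/.astro/og-images" }),
      "github-api": memoryCacheAdapter(),
    },
  },
});
```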