Reverse proxy cache (API cache)
Pull request https://github.com/ecamp/ecamp3/pull/3610 introduces a reverse proxy cache in front of the API to accelerate API responses.
[Work in progress]
[TO DO: describe general concept]
Currently, only a limited number of endpoints have caching enabled.
[To do: include link to code, where the list of enabled endpoints can be seen]
When using http://localhost:3000 during local development, Varnish is bypassed, which means all API responses come directly from API Platform (uncached).
To test caching, use http://localhost:3004 instead. All requests are then routed via Varnish. Requests to the frontend, mail, etc. are also routed via Varnish but are ignored for caching purposes (= pass).
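To check whether a response was actually served from the cache, you can compare the response headers of two consecutive requests. The exact headers depend on the VCL, but Varnish maintains the standard `Age` header, which is greater than 0 on a cache hit; the endpoint path below is purely illustrative:

```shell
# Request the same endpoint twice via Varnish (port 3004).
# The second response should show a non-zero Age header if it was a cache hit.
# /api/camps is an illustrative path, not necessarily a cached endpoint.
curl -is http://localhost:3004/api/camps | grep -i '^age:'
curl -is http://localhost:3004/api/camps | grep -i '^age:'
```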
Even if you only use http://localhost:3000, the http-cache container (i.e. Varnish) has to be up and running. Otherwise, tag invalidation requests during create/update/delete operations will fail and the API will return errors. If you want to disable all cache functionality in the API, you can set `API_CACHE_ENABLED=false` in your `.env` file.
During development, you might need to purge the cache regularly in order to test new code or when checking out new branches. The easiest way to purge the cache completely is to destroy the http-cache container and restart Varnish:
```
docker compose stop http-cache
docker compose rm http-cache
docker compose up -d
```
Alternatively, you can open a shell in the running container and then ban all cached objects:
```
docker compose exec -ti http-cache /bin/bash
varnishadm 'ban req.url ~ .'
```
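Instead of banning everything, you can also ban a narrower URL pattern; the path below is illustrative:

```shell
# Ban only cached objects whose request URL matches /api/camps.
varnishadm 'ban req.url ~ ^/api/camps'

# List the currently active bans to verify:
varnishadm ban.list
```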
From the shell inside the container, you can also use other Varnish commands, such as `varnishlog -g raw` to output the raw log stream, or `varnishreload` to reload the configuration after changes to the VCL files.
For deployment, the API cache can be enabled/disabled via the value setting `apiCache.enabled`. The GitHub workflows as well as the manual Helm deployment scripts look for the environment variable `API_CACHE_ENABLED` in order to populate `apiCache.enabled`.
Unlike the localhost setup, the reverse proxy cache on deployment sits in front of the API only, so other requests (e.g. to the frontend) are never routed via Varnish.
The deployment configuration also includes 2 sidecars for Varnish:
- prometheus-exporter: collects metrics/statistics about Varnish for display in Grafana
- varnishncsa: logs requests (1 line per request) in order to debug problems or to derive hit/miss/pass statistics for each endpoint
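As a sketch of how per-request hit/miss/pass information can be obtained from varnishncsa, the handling decision can be logged with a custom format string (`%{Varnish:handling}x` expands to `hit`, `miss`, `pass`, etc.):

```shell
# Log one line per request: cache handling, HTTP method and URL path.
varnishncsa -F '%{Varnish:handling}x %m %U'
```

Counting the first column of this output per URL gives the hit/miss/pass statistics mentioned above.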
In addition to the sidecars, the deployment configuration includes the following features:
- Automatic recreation of the Pod for each new commit (`rollme`). This also purges the cache.
- A separate port for tag invalidation (`.Values.apiCache.varnishPurgePort`). This port is accessible within the Kubernetes deployment only and not from outside.
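A minimal sketch of the corresponding Helm values, assuming the value names mentioned above (the port number is illustrative, not the actual default):

```yaml
# values.yaml (sketch)
apiCache:
  enabled: true            # populated from API_CACHE_ENABLED
  varnishPurgePort: 8081   # cluster-internal port for tag invalidation (illustrative)
```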
Invalidation of cache tags only works when data operations (CRUD) are done via the API. In the unlikely case that data is changed manually, directly in the database, the affected cache tags need to be purged manually (or alternatively, ban the complete cache).