Replies: 3 comments
-
I think this is basically a distributed lock problem: we would acquire a lock on the cache key before invoking the factory. Using DistributedLock.Redis (since RedLock.Net doesn't seem to be maintained anymore) we get something like the sketch below, though the semantics are a bit off.
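(Rough sketch only, not production code, assuming DistributedLock.Redis on top of StackExchange.Redis; `GetOrCreateAsync`, `tryGetFromL2` and `saveToL2` are made-up names standing in for the real cache calls.)

```csharp
// Rough sketch: take a Redis-backed lock per cache key before invoking the
// expensive factory, and re-check L2 once inside the lock.
// Assumes DistributedLock.Redis (Medallion.Threading.Redis) over StackExchange.Redis;
// tryGetFromL2 / saveToL2 are placeholders for the real cache calls.
using System;
using System.Threading.Tasks;
using Medallion.Threading.Redis;
using StackExchange.Redis;

public class StampedeGuard
{
    private readonly IDatabase _redis;

    public StampedeGuard(IConnectionMultiplexer connection)
        => _redis = connection.GetDatabase();

    public async Task<T> GetOrCreateAsync<T>(
        string cacheKey,
        Func<Task<T?>> tryGetFromL2,  // placeholder: read from the distributed cache
        Func<Task<T>> factory,        // the expensive, non-idempotent call
        Func<T, Task> saveToL2)       // placeholder: write back to the distributed cache
        where T : class
    {
        var @lock = new RedisDistributedLock($"lock:{cacheKey}", _redis);

        // Only one caller per key (across all nodes) gets past this line at a time.
        await using (await @lock.AcquireAsync(TimeSpan.FromSeconds(30)))
        {
            // Double-check: another node may have populated L2 while we were waiting.
            var cached = await tryGetFromL2();
            if (cached is not null)
                return cached;

            var value = await factory();
            await saveToL2(value);
            return value;
        }
    }
}
```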
So we have the usual critical-section / double-checked locking problem: ideally we want to acquire the lock after the L2 check but before the call to the factory. Issues:
-
Hi all, this is something in my backlog; see this comment for more details, including why it's not so straightforward to do. Hope this helps.
-
Sorry, I missed that when I did my search. I think I have a working solution for now.
-
Does anyone have some sample code or ideas that would help with avoiding a cache stampede with multiple nodes?
We have multiple nodes for scalability/resiliency, and we call some external services that are expensive, not idempotent, and charged per request.
What I was thinking might work is using the L2 cache to record which node is the master for a particular cache key: this code would run before the factory and block if the current node is not the master (roughly the sketch below), but I don't know how feasible this is.
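(Just a sketch of the idea, assuming the L2 is Redis and talking to it directly via StackExchange.Redis; `TryBecomeMasterAsync` and the key/node names are made up for illustration.)

```csharp
// Rough sketch: use the shared L2 (assumed here to be Redis) to record which node
// "owns" a given cache key. The first node to write the marker becomes master and
// calls the factory; the others wait and then read the value from L2.
using System;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

public class PerKeyMasterElection
{
    private readonly IDatabase _redis;
    private readonly string _nodeId;   // e.g. machine name or a GUID assigned at startup

    public PerKeyMasterElection(IConnectionMultiplexer connection, string nodeId)
    {
        _redis = connection.GetDatabase();
        _nodeId = nodeId;
    }

    // Returns true if this node won the election for the key and should call the factory.
    public Task<bool> TryBecomeMasterAsync(string cacheKey, TimeSpan ttl)
        => _redis.StringSetAsync($"master:{cacheKey}", _nodeId, ttl, When.NotExists);

    // Non-master nodes poll until the master releases the marker (or its TTL expires),
    // by which time the value should be in L2.
    public async Task WaitForMasterAsync(string cacheKey, TimeSpan pollInterval, CancellationToken ct = default)
    {
        while (await _redis.KeyExistsAsync($"master:{cacheKey}"))
        {
            ct.ThrowIfCancellationRequested();
            await Task.Delay(pollInterval, ct);
        }
    }

    // The master calls this after writing the value to L2.
    public Task ReleaseAsync(string cacheKey)
        => _redis.KeyDeleteAsync($"master:{cacheKey}");
}
```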
BTW I did a diagram for my team showing the different caches that we have which you might find interesting...