Description
Expected Behavior
The docs state that, with caching, a new Promise and a new key are not created in the batchLoadFn for every load of the same key. When loading large datasets that can contain large numbers of duplicates, this means we could call load(1) 100k times and should get 1 promise in total.
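A minimal sketch of that expectation (the batch function and record values here are placeholders, not anything from the dataloader docs):

```js
const DataLoader = require('dataloader');

// Placeholder batch function, only to illustrate the expectation.
const loader = new DataLoader(async keys => {
  console.log('batchLoadFn keys:', keys); // duplicates are deduplicated: [1]
  return keys.map(id => ({ id }));
});

const first = loader.load(1);
for (let i = 0; i < 100000; i++) {
  loader.load(1); // same key, cache hit
}

// Expectation, per the reading of the docs above: all of these calls share
// one promise object, i.e. loader.load(1) === first.
console.log(loader.load(1) === first); // prints false with the current implementation (see below)
```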
Current Behavior
The source code reveals that, whether or not we use the cache, we still end up creating a new promise: https://github.com/graphql/dataloader/blob/main/src/index.js#L92.
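Paraphrasing the cache-hit branch around that line (simplified, not the verbatim source):

```js
// Simplified paraphrase of load()'s cache-hit path; see the linked
// src/index.js for the real code.
const cachedPromise = this._cacheMap.get(cacheKey);
if (cachedPromise) {
  const cacheHits = batch.cacheHits || (batch.cacheHits = []);
  // Even on a cache hit, a brand-new wrapper promise is created; it only
  // resolves (to the cached promise) once the batch is dispatched.
  return new Promise(resolve => {
    cacheHits.push(() => {
      resolve(cachedPromise);
    });
  });
}
```

So 100k cache hits mean 100k wrapper promises, each deferred until the batch dispatches.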
Possible Solution
Why not just return the cachedPromise?
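i.e., roughly this (a sketch of the suggestion, not a tested patch):

```js
// Sketch of the suggestion: hand back the memoized promise as-is instead of
// wrapping it in a new one.
const cachedPromise = this._cacheMap.get(cacheKey);
if (cachedPromise) {
  return cachedPromise;
}
```

With that, 100k calls to load(1) would all resolve against a single promise object.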
Context
It looks like the change originates from 06c403b. It's been many years, but the reasoning behind that change feels misguided, maybe? As the code stands right now, isn't the value of "caching promises" entirely eliminated, because we end up creating a new promise for every load call? So if 50k records all relate to the same record, that means 50k promises rather than 1. The trade-off for that loss is that we can re-use the cache from an earlier call, but is that really used enough to warrant this behavior? The current code only provides value when we have a partially primed cache, where some objects are cached and others are not, ensuring that subsequent loads settle on the same tick. That feels like fairly edge-case behavior that should probably be solved with different batch-loading techniques.
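For concreteness, a hypothetical version of that partially primed scenario (keys and values made up):

```js
const DataLoader = require('dataloader');

// Hypothetical illustration of the partially primed cache described above.
const loader = new DataLoader(async keys => keys.map(k => `value:${k}`));

// Key 1 is primed (cached) on an earlier tick; key 2 is not.
loader.prime(1, 'value:1');

const hit = loader.load(1);  // cache hit
const miss = loader.load(2); // cache miss, goes through the batchLoadFn

// With the current wrapping, both promises settle only after the batch for
// key 2 is dispatched. If load() returned cachedPromise directly, the cache
// hit could settle earlier, on its own tick.
Promise.all([hit, miss]).then(console.log); // ['value:1', 'value:2']
```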