## Context
The current GraphQL implementation has `Fetcher`/`DeferredResolver` for batching, but it is underutilized:
- Only 4-5 fields use `.defer()` across the 3 existing fetchers (`teamsFetcher`, `apisFetcher`, `usersFetcher`)
- Approximately 15 direct `findById` calls in the resolvers bypass batching (N+1)
- The existing fetchers perform `Future.sequence(ids.map(findById))`: each ID generates an individual DB query
- There is no `tenantsFetcher` even though there are 6 non-batched tenant searches
On a large graph of objects, this can exhaust the database connection pool and block other users.
## Actions
1. Implement batch queries in the DataStore
Expose `findByIds(ids: Seq[Id])` methods in the DataStore/repository layer, so that the N individual queries are replaced with a single one over the provided IDs.
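As a rough sketch of the intended shape (the `User` case class and the in-memory map are stand-ins for the real DataStore; only the `findByIds` name comes from the issue), a batched lookup next to the existing per-id one:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

final case class User(id: String, name: String)

// Hypothetical repository; the Map stands in for the real database.
class UserRepo(db: Map[String, User]) {
  // Existing pattern: one query per id.
  def findById(id: String): Future[Option[User]] =
    Future(db.get(id))

  // Batched pattern: one query for the whole batch. Against SQL or Mongo
  // this would be a single `WHERE id IN (...)` / `$in` query instead of
  // a map lookup.
  def findByIds(ids: Seq[String]): Future[Seq[User]] =
    Future(ids.flatMap(db.get))
}

val repo = new UserRepo(Map(
  "u1" -> User("u1", "Alice"),
  "u2" -> User("u2", "Bob")))

// Unknown ids are simply absent from the result, which is the contract
// sangria fetchers expect from a batch function.
val batched = Await.result(repo.findByIds(Seq("u1", "u2", "missing")), 5.seconds)
```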
2. Update existing Fetchers
Replace `Future.sequence(ids.map(findById))` with a call to `findByIds` in `teamsFetcher`, `apisFetcher`, and `usersFetcher`.
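The difference can be made visible by counting queries. This is a self-contained sketch (the `Team` type, the in-memory map, and the counter are illustrative, not project code); `fetchNew` is the shape of the body you would hand to sangria's `Fetcher((ctx, ids) => ...)`:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global
import java.util.concurrent.atomic.AtomicInteger

final case class Team(id: String, name: String)

val db = Map("t1" -> Team("t1", "Core"), "t2" -> Team("t2", "Infra"))
val queryCount = new AtomicInteger(0)

def findById(id: String): Future[Option[Team]] =
  Future { queryCount.incrementAndGet(); db.get(id) }

def findByIds(ids: Seq[String]): Future[Seq[Team]] =
  Future { queryCount.incrementAndGet(); ids.flatMap(db.get) }

// Current fetcher body: N queries for N ids.
def fetchOld(ids: Seq[String]): Future[Seq[Team]] =
  Future.sequence(ids.map(findById)).map(_.flatten)

// Target fetcher body: one query per batch.
def fetchNew(ids: Seq[String]): Future[Seq[Team]] =
  findByIds(ids)

Await.result(fetchOld(Seq("t1", "t2")), 5.seconds)
val queriesOld = queryCount.get()               // 2 queries for 2 ids
Await.result(fetchNew(Seq("t1", "t2")), 5.seconds)
val queriesNew = queryCount.get() - queriesOld  // 1 query for 2 ids
```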
3. Create new fetchers
Add a dedicated `tenantsFetcher` for the 6 tenant searches that are currently not batched. Consider creating other fetchers as well (for plans or other items).
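A new fetcher follows the same pattern as the existing ones: a batch function per entity type. The sketch below is hypothetical (the `Tenant` type, `tenantDb`, and `tenantsByIds` are stand-ins); the commented-out lines show roughly how it would be wrapped with sangria, but only the plain batch function is exercised here:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

final case class Tenant(id: String, name: String)

// Stand-in for tenantRepo.findByIds in the real DataStore.
val tenantDb = Map("tn1" -> Tenant("tn1", "Acme"), "tn2" -> Tenant("tn2", "Globex"))
def tenantsByIds(ids: Seq[String]): Future[Seq[Tenant]] =
  Future(ids.flatMap(tenantDb.get))

// With sangria this would look roughly like (Ctx is the GraphQL context type):
//   val tenantsFetcher = Fetcher((ctx: Ctx, ids: Seq[String]) =>
//     ctx.dataStore.tenantRepo.findByIds(ids))(HasId(_.id))
// and the 6 tenant lookups would become tenantsFetcher.defer(tenantId).
val tenants = Await.result(tenantsByIds(Seq("tn1", "tn2")), 5.seconds)
```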
4. Convert direct resolvers to .defer()
Migrate the approximately 15 `findById` calls in the resolvers to `.defer()` calls on the appropriate fetcher.
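What `.defer()` buys can be simulated without sangria: instead of each resolver calling `findById` immediately (one query each), the deferred ids are collected and resolved in a single batch, which is what `DeferredResolver` does between execution steps. All names below (`Api`, `Team`, the maps) are illustrative:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global
import java.util.concurrent.atomic.AtomicInteger

final case class Api(id: String, teamId: String)
final case class Team(id: String, name: String)

val teams = Map("t1" -> Team("t1", "Core"), "t2" -> Team("t2", "Infra"))
val queries = new AtomicInteger(0)

def teamsByIds(ids: Seq[String]): Future[Seq[Team]] =
  Future { queries.incrementAndGet(); ids.distinct.flatMap(teams.get) }

// Three resolvers each need a team. With direct findById calls this is
// 3 queries; with deferral, the ids are gathered and resolved at once.
val apis = Seq(Api("a1", "t1"), Api("a2", "t2"), Api("a3", "t1"))
val deferredIds = apis.map(_.teamId)

val resolved = Await.result(teamsByIds(deferredIds), 5.seconds)
val queryTotal = queries.get() // 1 batch query instead of 3
```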
5. (Optional) Configure `maxBatchSizeOpt` on the fetchers
Limit batch sizes to avoid excessively large `$in` queries.
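The effect of a maximum batch size is simply to split a large id list into bounded chunks, so no single `$in` query grows unbounded. In sangria this is typically set through the fetcher's configuration (e.g. `FetcherConfig.maxBatchSize`, which backs the `maxBatchSizeOpt` field mentioned above); the sketch chunks by hand to show the effect, independent of any library:

```scala
// 250 ids with a batch limit of 100 yield three queries of 100, 100, 50.
val ids = (1 to 250).map(i => s"id-$i")
val maxBatchSize = 100

val batches = ids.grouped(maxBatchSize).toSeq
val batchSizes = batches.map(_.size)
```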
## Affected Files
- `app/fr/maif/daikoku/domain/SchemaDefinition.scala`: fetchers and resolvers
- DataStore / repos layer: adding `findByIds` methods