High memory usage and slow to update large numbers of parent documents #33

@mfen

Description

The underlying meteor-collection-hooks package performs two fetch()es on every update() for any collection that has an after.update hook defined:

  • the first to get the ids of all docs matching the selector
  • the second to iterate over these docs post-update and fire the after hooks

This can be slow for large numbers of docs and expensive on memory, since it does a full fetch() of all matching docs rather than iterating over the cursor.
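A minimal sketch of the two-fetch pattern described above, using a plain in-memory stand-in for a hooked collection. The class and field names (HookedCollection, fetchCount, afterHooks) are illustrative, not meteor-collection-hooks' actual internals, and a predicate function stands in for a Mongo selector:

```javascript
// Illustrative in-memory model of a collection whose update() has an
// after.update hook attached. Selectors are predicate functions here
// rather than Mongo selector objects, to keep the sketch self-contained.
class HookedCollection {
  constructor(docs) {
    this.docs = docs;       // array of {_id, ...} documents
    this.afterHooks = [];   // registered after.update callbacks
    this.fetchCount = 0;    // full fetches performed, for illustration
  }
  find(selector) {
    this.fetchCount += 1;   // every find() here is a full scan + fetch
    return this.docs.filter(selector);
  }
  afterUpdate(fn) {
    this.afterHooks.push(fn);
  }
  update(selector, modifier) {
    // fetch #1: collect the _ids of every doc matching the selector
    const ids = this.find(selector).map((d) => d._id);
    // apply the modifier to the matched docs
    this.docs.forEach((d) => {
      if (ids.includes(d._id)) modifier(d);
    });
    // fetch #2: re-fetch the updated docs so each after hook can see them
    const updated = this.find((d) => ids.includes(d._id));
    updated.forEach((doc) => this.afterHooks.forEach((h) => h(doc)));
  }
}
```

The point of the sketch: once any after.update hook exists, a single update() pays for two full fetches of the matched docs, regardless of how small the actual modification is.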

This denormalize package adds a parentCollection.after.update hook, but it also calls parentCollection.update() from the relevant childCollection mutation hooks to maintain the caches. The result is that any child document mutation triggers a chain of hooks ending in the 2x fetch(), e.g.:

childCollection.update()
  -> childCollection.after.update()
    -> parentCollection.update()
      -> parentCollection.after.update() // 2x fetch()

Related: Meteor-Community-Packages/meteor-collection-hooks#259

Suggestion

Denormalize simply needs to do a parentCollection.updateMany() to update the caches, without the extra pre-fetching of ids that the hooks perform to support arbitrary selectors. Perhaps this package should wrap the Mongo API mutators directly, similar to how the hooks tie in, so as to avoid the chain of hook logic. The one downside I see is exactly that: are there other hooks expected to be chained off the cache update, or even chained denormalization, that this would break? Perhaps this could be an opt-in alternative for maintaining the cache when the user does not need hooks/chaining triggered by the cache update.
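The opt-in direct path suggested above could look roughly like this in-memory sketch. Here updateDirect stands in for a raw multi-document update (e.g. the MongoDB driver's updateMany(), reachable in Meteor via collection.rawCollection(), which collection-hooks does not wrap); all names are assumptions, not this package's API:

```javascript
// In-memory sketch of the proposed opt-in cache update path. updateDirect
// applies the modifier in a single pass: no id pre-fetch and no post-update
// re-fetch, because no after hooks need to be fired for the cache write.
class CachedCollection {
  constructor(docs) {
    this.docs = docs;      // array of {_id, ...} documents
    this.fetchCount = 0;   // full fetches performed; stays 0 on this path
  }
  // hook-free multi-update; selector is a predicate for simplicity
  updateDirect(selector, modifier) {
    let matched = 0;
    for (const doc of this.docs) {
      if (selector(doc)) {
        modifier(doc);
        matched += 1;
      }
    }
    return matched; // count of docs updated, like updateMany's matchedCount
  }
}
```

The trade-off is exactly the one raised above: because nothing re-fetches the updated docs, no after hooks (and therefore no further chained denormalization) can run off this path, which is why it would need to be opt-in.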
