Conversation

@toddbaert toddbaert commented Oct 7, 2025

Adds "debounce" hook.

This is a utility "meta" hook, which can be used to debounce or rate-limit other hooks based on various parameters.
This can be especially useful for UI frameworks and SDKs that frequently re-render and re-evaluate flags (React, Angular, etc.).

The hook maintains a simple expiring cache with a fixed max size and keeps a record of recent evaluations based on a user-defined key-generation function (keySupplier).

Simply wrap your hook with the debounce hook by passing it as a constructor arg, and then configure the remaining options.
In the example below, we wrap a logging hook so that it only logs a maximum of once a minute for each flag key, no matter how many times that flag is evaluated.

const debounceHook = new DebounceHook<string>(loggingHook, {
  debounceTime: 60_000,             // how long to wait before the hook can fire again
  maxCacheItems: 100,               // max number of items to keep in the cache; if exceeded, the oldest item is dropped
});

// add the hook globally
OpenFeature.addHooks(debounceHook);

// or at a specific client
client.addHooks(debounceHook);

⚠️ Initially I implemented this with an LRU cache, but after Gemini pointed out some non-LRU behavior with my cache, I realized we don't actually want an LRU cache here, so I renamed it. We explicitly don't want to update recency on retrieval of items (something fundamental to LRUs), because that would mean that if a flag with the same cache key is evaluated over and over, it would NEVER fire the hook again; we simply want to rate-limit hook side effects, not prevent them forever.
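As a rough illustration of the distinction (all names here are hypothetical, not the actual implementation): reads return cached entries without refreshing their position or TTL, so insertion order alone drives eviction and a hot key still expires on schedule.

```typescript
// Sketch only: a fixed-size, expiring cache that deliberately does NOT
// refresh recency (or TTL) on reads -- unlike an LRU. Hypothetical names.
class FixedSizeExpiringCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private maxItems: number,
    private ttlMs: number,
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      // expired: drop the record so the wrapped hook can fire again
      this.entries.delete(key);
      return undefined;
    }
    // no re-insertion here: reads leave insertion order and TTL untouched
    return entry.value;
  }

  set(key: string, value: V): void {
    // when full, evict the oldest *insertion* (Map preserves insertion order)
    if (!this.entries.has(key) && this.entries.size >= this.maxItems) {
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

Because `get` never re-inserts, a flag evaluated in a tight loop still sees its entry expire after `ttlMs`, which is exactly the rate-limiting behavior described above.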

@toddbaert toddbaert requested review from a team as code owners October 7, 2025 19:35
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, which is a great utility. The implementation is mostly solid, but I've found a critical issue in the LRU cache implementation that could lead to incorrect cache eviction. I've also identified a few configuration issues with file paths that might break the build or IDE features. Additionally, there are some minor improvements for documentation, tests, and robustness that I've suggested. Overall, this is a great addition once these issues are addressed.

@toddbaert toddbaert marked this pull request as draft October 7, 2025 21:14
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from 47789bc to 5aa11ce Compare October 8, 2025 01:11
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert

/gemini review

@toddbaert toddbaert marked this pull request as ready for review October 8, 2025 01:14
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, which is a great utility for rate-limiting hook executions in environments with frequent re-renders. The implementation is solid, using a fixed-size expiring cache to manage debouncing. I've identified a few areas for improvement, mainly related to configuration paths and documentation clarity, including a couple of high-severity path issues in project.json and tsconfig.base.json that could affect the monorepo tooling. I've also suggested improvements to a test case to make it clearer and more accurate. Overall, this is a well-executed feature.

@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch 2 times, most recently from d073cf7 to e2586a1 Compare October 8, 2025 02:09

@guidobrei guidobrei left a comment


Nice job. Only 2 minor comments.


@lukas-reining lukas-reining left a comment


This looks good; I left some nits you can ignore.
I have a question regarding hook data that I would like to clarify before approving :)

}

before(hookContext: HookContext, hookHints?: HookHints) {
  this.maybeSkipAndCache(

I think we need to consider hookContext.hookData here. Since the stages are cached individually, the hook data would be missing if before is served from the cache while finally relies on it and is not cached yet.
The new OTEL Hooks in #1372 (which should even work in the web-sdk now, which we might not want to debounce but nevermind :D) rely on the Hook data, and an example like: https://github.com/open-feature/js-sdk/blob/main/packages/web/test/hooks-data.spec.ts#L45 would suffer from it.

I think that would especially apply in cases where we have thrashing on the cache, as then there could be very large gaps in TTL between the stages, resulting in a case where before is cached and after is not, calling after with empty hook data.

I see two options:

  1. Caching them all together instead of individually could fix this? Then we would not care about hook data in debounced cases.
  2. We could cache the hook data and see if the former stage has cached hook data. But I think this could become problematic as there might not only be one hook data set because of different combinations of hook data from (non-)cached before, error and after stages that can all contribute to the hook data observed in the finally stage.

In the end, I think I would prefer option 1, as I think the set of stages should always be the same when cached, which it mostly "nearly" is due to similar (not identical) TTLs when there are no evictions.

Does that make sense or am I getting it wrong @toddbaert?
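A minimal sketch of option 1, with hypothetical names: the debounce decision is made once per evaluation and shared by every stage, so hook data can never be half-populated across stages.

```typescript
// Sketch of "option 1": one debounce decision per evaluation, shared by all
// stages of the wrapped hook. Names are illustrative, not the merged code.
class DebounceGate {
  private seen = new Map<string, number>(); // cache key -> expiry timestamp

  constructor(private ttlMs: number) {}

  /**
   * Called once per evaluation (e.g. in `before`); the boolean is then reused
   * for after/error/finally, so the stages never diverge and hook data is
   * either fully present or the whole hook is skipped.
   */
  decide(key: string): boolean {
    const expiresAt = this.seen.get(key);
    if (expiresAt !== undefined && Date.now() < expiresAt) {
      return false; // debounced: skip every stage together
    }
    this.seen.set(key, Date.now() + this.ttlMs);
    return true; // run every stage together
  }
}
```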

@toddbaert toddbaert Oct 8, 2025


Good point.

I think I also agree with 1...

@beeme1mr ?

@toddbaert toddbaert Oct 8, 2025


I've changed the options so there's only a single (optional) cacheKeySupplier function. The cached entry now has a result for each stage - this is needed because we want all stages to cache and expire at the same time as a single entry to prevent disjunction.
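An illustrative shape for such an entry (field names are assumptions, not the merged code):

```typescript
// Illustrative shape of a single cache entry that holds one result per stage,
// so all stages are cached and expire together. Field names are assumptions.
interface StageResult {
  ran: boolean;
  error?: unknown;
}

interface DebounceCacheEntry {
  expiresAt: number; // one expiry for the whole evaluation
  before?: StageResult;
  after?: StageResult;
  error?: StageResult;
  finally?: StageResult;
}

// example entry after a successful `before` stage
const example: DebounceCacheEntry = {
  expiresAt: Date.now() + 60_000,
  before: { ran: true },
};
```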

/**
 * Function to generate the cache key for the before stage of the wrapped hook.
 * If the cache key is found in the cache, the hook stage will not run.
 * If not defined, the DebounceHook will no-op for this stage (inner hook will always run for this stage).

Why don't we default to using the 'flagKey'? It seems like that would be the most common use case. It looks like the hook doesn't do anything if you don't set the beforeCacheKeySupplier, but it's optional. I think we should make it either required or set a default. My preference would be to set a default.


I think we should make it either required or set a default. My preference would be to set a default.

We can do this, but the awkward part is that then it would debounce every stage, and if you want to NOT debounce one, you'd have to explicitly assign undefined.

Why don't we default to using the 'flagKey'?

I think flag key is almost certainly NOT what you'd want for after/finally hooks - you would almost certainly want to involve the value/variant as well... but then we have to ask which of those we should use, considering variant isn't always present. All these questions are why I didn't provide a default.
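For illustration, a hypothetical key supplier for an after/finally stage might look like this; it prefers the variant when present and falls back to the serialized value (the details shape and function name are assumptions):

```typescript
// Hypothetical key supplier for an after/finally stage, illustrating the
// trade-off above: prefer the variant when present, fall back to the value.
// The shape of the details object and the function name are assumptions.
interface EvalDetailsLike {
  flagKey: string;
  variant?: string;
  value: unknown;
}

const afterKeySupplier = (details: EvalDetailsLike): string =>
  details.variant !== undefined
    ? `${details.flagKey}:${details.variant}`
    : `${details.flagKey}:${JSON.stringify(details.value)}`;
```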


This comment has implications here.


Especially in light of @lukas-reining 's comment above, I'm starting to agree with a single supplier, and providing a default just based on flag-key.


I've done this... () => flagKey is now the default.


If we want to target the web only, please make that very clear at the top of the file.

We could consider making this work on the server and client, like @lukas-reining did for the telemetry hooks. Since this hook is so flexible, I could see some use cases on the server, specifically when running it with a logging hook.


Ya I see no reason why this can't be for both, I can make that change.


This is great!
We need to cache the return value of the before hook too then. But if we cache all stages together as proposed for the other issue it does not matter anymore :)

@toddbaert toddbaert Oct 8, 2025


I don't think this is actually possible.

The problem is that server hooks are possibly async, so to cache the return value of the before hook (and also to generally handle async errors) the inner hook has to be awaited (forcing the debounce hook to be async) - but the web doesn't support async hooks. It's basically the same reason we don't have a shared hook implementation in the web/server SDKs (only a BaseHook type) - the async support of the server is incompatible with the web hook. This isn't a problem for the OTel hooks because they are fire-and-forget, so nothing is actually awaited.

I COULD have a shared base version of this package and server/web versions that share most of the code, but like I said... I think the same thing preventing us from having a single Hook in both SDKs prevents us from creating a single version of this hook for both SDKs.

Open to suggestions here, or to be told I'm wrong.


Maybe there is a way... 🤔


OK - I've done this, a bit of tricky async code but totally possible. In the test suite I instantiate and wrap both web SDK and server SDK hooks, and test that they work async and sync, in the server case.
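The kind of thenable-branching this requires can be sketched roughly like this (names and signature are illustrative, not the merged implementation):

```typescript
// Rough sketch of branching on a maybe-thenable hook result so one wrapper
// serves both the sync web SDK and the async server SDK. Names illustrative.
function cacheWhenSettled<T>(
  result: T | Promise<T>,
  onSuccess: (value: T) => void,
  onError: (err: unknown) => void,
): T | Promise<T> {
  if (result && typeof (result as Promise<T>).then === 'function') {
    // async (server SDK): return the *chained* promise so callers still
    // observe rejections, and cache only once the inner hook settles
    return (result as Promise<T>).then(
      (value) => {
        onSuccess(value);
        return value;
      },
      (err) => {
        onError(err);
        throw err;
      },
    );
  }
  // sync (web SDK): cache immediately and return the plain value
  onSuccess(result as T);
  return result;
}
```

The sync path stays fully synchronous, which is what keeps the wrapper usable from the web SDK.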

@toddbaert

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Thank you for adding the debounce hook. This is a great utility for performance-sensitive applications. I've reviewed the code and found a few issues, including some critical bugs in the caching logic and a few configuration problems. I've also suggested a refactoring to simplify the implementation and improve maintainability. Please take a look at my comments.

@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert

/gemini review

toddbaert and others added 12 commits October 9, 2025 16:19
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from 366a198 to 4d53cea Compare October 9, 2025 20:19
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, a useful utility for rate-limiting hook executions. The implementation is generally well-done, with good test coverage for various scenarios including SDK compatibility. However, I've identified a critical bug in the handling of asynchronous hooks that needs to be addressed. Additionally, there are a few configuration issues in project.json and tsconfig.base.json related to file paths that will likely cause build or tooling problems. I've also pointed out a discrepancy in package.json regarding peer dependencies and suggested a refactoring to improve the clarity of the caching logic. Lastly, there's a minor typo in the README.

@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from 03697d0 to 477c601 Compare October 9, 2025 20:31
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@toddbaert

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, a utility for rate-limiting other hooks, which is a great addition. The implementation is mostly solid, including a custom FixedSizeExpiringCache. However, I've identified a critical issue in the asynchronous handling within the hook that could lead to unhandled promise rejections. Additionally, there are a few configuration and documentation inconsistencies that should be addressed to ensure correctness and maintainability.

Comment on lines +183 to +198
if (maybePromiseOrContext && typeof maybePromiseOrContext.then === 'function') {
  // async hook result; cache after promise resolves
  maybePromiseOrContext
    .then((maybeContext) => {
      this.cacheSuccess(cacheKey, stage, got, maybeContext);
      return maybeContext;
    })
    .catch((error) => {
      this.cacheError(cacheKey, stage, got, error);
      throw error;
    });
} else {
  // sync hook result; cache now
  this.cacheSuccess(cacheKey, stage, got, maybePromiseOrContext as void | EvaluationContext);
}
return maybePromiseOrContext;

critical

There's a critical issue in how asynchronous hook results are handled. The promise returned by hookCallback is not being properly chained. The .then() and .catch() calls are 'fire-and-forget', meaning the original promise is returned without the caching logic attached to its resolution path. If the promise rejects, the throw error inside your .catch() will lead to an unhandled promise rejection.

To fix this, you should return the new promise created by the .then().catch() chain for async hooks, and handle the sync case separately.

Suggested change
if (maybePromiseOrContext && typeof maybePromiseOrContext.then === 'function') {
  // async hook result; cache after promise resolves
  maybePromiseOrContext
    .then((maybeContext) => {
      this.cacheSuccess(cacheKey, stage, got, maybeContext);
      return maybeContext;
    })
    .catch((error) => {
      this.cacheError(cacheKey, stage, got, error);
      throw error;
    });
} else {
  // sync hook result; cache now
  this.cacheSuccess(cacheKey, stage, got, maybePromiseOrContext as void | EvaluationContext);
}
return maybePromiseOrContext;
if (maybePromiseOrContext && typeof maybePromiseOrContext.then === 'function') {
  // async hook result; cache after promise resolves
  return maybePromiseOrContext
    .then((maybeContext) => {
      this.cacheSuccess(cacheKey, stage, got, maybeContext);
      return maybeContext;
    })
    .catch((error) => {
      this.cacheError(cacheKey, stage, got, error);
      throw error;
    });
} else {
  // sync hook result; cache now
  this.cacheSuccess(cacheKey, stage, got, maybePromiseOrContext as void | EvaluationContext);
  return maybePromiseOrContext;
}


Is it just me or are these equivalent?

@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 9, 2025
@toddbaert

@beeme1mr @lukas-reining ready for re-review... I'm a bit worried about the Gemini comment here though. Either it's broken or I've been working too long.

});
});

describe('SDK compatibility', () => {

In this describe block I explicitly test compatibility with both SDKs, including async/sync cases with returned context.

Comment on lines +3 to +4
import type { Hook as WebSdkHook } from '@openfeature/web-sdk';
import type { Hook as ServerSdkHook } from '@openfeature/server-sdk';
@toddbaert toddbaert Oct 9, 2025


Import both types of hooks for testing.
