Conversation

@aclarembeau commented Jul 11, 2025

Add configurable timeout option for consume operations

Hello :) And thank you for the impressive work the team has done on this amazing library.
This is my first contribution, so feel free to tell me if I did anything wrong. I'm happy to improve the format, documentation, or code of this PR.

What I did

Adds a new timeout parameter to the consume method that makes the operation fail automatically after a specified number of milliseconds, giving better control in time-critical rate-limiting scenarios.
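To make the shape concrete, here is a hypothetical example (the timeout option in the third argument is what this PR proposes, not released API; the setup values are arbitrary):

import Redis from "ioredis";
import { RateLimiterRedis } from "rate-limiter-flexible";

const limiter = new RateLimiterRedis({
  storeClient: new Redis(),
  points: 10,
  duration: 1,
});

// Hypothetical: the `timeout` option below is this PR's proposal.
try {
  await limiter.consume("clientIp", 1, { timeout: 500 });
  // allowed: handle the request
} catch (err) {
  // rejected: points exhausted, or (with this PR) no reply from the store within 500 ms
}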

Motivation

Rate limiting is often time-critical, and not all providers offer reliable global timeouts that can guarantee request processing within specific time constraints. This enhancement ensures that applications can implement robust fallback strategies when rate limiting operations take too long, improving overall system reliability and user experience.

Small note

In the original version of this PR (commit 1), I managed to achieve this without an else branch, since in JS promises start running as soon as they are created. But after a bit of thought, I added an explicit else branch (commit 2), which I find a bit easier to read, even though it is only needed for readability.
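To illustrate the point about eagerness (a generic sketch, not code from this PR):

// Promises are eager: the executor runs at construction time, so "started"
// logs before any .then() or await is attached. That is why commit 1 worked
// without an else branch: the consume promise was already running while the
// timeout race was being set up.
const pending = new Promise<void>((resolve) => {
  console.log("started");
  setTimeout(resolve, 100);
});
console.log("constructed"); // printed after "started"
await pending;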

@animir (Owner) commented Jul 12, 2025

Hi @aclarembeau Thank you. It's an interesting concept.
We should think about how often users meet this obstacle. Is it an edge case?

From your experience, what databases don't provide request timeout options?

@aclarembeau (Author)

I can give Redis as an example. If you use rate-limiter-flexible as the rate-limiting solution for a low-latency API and your Redis instance stops responding, you may have to wait for a full TCP keep-alive cycle plus one TCP read timeout before the client detects the connection as dead. As this is not configurable per request in node-redis, a public request may have to wait several minutes before being processed (for instance by the insurance limiter). With this option, we can introduce a timeout for providers, like Redis, that have no notion of a per-request timeout.

An alternative would have been to add a timeout only for Redis, but I suspect other engines may benefit from it too (for instance, I don't see such a timeout among the Postgres options in the lib).
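For context, this is the kind of guard applications currently have to hand-roll around every call (a sketch only; withTimeout is a hypothetical helper, not part of the library):

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
  });
  // Whichever settles first wins; the timer is cleared either way.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// e.g. fail fast instead of waiting minutes on a dead Redis connection:
// await withTimeout(limiter.consume(clientIp), 500);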

Thank you

@atetta commented Jul 14, 2025

@aclarembeau Thank you for the explanation.
I checked the node-redis documentation. They recently introduced an option for that; take a look at this PR: redis/node-redis#3008
ioredis has a commandTimeout option.
pg has a query_timeout option.
My sense is that timeouts should be handled at the level of the database client and/or server, not at the level of a 3rd-party package.
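For reference, a sketch of how those client-level options are set (timeout values are illustrative):

import Redis from "ioredis";
import { Client } from "pg";

// ioredis: reject any command that takes longer than 1 s.
const redis = new Redis({ commandTimeout: 1000 });

// pg: abort a query if no response arrives within 1 s.
const pg = new Client({ query_timeout: 1000 });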

What do you think?

@aclarembeau (Author)

Oh, indeed, if individual clients support it, there is no need for a global timeout :) In that case, we can close this PR.
Thank you

@aclarembeau (Author)

After having a second look, it seems that, unfortunately, the node-redis option doesn't give enough control.

Quoting their change:
Add command timeout in the command options. It can be set via the global options when creating a client or per command when using the generic sendCommand function
Timeout only works while the command is still pending to be sent. Once a command is sent over the wire, the timeout is deleted and is never going to trigger. This decision could be changed if we deem its not good enough

That means that if a command has been sent but the response is still pending, it will hang even with the node-redis timeout.

@animir (Owner) commented Jul 16, 2025

@aclarembeau Hi.

Once a command is sent over the wire

Can a command be sent to a stale Redis server? I presume it won't accept TCP packets in that case, though I don't know that for a fact.
Anyway, I think we should avoid complicating the core code of a 3rd-party package (rate-limiter-flexible) to cover a specific database case.

I suggest a slightly more isolated approach.
There could be a new wrapper for a limiter, e.g. RLWrapperTimeouts (plural, in case specific timeouts are added in the future). The option could be called callTimeoutMs or any other name you think is suitable. Keep in mind more timeouts could be added later.
Take a look at RLWrapperBlackAndWhite for inspiration.
RLWrapperTimeouts would follow the same limiter interface defined in index.d.ts:

export class RLWrapperBlackAndWhite extends RateLimiterAbstract {
    constructor(opts: IRLWrapperBlackAndWhiteOptions);
}
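A declaration for the new wrapper could mirror that shape (the interface and option names below are suggestions, not existing API):

export interface IRLWrapperTimeoutsOptions {
    limiter: RateLimiterAbstract;
    callTimeoutMs?: number;
}

export class RLWrapperTimeouts extends RateLimiterAbstract {
    constructor(opts: IRLWrapperTimeoutsOptions);
}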

Thoughts?

@aclarembeau (Author)

You can have a situation where the command is issued but the response never comes back, for instance if the machine dies in the middle of processing it.

I'm not against the wrapper approach; that would be a nice way to extend the behavior of the library. But I'm wondering: is this something that is currently possible with the exposed API, or do we need to implement some kind of wrapper strategy?

Thank you

@animir (Owner) commented Jul 17, 2025

Good question.
There are no requirements for wrappers right now.

A wrapper should provide a way to set a limiter, e.g.

const limiterWithTimeouts = new RLWrapperTimeouts({
  limiter: new RateLimiterRedis({ storeClient: redisClient }),
  callTimeoutMs: 1000,
})

It would be good to implement all seven methods required by RateLimiterAbstract: consume, penalty, reward, etc.
That's basically it.
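A minimal sketch of such a wrapper, assuming the names suggested above (all hypothetical; a real implementation would extend RateLimiterAbstract and forward its constructor options):

class RLWrapperTimeouts {
  constructor(private opts: { limiter: any; callTimeoutMs: number }) {}

  // Race a delegated call against a timer so a stalled store rejects quickly.
  private race<T>(p: Promise<T>): Promise<T> {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error(`limiter call timed out after ${this.opts.callTimeoutMs} ms`)),
        this.opts.callTimeoutMs
      );
    });
    return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
  }

  // The seven methods required by the limiter interface, all guarded the same way.
  consume(key: string | number, points?: number, options?: object) {
    return this.race(this.opts.limiter.consume(key, points, options));
  }
  penalty(key: string | number, points?: number) {
    return this.race(this.opts.limiter.penalty(key, points));
  }
  reward(key: string | number, points?: number) {
    return this.race(this.opts.limiter.reward(key, points));
  }
  block(key: string | number, secDuration: number) {
    return this.race(this.opts.limiter.block(key, secDuration));
  }
  set(key: string | number, points: number, secDuration: number) {
    return this.race(this.opts.limiter.set(key, points, secDuration));
  }
  get(key: string | number) {
    return this.race(this.opts.limiter.get(key));
  }
  delete(key: string | number) {
    return this.race(this.opts.limiter.delete(key));
  }
}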

@florian-schunk

Hi, I would also be interested in this feature. What is the status right now?

@animir (Owner) commented Aug 7, 2025

@aclarembeau Hi, do you plan to continue with this in the near future?

@animir (Owner) commented Aug 23, 2025

@florian-schunk Hi, it seems like aclarembeau isn't going to implement the wrapper.
Would you like to create it based on these comments #315 (comment) and #315 (comment)?

animir marked this pull request as draft on September 5, 2025, 09:49