Conversation

@albertoperdomo2

Summary

This PR adds a new rate type, incremental, to the suite of benchmarks. This rate type starts at a specified initial rate (--start-rate) and linearly increases the request rate over time by --increment-factor, with an optional --rate-limit to cap the maximum rate. As the name implies, this simulates load ramping up over time.

Reimplementation of #291 after the new release.

Details

  • Added the new CLI flags mentioned above: --start-rate, --increment-factor and --rate-limit.
  • Added a new IncrementalProfile profile class.
  • Implemented the AsyncIncrementalStrategy scheduler strategy, which handles the ramp logic and includes the optional initial burst.

Test Plan

Related Issues

  • Resolves #

  • "I certify that all code in this PR is my own, except as noted below."

Use of AI

  • Includes AI-assisted code completion
  • Includes code generated by an AI application
  • Includes AI-generated tests (NOTE: AI written tests should have a docstring that includes ## WRITTEN BY AI ##)

@sjmonson
Collaborator

@albertoperdomo2 Should #291 be closed?


Generally we don't like to add top-level arguments that map only to a specific use case. Many of the other components have a "kwargs" argument, so it might make sense to add a --profile-kwargs. Also, --rate needs to map to something, so maybe make it increment_factor (or start_rate, whichever makes more sense to sweep over).


Another option might be to modify --rate to take tuples of (start_rate, increment_factor).

cc: @markurtz @jaredoconnell to weigh in on this.
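
The tuple option suggested above could look like the following parser sketch; the comma-separated format and the function name are assumptions for illustration, not a decided CLI design:

```python
def parse_rate(value: str) -> tuple[float, float]:
    """Parse --rate given as 'start_rate,increment_factor' (illustrative).

    A plain number is treated as a constant rate with no ramp.
    """
    start, _, increment = value.partition(",")
    if not increment:
        return float(start), 0.0
    return float(start), float(increment)
```
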

else:
    increment = 1.0 / next_rate

self._process_offset += increment

Strategies are shared across threads/processes, any assignments need to be atomic or locked with self._processes_lock. See other Strategies for examples.
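
A minimal sketch of the locked update the reviewer is asking for, assuming a multiprocessing-style lock; the class and attribute names here mirror the snippet above but are illustrative, not the project's actual Strategy API:

```python
import multiprocessing


class SharedOffset:
    """Illustrative shared offset guarded by a process lock."""

    def __init__(self) -> None:
        self._processes_lock = multiprocessing.Lock()
        # 'd' = C double; shared across processes, mutated only under the lock
        self._process_offset = multiprocessing.Value("d", 0.0, lock=False)

    def advance(self, next_rate: float) -> float:
        increment = 1.0 / next_rate
        # Read-modify-write must happen atomically, hence the lock.
        with self._processes_lock:
            self._process_offset.value += increment
            return self._process_offset.value
```

The point is that `+=` on shared state is a read-modify-write, so two workers can interleave and lose an update unless the whole operation holds the lock.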

@albertoperdomo2 albertoperdomo2 force-pushed the feature/incremental-load-reimpl branch from 6cb8c60 to ad95da4 Compare November 20, 2025 08:09
@jaredoconnell
Collaborator

Do you think your requirements here could be handled with the rampup_duration field in the concurrent or throughput strategies, or by implementing rampup_duration inside of constant?
How important is the initial burst feature?

@albertoperdomo2
Author

@jaredoconnell I think the best bet would be to implement it via rampup_duration inside of constant, but maybe @sjmonson has something else to add. IMO this PR can be closed given #549, as I have left it unattended for way too long. Thanks Jared and Sam!

@jaredoconnell
Collaborator

Superseded by #549

jaredoconnell added a commit that referenced this pull request Jan 23, 2026
## Summary

Allows a linear ramp-up of the constant rate profile.

## Test Plan

The simplest test is to run a short constant test at 4 requests per
second with a long rampup; you can see it ramp as expected.
There are also new tests.
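
The ramp-up behavior described in this test plan can be sketched as follows; rampup_duration is the field named earlier in the thread, but this helper is an illustration, not the merged implementation:

```python
def constant_rate_with_rampup(
    elapsed_s: float, target_rate: float, rampup_duration: float
) -> float:
    """Scale the rate linearly from 0 up to target_rate over rampup_duration
    seconds, then hold it constant (illustrative sketch)."""
    if rampup_duration <= 0 or elapsed_s >= rampup_duration:
        return target_rate
    return target_rate * (elapsed_s / rampup_duration)
```

With a target of 4 req/s and a 60-second ramp, the rate would sit at 2 req/s halfway through the ramp and hold at 4 req/s afterwards.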

## Related Issues

Fulfills part of the goals of #428 

---

- [x] "I certify that all code in this PR is my own, except as noted
below."

## Use of AI

- [ ] Includes AI-assisted code completion
- [x] Includes code generated by an AI application
- [x] Includes AI-generated tests (NOTE: AI written tests should have a
docstring that includes `## WRITTEN BY AI ##`)