Feature Description
We have a scenario where we want to replay data recorded in a CSV file.
The file is hundreds of gigabytes, so it cannot be loaded into a SharedArray.
We have considered sharing the CSV and having each VU process only the rows whose index matches its VU ID, modulo the total VU count.
This approach has two problems:
- The VU count needs to be fixed, but our scenario sees VUs varying from the hundreds to the tens of thousands depending on the inputs.
- We need to process the rows sequentially, as they are related to one another.
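To make the limitation concrete, here is a minimal sketch (plain JavaScript, not k6 API; `rowsForVU` and the sample rows are hypothetical) of the modulo-partitioning approach described above. It only works when the total VU count is fixed up front, and each VU sees a non-contiguous slice of the file, which breaks any ordering relationship between rows:

```javascript
// Illustrative sketch of modulo partitioning: each VU takes only the
// rows whose index matches its (1-based) VU ID modulo the VU count.
function rowsForVU(rows, vuId, totalVUs) {
  return rows.filter((_, idx) => idx % totalVUs === vuId - 1);
}

const rows = ['r0', 'r1', 'r2', 'r3', 'r4', 'r5'];
// VU 1 of 3 gets rows r0 and r3 -- a strided, non-sequential subset.
console.log(rowsForVU(rows, 1, 3)); // -> [ 'r0', 'r3' ]
```

If `totalVUs` changes mid-run, every VU's slice changes with it, which is exactly the first problem listed above.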
Suggested Solution (optional)
Two solutions come to mind:
- Pushing the data to k6 over a TCP socket.
- Polling a Kafka stream. As we have tens of thousands of VUs, the implementation would need to share a single connection to Kafka across all of them.
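The shared-connection idea in the second option could look something like the following sketch (plain JavaScript, not a real Kafka client or k6 API; `SharedFeed` is a hypothetical name). One consumer owns the connection and fills a cursor-backed feed; every VU then claims the next unclaimed record, so records are handed out sequentially regardless of how many VUs there are:

```javascript
// Illustrative sketch: a single shared "connection" feeds one record
// stream, and each of many VUs pulls the next unclaimed record from it.
class SharedFeed {
  constructor(records) {
    this.records = records; // stands in for records read off one Kafka consumer
    this.cursor = 0;        // next unclaimed record
  }
  next() {
    // Each caller, whichever VU it is, gets the next record exactly once.
    return this.cursor < this.records.length ? this.records[this.cursor++] : null;
  }
}

const feed = new SharedFeed(['a', 'b', 'c']);
console.log(feed.next(), feed.next(), feed.next(), feed.next()); // a b c null
```

The key property is that the VU count never appears in the consumption logic, so it can vary freely during the run.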
Already existing or connected issues / PRs (optional)
No response