We have (olmv0) / might have (olmv1) commands that need to perform multiple operations as part of their logic, e.g.:
- when deleting/updating selected resources
- when waiting for deleted/updated resources to reach expected state
In those situations, it might be a good idea to at least provide an option for the user to run those operations concurrently, using a fixed pool of workers. This would speed things up while avoiding triggering Kubernetes API rate limits.
Context: #219 (comment)
Depends in part on how we want to handle waiting for resources (#220 (comment)) and on how we'll approach filtering options.