Description
Is your feature request related to a problem? Please describe.
When many LLM rules are in use, performance can degrade because rules are processed sequentially.
Describe the solution you'd like
To improve performance, RuleEngine should expose a configuration option that enables concurrent processing of rules. This will have to consider the dependency chain to ensure that dependent rules are fired in order, while allowing concurrent processing of independent rules.
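One possible shape for this (a minimal sketch, not the actual RuleEngine API — the `run_rules`, `deps`, and `evaluate` names are assumptions): process the dependency DAG layer by layer, running all rules whose dependencies are already satisfied concurrently.

```python
import asyncio

async def run_rules(rules, deps, evaluate):
    """Run rules concurrently, one dependency layer at a time.

    rules:    iterable of rule names
    deps:     dict mapping a rule to the set of rules it depends on
    evaluate: async callable invoked once per rule
    """
    remaining = set(rules)
    done = set()
    while remaining:
        # Rules whose dependencies are all satisfied may run concurrently.
        ready = [r for r in remaining if set(deps.get(r, ())) <= done]
        if not ready:
            raise ValueError("dependency cycle among rules")
        await asyncio.gather(*(evaluate(r) for r in ready))
        done.update(ready)
        remaining.difference_update(ready)
```

With `deps = {"b": {"a"}, "c": {"a"}}`, rule `a` fires first, then `b` and `c` run concurrently in the second layer.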
For LLM rules, especially large rulesets, this should also consider how to leverage model provider batch APIs to reduce cost and avoid rate limiting.
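A hedged sketch of how batching could interact with the dependency constraint (the `plan_batches` helper and `batch_size` parameter are hypothetical, not part of any existing API): rules in the same dependency layer are independent, so their prompts can be grouped into a single batch request to the model provider, reducing per-request overhead and rate-limit pressure.

```python
def plan_batches(rules, deps, batch_size):
    """Group independent rules into batches for a provider batch API.

    Rules within one batch share no dependency relationship, so their
    prompts could be submitted together in a single batch request.
    Returns a list of batches in the order they must be submitted.
    """
    remaining = list(rules)
    done = set()
    batches = []
    while remaining:
        # Rules whose dependencies are satisfied form the next layer.
        ready = [r for r in remaining if set(deps.get(r, ())) <= done]
        if not ready:
            raise ValueError("dependency cycle among rules")
        # Split each layer into provider-sized batch requests.
        for i in range(0, len(ready), batch_size):
            batches.append(ready[i:i + batch_size])
        done.update(ready)
        remaining = [r for r in remaining if r not in done]
    return batches
```

For example, with four rules where `c` depends on `a` and `d` depends on `b`, the planner emits `[["a", "b"], ["c", "d"]]`: two batch requests instead of four individual calls.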
Describe alternatives you've considered
Using fewer LLM rules, but that defeats the point of Vulcan.
Additional context
This feature is needed to scale Vulcan to large rulesets.