Concurrent rule evaluation #29

@segfly

Description

Is your feature request related to a problem? Please describe.
When a ruleset contains many LLM rules, evaluation can be slow because rules are processed sequentially.

Describe the solution you'd like
To improve performance, RuleEngine should offer a configuration option that enables concurrent processing of rules. This will have to consider the dependency chain so that dependent rules still fire in order, while independent rules are processed concurrently.
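One way to sketch this: topologically layer the dependency graph and fire each layer's rules concurrently. Note that `evaluate_concurrently`, the rule-callable shape, and the `deps` mapping below are all hypothetical illustrations, not Vulcan's actual RuleEngine API.

```python
# Hypothetical sketch of dependency-aware concurrent rule evaluation.
# A rule is modeled as a zero-arg callable returning an awaitable;
# `deps` maps a rule name to the names it depends on.
import asyncio
from typing import Awaitable, Callable

async def evaluate_concurrently(
    rules: dict[str, Callable[[], Awaitable[object]]],
    deps: dict[str, set[str]],
) -> dict[str, object]:
    """Fire rules in dependency order; rules in the same layer run concurrently."""
    remaining = {name: set(deps.get(name, ())) for name in rules}
    results: dict[str, object] = {}
    while remaining:
        # Rules whose dependencies are all satisfied form the next layer.
        ready = [n for n, d in remaining.items() if d.issubset(results)]
        if not ready:
            raise ValueError("dependency cycle among rules")
        layer = await asyncio.gather(*(rules[n]() for n in ready))
        for name, value in zip(ready, layer):
            results[name] = value
            del remaining[name]
    return results
```

The same layering could also be derived with the standard library's `graphlib.TopologicalSorter`; the point is only that independent rules share an `asyncio.gather` call while dependents wait for the previous layer.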

For LLM rules, especially in large rulesets, this should also consider leveraging model provider batch APIs to reduce cost and avoid rate limiting.
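Independent LLM rules could be grouped into batched provider requests along these lines. Here `batch_complete` is a stand-in for whatever batch endpoint a given provider exposes; it and `evaluate_llm_rules` are assumptions for illustration only.

```python
# Hypothetical sketch: group independent LLM-rule prompts into batches
# so each provider call covers many rules, cutting per-request overhead
# and keeping request volume under rate limits.
from typing import Callable, Sequence

def chunk(items: Sequence[str], size: int) -> list[Sequence[str]]:
    """Split items into fixed-size groups, one batch request each."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def evaluate_llm_rules(
    prompts: Sequence[str],
    batch_complete: Callable[[Sequence[str]], list[str]],  # provider batch call (assumed)
    batch_size: int = 20,
) -> list[str]:
    results: list[str] = []
    for group in chunk(prompts, batch_size):
        results.extend(batch_complete(group))
    return results
```

Real batch APIs are typically asynchronous (submit, then poll for results), so a production version would likely integrate with the concurrency scheduling rather than block per batch.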

Describe alternatives you've considered
Fewer LLM rules, but that defeats the point of Vulcan.

Additional context
This feature is needed for scaling.
