[FEATURE] Unified flagd-evaluator PoC (Java/Python/...) – reduced maintenance, PoC, and reference #1842

@aepfli

Description

Requirements

This issue summarizes the initial efforts around the new Rust-based "flagd-evaluator" PoC and the integration of this unified implementation with Java and Python SDKs.

References:

Highlights:

  • The flagd-evaluator is a Rust implementation compiled to WASM, designed for reuse across SDKs and languages via thin WASM adapters.
  • A single unified logic core reduces the maintenance burden by replacing the divergent, duplicated implementations currently kept in each language SDK.
  • The evaluator logic was largely copied and adapted from the existing Rust flagd provider, acknowledging the original authorship and design origins.
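To make the adapter idea concrete, here is a minimal Java sketch of how an SDK might hide the shared evaluator behind one small surface. The interface name, method signature, and the string-scanning stub are hypothetical illustrations so the sketch runs without a WASM runtime; they are not the actual PoC API.

```java
// Hypothetical adapter interface: each language SDK wraps the shared
// WASM evaluator behind one small surface instead of reimplementing
// the JsonLogic targeting engine natively.
interface FlagEvaluator {
    // Takes the flag configuration and evaluation context as JSON and
    // returns the resolved variant (simplified for illustration).
    String evaluate(String flagJson, String contextJson);
}

// Stand-in for the real WASM-backed implementation: it only extracts a
// "defaultVariant" field by string scanning, so no WASM engine or JSON
// library is needed to run the sketch.
class StubWasmEvaluator implements FlagEvaluator {
    @Override
    public String evaluate(String flagJson, String contextJson) {
        String key = "\"defaultVariant\":\"";
        int i = flagJson.indexOf(key);
        if (i < 0) return "error";
        int start = i + key.length();
        return flagJson.substring(start, flagJson.indexOf('"', start));
    }
}

public class AdapterSketch {
    public static void main(String[] args) {
        FlagEvaluator evaluator = new StubWasmEvaluator();
        String flag = "{\"defaultVariant\":\"on\",\"variants\":{\"on\":true,\"off\":false}}";
        System.out.println(evaluator.evaluate(flag, "{}")); // prints "on"
    }
}
```

The point of the pattern is that only the thin `FlagEvaluator` boundary is SDK-specific; everything behind it is the one shared WASM module.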

Disclaimer:
These efforts are strictly a proof of concept (PoC) and side-project exploration, intended to validate the unified-evaluator approach.
All implementations referenced are experimental and incomplete, and require further evaluation, polish, and design discussion.
Please do not consider any of the PoC integrations production-ready! They are shared to invite community feedback and provoke discussion on future direction.
More rigorous review and additional time will be needed to make these approaches robust.

Next Steps / Callouts:

  • Seeking feedback from maintainers and users on this approach.
  • Questions around performance, API flexibility, and safety/validation still need thorough investigation.
  • Not intended for production without further maturation.

Summary: This unified approach should lead to easier updates, reduced maintenance, and more consistent behavior across SDKs. Community input is welcome at this early stage.

Performance

Benchmark Comparison: WASM vs Native JsonLogic in Java

Comprehensive JMH benchmarks comparing the new WASM evaluator against the existing native Java JsonLogic resolver:

| Scenario | Old (JsonLogic) | New (WASM) | Notes |
| --- | --- | --- | --- |
| Simple flag (no targeting) | 0.019 µs | 3.88 µs | WASM call overhead |
| Empty context | 0.019 µs | 3.94 µs | Same as simple flag |
| Targeting, no match (small ctx) | 3.66 µs | 14.2 µs | ~4x slower |
| Targeting match (1000+ ctx entries) | 345 µs | 1592 µs | ~5x slower |
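A quick sanity check of what the table implies, using only the numbers from the rows above: the simple-flag gap is essentially the fixed WASM call overhead, while the large-context row is dominated by a multiplicative (serialization-driven) slowdown rather than the fixed cost.

```java
public class OverheadCheck {
    public static void main(String[] args) {
        // Per-call costs from the "Simple flag" row, in microseconds.
        double nativeSimple = 0.019, wasmSimple = 3.88;
        // With no targeting rules to evaluate, the difference is almost
        // entirely the cost of crossing into WASM.
        double fixedOverhead = wasmSimple - nativeSimple;
        System.out.printf("baseline WASM overhead ~= %.2f us%n", fixedOverhead);

        // Large-context row: the ratio, not the fixed ~4 us, dominates,
        // because context serialization scales with context size.
        double ratio = 1592.0 / 345.0;
        System.out.printf("large-context slowdown ~= %.1fx%n", ratio);
    }
}
```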

Key Findings:

  • WASM has a ~3–4 µs baseline overhead per call (unavoidable)
  • For complex targeting with large contexts, context serialization dominates
  • Short-circuit optimization helps significantly for flags without targeting
  • Pre-allocated buffers eliminate per-evaluation allocation overhead
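The last two findings can be sketched as host-side patterns. Both the targeting-presence check and the reusable buffer field below are illustrative assumptions about how such optimizations could look in a Java host, not the PoC's actual code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of two host-side optimizations: short-circuiting flags that have
// no targeting rules (skips the WASM round trip entirely) and reusing one
// pre-allocated buffer for context serialization (avoids per-call garbage).
public class EvaluationFastPath {
    // Hypothetical reusable buffer: grown on demand, kept across calls.
    private ByteBuffer contextBuffer = ByteBuffer.allocate(64 * 1024);

    public String evaluate(String flagJson, String contextJson) {
        // Short-circuit: without a "targeting" key, the default variant
        // can be resolved without serializing the context or entering WASM.
        if (!flagJson.contains("\"targeting\"")) {
            return "default-variant"; // placeholder for the real lookup
        }
        byte[] ctx = contextJson.getBytes(StandardCharsets.UTF_8);
        if (contextBuffer.capacity() < ctx.length) {
            // Grow to the next power of two so repeated large contexts
            // do not trigger repeated reallocation.
            contextBuffer = ByteBuffer.allocate(Integer.highestOneBit(ctx.length) * 2);
        }
        contextBuffer.clear();
        contextBuffer.put(ctx);
        // ...hand contextBuffer to the WASM instance here...
        return "targeted-variant"; // placeholder for the WASM result
    }
}
```

The short-circuit explains why the "Simple flag" row stays near the fixed overhead, and the buffer reuse is what keeps allocation out of the hot path for large contexts.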

Metadata

Labels: enhancement (New feature or request)