
Performance Benchmark Analysis: Establishing Competitive Positioning #1112

@taras

Summary

This issue captures a comprehensive analysis of Effection's performance benchmarks and proposes a plan to establish credibility and competitive positioning for potential adopters.

Current Benchmark Results

Ran on: 2026-02-14
System: macOS (darwin)
Deno: 2.x

Basic Recursion (100 depth × 100 iterations at leaf)

| Library     | Avg Time (ms) | vs Effection |
| ----------- | ------------- | ------------ |
| Effection   | 1.55          | baseline     |
| Effect.js   | 1.17          | 1.3x faster  |
| co          | 0.56          | 2.8x faster  |
| async/await | 0.47          | 3.3x faster  |
| RxJS        | 0.40          | 3.9x faster  |

Recursive Events (100 depth, 100 events dispatched)

| Library                 | Avg Time (ms) | vs Effection |
| ----------------------- | ------------- | ------------ |
| Native addEventListener | 9.4           | 16.5x faster |
| RxJS                    | 130.6         | 1.2x faster  |
| Effection               | 154.7         | baseline     |
| Effect.js               | 563.9         | 3.6x slower  |

Key Findings

Positive

  • ✅ Effection beats Effect.js on event handling (3.6x faster) and trails it by only 1.3x on basic recursion
  • ✅ Effection is competitive with RxJS on event handling
  • ✅ Benchmark infrastructure exists and is well-designed

Areas for Improvement

  • ⚠️ Only average time is reported — no statistical metrics (std dev, p50, p95)
  • ⚠️ No warmup runs to stabilize JIT
  • ⚠️ Benchmarks measure synthetic recursion depth, not real-world patterns
  • ⚠️ No memory/allocation benchmarks
  • ⚠️ No "cost of correctness" benchmarks that show Effection's value proposition

Benchmark Consistency Analysis

Question: Are benchmarks testing consistently across frameworks?

✅ Consistent

  • All recursion benchmarks test the same pattern (depth N, 100 Promise.resolve() at leaf)
  • All event benchmarks test the same cascade pattern (depth N, 100 events from top)

⚠️ Minor Inconsistencies

  1. The native addEventListener benchmark uses await Promise.resolve() instead of sleep(0) between events (see the sketch after this list)
  2. All non-Effection benchmarks are wrapped in Effection's call() or action() — adds small baseline overhead
  3. Effect.js uses Effect.sleep(0) which may not be timing-equivalent to other sleep implementations
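To make the first inconsistency concrete, here is a minimal sketch (not the actual benchmark code) of the two pause styles. Assuming Effection's sleep(0) is timer-backed, it yields a full macrotask turn, while await Promise.resolve() only drains the microtask queue, so the native benchmark pays a smaller per-event pause:

```ts
import { run, sleep } from "effection";

// What the Effection-based benchmarks do between events:
// sleep(0) defers to the timer (macrotask) queue.
await run(function* () {
  yield* sleep(0);
});

// What the native addEventListener benchmark does instead:
// a microtask-only pause that resumes before any timer fires.
await Promise.resolve();
```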

The Narrative Problem

Current benchmarks tell an incomplete story. They focus on raw throughput, which doesn't showcase Effection's value proposition:

"Effection provides structured concurrency guarantees (cleanup, cancellation, error propagation, resource safety) that async/await doesn't have. The overhead is the cost of those guarantees."

We need benchmarks that measure the cost of achieving correctness manually vs. using Effection.
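As a sketch of what such a benchmark could time (placeholder URL and timeout, not code from the repo), compare the same timeout-and-cancel behavior written manually against the structured version. A production version of the structured side would also route an AbortSignal into fetch, e.g. via Effection's useAbortSignal(), so the losing request is truly aborted:

```ts
import type { Operation } from "effection";
import { call, race, sleep } from "effection";

// Manual version: AbortController wiring, timer bookkeeping, and
// cleanup are all on the programmer; any missed branch leaks.
async function manualTimeout(url: string): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 1000);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer); // forgetting this line is the classic leak
  }
}

// Structured version: race() halts the losing operation as soon as
// one settles; teardown is a guarantee, not a habit.
function* timeout(ms: number): Operation<never> {
  yield* sleep(ms);
  throw new Error("timeout");
}

function* structuredTimeout(url: string) {
  return yield* race([call(() => fetch(url)), timeout(1000)]);
}
```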

Proposed Improvements

Phase 1: Infrastructure

  • Add statistical metrics: min, max, stdDev, p50, p95, p99
  • Add warmup runs (discard the first N runs to stabilize the JIT); a sketch of both follows this list
  • Add --json flag for machine-readable output
  • Add --memory flag for heap profiling
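A minimal sketch of the metrics-plus-warmup runner, assuming a benchmark is just an async thunk; the Stats shape and the measure name are invented here, not existing code:

```ts
interface Stats {
  min: number; max: number; mean: number;
  stdDev: number; p50: number; p95: number; p99: number;
}

async function measure(
  fn: () => Promise<void>,
  runs = 100,
  warmup = 10,
): Promise<Stats> {
  for (let i = 0; i < warmup; i++) await fn(); // discarded: lets the JIT settle

  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }

  samples.sort((a, b) => a - b);
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  // Nearest-rank percentile over the sorted samples.
  const q = (p: number) =>
    samples[Math.min(samples.length - 1, Math.floor(p * samples.length))];

  return {
    min: samples[0], max: samples[samples.length - 1], mean,
    stdDev: Math.sqrt(variance), p50: q(0.5), p95: q(0.95), p99: q(0.99),
  };
}
```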

Phase 2: "Cost of Correctness" Benchmarks

These showcase Effection's value proposition:

| Benchmark                        | What It Measures                                                    |
| -------------------------------- | ------------------------------------------------------------------- |
| Cancellation cascade             | Time to halt N concurrent tasks vs manual AbortController            |
| Error boundary containment       | Catching errors in concurrent operations without leaking resources   |
| Resource lifecycle under failure | Cleanup guarantee when exceptions occur mid-operation                |
| Context propagation              | Effection Context vs AsyncLocalStorage vs manual threading           |
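For instance, the cancellation-cascade row could be timed with something like the following sketch; the function name and the use of suspend() to park the children are assumptions, not existing benchmark code, and the manual AbortController variant would be measured the same way:

```ts
import { run, spawn, suspend } from "effection";

async function cancellationCascade(n: number): Promise<number> {
  const start = performance.now();
  await run(function* () {
    for (let i = 0; i < n; i++) {
      yield* spawn(function* () {
        yield* suspend(); // park until the enclosing scope halts us
      });
    }
    // Returning here halts all n children before run() settles.
  });
  return performance.now() - start;
}
```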

Phase 3: Real-World Pattern Benchmarks

| Benchmark                  | Pattern                                             |
| -------------------------- | --------------------------------------------------- |
| HTTP request fan-out       | Spawn N requests, cancel on first success           |
| Connection pool lifecycle  | Acquire/use/release under concurrent load           |
| Stream processing pipeline | Source → transform → sink throughput                |
| Timeout with cleanup       | Operation that must clean up regardless of outcome  |
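The fan-out row falls out of race() almost directly. A sketch, assuming the urls point at equivalent mirrors and using useAbortSignal() so losing fetches are actually aborted on halt:

```ts
import { call, race, useAbortSignal } from "effection";

function* fetchOne(url: string) {
  const signal = yield* useAbortSignal(); // aborted automatically on halt
  return yield* call(() => fetch(url, { signal }));
}

function* fetchFirst(urls: string[]) {
  // race() settles with the first response and halts the rest,
  // which aborts their in-flight fetches via the signal above.
  return yield* race(urls.map((url) => fetchOne(url)));
}
```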

Phase 4: Memory Benchmarks

  • Heap allocation per spawn() call (sketched below)
  • GC pause impact under load
  • Long-running subscribe/unsubscribe memory stability
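A rough sketch of the first measurement, assuming Deno 2.x and a process launched with --v8-flags=--expose-gc so the heap can be settled before sampling; the helper name is invented:

```ts
import { run, spawn, suspend } from "effection";

async function bytesPerSpawn(n: number): Promise<number> {
  (globalThis as { gc?: () => void }).gc?.(); // settle the heap first
  const before = Deno.memoryUsage().heapUsed;
  const after = await run(function* () {
    for (let i = 0; i < n; i++) {
      yield* spawn(function* () {
        yield* suspend(); // keep every task live while we sample
      });
    }
    return Deno.memoryUsage().heapUsed;
  });
  return (after - before) / n; // approximate bytes per spawn()
}
```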

Phase 5: Documentation & Narrative

  • Performance documentation page
  • "When to use Effection" decision guide based on perf characteristics
  • Competitive comparison matrix (Effection vs Effect.js vs RxJS vs async/await)

Phase 6: CI Integration

  • GitHub Actions workflow to run benchmarks on PRs
  • Regression detection (warn if >5% slower; a sketch follows this list)
  • Historical tracking
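The regression gate could be a small script over the --json output from Phase 1. A sketch, assuming that output is an array of { name, mean } records; the file names are placeholders:

```ts
interface Result {
  name: string;
  mean: number;
}

const [baseline, current] = ["baseline.json", "current.json"]
  .map((path) => JSON.parse(Deno.readTextFileSync(path)) as Result[]);

let failed = false;
for (const result of current) {
  const base = baseline.find((b) => b.name === result.name);
  if (!base) continue; // new benchmark: nothing to compare against
  const ratio = result.mean / base.mean;
  if (ratio > 1.05) {
    console.error(`${result.name}: ${((ratio - 1) * 100).toFixed(1)}% slower`);
    failed = true;
  }
}
if (failed) Deno.exit(1); // fail the CI job on any regression
```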

Potential Performance Optimization Areas

Identified during code analysis:

  1. Hot path in Reducer.reduce(): scope.expect() calls on every instruction could be cached
  2. Delimiter level tracking: Could optimize for common case (no interruption)
  3. Priority queue overhead: Simpler queue for flat parallelism patterns
  4. Context access patterns: Cascading lookup could benefit from caching (see the sketch after this list)
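This is not Effection's actual internals, just the shape of the fourth optimization: a parent-chain lookup with a per-scope memo so repeated reads of a hot key skip the walk up the chain (invalidation on writes is elided):

```ts
// Hypothetical shape only; Effection's real scope type differs.
class ScopeSketch {
  #cache = new Map<string, unknown>();

  constructor(
    private values: Map<string, unknown>,
    private parent?: ScopeSketch,
  ) {}

  get(key: string): unknown {
    if (this.#cache.has(key)) return this.#cache.get(key);
    const value = this.values.has(key)
      ? this.values.get(key)
      : this.parent?.get(key); // cascade up the scope chain
    this.#cache.set(key, value); // memoized; must be cleared on local writes
    return value;
  }
}
```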

Related

/cc @cowboyd
