Add compile-time safety to prevent commands from subscribing to streams without covering all event types #309
Description
Problem
Two commands can declare the same stream but use different event enums. If a command's `Event` associated type doesn't cover all event types that could appear on that stream, `EventStore::read_stream` will return `EventStoreError::DeserializationFailed` at runtime. There is no compile-time mechanism to catch this.
The standard pattern today is "one canonical event enum per stream, enforced by convention." This works, but the compiler cannot help verify it — the `Event` trait has no associated type or bound connecting a `StreamId` to a specific set of event variants.
How it manifests
eventcore-postgres's `read_stream` correctly errors on deserialization failure:

```rust
let event = serde_json::from_value(payload).map_err(|error| {
    EventStoreError::DeserializationFailed {
        stream_id: stream_id.clone(),
        detail: error.to_string(),
    }
})?;
```

However, `InMemoryEventStore::read_stream` uses `Box<dyn Any>` downcasting with `filter_map`:
```rust
boxed_events
    .iter()
    .filter_map(|boxed| boxed.downcast_ref::<E>())
    .cloned()
    .collect()
```

This silently skips events that don't match type `E`, masking the bug during testing. A command that passes all tests against `InMemoryEventStore` can fail in production against postgres when it encounters an event type its enum doesn't cover.
Suggested approach
Associate an event type with stream IDs at the type level, for example via an associated type on a `Stream` trait:

```rust
trait Stream {
    type Events: Event;
}
```

Then `CommandStreams` / `StreamDeclarations` could use this to verify at compile time that a command's `Event` type is compatible with (or identical to) the stream's canonical event type. Commands declaring a stream would need to prove their event enum matches the stream's `Events` type.
This is a significant API change and may require rethinking how `StreamId` and `StreamDeclarations` work, but it would close a real correctness gap that currently relies entirely on developer discipline.
Secondary issue: InMemoryEventStore diverges from postgres
Independently of the compile-time safety question, `InMemoryEventStore::read_stream` and `PostgresEventStore::read_stream` have different failure semantics:

| Store | `read_stream` on unrecognized event | Mechanism |
|---|---|---|
| eventcore-postgres | Errors (`DeserializationFailed`) | `serde_json::from_value` + `?` |
| eventcore-memory | Silently skips | `downcast_ref::<E>()` + `filter_map` |
These should be consistent. Consider making the in-memory store also error when a stored event cannot be converted to the requested type, so that tests using the in-memory backend surface the same failures that would occur in production.
Note that `EventReader::read_events` correctly uses `filter_map` with `.ok()` in both backends — the divergence is specific to `EventStore::read_stream`.