From discussion with @cjpatton, here's an idea for a possible future trait change. The `Vdaf` and `Aggregator` traits require several associated types for different messages, and most of them are required to implement either `Decode` or `ParameterizedDecode`:
- `Vdaf::AggregationParam: Decode`
- `Vdaf::PublicShare: ParameterizedDecode<Self>`
- `Vdaf::InputShare: for<'a> ParameterizedDecode<(&'a Self, usize)>`
- `Vdaf::OutputShare: for<'a> ParameterizedDecode<(&'a Self, &'a Self::AggregationParam)>`
- `Vdaf::AggregateShare: for<'a> ParameterizedDecode<(&'a Self, &'a Self::AggregationParam)>`
- `Aggregator::PrepareShare: ParameterizedDecode<Self::PrepareState>`
- `Aggregator::PrepareMessage: ParameterizedDecode<Self::PrepareState>`
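For reference, the codec traits these bounds refer to have roughly the following shape (a simplified sketch; the `CodecError` placeholder and exact signatures are illustrative and may differ from the crate's definitions):

```rust
use std::io::Cursor;

/// Placeholder standing in for the crate's `CodecError`.
pub struct CodecError;

/// Decoding that needs no context beyond the bytes themselves.
pub trait Decode: Sized {
    fn decode(bytes: &mut Cursor<&[u8]>) -> Result<Self, CodecError>;
}

/// Decoding that takes a parameter `P` supplying out-of-band context,
/// e.g. the instantiated VDAF, the aggregator's ID, or a prepare state.
pub trait ParameterizedDecode<P>: Sized {
    fn decode_with_param(
        decoding_parameter: &P,
        bytes: &mut Cursor<&[u8]>,
    ) -> Result<Self, CodecError>;
}
```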
The `Aggregator::PrepareState` associated type has no `Decode`/`ParameterizedDecode` trait bound, but it must be deserializable one way or another in order for an aggregator that's implemented as a distributed system to use the VDAF.
Lastly, `Vdaf::Measurement` and `Vdaf::AggregateResult` have no such trait bounds, since they never need to traverse network connections or be written to disk.
These trait bounds were arrived at through iteration, and notably had to be changed in order to implement Poplar1. It would be better to redesign this in a more principled way, starting from the set of information participants will have a priori when they need to decode each message. This would prevent the need for future changes motivated by other VDAFs (or, alternatively, avoid having to fall back on partially self-describing serialization if we decline to make trait changes for a new VDAF that needs more information).
The various existing decoding parameters include the instantiated VDAF, the aggregator's ID (as a usize, though now the spec requires that it fit in one byte), the aggregation parameter, and the prepare state. Note that the prepare state implicitly provides the current aggregation round. Here's a first cut of a maximal set of decoding parameters:
| Message | Instantiated VDAF | Aggregator ID | Aggregation parameter | Prepare state or round |
|---|---|---|---|---|
| PublicShare | ✓ | ✗ | ✗ | ✗ |
| InputShare | ✓ | ✓ | ✗ | ✗ |
| AggregationParam | ✓ | ✗ | ✗ | ✗ |
| PrepareState[^1] | ✓ | ✓ | ✓ | ? |
| PrepareShare | ✓ | ✓ | ✓ | ✓ |
| PrepareMessage | ✓ | ✗ | ✓ | ✓ |
| OutputShare | ✓ | ✓ | ✓ | ✗ |
| AggregateShare | ✓ | ✓ | ✓ | ✗ |
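Translating the table directly into bounds might look roughly like the following (a sketch only: the tuple shapes and the `u8` aggregator ID follow the table above, this assumes the `ParameterizedDecode` trait sketched earlier is in scope, and none of it is meant as a final design):

```rust
// Assumes the `ParameterizedDecode` sketch above (or the crate's codec module) is in scope.
pub trait Vdaf: Sized {
    type AggregationParam: ParameterizedDecode<Self>;
    type PublicShare: ParameterizedDecode<Self>;
    type InputShare: for<'a> ParameterizedDecode<(&'a Self, u8)>;
    type OutputShare: for<'a> ParameterizedDecode<(&'a Self, u8, &'a Self::AggregationParam)>;
    type AggregateShare: for<'a> ParameterizedDecode<(&'a Self, u8, &'a Self::AggregationParam)>;
}

pub trait Aggregator: Vdaf {
    // The "?" in the PrepareState row is left open here; a round number or
    // other context could be appended if a VDAF turns out to need it.
    type PrepareState: for<'a> ParameterizedDecode<(&'a Self, u8, &'a Self::AggregationParam)>;
    type PrepareShare: for<'a> ParameterizedDecode<(
        &'a Self,
        u8,
        &'a Self::AggregationParam,
        &'a Self::PrepareState,
    )>;
    // Per the table, PrepareMessage does not need the aggregator ID.
    type PrepareMessage: for<'a> ParameterizedDecode<(
        &'a Self,
        &'a Self::AggregationParam,
        &'a Self::PrepareState,
    )>;
}
```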
Fleshing out decoding parameters could also let us clean up an awkward data flow in Poplar1's prepare state: decoding routines for various types need to switch on the field size (inner vs. leaf), and some determine this directly from the level in the aggregation parameter, while others use an enum discriminant in the prepare state, because that's the only decoding parameter provided.
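To make the awkward part concrete, here's an illustrative contrast between the two places the field size currently comes from; all names below are hypothetical stand-ins, not the crate's actual Poplar1 types:

```rust
struct InnerState;
struct LeafState;

/// Hypothetical aggregation parameter carrying the level being evaluated.
struct AggregationParam {
    level: usize,
}

/// Hypothetical prepare state, split by field size via an enum discriminant.
enum PrepareState {
    Inner(InnerState),
    Leaf(LeafState),
}

/// Some decoding routines can derive the field size directly from the level
/// in the aggregation parameter (leaf iff the last level is being evaluated).
fn is_leaf_level(agg_param: &AggregationParam, bits: usize) -> bool {
    agg_param.level == bits - 1
}

/// Others only receive the prepare state as their decoding parameter, so
/// they have to switch on its enum discriminant instead.
fn is_leaf_state(state: &PrepareState) -> bool {
    matches!(state, PrepareState::Leaf(_))
}
```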
On the other hand, note that the VDAF spec states that `prep_next()` takes only the `PrepState` and `PrepMessage` as input. While aggregators will always have access to the aggregation parameter by the time they execute `prep_next()`, requiring that it be passed as a decoding parameter may force implementations to thread it through functions it wasn't previously passed to.
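A sketch of that threading concern, using hypothetical stand-in types rather than the real trait methods:

```rust
/// Illustrative signatures only; all types here are hypothetical stand-ins.
pub struct PrepareState;
pub struct AggregationParam;
pub struct PrepareMessage;

// Matches the spec today: the stored prepare state is the only context an
// aggregator needs in order to decode an incoming prepare message.
pub fn handle_prep_message(_state: &PrepareState, _encoded: &[u8]) -> PrepareMessage {
    PrepareMessage
}

// With the aggregation parameter as a decoding parameter, callers that
// previously held only the prepare state must now thread it through as well.
pub fn handle_prep_message_with_param(
    _state: &PrepareState,
    _agg_param: &AggregationParam,
    _encoded: &[u8],
) -> PrepareMessage {
    PrepareMessage
}
```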
I think at some point I previously argued against taking the current round as a decoding parameter to decode a prepare state, because requiring users to store round numbers and prepare state blobs next to each other was more complicated than storing prepare state blobs that included their own indication of the round, if needed. One open question I have is whether we should still provide the entire prepare state when decoding prepare shares and prepare messages, instead of just the round number. It seems unlikely but possible that a multi-round VDAF could need to remember something from a previous preparation round until deserialization of a subsequent message from the other aggregator.
In some cases, implementations may reuse one type for multiple associated types. However, opportunities to do so may be limited by the deserialization trait implementations: if two messages need to be deserialized slightly differently but receive the same decoding parameters as context, then the same type can't be used for both. We could ensure these implementations never overlap, and make the intent of `ParameterizedDecode` implementations easier to understand, by adding a zero-sized type to the decoding parameter indicating what sort of message is being decoded (similar to the typestate pattern). For example, we could declare `struct PrepareMessageToken;`, and then use the trait bound `type PrepareMessage: for<'a> ParameterizedDecode<(&'a Self, u8, &'a Self::AggregationParam, &'a Self::PrepareState, PrepareMessageToken)>;`.
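Here is a minimal sketch of the token idea, stripped down to just the tokens so the overlap problem is visible; `FieldVector` and the token names are hypothetical, and in practice the tuple would also carry the real decoding context shown in the bound above:

```rust
use std::io::Cursor;

/// Placeholder standing in for the crate's `CodecError`.
pub struct CodecError;

/// Same shape as the codec trait sketched earlier.
pub trait ParameterizedDecode<P>: Sized {
    fn decode_with_param(param: &P, bytes: &mut Cursor<&[u8]>) -> Result<Self, CodecError>;
}

/// Zero-sized tokens naming the message being decoded.
pub struct PrepareShareToken;
pub struct PrepareMessageToken;

/// One concrete type reused for both PrepareShare and PrepareMessage.
pub struct FieldVector(pub Vec<u64>);

// Without the tokens, these two impls would have identical decoding
// parameters and therefore conflict; the tokens keep them distinct and
// make the intent of each impl explicit.
impl ParameterizedDecode<PrepareShareToken> for FieldVector {
    fn decode_with_param(
        _token: &PrepareShareToken,
        _bytes: &mut Cursor<&[u8]>,
    ) -> Result<Self, CodecError> {
        // ... parse the prepare-share wire format ...
        Ok(FieldVector(Vec::new()))
    }
}

impl ParameterizedDecode<PrepareMessageToken> for FieldVector {
    fn decode_with_param(
        _token: &PrepareMessageToken,
        _bytes: &mut Cursor<&[u8]>,
    ) -> Result<Self, CodecError> {
        // ... parse the prepare-message wire format ...
        Ok(FieldVector(Vec::new()))
    }
}
```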
[^1]: Note that this associated type isn't required to be deserializable by the trait bounds.