[feat, multicast] Multicast Group Support #9091
base: zl/ip-pool-multicast-support
Conversation
Force-pushed from 7eec6c5 to 889f1ef
Introduces end-to-end multicast group support across the control plane and sled-agent, integrated with the IP pool extensions required for multicast workflows. This work enables project-scoped multicast groups with lifecycle-driven dataplane programming and exposes an API for operating multicast groups over instances.

Highlights:
- DB: new `multicast_group` tables; member lifecycle management
- API: multicast group/member CRUD; source IP validation; VPC/project hierarchy integration with default-VNI fallback
- Control plane: RPW reconcilers for groups/members; sagas that apply dataplane updates atomically at the group level; instance lifecycle hooks and piggybacking
- Dataplane: Dendrite DPD switch programming via a trait abstraction (see the sketch after this description); DPD client used in tests
- Sled agent: multicast-aware instance management; network interface configuration for multicast traffic; cross-version testing; OPTE stubs present
- Tests: comprehensive integration suites under `nexus/tests/integration_tests/multicast/`

Components:
- Database schema: external and underlay multicast groups; member/instance association tables
- Control plane modules: multicast group management, member lifecycle, dataplane abstraction; RPW reconcilers to ensure convergence
- API layer: endpoints and validation; default-VNI semantics when no VPC is provided
- Sled agent: OPTE stubs and compatibility shims for older agents

Workflows implemented:
1. Instance lifecycle integration:
   - "Create" -> resolve VPC/VNI (or default), validate source IPs, create memberships, enqueue group-ensure RPW
   - "Start" -> program dataplane via ensure/update sagas; activate member flows after switch ack
   - "Stop" -> deactivate dataplane membership; retain DB membership for fast restart
   - "Delete" -> remove instance memberships; group deletion is explicit
   - "Migrate" -> deactivate on source sled; activate on target; idempotent with ordering guarantees
   - Restart/recovery -> RPWs reconcile desired state; compensations clean up partial programming
2. RPW reconciliation:
   - Ensure dataplane switches match database state
   - Handle sled migrations and state transitions
   - Eventual consistency with retry logic

Migrations:
- Apply schema changes in `schema/crdb/multicast-group-support/up01.sql` (and update `dbinit.sql`)
- Bump schema versions accordingly

API/Compatibility:
- OpenAPI updated: `openapi/nexus.json`, `openapi/sled-agent/sled-agent-5.0.0-89f1f7.json`
- Contains a version change (to v5), as `InstanceEnsureBody` has been modified to include the `multicast_groups` associated with an instance in the underlying sled config
- Regenerate clients where applicable

References:
- RFD 488: https://rfd.shared.oxide.computer/rfd/488
- IP Pool extensions: #9084
- Dendrite PRs (by recency):
  * oxidecomputer/dendrite#132
  * oxidecomputer/dendrite#109
  * oxidecomputer/dendrite#14

Follow-ups include:
- OPTE integration
- commtest extension
- omdb commands (tracked in issues)
- Pool and group stats
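For readers skimming the description: a minimal sketch of what a trait-based dataplane abstraction like the one named above can look like. All names here (`MulticastDataplane`, `ensure_group`, `MemberPort`) are illustrative assumptions, not the PR's actual types; the point is that Nexus programs switches through a trait, so tests can substitute a mock for a live Dendrite DPD client.

```rust
use std::net::IpAddr;

/// A member as the switch sees it: which port/link carries its traffic.
/// (Hypothetical shape for illustration.)
pub struct MemberPort {
    pub port_id: String,
    pub link_id: u8,
}

/// Illustrative error type for switch-programming failures.
#[derive(Debug)]
pub struct DataplaneError(pub String);

/// Hypothetical dataplane abstraction: the control plane talks to
/// switches only through this trait, never to DPD directly.
#[async_trait::async_trait]
pub trait MulticastDataplane: Send + Sync {
    /// Ensure switch state for this group exists and matches `members`.
    /// Idempotent, so an RPW can retry until the dataplane converges
    /// with the database.
    async fn ensure_group(
        &self,
        group_ip: IpAddr,
        members: &[MemberPort],
    ) -> Result<(), DataplaneError>;

    /// Remove all switch state for the group; used by deletion sagas
    /// and by compensations that clean up partial programming.
    async fn delete_group(
        &self,
        group_ip: IpAddr,
    ) -> Result<(), DataplaneError>;
}
```

Under this shape, an RPW reconciler only has to diff database membership against switch state and call `ensure_group` until the two agree, leaning on idempotency for safe retries.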
Force-pushed from 889f1ef to 04dfa49
…sed on config

Since OPTE and Maghemite updates for statically routed multicast are still to come, we gate RPW and saga actions behind runtime configuration ("on" for tests). API calls are tagged "experimental."
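A sketch of what this kind of runtime gate can look like; the struct and field names (`MulticastConfig`, `multicast_enabled`) are illustrative assumptions, not the PR's actual config surface:

```rust
use serde::Deserialize;

/// Illustrative runtime configuration; the real Nexus config struct and
/// field names may differ. The flag defaults to off in production and
/// is turned on in test configs.
#[derive(Deserialize)]
pub struct MulticastConfig {
    #[serde(default)]
    pub multicast_enabled: bool,
}

/// Hypothetical guard at the top of an RPW activation: when the feature
/// is gated off, the background task becomes a no-op instead of
/// programming switches.
pub fn maybe_run_reconciler(cfg: &MulticastConfig) {
    if !cfg.multicast_enabled {
        // Feature-gated: skip dataplane programming entirely.
        return;
    }
    // ... enqueue/run the multicast group reconciler here ...
}
```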
Force-pushed from 8c6215e to ca242df
@internet-diglett, others, I added "feature-gating" to this PR, as well as "experimental" tagging for the new entrypoints.
A few API questions to start out with.
```diff
 // TODO: #3604 Consider using `SwitchLocation` type instead of `Name` for `LoopbackAddressCreate.switch_location`
 /// The location of the switch within the rack this loopback address will be
-/// configured on.
+/// configupred on.
```
typo
```rust
async fn multicast_group_create(
    rqctx: RequestContext<ApiContext>,
    query_params: Query<params::ProjectSelector>,
```
Should multicast groups be specific to projects? If I'm reading the group address allocation correctly, once a multicast group object is created, the address that gets allocated to that group then belongs to the multicast group object. If that multicast group object is owned by a project, that means no other projects can receive traffic on that group. Is that the current model? I can see a world where users would want group membership to transcend projects or even transcend silos since IP pools can be linked to multiple silos.
@rcgoodfellow, I was essentially following what we had for external IPs, but I agree it should be more open in scope. I guess I can attach this to the fleet for now, right? I had it that way initially and should have stuck with it.
```rust
    .await
}

async fn multicast_group_member_add(
```
Is there a functional difference between adding an instance through this endpoint versus updating the instance's multicast groups field? Same question applies to other group management endpoints.
@rcgoodfellow No functional difference, other than hooking into instance-specific updates. I allowed both variants to match the conventions of other IP resources. Should we restrict it to just the multicast API, i.e., keep this endpoint and drop the instance-interface variant?
I'm going to add documentation that discusses both endpoints.
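To illustrate why the two entry points can be functionally equivalent, here is a sketch in which both handlers funnel into one shared membership operation. Every name below (`DataStore`, `multicast_group_member_attach`, `add_member_common`) is a hypothetical stand-in for the PR's actual internals:

```rust
use uuid::Uuid;

// Stub types standing in for the real datastore/model types.
pub struct DataStore;
pub struct MulticastGroupMember {
    pub group_id: Uuid,
    pub instance_id: Uuid,
}

impl DataStore {
    /// Pretend DB insert: attach an instance to a group's member table.
    pub async fn multicast_group_member_attach(
        &self,
        group_id: Uuid,
        instance_id: Uuid,
    ) -> Result<MulticastGroupMember, String> {
        Ok(MulticastGroupMember { group_id, instance_id })
    }
}

/// Shared helper both endpoints could delegate to, keeping membership
/// semantics identical whether the request is group-centric
/// (`multicast_group_member_add`) or instance-centric
/// (`instance_multicast_group_join`): one DB write plus one RPW
/// activation, regardless of entry point.
pub async fn add_member_common(
    datastore: &DataStore,
    group_id: Uuid,
    instance_id: Uuid,
) -> Result<MulticastGroupMember, String> {
    datastore.multicast_group_member_attach(group_id, instance_id).await
}
```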
```rust
    .await
}

async fn instance_multicast_group_join(
```
Similar question to the above: what is the functional difference between this, a straight-up instance update, and the instance-agnostic group membership management endpoints?
Commented on in #9091 (comment).
Note: Review IP Pool extensions first.
This currently points to #9084, which is needed for this to work.