
Decision: Improve v2 bridge? #218

@Diaphteiros

Description


For a better understanding, let's start with some diagrams that show the v1 and v2 architecture (only showing the parts that are relevant for this issue).

Rectangular elements correspond to k8s resources, diamond-shaped ones are controllers, and round bubbles stand for abstract responsibilities.

v1 Architecture

(architecture diagram)

v2 Architecture

(architecture diagram)

v1 Architecture with Bridge Logic (Current State)

(architecture diagram)

The idea behind the current bridge logic is that each piece of bridge logic replaces the original logic of a single component, while keeping the responsibilities as they were. The APIServer component is responsible for shoot management: its v1 logic manages shoots directly, while its bridge v2 logic delegates this to ClusterProviders via ClusterRequest and AccessRequest resources. For service components, the bridge logic simply generates the corresponding v2 service resource and transforms its status back so that the status of the v1 resource looks as expected.
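As a rough sketch of what the service-component bridge does, the status transformation can look like this. All type and field names here are hypothetical simplifications, not the actual open-mcp API:

```go
package main

import "fmt"

// Hypothetical v2 service status as produced by a v2 service provider.
type V2ServiceStatus struct {
	Phase   string // e.g. "Ready", "Progressing", "Error"
	Message string
}

// Hypothetical v1 service status shape as expected by v1 consumers.
type V1ServiceStatus struct {
	Ready   bool
	Details string
}

// bridgeStatus maps the v2 status back onto the v1 resource so that
// v1 consumers see the status shape they expect, even though a v2
// service provider did the actual work.
func bridgeStatus(v2 V2ServiceStatus) V1ServiceStatus {
	return V1ServiceStatus{
		Ready:   v2.Phase == "Ready",
		Details: v2.Message,
	}
}

func main() {
	s := bridgeStatus(V2ServiceStatus{Phase: "Ready", Message: "all components healthy"})
	fmt.Println(s.Ready, s.Details)
}
```

In a real controller this mapping would run in the reconcile loop of the v1 service resource, after reading the status of the generated v2 resource.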

Proposal

When the bridge logic for the APIServer component was implemented, the MCP v2 controller did not exist yet. Now that it does, we have the option to do something like this:
(architecture diagram)

Instead of generating ClusterRequest and AccessRequest resources, the MCP v1 controller transforms the MCP v1 resource into an MCP v2 resource. The MCP v2 controller then takes care of cluster management as well as authentication and authorization.
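The transformation described above can be sketched as follows. The structs are drastically simplified, hypothetical stand-ins for the real MCP v1 and v2 types:

```go
package main

import "fmt"

// Hypothetical, heavily simplified shape of the MCP v1 resource.
type MCPv1 struct {
	Name           string
	APIServer      string   // desired cluster/apiserver configuration
	Authentication []string // authn config entries
	Authorization  []string // authz config entries
}

// Hypothetical, heavily simplified shape of the MCP v2 resource.
type MCPv2 struct {
	Name    string
	Cluster string
	AuthN   []string
	AuthZ   []string
}

// toV2 is what the MCP v1 controller would do under the proposed
// bridge logic: translate the v1 resource into an equivalent v2
// resource and let the MCP v2 controller handle cluster management
// as well as authentication and authorization.
func toV2(in MCPv1) MCPv2 {
	return MCPv2{
		Name:    in.Name,
		Cluster: in.APIServer,
		AuthN:   append([]string(nil), in.Authentication...),
		AuthZ:   append([]string(nil), in.Authorization...),
	}
}

func main() {
	v2 := toV2(MCPv1{Name: "demo", APIServer: "gardener", Authentication: []string{"oidc"}})
	fmt.Println(v2.Name, v2.Cluster, v2.AuthN)
}
```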

The v1 APIServer, Authentication, and Authorization resources wouldn't do anything then; they would only be needed to keep the system compatible with v1 services (this is what the * means).

Improved Bridge Logic: Advantages

With this approach, v1 becomes a simple facade in front of the equivalent v2 resources. To migrate an MCP v1 that has the bridge logic activated for all of its components to v2, we only need to get rid of the v1 resources, because the bridge logic already uses a complete v2 system under the hood.
This makes migration easy and low-risk: the bridge logic is essentially the migration logic (most of it, at least).

Improved Bridge Logic: Disadvantages

The major disadvantage of the proposed approach is the implementation effort. It is not necessarily complex, but it is probably still a lot of work, since we need to fake the v1 APIServer, Authentication, and Authorization resources. Furthermore, the MCP v2 controller is very generic and simply splits its resource into multiple components; changing this logic for some, but not all, components may be a bit tricky.

Consequences

If we decide not to invest this effort now due to our timeline, the migration will be slightly more work later on: we would have to migrate from a v1 APIServer resource (with its ClusterRequest and AccessRequest), combined with the v1 Authentication and Authorization resources, to a single v2 MCP resource that takes over all of their combined responsibilities.

We recently decided that services need to put a finalizer on their corresponding MCP v2 resource. This is required so that the MCP v2 controller can trigger the deletion of the services when the MCP itself is deleted.
Our current bridge logic does not use an MCP v2 resource. For our v2 service providers to be compatible with both the v2 architecture and the v1 bridge logic, we would need to either write the service providers in a way that they don't fail when they can't find the MCP v2 resource (although in the v2 architecture we actually want them to fail in that case), or make the bridge logic create a dummy MCP v2 resource that does nothing except hold the service finalizers.
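The finalizer mechanics such a dummy MCP v2 resource would need to support can be sketched like this. The type and the finalizer name are hypothetical; a real controller would use the generated API types and controller-runtime's controllerutil finalizer helpers instead:

```go
package main

import "fmt"

// Hypothetical minimal stand-in for the MCP v2 resource; only the
// finalizer list matters for this sketch.
type MCPv2 struct {
	Finalizers []string
}

// addFinalizer appends f if it is not present yet (idempotent), as a
// service provider would do when it starts managing the MCP.
func addFinalizer(obj *MCPv2, f string) {
	for _, existing := range obj.Finalizers {
		if existing == f {
			return
		}
	}
	obj.Finalizers = append(obj.Finalizers, f)
}

// removeFinalizer drops f; once all service finalizers are gone, the
// MCP v2 resource (dummy or not) can actually be deleted.
func removeFinalizer(obj *MCPv2, f string) {
	out := obj.Finalizers[:0]
	for _, existing := range obj.Finalizers {
		if existing != f {
			out = append(out, existing)
		}
	}
	obj.Finalizers = out
}

func main() {
	mcp := &MCPv2{}
	addFinalizer(mcp, "services.example.com/my-service") // hypothetical finalizer name
	addFinalizer(mcp, "services.example.com/my-service") // idempotent, no duplicate
	fmt.Println(len(mcp.Finalizers))
	removeFinalizer(mcp, "services.example.com/my-service")
	fmt.Println(len(mcp.Finalizers))
}
```

The point of the dummy resource would be exactly this: give the service providers an object to attach their finalizers to, without triggering any of the MCP v2 controller's actual reconciliation.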


Basically, it boils down to two options:
- Low effort now, but we effectively have three architectures, with the bridge logic sitting somewhere between v1 and v2, and migration will be a bit more complex later on.
- More effort now, but the bridge logic then uses full v2 under the hood, which avoids special logic and makes migration easy later on.

Labels: area/open-mcp (All ManagedControlPlane related issues)