diff --git a/docs/developers/getting-started.md b/docs/developers/00-getting-started.md similarity index 100% rename from docs/developers/getting-started.md rename to docs/developers/00-getting-started.md diff --git a/docs/developers/clusterproviders.md b/docs/developers/clusterproviders.md new file mode 100644 index 0000000..a7104c9 --- /dev/null +++ b/docs/developers/clusterproviders.md @@ -0,0 +1,235 @@ +# Cluster Providers + +A *ClusterProvider* is one of the three provider types in the openMCP architecture (the other two being *PlatformService* and *ServiceProvider*). ClusterProviders are responsible for managing kubernetes clusters and access to them, based on our [cluster API](https://github.com/openmcp-project/openmcp-operator/tree/main/api/clusters/v1alpha1). + +This document aims to describe the tasks of a ClusterProvider and the contract that it needs to fulfill in order to work within the openMCP ecosystem. + +## Deploying a ClusterProvider + +ClusterProviders are usually deployed via the [provider deployment](./provider_deployment.md) mechanism and need to stick to the corresponding contract. + +## Implementing a ClusterProvider + +### Provider Configuration + +Most ClusterProviders will probably require some form of configuration. Since the provider deployment does not allow passing in configuration via an argument to the binary directly, they need to read the configuration from a k8s resource. Depending on the provider, it might even allow multiple configuration resources and/or reconcile them instead of just reading them statically. + +### Cluster Profiles + +Out of the configuration(s), the ClusterProvider has to generate `ClusterProfile` resources. They serve as some kind of service discovery and look like this: +```yaml +apiVersion: clusters.openmcp.cloud/v1alpha1 +kind: ClusterProfile +metadata: + name: default.gardener.mcpd-gcp-large +spec: + providerConfigRef: + name: mcpd-gcp-large + providerRef: + name: gardener + supportedVersions: + - version: 1.33.3 + - deprecated: true + version: 1.33.2 + - version: 1.32.7 + - deprecated: true + version: 1.32.6 + - deprecated: true + version: 1.32.5 + - deprecated: true + version: 1.32.4 + - deprecated: true + version: 1.32.3 + - deprecated: true + version: 1.32.2 +``` + +`spec.providerRef` is the name of the ClusterProvider that created this `ClusterProfile`. It should be filled with the value that the provider received via its [`--provider-name`](./provider_deployment.md#arguments) argument. + +`spec.providerConfigRef` is the name of the provider configuration that is responsible for this profile. Whether this refers to an actual k8s resource, an internal value or just a static string depends on the provider implementation. It is used as a label value though and therefore has to match the corresponding regex. + +`spec.supportedVersions` is a list of kubernetes versions that are supported by this provider for this profile. + +> The name of the ClusterProfile can be freely chosen. In this example, it follows the format `X.Y.Z`, where `X` is the environment name, `Y` is the name of the ClusterProvider, and `Z` is the name of the provider configuration that created this profile. A naming scheme like this avoids potential conflicts between multiple ClusterProviders (or multiple instances of the same ClusterProvider). + +`ClusterProfile` resources are cluster-scoped and do not have a status. + +Note that each ClusterProvider must at least generate one `ClusterProfile` in order to be usable. 
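+The following is a minimal sketch of how a provider could publish such a profile from its configuration. It uses an unstructured object so that it does not depend on the exact Go types of the cluster API; the function name and parameters (`ensureClusterProfile`, `environment`, `configName`, `versions`) are purely illustrative.
+```go
+import (
+  "context"
+
+  "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+  "k8s.io/apimachinery/pkg/runtime/schema"
+  "sigs.k8s.io/controller-runtime/pkg/client"
+  "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+)
+
+// ensureClusterProfile creates or updates one ClusterProfile for a given provider configuration.
+func ensureClusterProfile(ctx context.Context, c client.Client, environment, providerName, configName string, versions []string) error {
+  profile := &unstructured.Unstructured{}
+  profile.SetGroupVersionKind(schema.GroupVersionKind{Group: "clusters.openmcp.cloud", Version: "v1alpha1", Kind: "ClusterProfile"})
+  // naming scheme <environment>.<provider name>.<provider configuration> to avoid conflicts between providers
+  profile.SetName(environment + "." + providerName + "." + configName)
+
+  _, err := controllerutil.CreateOrUpdate(ctx, c, profile, func() error {
+    supported := make([]interface{}, 0, len(versions))
+    for _, v := range versions {
+      supported = append(supported, map[string]interface{}{"version": v})
+    }
+    if err := unstructured.SetNestedField(profile.Object, providerName, "spec", "providerRef", "name"); err != nil {
+      return err
+    }
+    if err := unstructured.SetNestedField(profile.Object, configName, "spec", "providerConfigRef", "name"); err != nil {
+      return err
+    }
+    return unstructured.SetNestedSlice(profile.Object, supported, "spec", "supportedVersions")
+  })
+  return err
+}
+```
+In a real provider, the supported versions and their deprecation flags would typically come from the provider configuration, and the typed API structs from the cluster API can be used instead of an unstructured object.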
+ +### Cluster Management + +The main purpose of ClusterProviders is the management of k8s clusters. Each ClusterProvider therefore needs a controller that reconciles the `Cluster` resource, which looks like this: +```yaml +apiVersion: clusters.openmcp.cloud/v1alpha1 +kind: Cluster +metadata: + annotations: + clusters.openmcp.cloud/providerinfo: foobar + labels: + clusters.openmcp.cloud/k8sversion: 1.31.11 + clusters.openmcp.cloud/provider: gardener + name: my-cluster + namespace: my-namespace +spec: + kubernetes: + version: 1.32.8 + profile: default.myprovider.myprofile + purposes: + - my-purpose + tenancy: Shared +``` + +Some information about the different fields: +- The `clusters.openmcp.cloud/k8sversion` and `clusters.openmcp.cloud/provider` labels are not set by default. The cluster provider can populate them to allow for easier filtering or better column information in `kubectl get`. + - Note that `spec.kubernetes.version` contains a desired k8s version, which does not have to match the actual k8s version that is displayed in the label. +- The `clusters.openmcp.cloud/providerinfo` annotation can be used to hold additional provider-specific information. It is displayed as a column on `kubectl get -o wide`. +- `spec.kubernetes.version` can contain a desired k8s version. If not set, the provider has to derive it from its configuration. The provider can decide to either throw an error or choose a version if an invalid/unsupported version is specified. +- `spec.profile` is the most important field for a ClusterProvider. It references the `ClusterProfile` that should be used for this cluster. + - The referenced profile contains a reference to the ClusterProvider it belongs to. Since multiple ClusterProviders can run in parallel, this allows a ClusterProvider to determine whether it is responsible for this cluster resource or not. + - **ClusterProviders must only ever act on `Cluster` resources that reference profiles belonging to themselves!** + - The profile is immutable. + - This can also contain further configuration, e.g. for the Gardener ClusterProvider, each provider configuration (which is referenced in the profile) can specify a different Gardener landscape and/or project to use. +- `spec.purposes` and `spec.tenancy` are mostly relevant for the scheduler and usually don't need to be evaluated by the ClusterProvider. + +#### Reconciliation Logic + +Before doing anything in a reconciliation, the ClusterProvider needs to check whether it is responsible for the `Cluster` resource or not. For this, it has to check if it created the `ClusterProfile` that is referenced in `spec.profile` itself or if it was created by a different ClusterProvider. It can either keep track of created `ClusterProfile` resources internally or compare `spec.providerRef.name` in the profile to its own name (passed in via the `--provider-name` argument). If the name differs, another ClusterProvider is responsible for this resource and the ClusterProvider must not touch it. + +The rest of the reconciliation logic is pretty much provider specific: If the `Cluster` resource has a deletion timestamp, delete the k8s cluster and everything that belongs to it and then remove the finalizer. Otherwise, ensure that there is a finalizer on the `Cluster` resource and create/update the actual k8s cluster. + +#### Status Reporting + +Since creating, updating, or deleting k8s clusters can easily take several minutes, reporting the current status is very important here. 
It is recommended to make good use of the conditions that are part of the status. ClusterProviders must adhere to the [general status reporting rules](./general.md#status-reporting). + +In addition to the common status, the `Cluster` status contains a few more fields that can be set by the ClusterProvider: +- `apiServer` should be filled with the k8s cluster's apiserver endpoint, as soon as it is known. +- `providerStatus` can hold arbitrary data and is meant for provider-specific information. Using it is optional and no other controller will evaluate the contents of this field. + +Note that any kind of kubeconfig should not be part of the cluster's status - access to the cluster is managed via `AccessRequest` resources. + +### Access Management + +ClusterProviders are not only responsible for creating and deleting k8s clusters, but also for managing access to their clusters. Controllers and human users can request access to a cluster by creating an `AccessRequest` resource which looks like this: +```yaml +apiVersion: clusters.openmcp.cloud/v1alpha1 +kind: AccessRequest +metadata: + name: my-access + namespace: my-namespace + labels: + # ClusterProviders must only act on AccessRequests where these two labels are set + # and the value of the first one matches their own provider name. + clusters.openmcp.cloud/provider: myprovider + clusters.openmcp.cloud/profile: default.myprovider.myprofile +spec: + clusterRef: # optional, takes precedence over requestRef if set + name: my-cluster + namespace: foo + + requestRef: # optional, at least one of clusterRef and requestRef must be set + name: my-request + namespace: bar + + token: # either token or oidc + permissions: + - name: foo # optional, not required usually + namespace: test # optional, results in Role if set and in ClusterRole otherwise + rules: + - apiGroups: + - "*" + resources: + - "*" + verbs: + - "*" + roleRefs: + - kind: ClusterRole + name: my-clusterrole + + oidc: # either token or oidc + name: my-oidc-provider + issuer: https://oidc.example.com + clientID: my-client-id + usernameClaim: sub # optional + usernamePrefix: "my-user:" + groupsClaim: group # optional + groupsPrefix: "my-group:" + extraScopes: + - foo + roleBindings: + - subjects: + - kind: User + name: foo + - kind: Group + name: bar + roleRefs: + - kind: ClusterRole + name: my-cluster-role + - kind: Role + name: my-role + namespace: default + roles: + - name: my-admin + rules: + - apiGroups: + - "*" + resources: + - "*" + verbs: + - "*" +``` + +Note that, while the example shows both, an `AccessRequest` must have exactly one of `spec.token` and `spec.oidc` set, not both. + +#### Token-based Access + +If `spec.token` is set, a token-based access is requested. The ClusterProvider is expected to create a `ServiceAccount`, create `Role` (if `namespace` is not empty) and `ClusterRole` (if `namespace` is empty) resources for each entry in `spec.token.permissions`, and create `RoleBinding` and `ClusterRoleBinding` resources for each entry in `spec.token.permissions` and each entry in `spec.token.roleRefs`. + +Since token-based access is based on standard RBAC and TokenRequest APIs, it should work on any k8s cluster and is expected to be supported by every ClusterProvider. + +#### OIDC-based Access + +If `spec.oidc` is set, OIDC-based access is requested. Most fields within `spec.oidc` are required for setting up the trust relationship. +`extraScopes` is meant to be used for the `oidc-login` kubectl plugin that handles OIDC authentication. 
+`roleBindings` specifies (Cluster)RoleBindings that should be created, while `roles` can be used to construct additional (Cluster)Roles.
+
+Note that not every ClusterProvider supports OIDC-based access, so requesting it may result in an error or a denied request.
+
+> The `spec.oidc` field contains a nested struct named `OIDCProviderConfig` that has a `Default()` method. Whenever reading data from this field, it is strongly recommended to run the `Default()` method first, because it takes care of setting some defaults, such as appending a `:` suffix to the username and groups prefixes if it is not already present.
+
+#### Preparation of AccessRequests
+
+From a 'raw' `AccessRequest`, it is not immediately obvious which ClusterProvider is responsible:
+If `spec.clusterRef` is not set, the `ClusterRequest` referenced in `spec.requestRef` needs to be fetched first. From there, the `Cluster` needs to be fetched, which in turn leads to the `ClusterProfile`, and only then does the provider know whether it is responsible or not.
+
+To avoid implementing this flow in every ClusterProvider and having all ClusterProviders execute it whenever any `AccessRequest` changes, there is a 'generic' AccessRequest controller that takes over this task. This generic controller reacts _only_ to `AccessRequest` resources that do not have both the `clusters.openmcp.cloud/provider` _and_ the `clusters.openmcp.cloud/profile` labels set.
+It modifies the `AccessRequest` in the following way:
+- It adds the `clusters.openmcp.cloud/provider` label with the provider name (extracted from the `ClusterProfile`) as value.
+- It adds the `clusters.openmcp.cloud/profile` label with the `ClusterProfile` name as value.
+- If `spec.clusterRef` is empty, it resolves the `ClusterRequest` reference and fills `spec.clusterRef` with the information from the ClusterRequest's status.
+
+This means that the AccessRequest controller in a ClusterProvider must only act on AccessRequests that have both of the aforementioned labels set. It can then expect `spec.clusterRef` to be set and does not need to check for `spec.requestRef`.
+
+It is recommended to use [event filtering](./general.md#event-filtering) to avoid reconciling AccessRequests that belong to another provider or have not yet been prepared by the generic controller. The controller-utils library contains a `HasLabelPredicate` filter that can be used both for verifying the existence of a label and for checking whether it has a specific value:
+```go
+import (
+  ctrl "sigs.k8s.io/controller-runtime"
+  "sigs.k8s.io/controller-runtime/pkg/predicate"
+  ctrlutils "github.com/openmcp-project/controller-utils/pkg/controller"
+  clustersv1alpha1 "github.com/openmcp-project/openmcp-operator/api/clusters/v1alpha1"
+)
+
+// SetupWithManager sets up the controller with the Manager.
+func (r *AccessRequestReconciler) SetupWithManager(mgr ctrl.Manager) error {
+  return ctrl.NewControllerManagedBy(mgr).
+    For(&clustersv1alpha1.AccessRequest{}).
+    WithEventFilter(predicate.And(
+      // this checks whether the provider label exists and has the correct value
+      // 'providerName' holds the value that was passed into the ClusterProvider via the '--provider-name' argument
+      ctrlutils.HasLabelPredicate(clustersv1alpha1.ProviderLabel, providerName),
+      // this just checks whether the label exists, independent of its value
+      ctrlutils.HasLabelPredicate(clustersv1alpha1.ProfileLabel, ""),
+    )).
+ Complete(r) +} +``` diff --git a/docs/developers/general.md b/docs/developers/general.md new file mode 100644 index 0000000..8a018ce --- /dev/null +++ b/docs/developers/general.md @@ -0,0 +1,116 @@ +# General Controller Guidelines + +This document contains some general guidelines for contributing code to openMCP controllers. The goal is to align the coding and make all controllers look and behave similarly. + +## Reconcile Logic + +### Operation Annotations + +The option to manually trigger or disable reconciliation for specific objects has been shown to be useful in the past. There are two operation annotations which should be supported by each controller: + +- `openmcp.cloud/operation: reconcile` + - This annotation is expected to trigger a reconciliation and then be removed by the reconciling controller. + - If the reconcile logic contains 'shortcuts' that check if something needs to be done and skip it otherwise, the annotation should cause these checks to always result in the code being executed instead of skipped. +- `openmcp.cloud/operation: ignore` + - Resources with this annotation should not be reconciled. Simply abort the reconciliation, if this annotation is found. + +The following code snippet can be used as a template for the desired behavior: +```go +import ( + apiconst "github.com/openmcp-project/openmcp-operator/api/constants" + ctrlutils "github.com/openmcp-project/controller-utils/pkg/controller" +) + +// within the Reconcile method: + // handle operation annotation + hadReconcileAnnotation := false // only required if the information whether the reconciliation was triggered manually is relevant for the reconcile logic + if obj.GetAnnotations() != nil { + op, ok := obj.GetAnnotations()[apiconst.OperationAnnotation] + if ok { + switch op { + case apiconst.OperationAnnotationValueIgnore: + log.Info("Ignoring resource due to ignore operation annotation") + return reconcile.Result{}, nil + case apiconst.OperationAnnotationValueReconcile: + log.Debug("Removing reconcile operation annotation from resource") + if err := ctrlutils.EnsureAnnotation(ctx, myClient, obj, apiconst.OperationAnnotation, "", true, ctrlutils.DELETE); err != nil { + return reconcile.Result{}, fmt.Errorf("error removing operation annotation: %w", err) + } + hadReconcileAnnotation = true + } + } + } +``` + +### Status Reporting + +Each resource that is reconciled by a controller should include the *common status* in its own status: +```go +import ( + commonapi "github.com/openmcp-project/openmcp-operator/api/common" +) + +type MyStatus struct { + commonapi.Status `json:",inline"` + + // add more status fields if required +} +``` + +The common status contains the following fields that should be updated during reconciliation: +- `observedGeneration` + - The value of this field should be set to the value of `metadata.generation` during each reconciliation, independent of whether the reconciliation was successful or resulted in an error. + - Updating the field should be skipped if the resource has the ignore operation annotation. +- `conditions` + - This is a list of conditions. It uses the same condition type that k8s also uses for its core resources, e.g. pods. + - The condition's `type` field works like a key and should be unique among the list. + - Old conditions should not be deleted when updating the condition list. Each condition has an `observedGeneration` field that maps the condition to the object generation it was created for. 
  - The condition's `status` field should be either `True`, `False`, or `Unknown`.
+- `phase`
+  - The phase aggregates the resource's state into a single string. It is useful as an additional printer column for `kubectl get`.
+  - Unless there is a good reason to deviate, it should always contain one of the following values:
+    - `Terminating`, if the resource is being deleted (= has a non-zero deletion timestamp)
+    - `Ready`, if the resource is not being deleted and all of its conditions are `True`
+    - `Progressing` otherwise
+  - This means that if the phase is `Progressing`, there should be at least one non-`True` condition explaining what is currently happening.
+
+> The [controller-utils library](https://github.com/openmcp-project/controller-utils) contains helper functions for updating conditions or even the whole status. See the [documentation](https://github.com/openmcp-project/controller-utils/blob/main/docs/libs/status.md) for further information.
+
+### Event Filtering
+
+Event filtering is not a hard requirement and strongly depends on the purpose of the controller, but it is often useful to use controller-runtime's ability to filter the events that cause a reconciliation. For example, a resource's status is often only modified by the controller reconciling it, and a resource is mostly reconciled by only a single controller. In this case, changes to the status do not need to trigger a reconciliation, because they are already the result of one. In many cases, restricting reconciliation triggers to generation changes (which usually correspond to changes to a resource's `spec`) works well.
+
+The following snippet can be used as a template:
+```go
+import (
+  ctrl "sigs.k8s.io/controller-runtime"
+  "sigs.k8s.io/controller-runtime/pkg/predicate"
+  ctrlutils "github.com/openmcp-project/controller-utils/pkg/controller"
+  openmcpconst "github.com/openmcp-project/openmcp-operator/api/constants"
+)
+
+// SetupWithManager sets up the controller with the Manager.
+func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error {
+  return ctrl.NewControllerManagedBy(mgr).
+    For(&mypackage.MyObjType{}).
+    WithEventFilter(predicate.And(
+      predicate.Or(
+        predicate.GenerationChangedPredicate{},
+        ctrlutils.DeletionTimestampChangedPredicate{},
+        ctrlutils.GotAnnotationPredicate(openmcpconst.OperationAnnotation, openmcpconst.OperationAnnotationValueReconcile),
+        ctrlutils.LostAnnotationPredicate(openmcpconst.OperationAnnotation, openmcpconst.OperationAnnotationValueIgnore),
+      ),
+      predicate.Not(
+        ctrlutils.HasAnnotationPredicate(openmcpconst.OperationAnnotation, openmcpconst.OperationAnnotationValueIgnore),
+      ),
+    )).
+    Complete(r)
+}
+```
+
+This example restricts reconciliation triggers to the following events:
+- The resource generation changed, indicating a change to the `spec` of the resource.
+  - Note that kubernetes generally increases the generation only if some part of a top-level field named `spec` changed. For resources without a `spec` (e.g. secrets), the generation is not increased when the payload changes. If a resource contains additional top-level fields next to `spec`, modifications to them might not cause a generation increase either.
+- The resource got a deletion timestamp, meaning its deletion was triggered.
+- The resource got the `openmcp.cloud/operation` annotation with value `reconcile` (or it had the annotation before and its value was changed to `reconcile`).
diff --git a/docs/developers/provider_deployment.md b/docs/developers/provider_deployment.md new file mode 100644 index 0000000..ff3ba07 --- /dev/null +++ b/docs/developers/provider_deployment.md @@ -0,0 +1,229 @@ +# Provider Deployment + +The openMCP architecture knows three different kinds of providers: +- `ClusterProviders` manage kubernetes clusters and access to them +- `PlatformServices` provide landscape-wide service functionalities +- `ServiceProviders` provide the actual services that can be consumed by customers via the ManagedControlPlanes + +All providers can automatically be deployed via the corresponding provider resources: `ClusterProvider`, `PlatformService`, and `ServiceProvider`. The [openmcp-operator](https://github.com/openmcp-project/openmcp-operator) is responsible for these resources. + +For now, the spec of all three provider kinds looks exactly the same, which is why they are all explained together. +All of them are cluster-scoped resources. +This is a `ClusterProvider` resource as an example: +```yaml +apiVersion: openmcp.cloud/v1alpha1 +kind: ClusterProvider +metadata: + name: gardener +spec: + image: ghcr.io/openmcp-project/images/cluster-provider-gardener:v0.4.0 + verbosity: INFO +``` + +## Common Provider Contract + +This section explains the contract that provider implementations must follow for the deployment to work. + +### Executing the Binary + +Further information on how the provider binary is executed can be found below. + +#### Image + +Each provider implementation must provide a container image with the provider binary set as an entrypoint. + +#### Subcommands + +The provider binary must take two subcommands: +- `init` initializes the provider. This usually means deploying CRDs for custom resources used by the controller(s). + - The `init` subcommand is executed as a job once whenever the deployed version of the provider changes. +- `run` runs the actual controller(s) required for the provider. + - The `run` subcommand is executed in a pod as part of a deployment. + - The pods with the `run` command are only started after the init job has successfully run through. + - It may be run multiple times in parallel (high-availability), so the provider implementation should support this, e.g. via leader election. + +#### Arguments + +Both subcommands take the same arguments, which are explained below. These arguments will always be passed into the provider. +- `--environment` *any lowercase string* + - The *environment* argument is meant to distinguish between multiple environments (=platform clusters) watching the same onboarding cluster. For example, there could be a public environment and another fenced one - both watch the same resources on the same cluster, but only one of them is meant to react on each resource, depending on its configuration. + - Most setups will probably use only a single environment. + - Will likely be set to the landscape name (e.g. `canary`, `live`) most of the time. +- `--provider-name` *any lowercase string* + - This argument contains the name of the k8s provider resource from which this pod was created. + - If ever multiple instances of the same provider are deployed in the same landscape, this value can be used to differentiate between them. +- `--verbosity` or `-v` *enum: ERROR, INFO, or DEBUG* + - This value specifies the desired logging verbosity for the provider. + +#### Environment Variables + +The following environment variables can be expected to be set: +- `POD_NAME` + - Name of the pod the provider binary runs in. 
- `POD_NAMESPACE`
+  - Namespace of the pod the provider binary runs in.
+- `POD_IP`
+  - IP address of the pod the provider binary runs in.
+- `POD_SERVICE_ACCOUNT_NAME`
+  - Name of the service account that is used to run the provider.
+
+#### Customizations
+
+While it is possible to customize some aspects of how the provider binary is executed, such as adding additional environment variables, overwriting the subcommands, or adding additional arguments, this should only be done in exceptional cases to keep the complexity of setting up an openMCP landscape as low as possible.
+
+### Configuration
+
+Passing configuration into the provider binary via a command-line argument is not desired. If the provider requires configuration of some kind, it is expected to read it from one or more k8s resources, potentially even running a controller to reconcile these resources. The `init` subcommand can be used to register the CRDs for the configuration resources, although this has the disadvantage that the configuration resource is only known after the provider has already been started, which can cause problems with GitOps (or similar deployment methods that deploy all resources at the same time).
+
+### Tips and Tricks
+
+#### Getting Access to the Onboarding Cluster
+
+Providers generally live in the platform cluster, so they can simply access it by using the in-cluster configuration. Getting access to the onboarding cluster is a bit trickier: first, the `Cluster` resource of the onboarding cluster itself or any `ClusterRequest` pointing to it is required. The provider can simply create its own `ClusterRequest` with purpose `onboarding` - a little trick that works because of the shared nature of the onboarding cluster: all requests to it result in a reference to the same `Cluster`. Then, the provider needs to create an `AccessRequest` with the desired permissions and wait until it is ready. This results in a secret containing a kubeconfig for the onboarding cluster.
+
+This flow is already implemented in the library function [`CreateAndWaitForCluster`](https://github.com/openmcp-project/openmcp-operator/blob/v0.11.2/lib/clusteraccess/clusteraccess.go#L387).
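+For illustration, the resources created by such a flow could look roughly like this. The `AccessRequest` follows the schema described in the [ClusterProvider documentation](./clusterproviders.md#access-management); the `ClusterRequest` spec shown here (a single `purpose` field) as well as all names and namespaces are assumptions for the sake of the example:
+```yaml
+apiVersion: clusters.openmcp.cloud/v1alpha1
+kind: ClusterRequest
+metadata:
+  name: onboarding
+  namespace: my-provider-system # assumed namespace
+spec:
+  purpose: onboarding # field name assumed, check the cluster API for the actual schema
+---
+apiVersion: clusters.openmcp.cloud/v1alpha1
+kind: AccessRequest
+metadata:
+  name: onboarding-access
+  namespace: my-provider-system
+spec:
+  requestRef:
+    name: onboarding
+    namespace: my-provider-system
+  token:
+    permissions:
+      - rules:
+          - apiGroups:
+              - ""
+            resources:
+              - secrets
+            verbs:
+              - get
+              - list
+              - watch
+```
+Once the `AccessRequest` is ready, the resulting secret contains a kubeconfig for the onboarding cluster, as described above.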
+ +### Examples + +Basically, the `ClusterProvider` from the example above will result in the following `Job` and `Deployment` (redacted to the more relevant fields): +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + openmcp.cloud/provider-generation: "8" + openmcp.cloud/provider-kind: ClusterProvider + openmcp.cloud/provider-name: gardener + generation: 1 + labels: + app.kubernetes.io/component: init-job + app.kubernetes.io/instance: gardener + app.kubernetes.io/managed-by: openmcp-operator + app.kubernetes.io/name: ClusterProvider + name: gardener-init + namespace: cp-gardener + ownerReferences: + - apiVersion: openmcp.cloud/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: ClusterProvider + name: gardener + uid: cea97d05-34f3-4d12-865d-79fc6f84ff72 +spec: + backoffLimit: 4 + completionMode: NonIndexed + completions: 1 + manualSelector: false + parallelism: 1 + podReplacementPolicy: TerminatingOrFailed + selector: + matchLabels: + batch.kubernetes.io/controller-uid: 90418f79-da36-4787-b339-ff5f3d95417b + suspend: false + template: + metadata: + labels: + app.kubernetes.io/component: init-job + app.kubernetes.io/instance: gardener + app.kubernetes.io/managed-by: openmcp-operator + app.kubernetes.io/name: ClusterProvider + job-name: gardener-init + spec: + containers: + - args: + - init + - --environment=default + - --verbosity=DEBUG + - --provider-name=gardener + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + - name: POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP + - name: POD_SERVICE_ACCOUNT_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.serviceAccountName + image: ghcr.io/openmcp-project/images/cluster-provider-gardener:v0.4.0 + name: init + serviceAccount: gardener-init + serviceAccountName: gardener-init +``` +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: gardener + namespace: cp-gardener + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: gardener + app.kubernetes.io/managed-by: openmcp-operator + app.kubernetes.io/name: ClusterProvider + ownerReferences: + - apiVersion: openmcp.cloud/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: ClusterProvider + name: gardener + uid: cea97d05-34f3-4d12-865d-79fc6f84ff72 +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: gardener + app.kubernetes.io/managed-by: openmcp-operator + app.kubernetes.io/name: ClusterProvider + template: + metadata: + labels: + app.kubernetes.io/component: controller + app.kubernetes.io/instance: gardener + app.kubernetes.io/managed-by: openmcp-operator + app.kubernetes.io/name: ClusterProvider + spec: + containers: + - args: + - run + - --environment=default + - --verbosity=DEBUG + - --provider-name=gardener + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + - name: POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP + - name: POD_SERVICE_ACCOUNT_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.serviceAccountName + image: ghcr.io/openmcp-project/images/cluster-provider-gardener:v0.4.0 + name: gardener + serviceAccount: gardener + serviceAccountName: gardener +``` diff --git 
a/docs/operators/getting-started.md b/docs/operators/00-getting-started.md similarity index 100% rename from docs/operators/getting-started.md rename to docs/operators/00-getting-started.md diff --git a/docs/users/getting-started.md b/docs/users/00-getting-started.md similarity index 100% rename from docs/users/getting-started.md rename to docs/users/00-getting-started.md