
Commit c85829d

link to the official multicluster-provider example and fix hiearchy documentation

On-behalf-of: SAP <[email protected]>
Signed-off-by: Simon Bein <[email protected]>

1 parent fb0b655 commit c85829d

File tree

1 file changed
docs/content/concepts/workspaces/workspace-initialization.md

Lines changed: 20 additions & 157 deletions
@@ -24,12 +24,22 @@ spec:
 
 Each initializer has a unique name, which gets automatically generated using `<workspace-path-of-WorkspaceType>:<WorkspaceType-name>`. So for example, if you were to apply the aforementioned WorkspaceType on the root workspace, your initializer would be called `root:example`.
 
-Since `WorkspaceType.spec.initializer` is a boolean field, each WorkspaceType comes with a single initializer by default. However each WorkspaceType inherits the initializers of its parent workspaces. As a result, it is possible to have multiple initializers on a WorkspaceType, but you will need to nest them.
-
-Here is a example:
+Since `WorkspaceType.spec.initializer` is a boolean field, each WorkspaceType comes with a single initializer by default. However, each WorkspaceType inherits the initializers of its parent WorkspaceType. As a result, it is possible to have multiple initializers on a WorkspaceType using [WorkspaceType Extension](../../concepts/workspaces/workspace-types.md#workspace-type-extensions-and-constraints).
 
-1. In `root` workspace, create a new WorkspaceType called `parent`. You will receive a `root:parent` initializer
-2. In the newly created `parent` workspace, create a new WorkspaceType `child`. You will receive a `root:parent:child` initializer
-3. Whenever a new workspace is created in the child workspace, it will receive both the `root:parent` as well as the `root:parent:child` initializer
+In the following example, `child` inherits the initializers of `parent`. As a result, child workspaces will have the `root:child` and `root:parent` initializers set.
+
+```yaml
+apiVersion: tenancy.kcp.io/v1alpha1
+kind: WorkspaceType
+metadata:
+  name: child
+spec:
+  initializer: true
+  extend:
+    with:
+    - name: parent
+      path: root
+```
 
 ### Enforcing Permissions for Initializers
 
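The inheritance shown in the hunk above is visible on the `LogicalCluster` object of a freshly created child workspace. Below is a minimal sketch of what its status might look like; the object name `cluster` and the exact field layout are assumptions based on the `status.initializers` field the kcp SDK exposes, with the initializer values taken from the example:

```yaml
apiVersion: core.kcp.io/v1alpha1
kind: LogicalCluster
metadata:
  name: cluster
status:
  initializers:
  - root:parent
  - root:child
```

Once both initializers have been removed by their respective controllers, the workspace leaves the initializing phase.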
@@ -66,7 +76,7 @@ roleRef:
 
 ## Writing Custom Initialization Controllers
 
-### Responsibilities Of Custom Intitialization Controllers
+### Responsibilities Of Custom Initialization Controllers
 
 Custom Initialization Controllers are responsible for handling initialization logic for custom WorkspaceTypes. They interact with kcp by:
 
@@ -88,163 +98,16 @@ $ kubectl get workspacetype example -o yaml
 ...
 status:
   virtualWorkspaces:
-  - url: https://<front-proxy-url>/services/initializingworkspaces/root:example
+  - url: https://<shard-url>/services/initializingworkspaces/root:example
 ```
 
-You can use this url to construct a kubeconfig for your controller. To do so, use the url directly as the `cluster.server` in your kubeconfig and provide a user with sufficient permissions (see [Enforcing Permissions for Initializers](#enforcing-permissions-for-initializers))
+You can use this url to construct a kubeconfig for your controller. To do so, use the url directly as the `cluster.server` in your kubeconfig and provide the subject with sufficient permissions (see [Enforcing Permissions for Initializers](#enforcing-permissions-for-initializers)).
 
 ### Code Sample
 
 When writing a custom initializer, the following needs to be taken into account:
 
-* We strongly recommend to use the kcp [initializingworkspace multicluster-provider](github.com/kcp-dev/multicluster-provider) to build your custom initializer
+* We strongly recommend to use the kcp [initializingworkspace multicluster-provider](https://github.com/kcp-dev/multicluster-provider) to build your custom initializer
 * You need to update LogicalClusters using patches; They cannot be updated using the update api
 
-Keeping this in mind, you can use the following example as a starting point for your intitialization controller
-
-=== "reconcile.go"
-
-    ```Go
-    package main
-
-    import (
-        "context"
-        "slices"
-
-        "github.com/go-logr/logr"
-        kcpcorev1alpha1 "github.com/kcp-dev/kcp/sdk/apis/core/v1alpha1"
-        "github.com/kcp-dev/kcp/sdk/apis/tenancy/initialization"
-        ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
-        "sigs.k8s.io/controller-runtime/pkg/cluster"
-        "sigs.k8s.io/controller-runtime/pkg/reconcile"
-        mcbuilder "sigs.k8s.io/multicluster-runtime/pkg/builder"
-        mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
-        mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
-    )
-
-    type Reconciler struct {
-        Log             logr.Logger
-        InitializerName kcpcorev1alpha1.LogicalClusterInitializer
-        ClusterGetter   func(context.Context, string) (cluster.Cluster, error)
-    }
-
-    func (r *Reconciler) Reconcile(ctx context.Context, req mcreconcile.Request) (reconcile.Result, error) {
-        log := r.Log.WithValues("clustername", req.ClusterName)
-        log.Info("Reconciling")
-
-        // create a client scoped to the logical cluster the request came from
-        cluster, err := r.ClusterGetter(ctx, req.ClusterName)
-        if err != nil {
-            return reconcile.Result{}, err
-        }
-        client := cluster.GetClient()
-
-        lc := &kcpcorev1alpha1.LogicalCluster{}
-        if err := client.Get(ctx, req.NamespacedName, lc); err != nil {
-            return reconcile.Result{}, err
-        }
-
-        // check if your initializer is still set on the logicalcluster
-        if slices.Contains(lc.Status.Initializers, r.InitializerName) {
-
-            // your logic to initialize a Workspace goes here
-            log.Info("Starting to initialize cluster")
-
-            // after your initialization is done, don't forget to remove your initializer.
-            // You will need to use patch, to update the LogicalCluster
-            patch := ctrlclient.MergeFrom(lc.DeepCopy())
-            lc.Status.Initializers = initialization.EnsureInitializerAbsent(r.InitializerName, lc.Status.Initializers)
-            if err := client.Status().Patch(ctx, lc, patch); err != nil {
-                return reconcile.Result{}, err
-            }
-        }
-
-        return reconcile.Result{}, nil
-    }
-
-    func (r *Reconciler) SetupWithManager(mgr mcmanager.Manager) error {
-        return mcbuilder.ControllerManagedBy(mgr).
-            For(&kcpcorev1alpha1.LogicalCluster{}).
-            Complete(r)
-    }
-    ```
-
-=== "main.go"
-
-    ```Go
-    package main
-
-    import (
-        "context"
-        "fmt"
-        "log/slog"
-        "os"
-        "strings"
-
-        "github.com/go-logr/logr"
-        kcpcorev1alpha1 "github.com/kcp-dev/kcp/sdk/apis/core/v1alpha1"
-        "github.com/kcp-dev/multicluster-provider/initializingworkspaces"
-        "golang.org/x/sync/errgroup"
-        "k8s.io/client-go/kubernetes/scheme"
-        "k8s.io/client-go/tools/clientcmd"
-        ctrl "sigs.k8s.io/controller-runtime"
-        "sigs.k8s.io/controller-runtime/pkg/manager"
-        mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
-    )
-
-    // glue and setup code
-    func main() {
-        if err := execute(); err != nil {
-            fmt.Println(err)
-            os.Exit(1)
-        }
-    }
-
-    func execute() error {
-        // your kubeconfig here
-        kubeconfigpath := "<your-kubeconfig>"
-
-        config, err := clientcmd.BuildConfigFromFlags("", kubeconfigpath)
-        if err != nil {
-            return err
-        }
-
-        // since the initializer's name is the last part of the hostname, we can take it from there
-        initializerName := config.Host[strings.LastIndex(config.Host, "/")+1:]
-
-        provider, err := initializingworkspaces.New(config, initializingworkspaces.Options{InitializerName: initializerName})
-        if err != nil {
-            return err
-        }
-
-        logger := logr.FromSlogHandler(slog.NewTextHandler(os.Stderr, nil))
-        ctrl.SetLogger(logger)
-
-        mgr, err := mcmanager.New(config, provider, manager.Options{Logger: logger})
-        if err != nil {
-            return err
-        }
-
-        // add the logicalcluster scheme
-        if err := kcpcorev1alpha1.AddToScheme(scheme.Scheme); err != nil {
-            return err
-        }
-
-        r := Reconciler{
-            Log:             mgr.GetLogger().WithName("initializer-controller"),
-            InitializerName: kcpcorev1alpha1.LogicalClusterInitializer(initializerName),
-            ClusterGetter:   mgr.GetCluster,
-        }
-
-        if err := r.SetupWithManager(mgr); err != nil {
-            return err
-        }
-        mgr.GetLogger().Info("Setup complete")
-
-        // start the provider and manager
-        g, ctx := errgroup.WithContext(context.Background())
-        g.Go(func() error { return provider.Run(ctx, mgr) })
-        g.Go(func() error { return mgr.Start(ctx) })
-
-        return g.Wait()
-    }
-    ```
+Keeping this in mind, you can use the [multicluster-provider initializingworkspaces example](https://github.com/kcp-dev/multicluster-provider/tree/main/examples/initializingworkspaces) as a starting point for your initialization controller.
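The kubeconfig construction described in the hunk above can be sketched as follows. This is a minimal example, not an authoritative template: the cluster, context, and user names are arbitrary placeholders, the token stands in for whatever credentials your setup grants the initializing subject, and `<shard-url>` stays whatever `status.virtualWorkspaces` reports:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: initializer
  cluster:
    # the url from status.virtualWorkspaces of your WorkspaceType
    server: https://<shard-url>/services/initializingworkspaces/root:example
contexts:
- name: initializer
  context:
    cluster: initializer
    user: initializer-user
current-context: initializer
users:
- name: initializer-user
  user:
    # credentials for a subject with verbs "initialize" on the WorkspaceType
    token: <token>
```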
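The two conventions this page relies on, the `<workspace-path>:<WorkspaceType-name>` naming scheme and removing an initializer from the list once initialization is done, can be sketched in plain Go. Note that `initializerName` and `removeInitializer` are illustrative helpers invented for this sketch; in a real controller the removal is done with the SDK's `initialization.EnsureInitializerAbsent` followed by a status patch, as the linked example shows:

```go
package main

import (
	"fmt"
	"slices"
)

// initializerName mirrors how kcp derives an initializer name:
// the workspace path of the WorkspaceType, a colon, and the type's name.
func initializerName(workspacePath, typeName string) string {
	return workspacePath + ":" + typeName
}

// removeInitializer returns the initializer list without the given entry.
// This is a stand-in for the SDK helper initialization.EnsureInitializerAbsent.
func removeInitializer(initializer string, initializers []string) []string {
	out := make([]string, 0, len(initializers))
	for _, i := range initializers {
		if i != initializer {
			out = append(out, i)
		}
	}
	return out
}

func main() {
	// a WorkspaceType "example" applied in the root workspace
	name := initializerName("root", "example")
	fmt.Println(name) // root:example

	// a child workspace that inherited a second initializer
	pending := []string{"root:parent", "root:child"}

	// only act if our initializer is still set, then remove it
	if slices.Contains(pending, "root:child") {
		pending = removeInitializer("root:child", pending)
	}
	fmt.Println(pending) // [root:parent]
}
```

In the real controller the result of the removal must be written back with a status patch; a plain update on the LogicalCluster is rejected.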
