Commit 1aeedaa

20250401-kubecon-london/workshop: various docs fixes (#13)

On-behalf-of: SAP <robert.vasek@sap.com>
Signed-off-by: Robert Vasek <robert.vasek@clyso.com>

1 parent 88e4597 commit 1aeedaa
File tree

6 files changed (+96 -73 lines)


20250401-kubecon-london/workshop/00-prerequisites/README.md

Lines changed: 13 additions & 10 deletions
@@ -9,11 +9,11 @@ In this chapter we'll set up our workshop-dedicated development environment.
 
 Start by cloning the git repository we'll refer to throughout the workshop, and will be the place for the binaries, scripts and kubeconfigs we will create as we move forward.
 
-Important: We will need 4 terminal windows for long running programs & interactions to the same underlaying machine during this workshop.
+**Important:** We will need 4 terminal windows for long-running programs & interactions with the same underlying machine during this workshop.
 
 ```shell
-git clone https://github.com/kcp-dev/contrib.git
-cd 20250401-kubecon-london/workshop/
+git clone https://github.com/kcp-dev/contrib.git kcp-contrib
+cd kcp-contrib/20250401-kubecon-london/workshop
 ```
 
 Now, let's see what's inside.
@@ -24,13 +24,16 @@ Now, let's see what's inside.
 * `03-dynamic-providers/`
 * `clean-all.sh`
 
-Notice the exercises in directories `<Sequence number>-<Exercise name>`. These are to be visited in sequence, and to complete one, all previous exercises need to be completed first to bring the system into the desired state. While it's best if you try to follow the tasks by yourself, if you ever get stuck, you can finish an exercise by running the scripts inside the respective exercise directory.
+Notice the exercises in directories `<Sequence number>-<Exercise name>`. These are the rules:
 
-Also take a note of the `clean-all.sh` script. If you ever get stuck and want to reset, run it and it will clean up and stop processes and containers used in the exercises.
+1. Exercises need to be visited in sequence. To complete one, all previous exercises need to be completed first.
+2. Are you stuck? While it's best to attempt the tasks yourself, you can finish an exercise by running the scripts inside the respective exercise directory.
+3. Something broke? If you ever need to reset, run `clean-all.sh` to clean up.
+4. Finished an exercise? High-five! Each exercise directory has a script `99-highfive.sh`. Run it to check in your progress!
 
 ## Get your bins
 
-This one is easy. During the workshop we will make use of these programs:
+Ready for a warm-up? In this quick exercise we are going to install the programs we'll be using:
 
 * [kcp](https://github.com/kcp-dev/kcp/releases/latest),
 * kcp's [api-syncagent](https://github.com/kcp-dev/api-syncagent/releases/latest),
@@ -39,15 +42,15 @@ This one is easy. During the workshop we will make use of these programs:
 * [kubectl](https://kubernetes.io/docs/tasks/tools/),
 * and, [kubectl-krew](https://krew.sigs.k8s.io/docs/user-guide/setup/install/).
 
-You may visit the links, download and extract the respective binaries to a new directory called `bin/` in the workshop's root (e.g., `$WORKSHOP/bin/kubectl`). If you already have some of these installed and available in your `$PATH`, you may skip them--just make sure they are up-to-date.
-
-Alternatively, we've prepared a script that does just that:
+Install them all in one go by running the following script:
 
 ```shell
 00-prerequisites/01-install.sh
 ```
 
-If you're going the manual way, please make sure the file names are stripped of OS and arch names they may contain (e.g. `mv kubectl-krew-linux_amd64 kubectl-krew`), as we'll refer to them using their system-agnostic names later on.
+Inspect it first, and you'll see that it `curl`s files from the GitHub releases of the respective project repositories and stores them in `bin/`, inside our current working directory.
+
+Alternatively, you may install the binaries manually. If you already have some of them installed and available in your `$PATH`, you may skip them--just make sure they are up-to-date. If you go the manual route, please make sure the file names are stripped of any OS and arch suffixes they may contain (e.g. `mv kubectl-krew-linux_amd64 kubectl-krew`), as we'll refer to them by their system-agnostic names later on.
 
 And that's it!

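The manual route's renaming step can be sketched as a small loop. This is an illustrative snippet, not part of the workshop scripts: the dummy file names and the temp directory exist only to make the example self-contained, and `linux_amd64` is just one example of an OS/arch suffix.

```shell
# Hypothetical illustration of stripping OS/arch suffixes from downloads.
# Work in a throwaway directory with dummy files standing in for real binaries.
workdir="$(mktemp -d)"
cd "$workdir"
touch kubectl-krew-linux_amd64 kcp_linux_amd64

for f in *; do
  clean="${f%[-_]linux_amd64}"   # drop a trailing -linux_amd64 or _linux_amd64
  [ "$clean" = "$f" ] || mv "$f" "$clean"
done

ls                               # now: kcp  kubectl-krew
```

The suffix-stripping uses plain POSIX parameter expansion (`${var%pattern}`), so the same loop works in bash, zsh, or sh; only the suffix pattern needs adjusting per platform.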
20250401-kubecon-london/workshop/01-deploy-kcp/README.md

Lines changed: 9 additions & 7 deletions
@@ -24,8 +24,8 @@ kcp may be deployed via a [Helm chart](https://github.com/kcp-dev/helm-charts),
 === "Fish"
 
 ```fish
-set WORKSHOP_ROOT (git rev-parse --show-toplevel)/20250401-kubecon-london/workshop
-set PATH $WORKSHOP_ROOT/bin $PATH
+set -gx WORKSHOP_ROOT (git rev-parse --show-toplevel)/20250401-kubecon-london/workshop
+set -gx PATH $WORKSHOP_ROOT/bin $PATH
 ```
 
 Starting kcp in standalone mode is as easy as typing `kcp start` and pressing Enter.
@@ -34,27 +34,28 @@ Starting kcp in standalone mode is as easy as typing `kcp start` and pressing Enter.
 cd $WORKSHOP_ROOT && kcp start
 ```
 
-
 You should see the program running indefinitely, and outputting its logs--starting with some errors that should clean up in a couple of seconds as the different controllers start up. Leave the terminal window open, as we will keep using this kcp instance throughout the duration of the workshop. In this mode, all kcp's state is in-memory only. That means exiting the process (by, for example, pressing _Ctrl+C_ in this terminal), will lose all its etcd contents.
 
 Once kcp's output seems stable, we can start making simple kubectl calls against it. `kcp start` creates a hidden directory `.kcp`, where it places its kubeconfig and the certificates.
 
-Open a new terminal (termianl 2, same 01-deploy-kcp directory) now.
-
 !!! Important
 
+Open a **second shell** and `cd` into the workshop's directory now.
+
 === "Bash/ZSH"
 
 ```shell
 export WORKSHOP_ROOT="$(git rev-parse --show-toplevel)/20250401-kubecon-london/workshop"
+export PATH="${WORKSHOP_ROOT}/bin:${PATH}"
 export KUBECONFIG="${WORKSHOP_ROOT}/.kcp/admin.kubeconfig"
 ```
 
 === "Fish"
 
 ```fish
-set WORKSHOP_ROOT (git rev-parse --show-toplevel)/20250401-kubecon-london/workshop
-set KUBECONFIG $WORKSHOP_ROOT/.kcp/admin.kubeconfig"
+set -gx WORKSHOP_ROOT (git rev-parse --show-toplevel)/20250401-kubecon-london/workshop
+set -gx PATH $WORKSHOP_ROOT/bin $PATH
+set -gx KUBECONFIG $WORKSHOP_ROOT/.kcp/admin.kubeconfig
 ```
 
 The following command should work now:
@@ -90,4 +91,5 @@ If there were no errors, you may continue with the next exercise.
 ### Cheat-sheet
 
 You may fast-forward through this exercise by running:
+
 * `01-deploy-kcp/01-start-kcp.sh` in a new terminal
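The environment setup this exercise relies on can be sketched end-to-end. This is a minimal illustration, assuming nothing but a throwaway directory standing in for the workshop checkout (in the real exercise, `git rev-parse --show-toplevel` supplies the path and `kcp start` creates `.kcp/admin.kubeconfig`):

```shell
# Illustrative only: fake the workshop layout in a temp dir.
WORKSHOP_ROOT="$(mktemp -d)"
mkdir -p "$WORKSHOP_ROOT/bin" "$WORKSHOP_ROOT/.kcp"
: > "$WORKSHOP_ROOT/.kcp/admin.kubeconfig"   # stands in for the file kcp writes

# The same exports the exercise uses:
export PATH="$WORKSHOP_ROOT/bin:$PATH"       # workshop binaries win command lookups
export KUBECONFIG="$WORKSHOP_ROOT/.kcp/admin.kubeconfig"

# Every kubectl/kcp invocation in this shell now resolves against these.
test -f "$KUBECONFIG" && echo "kubeconfig wired up"
```

The point of exporting (rather than plain assignment, or fish's plain `set`) is that child processes such as `kubectl` inherit the variables; that is exactly what the `set -gx` fix in the Fish tab above addresses.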

20250401-kubecon-london/workshop/02-explore-workspaces/README.md

Lines changed: 10 additions & 10 deletions
@@ -82,9 +82,7 @@ kubectl ws use :
 kubectl get ws
 ```
 
-These are the workspaces we created, and they represent logical separation of resources in the cluster.
-
-We haven't seen `ws use` yet. Using this command, you move into a different workspace in the tree of workspaces, much like `cd` moves you into a different directory described by a path. In case of workspaces, a path too may be relative or absolute, where `:` is the path separator, and `:` alone denotes the root of the tree.
+We haven't seen `ws use` yet. Using this command you move into a different workspace in the tree of workspaces, much like `cd` moves you into a different directory described by a path. In the case of workspaces, a path may likewise be relative or absolute, where `:` is the path separator, and `:` alone denotes the root of the tree.
 
 ```shell
 kubectl ws use :
@@ -101,7 +99,7 @@ kubectl create configmap test --from-literal=test=two
 kubectl get configmap test -o json
 ```
 
-Notice how even though these two ConfigMaps have the same name `test`, and are in the same namespace `default`, they are actually two distinct objects. They live in two different workspaces, and are completely separate.
+Notice how even though these two ConfigMaps have the same name `test` and are in the same namespace `default`, they are actually two distinct objects. They live in two different workspaces and are completely separate. **Workspaces represent logical separation of resources in the cluster.**
 
 We've created a few workspaces now, and already it's easy to lose sight of what is where. Say hello to `ws tree`:

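The `cd` analogy in this README can be made concrete with plain directories. This is only an illustration: the directory names mirror workspace names used in the exercise (their exact parent paths are an assumption here), and `:` in a workspace path plays the role `/` plays below.

```shell
# A toy directory tree standing in for the workspace tree.
root="$(mktemp -d)"                # plays the role of ':' (the root workspace)
mkdir -p "$root/providers/cowboys" "$root/consumers/wild-west"

cd "$root"                         # like `kubectl ws use :`
cd providers/cowboys               # relative, like `kubectl ws use providers:cowboys`
cd "$root/consumers/wild-west"     # absolute, like `kubectl ws use :consumers:wild-west`
pwd
```

Just as `pwd` reports where you are in the directory tree, `kubectl ws .` reports your current workspace path.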
@@ -169,16 +167,18 @@ Starting with the first one, `APIResourceSchema`:
 kubectl get apiresourceschema -o json
 ```
 
-Try to skim through the YAML output and you'll notice that it is almost identical to a definition of a CRD. Unlinke a CRD however, `APIResourceSchema` instance does not have a backing API server, and instead it simply describes an API that we can pass around and refer to. By decoupling the schema definition from serving, API owners can be more explicit about API evolution.
+Try to skim through the YAML output and you'll notice that it is almost identical to the definition of a CRD. Unlike a CRD, however, an `APIResourceSchema` instance does not have a backing API server; instead it simply describes an API that we can pass around and refer to. By decoupling the schema definition from serving, API owners can be more explicit about API evolution.
 
 ```shell
 kubectl get apiexport cowboys -o yaml
 ```
 
 Take a note of the following properties in the output:
-* `.spec.latestResourceSchemas`: refers to specific versions of `APIResourceSchema` objects,
+
+* `.spec.latestResourceSchemas`: lists which `APIResourceSchema`s we are exporting,
 * `.spec.permissionClaims`: describes resource permissions that our API depends on. These are the permissions that we, the service provider, want the consumer to grant us,
-* `.status.virtualWorkspaces[].url`: the URL where the provider can access the granted resources.
+* `.status.virtualWorkspaces[].url`: a Kubernetes endpoint to access all resources that belong to this export, across all consumers.
+
 ```yaml
 # Stripped down example output of `kubectl get apiexport` command above.
 spec:
@@ -195,7 +195,7 @@ status:
 
 ### Service consumer
 
-With the provider in place, let's create two consumers in their own workspaces, starting with "wild-west":
+With the provider in place, let's shift into the role of a consumer. Actually, two consumers, in their own workspaces! Let's start with the first one, named "wild-west":
 
 ```shell
 kubectl ws use :
@@ -288,7 +288,7 @@ kubectl ws :root:providers:cowboys
 kubectl get apiexport cowboys -o json | jq '.status.virtualWorkspaces[].url'
 ```
 
-Using that URL, we can confirm that only the resources we have agreed on are available to the workspaces.
+Using that URL, we can confirm that we have access to the resources the consumers have agreed to:
 
 ```shell-session
 $ kubectl -s 'https://192.168.32.7:6443/services/apiexport/1ctnpog1ny8bnud6/cowboys/clusters/*' api-resources
@@ -318,7 +318,7 @@ From that, you can already start imagining what a workspace-aware controller ope
 Finished? High-five! Check in your completion with:
 
 ```shell
-../02-explore-workspaces/99-highfive.sh
+02-explore-workspaces/99-highfive.sh
 ```
 
 If there were no errors, you may continue with the next exercise.
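The access pattern in the last hunks boils down to: read the virtual-workspace URL off the APIExport, append a cluster wildcard, and point kubectl at it. A self-contained sketch of just the URL handling, where the JSON literal is a stripped-down stand-in for real `kubectl get apiexport cowboys -o json` output (host and cluster hash match the example above, but are environment-specific):

```shell
# Stand-in for `kubectl get apiexport cowboys -o json` (illustrative values).
json='{"status":{"virtualWorkspaces":[{"url":"https://192.168.32.7:6443/services/apiexport/1ctnpog1ny8bnud6/cowboys"}]}}'

# Extract the URL the same way the exercise does, with jq.
url="$(printf '%s' "$json" | jq -r '.status.virtualWorkspaces[].url')"

# Appending /clusters/* addresses all consumer workspaces bound to this export:
echo "${url}/clusters/*"
# → https://192.168.32.7:6443/services/apiexport/1ctnpog1ny8bnud6/cowboys/clusters/*
```

In a live session, the resulting URL is what you pass to `kubectl -s ...` as shown in the exercise's `shell-session` example.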
