Commit 517cc81

hard wrap at 100 characters

Signed-off-by: Matthew Fisher <[email protected]>
1 parent 230214d commit 517cc81

File tree

1 file changed: +63 −27 lines changed

content/en/docs/install/linode-kubernetes-engine.md

@@ -5,38 +5,53 @@ date: 2024-07-23
tags: [Installation]
---

This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes
Engine (LKE).

## Prerequisites

This guide assumes that you have an Akamai Linode account that is configured and has sufficient
permissions for creating a new LKE cluster.

You will also need recent versions of `kubectl` and `helm` installed on your system.
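
To confirm that both tools are installed and on your `PATH`, you can check their versions:

```console
$ kubectl version --client
$ helm version
```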

## Creating an LKE Cluster

LKE has a managed control plane, so you only need to create the pool of worker nodes. In this
tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should
be fine for installing SpinKube and running up to around 100 Spin apps.

You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because
containers consume substantially more resources than Spin apps do.

In the Linode web console, click on `Kubernetes` in the right-hand navigation, and then click
`Create Cluster`.

![LKE Creation Screen Described Below](../lke-spinkube-create.png)

You will only need to make a few choices on this screen. Here's what we have done:
* We named the cluster `spinkube-lke-1`. You should name it according to whatever convention you
  prefer
* We chose the `Chicago, IL (us-ord)` region, but you can choose any region you prefer
* The latest supported Kubernetes version is `1.30`, so we chose that
* For this testing cluster, we chose `No` on `HA Control Plane` because we do not need high
  availability
* In `Add Node Pools`, we added two `Dedicated 4 GB` simply to show a cluster running more than one
  node. Two nodes are sufficient for Spin apps, though you may prefer the more traditional 3-node
  cluster. Click `Add` to add these, and ignore the warning about minimum sizes.

Once you have set things to your liking, press `Create Cluster`.

This will take you to a screen that shows the status of the cluster. Initially, you will want to
wait for all of the nodes in your `Node Pool` to start up. Once all of the nodes are online,
download the `kubeconfig` file, which will be named something like `spinkube-lke-1-kubeconfig.yaml`.

> The `kubeconfig` file contains the credentials for connecting to your new LKE cluster. Do not
> share that file or put it in a public place.
For all of the subsequent operations, you will want to use `spinkube-lke-1-kubeconfig.yaml` as your
main Kubernetes configuration file. The best way to do that is to set the `KUBECONFIG` environment
variable to point to that file:

```console
$ export KUBECONFIG=/path/to/spinkube-lke-1-kubeconfig.yaml
@@ -67,25 +82,34 @@ users:
    token: REDACTED
```

This shows us our cluster config. You should be able to cross-reference the `lkeNNNNNN` identifier
with what you see on your Akamai Linode dashboard.
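
To confirm that `kubectl` is now talking to your new cluster, you can list the nodes; the two
workers from your `Node Pool` should show as `Ready`:

```console
$ kubectl get nodes
```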

## Install SpinKube Using Helm

At this point, [install SpinKube with Helm](installing-with-helm). As long as your `KUBECONFIG`
environment variable is pointed at the correct cluster, the installation method documented there
will work.
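
In outline, that guide installs cert-manager, uses the KWasm operator to place the
`containerd-shim-spin` shim on each worker node, and then installs the `spin-operator` chart, along
with a few CRD and `RuntimeClass`/`SpinAppExecutor` manifests applied via `kubectl apply`. Here is a
condensed sketch of the Helm steps; version pins and chart defaults will have moved on since this
was written, so treat the linked guide as authoritative:

```console
# cert-manager provides certificates for the operator's admission webhooks
$ helm repo add jetstack https://charts.jetstack.io
$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true

# KWasm installs the containerd shim on each annotated node
$ helm repo add kwasm http://kwasm.sh/kwasm-operator/
$ helm install kwasm-operator kwasm/kwasm-operator --namespace kwasm --create-namespace
$ kubectl annotate node --all kwasm.sh/kwasm-node=true

# the Spin Operator itself, published as an OCI chart
$ helm install spin-operator oci://ghcr.io/spinkube/charts/spin-operator --namespace spin-operator --create-namespace --wait
```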

Once you are done following the installation steps, return here to install a first app.

## Creating a First App

We will use the `spin kube` plugin to scaffold out a new app. If you run the following command and
the `kube` plugin is not installed, you will first be prompted to install the plugin. Choose `yes`
to install.
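
If you would rather install the plugin ahead of time, the Spin plugin manager can do it directly:

```console
$ spin plugins update
$ spin plugins install kube
```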

We'll point to an existing Spin app, a [Hello World program written in
Rust](https://github.com/fermyon/spin/tree/main/examples/http-rust), compiled to Wasm, and stored in
GitHub Container Registry (GHCR):

```console
$ spin kube scaffold --from ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0 > hello-world.yaml
```

> Note that Spin apps, which are WebAssembly, can be [stored in most container
> registries](https://developer.fermyon.com/spin/v2/registry-tutorial) even though they are not
> Docker containers.

This will write the following to `hello-world.yaml`:
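
The diff does not show the file body here, but with `spin kube scaffold`'s defaults it should look
roughly like this (the `executor` and `replicas` values below are assumed defaults rather than taken
from this commit):

```yaml
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: spin-rust-hello
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  executor: containerd-shim-spin
  replicas: 2
```

Applying this file deploys the app: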

```console
$ kubectl apply -f hello-world.yaml
spinapp.core.spinoperator.dev/spin-rust-hello created
```

With SpinKube, SpinApps will be deployed as `Pod` resources, so we can see the app using
`kubectl get pods`:

```console
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
spin-rust-hello-f6d8fc894-7pq7k   1/1     Running   0          54s
spin-rust-hello-f6d8fc894-vmsgh   1/1     Running   0          54s
```

Status is listed as `Running`, which means our app is ready.
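
Before exposing the app publicly, you can sanity-check it through the internal service the operator
creates (assuming the service shares the app's name, `spin-rust-hello`):

```console
$ kubectl port-forward svc/spin-rust-hello 8080:80 &
$ curl localhost:8080/hello
Hello world from Spin!
```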

## Making An App Public with a NodeBalancer

By default, Spin apps will be deployed with an internal service. But with Linode, you can provision
a [NodeBalancer](https://www.linode.com/docs/products/networking/nodebalancers/) using a `Service`
object. Here is a `hello-world-nodebalancer.yaml` that provisions a NodeBalancer for us:

```yaml
apiVersion: v1
@@ -143,9 +170,11 @@ spec:
  sessionAffinity: None
```

When LKE receives a `Service` whose `type` is `LoadBalancer`, it will provision a NodeBalancer for
you.

> You can customize this for your app simply by replacing all instances of `spin-rust-hello` with
> the name of your app.
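
For example, assuming your app is named `my-app`, a quick substitution produces a matching service
file:

```console
$ sed 's/spin-rust-hello/my-app/g' hello-world-nodebalancer.yaml > my-app-nodebalancer.yaml
```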

We can create the NodeBalancer by running `kubectl apply` on the above file:

```console
$ kubectl apply -f hello-world-nodebalancer.yaml
service/spin-rust-hello-nodebalancer created
```

Provisioning the new NodeBalancer may take a few moments (`EXTERNAL-IP` shows `<pending>` until it
is ready), but we can get the IP address using `kubectl get service spin-rust-hello-nodebalancer`:

```console
$ kubectl get service spin-rust-hello-nodebalancer
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
spin-rust-hello-nodebalancer   LoadBalancer   10.128.235.253   172.234.210.123   80:31083/TCP   40s
```

The `EXTERNAL-IP` field tells us what the NodeBalancer is using as a public IP. We can now test this
out over the Internet using `curl` or by entering the URL `http://172.234.210.123/hello` into your
browser.

```console
$ curl 172.234.210.123/hello
Hello world from Spin!
```

@@ -176,10 +208,14 @@ To delete this sample app, we will first delete the NodeBalancer, and then delet
```console
$ kubectl delete service spin-rust-hello-nodebalancer
service "spin-rust-hello-nodebalancer" deleted
$ kubectl delete spinapp spin-rust-hello
spinapp.core.spinoperator.dev "spin-rust-hello" deleted
```

> If you delete the NodeBalancer out of the Linode console, it will not automatically delete the
> `Service` record in Kubernetes, which will cause inconsistencies. So it is best to use
> `kubectl delete service` to delete your NodeBalancer.

If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai
Linode dashboard, navigate to `Kubernetes`, and press the `Delete` button. This will destroy all of
your worker nodes and deprovision the control plane.
