Commit 230214d

Merge pull request #206 from technosophos/feat/lke

Add instructions for Linode Kubernetes Engine (LKE)

2 parents a070f37 + 832b2aa
---
title: Installing on Linode Kubernetes Engine (LKE)
description: This guide walks you through the process of installing SpinKube on [LKE](https://www.linode.com/docs/products/compute/kubernetes/).
date: 2024-07-23
tags: [Installation]
---

This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes Engine (LKE).

## Prerequisites

This guide assumes that you have an Akamai Linode account with sufficient permissions to create a new LKE cluster.

You will also need recent versions of `kubectl` and `helm` installed on your system.
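
To confirm both tools are available, you can check their versions (exact output varies by platform; any recent release should be fine):

```console
$ kubectl version --client
$ helm version
```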

## Creating an LKE Cluster

LKE has a managed control plane, so you only need to create the pool of worker nodes. In this tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should be fine for installing SpinKube and running up to around 100 Spin apps.

You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because containers consume substantially more resources than Spin apps do.

In the Linode web console, click on `Kubernetes` in the right-hand navigation, and then click `Create Cluster`.

![LKE Creation Screen Described Below](../lke-spinkube-create.png)

You will only need to make a few choices on this screen. Here's what we have done:

* We named the cluster `spinkube-lke-1`. You should name it according to whatever convention you prefer.
* We chose the `Chicago, IL (us-ord)` region, but you can choose any region you prefer.
* The latest supported Kubernetes version is `1.30`, so we chose that.
* For this testing cluster, we chose `No` on `HA Control Plane` because we do not need high availability.
* In `Add Node Pools`, we added two `Dedicated 4 GB` nodes simply to show a cluster running more than one node. Two nodes are sufficient for Spin apps, though you may prefer the more traditional 3-node cluster. Click `Add` to add these, and ignore the warning about minimum sizes.

Once you have set things to your liking, press `Create Cluster`.
34+
35+
This will take you to a screen that shows the status of the cluster. Initially, you will want to wait for all of your `Node Pool` to start up. Once all of the nodes are online, download the `kubeconfig` file, which will be named something like `spinkube-lke-1-kubeconfig.yaml`.
36+
37+
> The `kubeconfig` file will have the credentials for connecting to your new LKE cluster. Do not share that file or put it in a public place.

For all of the subsequent operations, you will want to use the `spinkube-lke-1-kubeconfig.yaml` as your main Kubernetes configuration file. The best way to do that is to set the environment variable `KUBECONFIG` to point to that file:

```console
$ export KUBECONFIG=/path/to/spinkube-lke-1-kubeconfig.yaml
```
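
If you prefer not to export an environment variable, `kubectl` also accepts a `--kubeconfig` flag on each invocation:

```console
$ kubectl --kubeconfig /path/to/spinkube-lke-1-kubeconfig.yaml get nodes
```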

You can test this using the command `kubectl config view`:

```console
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://REDACTED.us-ord-1.linodelke.net:443
  name: lke203785
contexts:
- context:
    cluster: lke203785
    namespace: default
    user: lke203785-admin
  name: lke203785-ctx
current-context: lke203785-ctx
kind: Config
preferences: {}
users:
- name: lke203785-admin
  user:
    token: REDACTED
```

This shows us our cluster config. You should be able to cross-reference the `lkeNNNNNN` identifier with what you see on your Akamai Linode dashboard.
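
As one more check that the credentials work end to end, listing the nodes should show the two `Dedicated 4 GB` workers from your `Node Pool` in the `Ready` state:

```console
$ kubectl get nodes
```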

## Install SpinKube Using Helm

At this point, [install SpinKube with Helm](installing-with-helm). As long as your `KUBECONFIG` environment variable is pointed at the correct cluster, the installation method documented there will work.

Once you are done following the installation steps, return here to install a first app.
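
Before moving on, a quick sanity check can confirm the operator's pieces are in place. The CRD group below matches the `apiVersion` used later in this guide; the runtime class list will vary with the shim version you installed:

```console
$ kubectl get crds | grep spinoperator.dev
$ kubectl get runtimeclass
```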

## Creating a First App

We will use the `spin kube` plugin to scaffold out a new app. If you run the following command and the `kube` plugin is not installed, you will first be prompted to install the plugin. Choose `yes` to install.

We'll point to an existing Spin app, a [Hello World program written in Rust](https://github.com/fermyon/spin/tree/main/examples/http-rust), compiled to Wasm, and stored in GitHub Container Registry (GHCR):

```console
$ spin kube scaffold --from ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0 > hello-world.yaml
```

> Note that Spin apps, which are WebAssembly, can be [stored in most container registries](https://developer.fermyon.com/spin/v2/registry-tutorial) even though they are not Docker containers.

This will write the following to `hello-world.yaml`:

```yaml
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: spin-rust-hello
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
  executor: containerd-shim-spin
  replicas: 2
```

Using `kubectl apply`, we can deploy that app:

```console
$ kubectl apply -f hello-world.yaml
spinapp.core.spinoperator.dev/spin-rust-hello created
```
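
Because `SpinApp` is a custom resource, we can also list the apps themselves rather than the pods that back them; you should see `spin-rust-hello` in the output:

```console
$ kubectl get spinapps
```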

With SpinKube, SpinApps will be deployed as `Pod` resources, so we can see the app using `kubectl get pods`:

```console
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
spin-rust-hello-f6d8fc894-7pq7k   1/1     Running   0          54s
spin-rust-hello-f6d8fc894-vmsgh   1/1     Running   0          54s
```

Status is listed as `Running`, which means our app is ready.
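
If you would like to try the app before exposing it publicly, one option is to port-forward to the app's internal service. The service name and port here are assumptions (the operator typically creates a `ClusterIP` service named after the SpinApp, listening on port 80, which is also what the NodeBalancer below targets):

```console
$ kubectl port-forward svc/spin-rust-hello 8080:80
```

Then, in a second terminal, `curl http://localhost:8080/hello` should return the same greeting shown later in this guide.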

## Making An App Public with a NodeBalancer

By default, Spin apps will be deployed with an internal service. But with Linode, you can provision a [NodeBalancer](https://www.linode.com/docs/products/networking/nodebalancers/) using a `Service` object. Here is a `hello-world-nodebalancer.yaml` that provisions a NodeBalancer for us:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spin-rust-hello-nodebalancer
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
  labels:
    core.spinoperator.dev/app-name: spin-rust-hello
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    core.spinoperator.dev/app.spin-rust-hello.status: ready
  sessionAffinity: None
```
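
The `selector` above targets pods carrying the `core.spinoperator.dev/app.spin-rust-hello.status: ready` label. You can confirm that label is present on the running pods before creating the load balancer:

```console
$ kubectl get pods --show-labels
```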

When LKE receives a `Service` whose `type` is `LoadBalancer`, it will provision a NodeBalancer for you.

> You can customize this for your app simply by replacing all instances of `spin-rust-hello` with the name of your app.

We can create the NodeBalancer by running `kubectl apply` on the above file:

```console
$ kubectl apply -f hello-world-nodebalancer.yaml
service/spin-rust-hello-nodebalancer created
```

Provisioning the new NodeBalancer may take a few moments, but we can get the IP address using `kubectl get service spin-rust-hello-nodebalancer`:

```console
$ kubectl get service spin-rust-hello-nodebalancer
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
spin-rust-hello-nodebalancer   LoadBalancer   10.128.235.253   172.234.210.123   80:31083/TCP   40s
```

The `EXTERNAL-IP` field tells us what the NodeBalancer is using as a public IP. We can now test this out over the Internet using `curl` or by entering the URL `http://172.234.210.123/hello` into your browser.

```console
$ curl 172.234.210.123/hello
Hello world from Spin!
```
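
If you want to script against the address rather than copy it from the table, `kubectl` can print just the external IP (this assumes a single ingress entry on the load balancer, which is the case for a NodeBalancer):

```console
$ kubectl get service spin-rust-hello-nodebalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.234.210.123
```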

## Deleting Our App

To delete this sample app, we will first delete the NodeBalancer, and then delete the app:

```console
$ kubectl delete service spin-rust-hello-nodebalancer
service "spin-rust-hello-nodebalancer" deleted
$ kubectl delete spinapp spin-rust-hello
spinapp.core.spinoperator.dev "spin-rust-hello" deleted
```

> If you delete the NodeBalancer from the Linode console, it will not automatically delete the `Service` record in Kubernetes, which will cause inconsistencies. So it is best to use `kubectl delete service` to delete your NodeBalancer.

If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai Linode dashboard, navigate to `Kubernetes`, and press the `Delete` button. This will destroy all of your worker nodes and deprovision the control plane.
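
If you prefer the command line, the Linode CLI can also remove the cluster. The subcommand names below are an assumption rather than something covered by this guide, so confirm them with `linode-cli lke --help` before relying on them:

```console
$ linode-cli lke clusters-list
$ linode-cli lke cluster-delete <cluster-id>
```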