
Commit 97d79a7

Update documentation for setting up edge side (#13)

Signed-off-by: Josh Minor <[email protected]>

1 parent e9ba021 commit 97d79a7

File tree

4 files changed (+161, -128 lines)

README.md

Lines changed: 59 additions & 60 deletions
@@ -3,21 +3,17 @@
 [![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/smarter)](https://artifacthub.io/packages/search?repo=smarter)

 ## This demo makes the following assumptions about your environment

-In the case you wish to deploy the demo we assume you have done the following:
-- You should have a cloud-based k3s master dedicated for edge deployment (we will refer to this as k3s-edge-master) before proceeding any further
-  - if you don't have a k3s-edge-master, you can follow [these instructions](./k3s-edge-master.md)
+In this guide we assume you have done the following:
+- You should have a cloud-based k3s server dedicated for edge deployment (we will refer to this as k3s-edge-server) before proceeding any further
+  - if you don't have a k3s-edge-server, you can follow [these instructions](./k3s-edge-server.md)
 - You should also have an installed InfluxDB and Grafana instance in a separate kubernetes cluster
-  - these may be installed on a second cloud node, with its own k3s master, we will refer to this as the cloud-data-node
-  - if you don't have a cloud-data-node, you can follow [these instructions](./cloud-data-node.md)
-- You will also need an installed k3s edge node which has already been setup to talk to k3s-edge-master
-  - instructions for installing a SMARTER image on Xavier AGX 16GB or Rpi4 are available [here](http://gitlab.com/arm-research/smarter/smarter-yocto)
-  - instructions for registering an arbitrary arm64 node running a **64 bit kernel and user space with docker installed** are available [here](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`
-- You will need a KUBECONFIG that is authenticated against the k3s-edge-master on the Dev machine (where you intend to run these commands)
-  - Using our provided node images, your nodes should automatically register with the edge k3s master. You can verify this by running `kubectl get nodes -o wide`
+  - these may be installed on a second cloud node, with its own k3s server, we will refer to this as the cloud-data-node
+  - if you don't have a cloud-data-node, you can follow [these instructions](./k3s-cloud-server.md)
+- You will also need an installed k3s edge node which has already been setup to talk to k3s-edge-server
+  - instructions for registering a node running a **64 bit kernel and user space** are available [here](./k3s-edge-server.md#Joining a k3s edge node to the cluster)

 **Hardware:**
-- Xavier AGX or Raspberry Pi 4 using our [Smarter Yocto Images](http://gitlab.com/arm-research/smarter/smarter-yocto) (release > v0.6.4.1)
-- Rpi4 4GB running Ubuntu 19.10 (can be provisioned using smarter edge setup convenience script found in the scripts directory) or Xavier AGX 16GB running L4T 32.4.3 provided by the jetpack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm based device running a **64 bit kernel and user space with docker installed** should work. For instructions on how to register a **non-yocto** node, you can follow [these instructions](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`. Note that **Ubuntu 20.04** on the RPI 4 will **not** work, please use **19.10**
+- Rpi4 4GB running any Debian based OS or Xavier AGX 16GB running L4T 32.4.3 provided by the jetpack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm based device running a **64 bit kernel and user space** should work.
 - PS3 Eye Camera (or Linux compatible web cam with audio) serving both audio and video data (other USB cameras with microphones may work). Microphone **MUST** support 16KHz sampling rate.
 - A development machine (your work machine) setup to issue kubectl commands to your edge k3s cluster
 - (optional) PMS7003 Air Quality Sensor connected over Serial USB to USB port
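Before deploying, it is worth confirming the assumptions above hold from the dev machine. A minimal sanity check, assuming your KUBECONFIG already points at the k3s-edge-server (the expected output described in the comments is illustrative, not verbatim):

```bash
# Confirm the dev machine can reach the edge cluster and the edge node has registered.
kubectl get nodes -o wide

# The edge node should show STATUS "Ready"; its INTERNAL-IP is the address
# you will later reach on ports 22 (ssh) and 2520 (demo webserver).
```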
@@ -27,55 +23,58 @@ In the case you wish to deploy the demo we assume you have done the following:
 - Dev machine running kubectl client 1.25
 - git, curl must also be installed
 - K3s server version 1.25
-- Node running docker > 18.09

 **Connectivity:**
-- You must be able to reach your node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
-- The node must be able to reach your k3s-edge-master and cloud-data-node via IP
+- You must be able to reach your edge node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
+- The node must be able to reach your k3s-edge-server and cloud-data-node via IP

-## Smarter k3s server configuration
+## Deploy demo
+- To deploy the base system components common to all edge nodes, as well as the demo applications, we opt to use **Helm v3**. To install helm on the machine from which you manage your k3s edge cluster, you can follow the guide [here](https://helm.sh/docs/intro/install/#from-script).
+- Ensure that your kubeconfig is set properly in your environment. As a quick sanity check you can run:
+```bash
+kubectl cluster-info
+```
+and you should get the message: `Kubernetes control plane is running at https://<k3s edge server ip>:<k3s edge server port>`
+- Tell helm to add the smarter repo, so that you can deploy our charts:
+```bash
+helm repo add smarter https://smarter-project.github.io/documentation
+```
+- Use the helm chart at https://github.com/smarter-project/documentation/chart to install CNI, DNS and device-manager. This can be done by running:
+```bash
+helm install my-smarter-edge smarter/smarter-edge --wait
+```
+- With the smarter-edge chart installed, you can verify that all the base pods are ready by running:
+```bash
+kubectl get pods -A -o wide
+```
+- Now we deploy our demo by first applying the helm chart for the demo:
+```bash
+helm install my-smarter-demo smarter/smarter-demo --namespace smarter --create-namespace
+```
+- At this point the applications are installed into the cluster, but no pods will come up as running: each application pod is only scheduled on nodes that carry its node label.
+- Label your nodes by running the following:
+```bash
+export NODE_NAME=<your node name>
+kubectl label node $NODE_NAME smarter-fluent-bit=enabled
+kubectl label node $NODE_NAME smarter-gstreamer=enabled
+kubectl label node $NODE_NAME smarter-pulseaudio=enabled
+kubectl label node $NODE_NAME smarter-inference=enabled
+kubectl label node $NODE_NAME smarter-image-detector=enabled
+kubectl label node $NODE_NAME smarter-audio-client=enabled
+```
+- At this point you should see each of the above workloads running on your target node once it has pulled down the images. You can monitor your cluster as each pod becomes ready by running:
+```bash
+kubectl get pods -A -w
+```
+- With all pods running successfully, if you are on the same network as your edge node, you can point a browser at the IP of the edge node and see the image detector running on your camera feed in real time.
+- To terminate the demo, simply unlabel the node for each workload:
+```bash
+export NODE_NAME=<your node name>
+kubectl label node $NODE_NAME smarter-fluent-bit-
+kubectl label node $NODE_NAME smarter-gstreamer-
+kubectl label node $NODE_NAME smarter-pulseaudio-
+kubectl label node $NODE_NAME smarter-inference-
+kubectl label node $NODE_NAME smarter-image-detector-
+kubectl label node $NODE_NAME smarter-audio-client-
+```

-- Use the helm chart on https://gitlab.com/smarter-project/documentation/chart to install CNI, DNS and device-manager
-```bash
-helm install --namespace smarter --create-namespace smarter-edge chart
-```
-- Use the helm chart on each of the modules. Remember to use the namespace and the correct labels. The individual charts do not install on devices automatically, they require labels.
-
-## To setup your registered edge node from your development machine
-Plugin USB camera. You should be able to see the camera at `/dev/video0`.
-
-The demo assumes that your microphone is assigned to card 2 device 0. On Jetson platforms the first usb microphone is automatically assigned to card 2 device 0, however on the **non-yocto** rpi4 devices this is not the default for instance. To fix this you must create the file `/etc/modprobe.d/alsa-base.conf` with the contents:
-```
-options snd_usb_audio index=2,3
-options snd_usb_audio id="Mic1","Mic2"
-```
-
-On the rpi4 with **Ubuntu**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the file:
-```
-- `/boot/firmware/nobtcmd.txt` if Ubuntu 19.10
-- `/boot/firmware/cmdline.txt` if Ubuntu 20.04
-```
-
-Do not install docker using snap with **Ubuntu** instead install by running:
-```bash
-sudo apt update && sudo apt install docker.io
-```
-
-Then reboot the system.
-
-If you are running on a **Xavier**(on the **non-yocto** build), **Xavier NX**, or a **Nano**, open the file `/etc/docker/daemon.json` on the device and ensure that the default runtime is set to nvidia. The file should look as follows:
-```bash
-{
-  "default-runtime": "nvidia",
-  "runtimes": {
-    "nvidia": {
-      "path": "nvidia-container-runtime",
-      "runtimeArgs": []
-    }
-  }
-}
-```
-
-For Single Tenant deployment instructions read [Here](./SingleTenantREADME.md)
-
-For Virtual Tenant deployment instructions read [Here](./VirtualTenantREADME.md)
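Since the label and unlabel steps in the updated instructions are symmetric, they are easy to script. Below is a minimal sketch, assuming the six label keys above are the full set; the script name is hypothetical and not part of this repo:

```bash
#!/usr/bin/env bash
# toggle-demo-labels.sh (hypothetical helper)
# Usage: NODE_NAME=<your node name> ./toggle-demo-labels.sh on|off
set -euo pipefail

labels=(smarter-fluent-bit smarter-gstreamer smarter-pulseaudio
        smarter-inference smarter-image-detector smarter-audio-client)

for label in "${labels[@]}"; do
  if [ "${1:-on}" = "on" ]; then
    kubectl label node "$NODE_NAME" "${label}=enabled"
  else
    kubectl label node "$NODE_NAME" "${label}-"  # trailing '-' removes the label
  fi
done
```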
Lines changed: 11 additions & 11 deletions
@@ -1,18 +1,18 @@
 # Overview
-This document will help you run a Smarter k3s master
+This document will help you run a Smarter k3s server

 # Running on docker

 ## System requirements

-### k3s cloud master
+### k3s cloud server
 * Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
 * OS: Ubuntu 18.04 or later
 * Architecture: aarch64 or amd64
 * CPU: at least 1vcpu
 * RAM: At least 3.75GB
 * Storage: At least 10GB
-* Multiple k3s cloud masters can be run in a single server if different server ports are used (HOSTPORT).
+* Multiple k3s cloud servers can be run on a single server if different server ports are used (HOSTPORT).

 ### EKS or equivalent
 * A k8s equivalent cluster
@@ -27,13 +27,13 @@ This document will help you run a Smarter k3s master

 Make sure you open the ports from the k3s cloud cluster that edge devices need to access. The k3s server port should also be open to enable control of the k3s server.

-## Setting k3s master up
+## Setting k3s server up

 The [k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) allowing k3s to run as a container.
-This repository provides the file [k3s-cloud-start.sh](https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh) that automates that process and runs a k3s suitable to be a cloud k3s master
+This repository provides the file [k3s-cloud-start.sh](./scripts/k3s-cloud-start.sh) that automates that process and runs a k3s suitable to be a cloud k3s server.
 Execute the following command to download the file:
 ```
-wget https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh
+wget https://raw.githubusercontent.com/smarter-project/documentation/main/scripts/k3s-cloud-start.sh
 ```

 A few options should be set on the script either by environment variables or editing the script.
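For illustration, setting options via environment variables might look like the sketch below. HOSTPORT and K3S_VERSION are the only option names this document mentions; the values shown, and any other variables the script reads, are assumptions to verify against the script itself:

```bash
# Hypothetical invocation of k3s-cloud-start.sh with pinned options;
# check the script for the authoritative variable names.
export HOSTPORT=6443             # port the edge nodes will use to reach this server
export K3S_VERSION=v1.25.4+k3s1  # must match the k3s version running on the clients
./k3s-cloud-start.sh
```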
@@ -43,11 +43,11 @@ execute the script:
 ./k3s-cloud-start.sh
 ```

-The script will create another local script that can be used to restart k3s if necessary, the script is called start_k3s_\<instance name\>.sh.
-The files token.\<instance name\> and kube.\<instance name\>.config contains the credentials to be use to authenticate a node (token file) or kubectl (kube.config file).
+The script will create another local script that can be used to restart k3s if necessary; the script is called `start_k3s_<instance name>.sh`.
+The files `token.<instance name>` and `kube.<instance name>.config` contain the credentials used to authenticate a node (token file) or kubectl (kube.config file).
 *NOTE*: It is important that K3S_VERSION on the client matches the server, otherwise things are likely not to work.
-The k3s-start.sh downloads a compatible k3s executable (that can replace kubectl) with the server and also creates a kubectl-\<instance name\>.sh script that emulates a kubectl with the correct credentials.
-The file env-\<instance name\>.sh create an alias for kubectl and adds the KUBECONFIG enviroment variable.
+The `k3s-start.sh` script downloads a k3s executable compatible with the server (it can also replace kubectl) and creates a `kubectl-<instance name>.sh` script that emulates kubectl with the correct credentials.
+The file `env-<instance name>.sh` creates an alias for kubectl and sets the KUBECONFIG environment variable.

 # Joining a k3s node
-To join an node which does not use our yocto build. Copy the kube_cloud_install-\<instance name\>.sh to the node and execute it. The script is already configured to connect to the server \<instance name\>.
+To join a node to the cloud cluster, copy the `kube_cloud_install-<instance name>.sh` script to the node and execute it. The script is already configured to connect to the server `<instance name>`.
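To make the join step concrete, here is a minimal sketch assuming ssh access to the node; the instance name `myk3s`, the user `ubuntu`, and the host `edge-node-1` are all hypothetical:

```bash
# Copy the generated install script to the node and execute it there
# (the script may need root: prepend sudo if required).
scp kube_cloud_install-myk3s.sh ubuntu@edge-node-1:/tmp/
ssh ubuntu@edge-node-1 'sh /tmp/kube_cloud_install-myk3s.sh'

# Back on the dev machine, verify the node has joined.
./kubectl-myk3s.sh get nodes
```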

k3s-edge-master.md

Lines changed: 0 additions & 57 deletions
This file was deleted.
