## This demo makes the following assumptions about your environment
In this guide we assume you have done the following:
- You should have a cloud-based k3s server dedicated for edge deployment (we will refer to this as k3s-edge-server) before proceeding any further
- if you don't have a k3s-edge-server, you can follow [these instructions](./k3s-edge-server.md)
- You should also have an installed InfluxDB and Grafana instance in a separate kubernetes cluster
- these may be installed on a second cloud node with its own k3s server; we will refer to this as the cloud-data-node
- if you don't have a cloud-data-node, you can follow [these instructions](./k3s-cloud-server.md)
- You will also need an installed k3s edge node which has already been set up to talk to k3s-edge-server
- instructions for registering a node running a **64 bit kernel and user space** are available [here](./k3s-edge-server.md#joining-a-k3s-edge-node-to-the-cluster)

**Hardware:**
- Rpi4 4GB running any Debian-based OS, or Xavier AGX 16GB running L4T 32.4.3 provided by the JetPack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm-based device running a **64 bit kernel and user space** should work.
- PS3 Eye Camera (or Linux compatible web cam with audio) serving both audio and video data (other USB cameras with microphones may work). Microphone **MUST** support 16KHz sampling rate.
- A development machine (your work machine) set up to issue kubectl commands to your edge k3s cluster
- (optional) PMS7003 Air Quality Sensor connected over Serial USB to USB port

**Software:**
- Dev machine running kubectl client 1.25
- git and curl must also be installed
- K3s server version 1.25

**Connectivity:**
- You must be able to reach your edge node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
- The node must be able to reach your k3s-edge-server and cloud-data-node via IP
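
As a quick sanity check from the dev machine, you can confirm both of these; a minimal sketch, assuming your KUBECONFIG already points at the k3s-edge-server and `<edge-node-ip>` stands in for your edge node's address:

```bash
# Confirm the edge node has registered with the k3s-edge-server
kubectl get nodes -o wide

# Confirm the edge node is reachable on the ports this demo uses
nc -vz <edge-node-ip> 22    # ssh
nc -vz <edge-node-ip> 2520  # webserver
```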
## Deploy demo
- To deploy the base system components common to all edge nodes, as well as the demo applications, we opt to use **Helm v3**. To install helm on the device which you are managing your k3s edge cluster with, you can follow the guide [here](https://helm.sh/docs/intro/install/#from-script).
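
For reference, the install-from-script method looks roughly like the sketch below; double-check the current command against the linked Helm documentation:

```bash
# Install Helm v3 using the upstream install script (see helm.sh docs)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```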
- Ensure in your environment that your kubeconfig is set properly. As a quick sanity check you can run:
```bash
kubectl cluster-info
```
and you should get a message: `Kubernetes control plane is running at https://<k3s edge server ip>:<k3s edge server port>`
- Tell helm to add the smarter repo, such that you can deploy our charts:
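
The exact chart repository URL is not shown here; a sketch with a placeholder URL you should replace with the SMARTER chart repository location:

```bash
# Placeholder URL -- substitute the SMARTER chart repository
helm repo add smarter https://example.com/smarter-charts
helm repo update
```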
- At this point the applications will be installed into the cluster, but no pods will come up as running, because the application pods only run on nodes that carry the required node labels.
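
Labels are applied with `kubectl label`; a minimal sketch using a hypothetical node name and label key (use the labels documented by each chart):

```bash
# Hypothetical node name and label key -- each chart documents the labels it expects
kubectl label node <edge-node-name> smarter-image-detector=enabled
```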
- At this point, on your target node, you should see each of the above workloads running once the node has pulled down the images. You can monitor your cluster as each pod becomes ready by running:
```bash
kubectl get pods -A -w
```
- With all pods running successfully, if you are on the same network as your edge node, you can navigate a browser to the IP of the edge node and see the image detector running on your camera feed in real time.
- To terminate the demo, you can simply unlabel the node for each workload:
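
A label is removed by appending `-` to its key; a sketch with the same hypothetical label key as above:

```bash
# Removing the label stops that workload from being scheduled on the node
kubectl label node <edge-node-name> smarter-image-detector-
```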
- Use the helm chart on each of the modules. Remember to use the namespace and the correct labels. The individual charts do not install on devices automatically; they require labels.

## To set up your registered edge node from your development machine
Plug in the USB camera. You should be able to see the camera at `/dev/video0`.
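
To confirm the camera has enumerated, list the video devices on the node:

```bash
# The camera should appear as /dev/video0 (or another /dev/video* entry)
ls -l /dev/video*
```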
The demo assumes that your microphone is assigned to card 2, device 0. On Jetson platforms the first USB microphone is automatically assigned to card 2, device 0; on **non-yocto** Rpi4 devices, for instance, this is not the default. To fix this, create the file `/etc/modprobe.d/alsa-base.conf` with the contents:
```
options snd_usb_audio index=2,3
options snd_usb_audio id="Mic1","Mic2"
```
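
After a reboot (or after reloading the `snd_usb_audio` module) you can confirm which card the USB microphone was assigned to:

```bash
# List capture devices; the USB microphone should show up as card 2, device 0
arecord -l
```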
On the Rpi4 with **Ubuntu**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the kernel command line file:
- `/boot/firmware/nobtcmd.txt` if Ubuntu 19.10
- `/boot/firmware/cmdline.txt` if Ubuntu 20.04
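
A sketch of appending the flags on Ubuntu 19.10 (the kernel command line is a single line, so the flags are appended to the end of it; adjust the path for 20.04):

```bash
# Append the cgroup flags to the end of the kernel command line
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/nobtcmd.txt
```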
On **Ubuntu**, do not install docker using snap; instead install it by running:
```bash
sudo apt update && sudo apt install docker.io
```
Then reboot the system.

If you are running on a **Xavier** (on the **non-yocto** build), **Xavier NX**, or a **Nano**, open the file `/etc/docker/daemon.json` on the device and ensure that the default runtime is set to nvidia. The file should look as follows:
```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
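
After editing the file, restart docker so the change takes effect and confirm the default runtime:

```bash
sudo systemctl restart docker
docker info | grep -i 'default runtime'
```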
For Single Tenant deployment instructions read [Here](./SingleTenantREADME.md)

For Virtual Tenant deployment instructions read [Here](./VirtualTenantREADME.md)

This document will help you run a Smarter k3s server
# Running on docker
## System requirements
### k3s cloud server
* Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
* OS: Ubuntu 18.04 or later
* Architecture: aarch64 or amd64
* CPU: at least 1vcpu
* RAM: At least 3.75GB
* Storage: At least 10GB
* Multiple k3s cloud servers can be run on a single host if different server ports (HOSTPORT) are used.
### EKS or equivalent
* A k8s equivalent cluster

Make sure you open the ports on the k3s cloud cluster that edge devices need to access. The k3s server port should also be open to enable control of the k3s server.
## Setting k3s server up
The [k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) that allow k3s to run as a container.

This repository provides the file [k3s-cloud-start.sh](./scripts/k3s-cloud-start.sh), which automates that process and runs a k3s instance suitable to be a cloud k3s server.

Execute the following command to download the file:
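
The exact URL depends on where this repository is hosted; a sketch assuming a GitLab raw URL, which you should adjust to point at `scripts/k3s-cloud-start.sh` in your copy of the repository:

```bash
# Assumed URL -- adjust to where this documentation repository is hosted
curl -fsSLO https://gitlab.com/smarter-project/documentation/-/raw/main/scripts/k3s-cloud-start.sh
chmod +x k3s-cloud-start.sh
```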
A few options should be set on the script, either via environment variables or by editing the script.
Then execute the script:

```bash
./k3s-cloud-start.sh
```
The script will create another local script that can be used to restart k3s if necessary; it is called `start_k3s_<instance name>.sh`.
The files `token.<instance name>` and `kube.<instance name>.config` contain the credentials used to authenticate a node (token file) or kubectl (kube.config file).
*NOTE*: It is important that the K3S_VERSION on the client matches the server, otherwise things are likely not to work.
The `k3s-start.sh` script downloads a k3s executable compatible with the server (it can replace kubectl) and also creates a `kubectl-<instance name>.sh` script that emulates kubectl with the correct credentials.
The file `env-<instance name>.sh` creates an alias for kubectl and adds the KUBECONFIG environment variable.
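
A usage sketch, with `myserver` standing in for the instance name you chose:

```bash
# Source the generated environment file; kubectl then targets this server
. ./env-myserver.sh
kubectl get nodes
```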
# Joining a k3s node
To join a node to the cloud cluster, copy the `kube_cloud_install-<instance name>.sh` script to the node and execute it. The script is already configured to connect to the server `<instance name>`.
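
A sketch of copying and running the install script, with `myserver` as a hypothetical instance name and `<edge-node-ip>` as the node's address:

```bash
# Copy the generated install script to the node and run it there
scp kube_cloud_install-myserver.sh user@<edge-node-ip>:
ssh user@<edge-node-ip> 'sudo bash kube_cloud_install-myserver.sh'
```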