We are actively looking for contributors and maintainers for this project.
If you have experience in container internals, e.g. cgroups and namespaces, have contributed to open source projects around containers, e.g. [`docker`](https://github.com/moby/moby), [`containerd`](https://github.com/containerd/containerd), [`nerdctl`](https://github.com/containerd/nerdctl), [`podman`](https://github.com/containers/podman), or build tooling that deals with container internals, and are interested in contributing to this project, I would love to talk to you!
Golang experience is preferred but not required.
Please reach out to me [`@_shishir_m`](https://twitter.com/_shishir_m) or open an issue in this repository with your contact details, if you are interested in contributing to this project.
## Overview
Nomad task driver for launching containers using containerd.
**Containerd** [`(containerd.io)`](https://containerd.io) is a lightweight container daemon for running and managing the container lifecycle.<br/>
Run `vagrant up` to bring up the vagrant VM, or `vagrant provision` if the vagrant VM is already running.
Once setup is complete and the nomad server is up and running, you can check the registered task drivers (which will also show `containerd-driver`) using:
```shell
$ nomad node status   # note down the <node_id>
$ nomad node status <node_id> | grep containerd-driver
```
**NOTE:** [`setup.sh`](vagrant/setup.sh) is part of the vagrant setup and should not be executed directly.
## Run example jobs
There are a few example jobs in the [`example`](https://github.com/Roblox/nomad-driver-containerd/tree/master/example) directory.
```shell
$ nomad job run <job_name>.nomad
```
will launch the job.
More detailed instructions are available in the [`example README.md`](https://github.com/Roblox/nomad-driver-containerd/tree/master/example).
To interact with `images` and `containers` directly, you can use [`nerdctl`](https://github.com/containerd/nerdctl), which is a Docker-compatible CLI for `containerd`.
`nerdctl` is already installed in the vagrant VM at `/usr/local/bin`.
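For example (a quick sketch; the `--namespace nomad` flag assumes the driver keeps its containers under the `nomad` containerd namespace, and `sudo` is needed to reach containerd's socket):

```shell
# List containers in the driver's containerd namespace (assumed to be "nomad").
$ sudo nerdctl --namespace nomad ps

# List images pulled into that namespace.
$ sudo nerdctl --namespace nomad images
```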
## Supported options
   - **type** (string) (Optional): Supported values are `volume`, `bind` or `tmpfs`. **Default:** volume.<br/>
   - **target** (string) (Required): Target path in the container.<br/>
   - **source** (string) (Optional): Source path on the host.<br/>
  - **options** ([]string) (Optional): fstab style [`mount options`](https://github.com/containerd/containerd/blob/master/mount/mount_linux.go#L187-L211). **NOTE**: For bind mounts, at least `rbind` and `ro` are required.<br/>
  }
**Bind mount example**
```hcl
mounts = [
{
type = "bind"
    target  = "/tmp/t1"
    source  = "/tmp/s1"
    options = ["rbind", "ro"]
  }
]
```
In addition to the `mounts` option in `Task Config`, you can also mount your volumes into the container using the nomad [`volume_mount stanza`](https://www.nomadproject.io/docs/job-specification/volume_mount).
See [`example job`](https://github.com/Roblox/nomad-driver-containerd/blob/master/example/volume_mount.nomad) for `volume_mount`.
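As a rough sketch (the volume name, paths, and task name below are illustrative assumptions, not taken from the example job), a host volume is requested at the group level and mounted into the task:

```hcl
group "example-group" {
  # Request a host volume that is defined on the Nomad client.
  volume "data" {
    type      = "host"
    source    = "data"   # name of a host_volume in the client config
    read_only = false
  }

  task "example-task" {
    driver = "containerd-driver"

    # Mount the requested volume into the container.
    volume_mount {
      volume      = "data"
      destination = "/var/lib/data"
      read_only   = false
    }
  }
}
```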
**Custom seccomp profile example**
The default `docker` seccomp profile found [`here`](https://github.com/moby/moby/blob/master/profiles/seccomp/default.json) can be downloaded and modified (by adding/removing syscalls) to create a custom seccomp profile.
The custom seccomp profile can then be saved under `/opt/seccomp/seccomp.json` on the Nomad client nodes.
A nomad job can be launched using this custom seccomp profile:
```hcl
config {
seccomp = true
seccomp_profile = "/opt/seccomp/seccomp.json"
}
```
**Sysctl example**
```hcl
config {
sysctl = {
"net.core.somaxconn" = "16384"
  }
}
```

The `auth` stanza lets you provide registry credentials, e.g. for pulling an image from a private repository in docker hub.<br/>
The `auth` stanza can be set either in `Driver Config` or `Task Config`, or both.<br/>
If set in both places, `Task Config` auth takes precedence over `Driver Config` auth.
**NOTE**: In the example below, `user` and `pass` are placeholder values which need to be replaced by the actual `username` and `password` when specifying credentials.
The `auth` stanza below can be used in both `Driver Config` and `Task Config`:
```hcl
auth {
username = "user"
password = "pass"
}
```

## Networking
**NOTE:** `host` and `bridge` are mutually exclusive options; only one of them should be used at a time.
1. **Host** network can be enabled by setting `host_network` to `true` in the task config of the job spec (see under [`Supported options`](#supported-options)), as in the sketch below.
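A minimal task config sketch (the image name is an illustrative assumption):

```hcl
config {
  image        = "docker.io/library/redis:alpine"
  host_network = true
}
```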
2. **Bridge** network can be enabled by setting the `network` stanza in the task group section of the job spec:
```hcl
network {
mode = "bridge"
}
```
You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin` before you can use `bridge` networks.
**Installing CNI plugins**
[//]: # (TODO: Replace this with containernetworking-plugins Ubuntu package)
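A typical installation looks like this (a sketch; the pinned release version is an assumption, so check the [`CNI plugins releases`](https://github.com/containernetworking/plugins/releases) for the current one):

```shell
# Download the reference CNI plugins (the version pinned here is only an example).
$ curl -LO https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz

# Unpack them into the directory Nomad expects.
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v0.9.1.tgz
```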
Also, ensure your Linux operating system distribution has been configured to allow container traffic through the bridge network to be routed via iptables.
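These tunables can typically be set as follows (a sketch; run as root, and it assumes the `br_netfilter` kernel module is loaded):

```shell
echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```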
To preserve these settings on startup of a nomad client node, add a file including the following to `/etc/sysctl.d/` or remove the file your Linux distribution puts in that directory.
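For example, a file such as `/etc/sysctl.d/90-bridge-nf.conf` (the filename is an arbitrary choice) containing:

```
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```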
Nomad supports both **static** and **dynamic** port mapping.
### Static ports
Static port mapping can be added in the `network` stanza.
```hcl
network {
mode = "bridge"
port "lb" {
    static = 8889
    to     = 8889
  }
}
```
Here, `host` port `8889` is mapped to `container` port `8889`.<br/>
**NOTE**: static ports are usually not recommended, except for `system` or specialized jobs like load balancers.
### Dynamic ports
Dynamic port mapping is also enabled in the `network` stanza.
```hcl
network {
mode = "bridge"
port "http" {
    to = 8080
  }
}
```
Here, nomad will allocate a dynamic port on the `host` and that port will be mapped to `8080` in the container.
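Inside the task, the allocated ports are exposed through Nomad's runtime environment variables; a quick sketch (the label `http` matches the stanza above):

```shell
# Host-side port that Nomad allocated dynamically.
echo $NOMAD_HOST_PORT_http

# Port as seen inside the task (the "to" value, i.e. 8080 in this example).
echo $NOMAD_PORT_http
```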
You can also read more about the `network` stanza in the [`nomad official documentation`](https://www.nomadproject.io/docs/job-specification/network).
## Service discovery
Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance, and you will need to use service discovery to connect tasks to other services deployed across your cluster.
Nomad integrates with Consul to provide service discovery and monitoring.
A [`service`](https://www.nomadproject.io/docs/job-specification/service) stanza can be added to your job spec to enable service discovery.
The service stanza instructs Nomad to register a service with Consul.
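A minimal sketch (the service name, port label, and check parameters are illustrative assumptions):

```hcl
service {
  name = "example-service"
  port = "http"   # references a port label from the network stanza

  # Basic TCP health check that Consul will run against the service.
  check {
    type     = "tcp"
    interval = "10s"
    timeout  = "2s"
  }
}
```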
## Tests
If you are running the tests locally, use the [`vagrant VM`](Vagrantfile) provided in the repository.
```shell
$ vagrant up
$ vagrant ssh containerd-linux
$ sudo make test
```
**NOTE**: These are destructive tests and can leave the system in a changed state.<br/>
It is highly recommended to run these tests either as part of a CI/CD system, e.g. CircleCI, or on immutable infrastructure, e.g. vagrant VMs.
You can also run an individual test by specifying the test name.