We are actively looking for contributors and maintainers for this project.
If you have experience in container internals e.g. cgroups, namespaces, or have contributed to any open source projects
around containers e.g. [`docker`](https://github.com/moby/moby), [`containerd`](https://github.com/containerd/containerd), [`nerdctl`](https://github.com/containerd/nerdctl), [`podman`](https://github.com/containers/podman) etc. or build tooling which involves dealing with
container internals, and are interested in contributing to this project, I would love to talk to you!
Golang experience is preferred but not required.
Please reach out to me [`@_shishir_m`](https://twitter.com/_shishir_m) or open an issue in this repository with your contact details if you are interested in contributing to this project.
## Overview
Nomad task driver for launching containers using containerd.
**Containerd** [`(containerd.io)`](https://containerd.io) is a lightweight container daemon for running and managing the container lifecycle.<br/>
or `vagrant provision` if the vagrant VM is already running.
Once setup is complete and the nomad server is up and running, you can check the registered task drivers (which will also show `containerd-driver`) using:
```shell
$ nomad node status  # Note down the node ID
$ nomad node status <node_id> | grep containerd-driver
```
**NOTE:** [`setup.sh`](vagrant/setup.sh) is part of the vagrant setup and should not be executed directly.
## Run example jobs
There are a few example jobs in the [`example`](https://github.com/Roblox/nomad-driver-containerd/tree/master/example) directory.
```shell
$ nomad job run <job_name>.nomad
```
will launch the job.
More detailed instructions are available in the [`example README.md`](https://github.com/Roblox/nomad-driver-containerd/tree/master/example).
To interact with `images` and `containers` directly, you can use [`nerdctl`](https://github.com/containerd/nerdctl), which is a docker-compatible CLI for `containerd`.
`nerdctl` is already installed in the vagrant VM at `/usr/local/bin`.
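For example, you can list images and containers from inside the VM (a quick sketch; these `nerdctl` subcommands mirror their docker equivalents):

```shell
$ sudo nerdctl images   # list images known to containerd
$ sudo nerdctl ps -a    # list containers, including stopped ones
```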
## Supported options
   - **type** (string) (Optional): Supported values are `volume`, `bind` or `tmpfs`. **Default:** volume.<br/>
   - **target** (string) (Required): Target path in the container.<br/>
   - **source** (string) (Optional): Source path on the host.<br/>
   - **options** ([]string) (Optional): fstab style [`mount options`](https://github.com/containerd/containerd/blob/master/mount/mount_linux.go#L187-L211). **NOTE**: For bind mounts, at least `rbind` and `ro` are required.<br/>
  \}
**Bind mount example**
```hcl
mounts = [
  {
    type    = "bind"
    target  = "/tmp/t1"   # target path in the container
    source  = "/tmp/s1"   # source path on the host
    options = ["rbind", "ro"]
  }
]
```
In addition to the `mounts` option in `Task Config`, you can also mount your volumes into the container using the nomad [`volume_mount stanza`](https://www.nomadproject.io/docs/job-specification/volume_mount).
See [`example job`](https://github.com/Roblox/nomad-driver-containerd/blob/master/example/volume_mount.nomad) for `volume_mount`.
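As a rough sketch of how the pieces fit together (the volume name, paths, and task name below are illustrative, not taken from the example job), a group-level `volume` is paired with a `volume_mount` inside the task:

```hcl
group "example" {
  # "data" must be registered as a host volume on the nomad client.
  volume "data" {
    type   = "host"
    source = "data"
  }

  task "app" {
    driver = "containerd-driver"

    # Mount the "data" volume at /var/lib/data inside the container.
    volume_mount {
      volume      = "data"
      destination = "/var/lib/data"
      read_only   = false
    }
  }
}
```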
**Custom seccomp profile example**
The default `docker` seccomp profile found [`here`](https://github.com/moby/moby/blob/master/profiles/seccomp/default.json) can be downloaded and modified (by adding/removing syscalls) to create a custom seccomp profile.
The custom seccomp profile can then be saved under `/opt/seccomp/seccomp.json` on the Nomad client nodes.
A nomad job can be launched using this custom seccomp profile:
```hcl
config {
  seccomp         = true
  seccomp_profile = "/opt/seccomp/seccomp.json"
}
```
**Sysctl example**
```hcl
config {
  sysctl = {
    "net.core.somaxconn" = "16384"
  }
}
```

**Authentication**

`auth` stanza allows you to set credentials for your private registry, e.g. if you want to pull
an image from a private repository in docker hub.<br/>
`auth` stanza can be set either in `Driver Config` or `Task Config` or both.<br/>
If set in both places, `Task Config` auth will take precedence over `Driver Config` auth.
**NOTE**: In the example below, `user` and `pass` are just placeholder values which need to be replaced by the actual `username` and `password` when specifying the credentials.
`auth` stanza can be used for both `Driver Config` and `Task Config`.
```hcl
auth {
  username = "user"
  password = "pass"
}
```

## Networking

`nomad-driver-containerd` supports **host** and **bridge** networks.
**NOTE:**`host` and `bridge` are mutually exclusive options, and only one of them should be used at a time.
1. **Host** network can be enabled by setting `host_network` to `true` in task config of the job spec (see under [`Supported options`](#supported-options)), as sketched below.
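For example (a minimal sketch; the image value is illustrative):

```hcl
config {
  image        = "docker.io/library/nginx:alpine"
  host_network = true
}
```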
2. **Bridge** network can be enabled by setting the `network` stanza in the task group section of the job spec.
```hcl
network {
  mode = "bridge"
}
```
You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin` before you can use `bridge` networks.
**Installing CNI plugins**
[//]: # (TODO: Replace this with containernetworking-plugins Ubuntu package)
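Until such a package is used, a manual install is one option (a sketch; the release version below is illustrative, so check the [`containernetworking/plugins releases`](https://github.com/containernetworking/plugins/releases) page for the current one):

```shell
$ curl -L -o cni-plugins.tgz https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-amd64-v1.0.0.tgz
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```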
Also, ensure your Linux operating system distribution has been configured to allow container traffic through the bridge network to be routed via iptables.
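These tunables can be set as follows (run as root; a sketch using the standard bridge netfilter settings):

```shell
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```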
To preserve these settings on startup of a nomad client node, add a file including the following to `/etc/sysctl.d/` or remove the file your Linux distribution puts in that directory.
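For example, a file such as `/etc/sysctl.d/99-bridge.conf` (the filename is illustrative) could contain:

```
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```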
nomad supports both **static** and **dynamic** port mapping.
### Static ports
Static port mapping can be added in the `network` stanza.
```hcl
network {
  mode = "bridge"
  port "lb" {
    static = 8889
    to     = 8889
  }
}
```
Here, `host` port `8889` is mapped to `container` port `8889`.<br/>
**NOTE**: static ports are usually not recommended, except for `system` or specialized jobs like load balancers.
### Dynamic ports
Dynamic port mapping is also enabled in the `network` stanza.
```hcl
network {
  mode = "bridge"
  port "http" {
    to = 8080
  }
}
```
Here, nomad will allocate a dynamic port on the `host` and that port will be mapped to `8080` in the container.
You can also read more about the `network` stanza in the [`nomad official documentation`](https://www.nomadproject.io/docs/job-specification/network).
## Service discovery
Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance, and you will need to use service discovery to connect tasks to other services deployed across your cluster.
Nomad integrates with Consul to provide service discovery and monitoring.
A [`service`](https://www.nomadproject.io/docs/job-specification/service) stanza can be added to your job spec to enable service discovery.
The service stanza instructs Nomad to register a service with Consul.
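A rough sketch (the service name, port label, and check parameters below are illustrative):

```hcl
service {
  name = "my-service"
  port = "http"   # refers to a port label from the network stanza

  check {
    type     = "tcp"
    port     = "http"
    interval = "10s"
    timeout  = "2s"
  }
}
```

## Tests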
If you are running the tests locally, use the [`vagrant VM`](Vagrantfile) provided in the repository.
```shell
$ vagrant up
$ vagrant ssh containerd-linux
$ sudo make test
```
**NOTE**: These are destructive tests and can leave the system in a changed state.<br/>
It is highly recommended to run these tests either as part of a CI/CD system, e.g. circleci, or on
an immutable infrastructure, e.g. vagrant VMs.
You can also run an individual test by specifying the test name, e.g.