---
title: Service containers
description: Learn about running containerized services in Azure Pipelines single or multiple container jobs or noncontainer jobs.
ms.assetid: a6af47c5-2358-487a-ba3c-d213930fceb8
ms.topic: conceptual
ms.date: 09/12/2025
monikerRange: azure-devops
#customer intent: As an Azure Pipelines user, I want to understand service containers so I can use them to automatically manage services that my pipelines require.
---
This article describes using *service containers* in Azure Pipelines. If your pipeline requires the support of one or more services, you might need to create, connect to, and clean up the services per [job](phases.md). For example, your pipeline might run integration tests that require access to a newly created database and memory cache for each job in the pipeline.

A service container provides a simple and portable way to run services in your pipeline. The service container is accessible only to the job that requires it.

Service containers let you automatically create, network, and manage the lifecycles of services that your pipelines depend on. Service containers work with any kind of job, but are most commonly used with [container jobs](container-phases.md).

>[!NOTE]
>Classic pipelines don't support service containers.

## Conditions and limitations

- Service containers must define a `CMD` or `ENTRYPOINT`. The pipeline runs `docker run` with no arguments for the provided container.

- Azure Pipelines can run Linux or [Windows](/virtualization/windowscontainers/about/) containers. You use the hosted Ubuntu pool for Linux containers or the hosted Windows pool for Windows containers. The hosted macOS pool doesn't support running containers.

- Service containers share the same container resources as container jobs, so they can use the same [startup options](container-phases.md?tabs=yaml#options).

- If a service container specifies a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck), the agent can optionally wait until the container is healthy before running the job.

## Single container job

The following example YAML pipeline defines a single container job that uses a service container. The pipeline fetches the `buildpack-deps` and `nginx` containers from [Docker Hub](https://hub.docker.com) and then starts all containers. The containers are networked so they can reach each other by their `services` names.

From inside the job container, the `nginx` host name resolves to the correct service by using Docker networking. All containers on the network automatically expose all ports to each other.

```yaml
resources:
  # ...
steps:
  # ...
  displayName: Show that nginx is running
```

## Single noncontainer job

You can also use service containers in noncontainer jobs. The pipeline starts the latest containers, but because the job doesn't run in a container, there's no automatic name resolution. Instead, you reach services by using `localhost`. The following example pipeline explicitly specifies the `8080:80` port for `nginx`.

An alternative approach is to assign a random port dynamically at runtime. To allow the job to access the port, the pipeline creates a [variable](variables.md) of the form `agent.services.<serviceName>.ports.<port>`. You can access the dynamic port by using this [environment variable](variables.md#environment-variables) in a Bash script.

In the following pipeline, `redis` gets a random available port on the host, and the `agent.services.redis.ports.6379` variable contains the port number.

```yaml
resources:
  # ...
steps:
  # ...
    echo $AGENT_SERVICES_REDIS_PORTS_6379
```
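
The mapping from the pipeline variable name to the environment variable a Bash step sees follows the usual Azure Pipelines rule: dots become underscores and letters are uppercased. A quick illustration of that transformation, using the variable name from the preceding paragraph:

```shell
# Shows how a pipeline variable name maps to the env var visible in Bash:
# dots become underscores and letters are uppercased.
name="agent.services.redis.ports.6379"
env_name=$(printf '%s' "$name" | tr 'a-z.' 'A-Z_')
echo "$env_name"   # AGENT_SERVICES_REDIS_PORTS_6379
```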
## Multiple jobs
Service containers are also useful for running the same steps against multiple versions of the same service. In the following example, the same steps run against multiple versions of PostgreSQL.
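The full PostgreSQL example isn't shown in this excerpt; a minimal sketch of the pattern, with hypothetical container names and image tags, could look like the following. Each job networks its steps to a different version of the same service.

```yaml
# Sketch only: container names, image tags, and the test step are illustrative.
resources:
  containers:
  - container: my_postgres_14
    image: postgres:14
    env:
      POSTGRES_PASSWORD: example
  - container: my_postgres_16
    image: postgres:16
    env:
      POSTGRES_PASSWORD: example

jobs:
- job: test_postgres_14
  pool:
    vmImage: ubuntu-latest
  services:
    postgres: my_postgres_14   # steps reach this version as `postgres`
  steps:
  - script: printenv | grep AGENT_SERVICES
    displayName: Run tests against PostgreSQL 14
- job: test_postgres_16
  pool:
    vmImage: ubuntu-latest
  services:
    postgres: my_postgres_16
  steps:
  - script: printenv | grep AGENT_SERVICES
    displayName: Run tests against PostgreSQL 16
```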
## Ports

Jobs that run directly on the host require `ports` to access the service container. Specifying `ports` isn't required if your job runs in a container, because containers on the same Docker network automatically expose all ports to each other by default.

A port takes the form `<hostPort>:<containerPort>` or just `<containerPort>` with an optional `/<protocol>` at the end. For example, `6379/tcp` exposes `tcp` over port `6379`, bound to a random port on the host machine.

When you invoke a container resource or an inline container, you can specify an array of `ports` to expose on the container, as in the following example.

```yaml
services:
  # ...
    - 6379/tcp
```
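
As an illustration of the two port forms, a service could declare both a fixed host port and a protocol-qualified random port (the service and image names here are hypothetical):

```yaml
# Sketch only: service and image names are illustrative.
services:
  nginx:
    image: nginx
    ports:
    - 8080:80     # <hostPort>:<containerPort>, fixed host port 8080
  redis:
    image: redis
    ports:
    - 6379/tcp    # <containerPort>/<protocol>, bound to a random host port
```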
For ports bound to a random port on the host machine, the pipeline creates a variable of the form `agent.services.<serviceName>.ports.<port>` so that the job can access the port. For example, `agent.services.redis.ports.6379` resolves to the randomly assigned port on the host machine.
## Volumes

Volumes are useful for sharing data between services or for persisting data between multiple runs of a job. You specify volume mounts as an array of `volumes`.

Each volume takes the form `<source>:<destinationPath>`, where `<source>` is either a named volume or an absolute path on the host, and `<destinationPath>` is an absolute path in the container. Volumes can be named Docker volumes, anonymous Docker volumes, or bind mounts on the host.

```yaml
services:
  # ...
```

>[!NOTE]
>Microsoft-hosted pools don't persist volumes between jobs, because the host machine is cleaned up after each job.

## Multiple containers with services example

The following example pipeline has a Django Python web container connected to PostgreSQL and MySQL database containers.

- The PostgreSQL database is the primary database, and its container is named `db`.
- The `db` container uses volume `/data/db:/var/lib/postgresql/data`, and passes three database variables to the container via `env`.
- The `mysql` container uses port `3306:3306`, and also passes database variables via `env`.
- The `web` container is open with port `8000`.

In the steps, `pip` installs dependencies, and then Django tests run.

To set up a working example, you need a [Django site set up with two databases](https://docs.djangoproject.com/en/5.2/topics/db/multi-db/). The example assumes your *manage.py* file and your Django project are in the root directory. If not, you might need to update the `/__w/1/s/` path in `/__w/1/s/manage.py test`.
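
The complete pipeline isn't included in this excerpt. Based on the description above, the container resources might be declared along these lines (a sketch with hypothetical image tags and variable values, not the full example):

```yaml
# Sketch only: image tags and env values are illustrative.
resources:
  containers:
  - container: db
    image: postgres            # primary database, reachable as `db`
    volumes:
    - '/data/db:/var/lib/postgresql/data'
    env:                       # three database variables passed to the container
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  - container: mysql
    image: mysql
    ports:
    - 3306:3306
    env:
      MYSQL_DATABASE: users
      MYSQL_USER: mysql
      MYSQL_PASSWORD: mysql
  - container: web
    image: python
    ports:
    - '8000'
```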