
Commit 1d69acc

Merge pull request #8307 from v-thepet/pipelines9-4
Freshness: Pipelines 4
2 parents 83a64c9 + 262b1af commit 1d69acc

File tree

1 file changed: +36 −38 lines changed


docs/pipelines/process/service-containers.md

Lines changed: 36 additions & 38 deletions
````diff
@@ -3,30 +3,40 @@ title: Service containers
 description: Learn about running containerized services in Azure Pipelines single or multiple container jobs or noncontainer jobs.
 ms.assetid: a6af47c5-2358-487a-ba3c-d213930fceb8
 ms.topic: conceptual
-ms.date: 07/15/2024
+ms.date: 09/12/2025
 monikerRange: azure-devops
+#customer intent: As an Azure Pipelines user, I want to understand service containers so I can use them to automatically manage services that my pipelines require.
+
 ---
 
 # Service containers
 
 [!INCLUDE [version-eq-azure-devops](../../includes/version-eq-azure-devops.md)]
 
-If your pipeline requires the support of one or more services, you might need to create, connect to, and clean up the services per job. For example, your pipeline might run integration tests that require access to a newly created database and memory cache for each job in the pipeline.
+This article describes using *service containers* in Azure Pipelines. If your pipeline requires the support of one or more services, you might need to create, connect to, and clean up the services per [job](phases.md). For example, your pipeline might run integration tests that require access to a newly created database and memory cache for each job in the pipeline.
 
-A *container* provides a simple and portable way to run a service that your pipeline depends on. A *service container* lets you automatically create, network, and manage the lifecycle of a containerized service. Each service container is accessible only to the [job](phases.md) that requires it. Service containers work with any kind of job, but are most commonly used with [container jobs](container-phases.md).
+A service container provides a simple and portable way to run services in your pipeline. The service container is accessible only to the job that requires it.
 
-## Requirements
+Service containers let you automatically create, network, and manage the lifecycles of services that your pipelines depend on. Service containers work with any kind of job, but are most commonly used with [container jobs](container-phases.md).
 
-- Service containers must define a `CMD` or `ENTRYPOINT`. The pipeline runs `docker run` for the provided container without any arguments.
+>[!NOTE]
+>Classic pipelines don't support service containers.
 
-- Azure Pipelines can run Linux or [Windows containers](/virtualization/windowscontainers/about/). You can use either the hosted Ubuntu container pool for Linux containers or the hosted Windows pool for Windows containers. The hosted macOS pool doesn't support running containers.
+## Conditions and limitations
 
->[!NOTE]
->Service containers aren't supported in Classic pipelines.
+- Service containers must define a `CMD` or `ENTRYPOINT`. The pipeline runs `docker run` with no arguments for the provided container.
+
+- Azure Pipelines can run Linux or [Windows](/virtualization/windowscontainers/about/) containers. You use the hosted Ubuntu pool for Linux containers or the hosted Windows pool for Windows containers. The hosted macOS pool doesn't support running containers.
+
+- Service containers share the same container resources as container jobs, so they can use the same [startup options](container-phases.md?tabs=yaml#options).
+
+- If a service container specifies a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck), the agent can optionally wait until the container is healthy before running the job.
 
 ## Single container job
 
-The following example YAML pipeline definition shows a single container job.
+The following example YAML pipeline defines a single container job that uses a service container. The pipeline fetches the `buildpack-deps` and `nginx` containers from [Docker Hub](https://hub.docker.com) and then starts all containers. The containers are networked so they can reach each other by their `services` names.
+
+From inside the job container, the `nginx` host name resolves to the correct services by using Docker networking. All containers on the network automatically expose all ports to each other.
 
 ```yaml
 resources:
````
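The *Conditions and limitations* in the new text can be made concrete with a minimal sketch. The following fragment is illustrative only and isn't part of this commit; the `my_redis` resource name, the `redis` image, and the job name are assumptions.

```yaml
# Illustrative sketch only; resource, service, and job names are hypothetical.
resources:
  containers:
  - container: my_redis      # container resource name
    image: redis             # image must define a CMD or ENTRYPOINT

jobs:
- job: integration_tests
  pool:
    vmImage: ubuntu-latest   # the hosted Ubuntu pool runs Linux containers
  services:
    redis: my_redis          # maps a service name to the container resource
  steps:
  - script: echo "The redis service container runs for the duration of this job"
```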
````diff
@@ -49,13 +59,13 @@ steps:
   displayName: Show that nginx is running
 ```
 
-The preceding pipeline fetches the `nginx` and `buildpack-deps` containers from [Docker Hub](https://hub.docker.com) and then starts the containers. The containers are networked together so that they can reach each other by their `services` name.
+## Single noncontainer job
 
-From inside this job container, the `nginx` host name resolves to the correct services by using Docker networking. All containers on the network automatically expose all ports to each other.
+You can also use service containers in noncontainer jobs. The pipeline starts the latest containers, but because the job doesn't run in a container, there's no automatic name resolution. Instead, you reach services by using `localhost`. The following example pipeline explicitly specifies the `8080:80` port for `nginx`.
 
-## Single noncontainer job
+An alternative approach is to assign a random port dynamically at runtime. To allow the job to access the port, the pipeline creates a [variable](variables.md) of the form `agent.services.<serviceName>.ports.<port>`. You can access the dynamic port by using this [environment variable](variables.md#environment-variables) in a Bash script.
 
-You can also use service containers without a job container, as in the following example.
+In the following pipeline, `redis` gets a random available port on the host, and the `agent.services.redis.ports.6379` variable contains the port number.
 
 ```yaml
 resources:
````
````diff
@@ -84,12 +94,6 @@ steps:
       echo $AGENT_SERVICES_REDIS_PORTS_6379
 ```
 
-The preceding pipeline starts the latest `nginx` containers. Since the job isn't running in a container, there's no automatic name resolution. Instead, you can reach services by using `localhost`. The example explicitly provides the `8080:80` port.
-
-An alternative approach is to let a random port get assigned dynamically at runtime. You can then access these dynamic ports by using [variables](variables.md). These variables take the form: `agent.services.<serviceName>.ports.<port>`. In a Bash script, you can access variables by using the process environment.
-
-In the preceding example, `redis` is assigned a random available port on the host. The `agent.services.redis.ports.6379` variable contains the port number.
-
 ## Multiple jobs
 
 Service containers are also useful for running the same steps against multiple versions of the same service. In the following example, the same steps run against multiple versions of PostgreSQL.
````
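The dynamic-port variable described in the new text can be sketched as follows. This fragment is illustrative only and isn't part of this commit; it assumes a service named `redis` that exposes `6379/tcp`. In a Bash step, the `agent.services.redis.ports.6379` pipeline variable surfaces in the process environment as `AGENT_SERVICES_REDIS_PORTS_6379`.

```yaml
# Illustrative fragment; assumes a service named redis exposing 6379/tcp.
steps:
- bash: |
    # Pipeline variables appear in the environment with dots replaced by
    # underscores and the name uppercased.
    echo "Redis is listening on localhost:$AGENT_SERVICES_REDIS_PORTS_6379"
  displayName: Show the dynamically assigned Redis port
```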
````diff
@@ -124,6 +128,10 @@ steps:
 
 ## Ports
 
+Jobs that run directly on the host require `ports` to access the service container. Specifying `ports` isn't required if your job runs in a container, because containers on the same Docker network automatically expose all ports to each other by default.
+
+A port takes the form `<hostPort>:<containerPort>` or just `<containerPort>` with an optional `/<protocol>` at the end. For example, `6379/tcp` exposes `tcp` over port `6379`, bound to a random port on the host machine.
+
 When you invoke a container resource or an inline container, you can specify an array of `ports` to expose on the container, as in the following example.
 
 ```yaml
````
````diff
@@ -142,15 +150,13 @@ services:
     - 6379/tcp
 ```
 
-Specifying `ports` isn't required if your job is running in a container, because containers on the same Docker network automatically expose all ports to each other by default.
-
-If your job is running on the host, `ports` are required to access the service. A port takes the form `<hostPort>:<containerPort>` or just `<containerPort>` with an optional `/<protocol>` at the end. For example, `6379/tcp` exposes `tcp` over port `6379`, bound to a random port on the host machine.
-
 For ports bound to a random port on the host machine, the pipeline creates a variable of the form `agent.services.<serviceName>.ports.<port>` so that the job can access the port. For example, `agent.services.redis.ports.6379` resolves to the randomly assigned port on the host machine.
 
 ## Volumes
 
-Volumes are useful for sharing data between services or for persisting data between multiple runs of a job. You specify volume mounts as an array of `volumes` of the form `<source>:<destinationPath>`, where `<source>` can be a named volume or an absolute path on the host machine, and `<destinationPath>` is an absolute path in the container. Volumes can be named Docker volumes, anonymous Docker volumes, or bind mounts on the host.
+Volumes are useful for sharing data between services or for persisting data between multiple runs of a job. You specify volume mounts as an array of `volumes`.
+
+Each volume takes the form `<source>:<destinationPath>`, where `<source>` is either a named volume or an absolute path on the host, and `<destinationPath>` is an absolute path in the container. Volumes can be named Docker volumes, anonymous Docker volumes, or bind mounts on the host.
 
 ```yaml
 services:
````
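The `<source>:<destinationPath>` volume forms described in the new text can be sketched as follows. This fragment is illustrative only and isn't part of this commit; the container name, image, volume names, and paths are all hypothetical.

```yaml
# Illustrative fragment; names and paths are hypothetical.
resources:
  containers:
  - container: my_service
    image: postgres
    volumes:
    - mydockervolume:/var/lib/named   # named Docker volume
    - /var/lib/anon                   # anonymous Docker volume
    - /src/data:/var/lib/bind         # bind mount from the host machine
```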
````diff
@@ -163,28 +169,20 @@ services:
 ```
 
 >[!NOTE]
->If you use Microsoft-hosted pools, your volumes aren't persisted between jobs, because the host machine is cleaned up after each job is completed.
-
-## Startup options
-
-Service containers share the same container resources as container jobs. This means that you can use the same [startup options](container-phases.md?tabs=yaml#options).
-
-## Health check
-
-If any service container specifies a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck), the agent can optionally wait until the container is healthy before running the job.
+>Microsoft-hosted pools don't persist volumes between jobs, because the host machine is cleaned up after each job.
 
 ## Multiple containers with services example
 
-The following example has a Django Python web container connected to PostgreSQL and MySQL database containers.
+The following example pipeline has a Django Python web container connected to PostgreSQL and MySQL database containers.
 
 - The PostgreSQL database is the primary database, and its container is named `db`.
-- The `db` container uses volume `/data/db:/var/lib/postgresql/data`, and there are three database variables passed to the container via `env`.
-- The `mysql` container uses port `3306:3306`, and there are also database variables passed via `env`.
+- The `db` container uses volume `/data/db:/var/lib/postgresql/data`, and passes three database variables to the container via `env`.
+- The `mysql` container uses port `3306:3306`, and also passes database variables via `env`.
 - The `web` container is open with port `8000`.
 
-In the steps, `pip` installs dependencies and then Django tests run.
+In the steps, `pip` installs dependencies, and then Django tests run.
 
-To set up a working example, you need a [Django site set up with two databases](https://docs.djangoproject.com/en/3.2/topics/db/multi-db/). The example assumes your *manage.py* file is in the root directory and your Django project is also within that directory. If not, you might need to update the `/__w/1/s/` path in `/__w/1/s/manage.py test`.
+To set up a working example, you need a [Django site set up with two databases](https://docs.djangoproject.com/en/5.2/topics/db/multi-db/). The example assumes your *manage.py* file and your Django project are in the root directory. If not, you might need to update the `/__w/1/s/` path in `/__w/1/s/manage.py test`.
 
 ```yaml
 resources:
````
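The test step described in the new text can be sketched as follows. This fragment is illustrative only and isn't part of this commit; it assumes *manage.py* sits at the repository root, which is `/__w/1/s` on a hosted Linux agent, and the `requirements.txt` file name is an assumption.

```yaml
# Illustrative fragment; the requirements file name is hypothetical.
steps:
- script: |
    pip install -r requirements.txt   # install dependencies with pip
    python /__w/1/s/manage.py test    # run the Django test suite
  displayName: Install dependencies and run Django tests
```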
