---
title: Service containers
description: Learn about running containerized services in Azure Pipelines single or multiple container jobs or noncontainer jobs.
ms.assetid: a6af47c5-2358-487a-ba3c-d213930fceb8
ms.topic: conceptual
ms.date: 09/11/2025
monikerRange: azure-devops
#customer intent: As an Azure Pipelines user, I want to understand service containers so I can use them to automatically manage services that my pipelines require.
---

# Service containers

[!INCLUDE [version-eq-azure-devops](../../includes/version-eq-azure-devops.md)]

This article describes how to use *service containers* in Azure Pipelines. A *container* provides a simple and portable way to run a service. Service containers let you automatically create, network, and manage the lifecycles of the services that your pipelines depend on.

If your pipeline requires the support of one or more services, you might need to create, connect to, and clean up the services per [job](phases.md). For example, your pipeline might run integration tests that require access to a newly created database and memory cache for each job in the pipeline.

A service container is accessible only to the job that requires it. Service containers work with any kind of job, but they're most commonly used with [container jobs](container-phases.md).

>[!NOTE]
>Classic pipelines don't support service containers.

## Conditions and limitations

- Service containers must define a `CMD` or `ENTRYPOINT`. The pipeline runs `docker run` with no arguments for the provided container.

- Azure Pipelines can run Linux or [Windows](/virtualization/windowscontainers/about/) containers. Use the hosted Ubuntu pool for Linux containers or the hosted Windows pool for Windows containers. The hosted macOS pool doesn't support running containers.

- Service containers share the same container resources as container jobs, so they can use the same [startup options](container-phases.md?tabs=yaml#options).

- If a service container specifies a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck), the agent can optionally wait until the container is healthy before running the job.

## Single container job

The following example YAML pipeline defines a single container job that uses a service container. The pipeline fetches the `buildpack-deps` and `nginx` containers from [Docker Hub](https://hub.docker.com) and then starts all the containers. The containers are networked so that they can reach each other by their `services` names.

From inside the job container, the `nginx` host name resolves to the correct service by using Docker networking. All containers on the network automatically expose all ports to each other.

```yaml
resources:
# ...
  displayName: Show that nginx is running
```
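
Only the first and last lines of the example appear above. A minimal pipeline of this shape might look like the following sketch: the `buildpack-deps` and `nginx` images come from the surrounding text, while the resource name `my_container` and the `curl` step are illustrative assumptions.

```yaml
resources:
  containers:
  - container: my_container    # assumed name for the job container
    image: buildpack-deps:focal
  - container: nginx
    image: nginx

pool:
  vmImage: 'ubuntu-latest'

# Run the job's steps inside the buildpack-deps container.
container: my_container

# Start nginx as a service container, reachable by the name "nginx".
services:
  nginx: nginx

steps:
- script: curl nginx
  displayName: Show that nginx is running
```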

## Single noncontainer job

You can also use service containers in noncontainer jobs. The pipeline starts the latest containers, but because the job doesn't run in a container, there's no automatic name resolution. Instead, you reach services by using `localhost`. The following example pipeline explicitly specifies the `8080:80` port for `nginx`.

An alternative approach is to assign a random port dynamically at runtime. To allow the job to access the port, the pipeline creates a [variable](variables.md) of the form `agent.services.<serviceName>.ports.<port>`. You can access the dynamic port by using the [environment variable](variables.md#environment-variables) in a Bash script.

In the following pipeline, `redis` gets a random available port on the host, and the `agent.services.redis.ports.6379` variable contains the port number.

```yaml
resources:
# ...
    echo $AGENT_SERVICES_REDIS_PORTS_6379
```
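
The example above is truncated. Based on the ports and variable names described in this section, a complete pipeline might look like the following sketch; the image tags and the `curl` step are assumptions.

```yaml
resources:
  containers:
  - container: nginx
    image: nginx
    ports:
    - 8080:80    # fixed mapping: host port 8080 to container port 80
  - container: redis
    image: redis
    ports:
    - 6379       # container port only: bound to a random host port

pool:
  vmImage: 'ubuntu-latest'

# No "container:" entry, so the steps run directly on the host.
services:
  nginx: nginx
  redis: redis

steps:
- script: curl localhost:8080
  displayName: Show that nginx is running
- bash: echo "redis is on port $AGENT_SERVICES_REDIS_PORTS_6379"
  displayName: Show the dynamically assigned redis port
```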

## Multiple jobs

Service containers are also useful for running the same steps against multiple versions of the same service. In the following example, the same steps run against multiple versions of PostgreSQL.
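
The example itself isn't shown here. One common way to express it is a `matrix` strategy that selects a different service container per job; in the following sketch, the PostgreSQL versions, resource names, and the `postgresService` variable are assumptions.

```yaml
resources:
  containers:
  - container: my_container
    image: ubuntu:22.04
  - container: pg15
    image: postgres:15
  - container: pg16
    image: postgres:16

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    postgres15:
      postgresService: pg15
    postgres16:
      postgresService: pg16

container: my_container

services:
  # Each matrix job connects to a different PostgreSQL version
  # under the same service name.
  postgres: $[ variables['postgresService'] ]

steps:
- script: printenv
  displayName: Show the service environment
```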

## Ports

Specifying `ports` isn't required if your job runs in a container, because containers on the same Docker network automatically expose all ports to each other by default. Jobs that run directly on the host require `ports` to access the service container.

A port takes the form `<hostPort>:<containerPort>`, or just `<containerPort>` with an optional `/<protocol>` at the end. For example, `6379/tcp` exposes `tcp` over port `6379`, bound to a random port on the host machine.

When you invoke a container resource or an inline container, you can specify an array of `ports` to expose on the container, as in the following example.

```yaml
# ...
services:
# ...
    - 6379/tcp
```

For ports bound to a random port on the host machine, the pipeline creates a variable of the form `agent.services.<serviceName>.ports.<port>` so that the job can access the port. For example, `agent.services.redis.ports.6379` resolves to the randomly assigned port on the host machine.
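
To illustrate these forms together, a container resource might declare its `ports` as in the following sketch; the resource name, image, and port numbers are placeholders.

```yaml
resources:
  containers:
  - container: my_service    # placeholder name
    image: nginx             # placeholder image
    ports:
    - 8080:80      # <hostPort>:<containerPort>
    - 6379         # <containerPort> only: bound to a random host port
    - 6379/tcp     # optional /<protocol> suffix
```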

## Volumes

Volumes are useful for sharing data between services or for persisting data between multiple runs of a job. You specify volume mounts as an array of `volumes`.

Each volume takes the form `<source>:<destinationPath>`, where `<source>` is either a named volume or an absolute path on the host machine, and `<destinationPath>` is an absolute path in the container. Volumes can be named Docker volumes, anonymous Docker volumes, or bind mounts on the host.

```yaml
services:
# ...
```
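
The body of the example isn't shown here. A `volumes` array covering each kind of mount might look like the following sketch; the service name, image, and paths are placeholders.

```yaml
services:
  my_service:
    image: mysql
    volumes:
    - mydockervolume:/data/dir    # named Docker volume
    - /data/dir                   # anonymous Docker volume
    - /src/dir:/dst/dir           # bind mount from the host
```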

>[!NOTE]
>Microsoft-hosted pools don't persist volumes between jobs because the host machine is cleaned up after each job.

## Multiple containers with services example

The following example has a Django Python web container connected to PostgreSQL and MySQL database containers.

- The PostgreSQL database is the primary database, and its container is named `db`.
- The `db` container uses the volume `/data/db:/var/lib/postgresql/data` and passes three database variables to the container via `env`.
- The `mysql` container uses port `3306:3306` and also passes database variables via `env`.
- The `web` container exposes port `8000`.

In the steps, `pip` installs dependencies, and then Django tests run.

To set up a working example, you need a [Django site set up with two databases](https://docs.djangoproject.com/en/3.2/topics/db/multi-db/). The example assumes your *manage.py* file and your Django project are in the root directory. If not, you might need to update the `/__w/1/s/` path in `/__w/1/s/manage.py test`.

```yaml
resources:
# ...
```
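
The rest of the example is cut off here. A condensed pipeline matching the description above might look like the following sketch; the container names, volume, ports, and test path come from this section, while the images, credentials, and database names are assumptions.

```yaml
resources:
  containers:
  - container: db                  # primary PostgreSQL database
    image: postgres
    volumes:
    - '/data/db:/var/lib/postgresql/data'
    env:                           # three database variables, assumed values
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  - container: mysql
    image: mysql
    ports:
    - '3306:3306'
    env:                           # assumed values
      MYSQL_DATABASE: users
      MYSQL_USER: mysql
      MYSQL_PASSWORD: mysql
      MYSQL_ROOT_PASSWORD: mysql
  - container: web                 # Django web container
    image: python
    ports:
    - '8000:8000'

pool:
  vmImage: 'ubuntu-latest'

container: web

services:
  db: db
  mysql: mysql

steps:
- script: |
    pip install django psycopg2-binary mysqlclient
    python /__w/1/s/manage.py test
  displayName: Install dependencies and run Django tests
```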
