3 changes: 2 additions & 1 deletion .markdownlint.json
@@ -20,5 +20,6 @@
},
"fenced-code-language": true,
"table-pipe-style": true,
"table-column-count": true
"table-column-count": true,
"descriptive-link-text": { "prohibited_texts": ["click here","here","link","more","learn more","find out more"]}
}
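For orientation (an illustration, not part of this diff): with `descriptive-link-text` configured this way, link text consisting solely of one of the prohibited strings is flagged, while text that describes the destination passes:

```markdown
<!-- Flagged: the link text is just "here" -->
See [here](https://docs.docker.com/build/) for build documentation.

<!-- Passes: the link text describes the destination -->
See the [Docker Build documentation](https://docs.docker.com/build/).
```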
9 changes: 4 additions & 5 deletions Dockerfile
@@ -53,13 +53,12 @@ RUN --mount=type=cache,target=/tmp/hugo_cache \
hugo --gc --minify -e $HUGO_ENV -b $DOCS_URL

# lint lints markdown files
FROM davidanson/markdownlint-cli2:v0.14.0 AS lint
USER root
FROM ghcr.io/igorshubovych/markdownlint-cli:v0.45.0 AS lint
RUN --mount=type=bind,target=. \
/usr/local/bin/markdownlint-cli2 \
markdownlint \
"content/**/*.md" \
"#content/manuals/engine/release-notes/*.md" \
"#content/manuals/desktop/previous-versions/*.md"
--ignore "content/manuals/engine/release-notes/*.md" \
--ignore "content/manuals/desktop/previous-versions/*.md"

# test validates HTML output and checks for broken links
FROM wjdp/htmltest:v${HTMLTEST_VERSION} AS test
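For context, the `lint` stage above can typically be run on its own with a targeted build (a sketch, assuming the Dockerfile sits at the repository root):

```console
$ docker build --target lint .
```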
2 changes: 1 addition & 1 deletion content/contribute/components/videos.md
@@ -25,7 +25,7 @@ If all of the above criteria are met, you can reference the following best pract
- Videos should adhere to the same standards for accessibility as the rest of the documentation.
- Ensure the quality of your video by writing a script (if there's narration), making sure multiple browsers and URLs aren't visible, blurring or cropping out any sensitive information, and using smooth transitions between different browsers or screens.

Videos are not hosted in the Docker documentation repository. To add a video, you can use a [link](./links.md) to hosted content, or embed using an [iframe](#iframe).
Videos are not hosted in the Docker documentation repository. To add a video, you can [link to](./links.md) hosted content, or embed using an [iframe](#iframe).


## iframe
2 changes: 1 addition & 1 deletion content/guides/localstack.md
@@ -76,7 +76,7 @@ Launch a quick demo of LocalStack by using the following steps:

When you create a local S3 bucket using LocalStack, you're essentially simulating the creation of an S3 bucket on AWS. This lets you to test and develop applications that interact with S3 without needing an actual AWS account.

To create Local Amazon S3 bucket, you’ll need to install an `awscli-local` package to be installed on your system. This package provides the awslocal command, which is a thin wrapper around the AWS command line interface for use with LocalStack. It lets you to test and develop against a simulated environment on your local machine without needing to access the real AWS services. You can learn more about this utility [here](https://github.com/localstack/awscli-local).
To create a local Amazon S3 bucket, install the [`awscli-local` CLI](https://github.com/localstack/awscli-local) on your system. The `awslocal` command is a thin wrapper around the AWS command line interface for use with LocalStack. It lets you test and develop against a simulated environment on your local machine without needing to access the real AWS services.

```console
$ pip install awscli-local
```
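As a hedged illustration (the bucket name is hypothetical, not from this guide), creating a bucket against LocalStack then looks roughly like:

```console
$ awslocal s3 mb s3://sample-bucket
```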
2 changes: 1 addition & 1 deletion content/guides/python/configure-github-actions.md
@@ -110,7 +110,7 @@ Each GitHub Actions workflow includes one or several jobs. Each job consists of

## 2. Run the workflow

Let's commit the changes, push them to the `main` branch. In the workflow above, the trigger is set to `push` events on the `main` branch. This means that the workflow will run every time you push changes to the `main` branch. You can find more information about the workflow triggers [here](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows).
Commit the changes and push them to the `main` branch. This workflow runs every time you push changes to the `main` branch. You can find more information about workflow triggers [in the GitHub documentation](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows).

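For reference, a push trigger limited to `main` is expressed in the workflow file roughly as follows (a minimal sketch of the standard syntax):

```yaml
# Run the workflow on every push to the main branch.
on:
  push:
    branches:
      - main
```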
Go to the **Actions** tab of you GitHub repository. It displays the workflow. Selecting the workflow shows you the breakdown of all the steps.

2 changes: 1 addition & 1 deletion content/guides/ruby/configure-github-actions.md
@@ -89,7 +89,7 @@ Each GitHub Actions workflow includes one or several jobs. Each job consists of

## 2. Run the workflow

Let's commit the changes, push them to the `main` branch. In the workflow above, the trigger is set to `push` events on the `main` branch. This means that the workflow will run every time you push changes to the `main` branch. You can find more information about the workflow triggers [here](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows).
Commit the changes and push them to the `main` branch. This workflow runs every time you push changes to the `main` branch. You can find more information about workflow triggers [in the GitHub documentation](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows).

Go to the **Actions** tab of you GitHub repository. It displays the workflow. Selecting the workflow shows you the breakdown of all the steps.

7 changes: 3 additions & 4 deletions content/manuals/build/builders/drivers/kubernetes.md
@@ -75,8 +75,7 @@ is configurable using the following driver options:
- `requests.cpu`, `requests.memory`, `requests.ephemeral-storage`, `limits.cpu`, `limits.memory`, `limits.ephemeral-storage`

These options allow requesting and limiting the resources available to each
BuildKit pod according to the official Kubernetes documentation
[here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
BuildKit pod [according to the official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

For example, to create 4 replica BuildKit pods:

@@ -247,8 +246,8 @@ that you want to support.
## Rootless mode

The Kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).
rootless mode works, and its requirements, refer to the
[Rootless Buildkit documentation](https://github.com/moby/buildkit/blob/master/docs/rootless.md).

To turn it on in your cluster, you can use the `rootless=true` driver option:

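A minimal sketch of what that can look like (builder name assumed; the full example is elided from this excerpt):

```console
$ docker buildx create \
  --name rootless-builder \
  --driver kubernetes \
  --driver-opt rootless=true \
  --bootstrap
```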
18 changes: 9 additions & 9 deletions content/manuals/build/builders/drivers/remote.md
@@ -48,9 +48,9 @@ Unix socket, and have Buildx connect through it.
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
```

Alternatively, [see here](https://github.com/moby/buildkit/blob/master/docs/rootless.md)
for running buildkitd in rootless mode or [here](https://github.com/moby/buildkit/tree/master/examples/systemd)
for examples of running it as a systemd service.
Alternatively, refer to the [Rootless Buildkit documentation](https://github.com/moby/buildkit/blob/master/docs/rootless.md)
for running buildkitd in rootless mode, or [the BuildKit systemd examples](https://github.com/moby/buildkit/tree/master/examples/systemd)
for running it as a systemd service.

2. Check that you have a Unix socket that you can connect to.

@@ -159,13 +159,13 @@ BuildKit manually. Additionally, when executing builds from inside Kubernetes
pods, the Buildx builder will need to be recreated from within each pod or
copied between them.

1. Create a Kubernetes deployment of `buildkitd`, as per the instructions
[here](https://github.com/moby/buildkit/tree/master/examples/kubernetes).
1. Create a Kubernetes deployment of `buildkitd` by following the instructions
[in the BuildKit documentation](https://github.com/moby/buildkit/tree/master/examples/kubernetes).

Following the guide, create certificates for the BuildKit daemon and client
using [create-certs.sh](https://github.com/moby/buildkit/blob/master/examples/kubernetes/create-certs.sh),
and create a deployment of BuildKit pods with a service that connects to
them.
Create certificates for the BuildKit daemon and client using the
[create-certs.sh](https://github.com/moby/buildkit/blob/master/examples/kubernetes/create-certs.sh)
script, and create a deployment of BuildKit pods with a service that connects
to them.

2. Assuming that the service is called `buildkitd`, create a remote builder in
Buildx, ensuring that the listed certificate files are present:
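A sketch of that command under assumed certificate paths (the page's own example is elided from this excerpt):

```console
$ docker buildx create \
  --name remote-kubernetes \
  --driver remote \
  --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
  tcp://buildkitd.default.svc:1234
```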
2 changes: 1 addition & 1 deletion content/manuals/build/buildkit/_index.md
@@ -150,7 +150,7 @@ see [GitHub issues](https://github.com/moby/buildkit/issues?q=is%3Aissue%20state

Select the Docker icon in the taskbar, and then **Switch to Windows containers...**.

3. Install containerd version 1.7.7 or later following the setup instructions [here](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd-on-windows).
3. Install containerd version 1.7.7 or later following the [setup instructions](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd-on-windows).

4. Download and extract the latest BuildKit release.

4 changes: 2 additions & 2 deletions content/manuals/docker-hub/repos/manage/builds/setup.md
@@ -30,8 +30,8 @@ when the tests succeed.

> [!NOTE]
>
> You may be redirected to the settings page to [link](link-source.md) the
> code repository service. Otherwise, if you are editing the build settings
> You may be redirected to the settings page to [link the code repository
> service](link-source.md). Otherwise, if you are editing the build settings
> for an existing automated build, select **Configure automated builds**.

4. Select the **source repository** to build the Docker images from.
@@ -61,7 +61,7 @@ system access to the repositories.
This step is optional, but allows you to revoke the build-only keypair without removing other access.

2. Copy the private half of the keypair to your clipboard.
3. In Docker Hub, navigate to the build page for the repository that has linked private submodules. (If necessary, follow the steps [here](index.md#configure-automated-builds) to configure the automated build.)
3. In Docker Hub, navigate to the build page for the repository that has linked private submodules. (If necessary, [follow the steps here](index.md#configure-automated-builds) to configure the automated build.)
4. At the bottom of the screen, select the **plus** icon next to **Build Environment variables**.
5. Enter `SSH_PRIVATE` as the name for the new environment variable.
6. Paste the private half of the keypair into the **Value** field.
6 changes: 0 additions & 6 deletions content/manuals/engine/network/links.md
@@ -44,12 +44,6 @@ Let's say you used this command to run a simple Python Flask application:
$ docker run -d -P training/webapp python app.py
```

> [!NOTE]
>
> Containers have an internal network and an IP address.
> Docker can have a variety of network configurations. You can see more
> information on Docker networking [here](index.md).

When that container was created, the `-P` flag was used to automatically map
any network port inside it to a random high port within an *ephemeral port
range* on your Docker host. Next, when `docker ps` was run, you saw that port
9 changes: 4 additions & 5 deletions content/manuals/engine/security/seccomp.md
@@ -20,11 +20,10 @@ CONFIG_SECCOMP=y

## Pass a profile for a container

The default `seccomp` profile provides a sane default for running containers with
seccomp and disables around 44 system calls out of 300+. It is moderately
protective while providing wide application compatibility. The default Docker
profile can be found
[here](https://github.com/moby/profiles/blob/main/seccomp/default.json).
The [default `seccomp` profile](https://github.com/moby/profiles/blob/main/seccomp/default.json)
provides a sane default for running containers with seccomp and disables around
44 system calls out of 300+. It is moderately protective while providing wide
application compatibility.

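For context (the image and profile path are illustrative), a custom profile is passed to a container with the `--security-opt` flag:

```console
$ docker run --rm -it --security-opt seccomp=/path/to/profile.json hello-world
```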
In effect, the profile is an allowlist that denies access to system calls by
default and then allows specific system calls. The profile works by defining a
4 changes: 2 additions & 2 deletions content/manuals/engine/security/trust/_index.md
@@ -121,8 +121,8 @@ Within the Docker CLI we can sign and push a container image with the
`$ docker trust` command syntax. This is built on top of the Notary feature
set. For more information, see the [Notary GitHub repository](https://github.com/theupdateframework/notary).

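As a hedged example (the tag is hypothetical), signing and pushing an image with this syntax looks like:

```console
$ docker trust sign registry.example.com/admin/demo:1
```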
A prerequisite for signing an image is a Docker Registry with a Notary server (such as Docker Hub) attached. Instructions for
standing up a self-hosted environment can be found [here](/engine/security/trust/deploying_notary/).
A prerequisite for signing an image is a Docker Registry with a Notary server (such as Docker Hub) attached.
Refer to [Deploying Notary](/engine/security/trust/deploying_notary/) for instructions.

> [!NOTE]
>
11 changes: 5 additions & 6 deletions content/manuals/engine/security/trust/trust_delegation.md
@@ -96,8 +96,7 @@ configure the Notary CLI:
The newly created configuration file contains information about the location of your local Docker trust data and the notary server URL.

For more detailed information about how to use notary outside of the
Docker Content Trust use cases, refer to the Notary CLI documentation
[here](https://github.com/theupdateframework/notary/blob/master/docs/command_reference.md)
Docker Content Trust use cases, refer to the [Notary CLI documentation](https://github.com/theupdateframework/notary/blob/master/docs/command_reference.md).

## Creating delegation keys

@@ -189,8 +188,8 @@ jeff 9deed251daa1aa6f9d5f9b752847647cf8d705da
When the first delegation is added to the Notary Server using `$ docker trust`,
we automatically initiate trust data for the repository. This includes creating
the notary target and snapshots keys, and rotating the snapshot key to be
managed by the notary server. More information on these keys can be found
[here](trust_key_mng.md)
managed by the notary server. More information on these keys can be found in
[Manage keys for content trust](trust_key_mng.md).

When initiating a repository, you will need the key and the passphrase of a local
Notary Canonical Root Key. If you have not initiated a repository before, and
@@ -374,8 +373,8 @@ Successfully removed ben from registry.example.com/admin/demo
$ notary witness registry.example.com/admin/demo targets/releases --publish
```

More information on the `$ notary witness` command can be found
[here](https://github.com/theupdateframework/notary/blob/master/docs/advanced_usage.md#recovering-a-delegation)
For more information on the `notary witness` command, refer to the
[Notary client advanced usage guide](https://github.com/theupdateframework/notary/blob/master/docs/advanced_usage.md#recovering-a-delegation).

### Removing a contributor's key from a delegation

@@ -34,4 +34,4 @@ The `ddClient` object gives access to various APIs:
- [Dashboard](dashboard.md)
- [Navigation](dashboard-routes-navigation.md)

Find the Extensions API reference [here](reference/api/extensions-sdk/_index.md).
See also the [Extensions API reference](reference/api/extensions-sdk/_index.md).
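For orientation, a minimal sketch (not part of this change) of obtaining and using the `ddClient` object from an extension's frontend; the toast helper shown belongs to the Dashboard API:

```typescript
import { createDockerDesktopClient } from '@docker/extension-api-client';

// Create the client once and reuse it across the extension UI.
const ddClient = createDockerDesktopClient();

// Show a success notification in the Docker Desktop Dashboard.
ddClient.desktopUI.toast.success('Extension loaded');
```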
2 changes: 1 addition & 1 deletion content/manuals/subscription/desktop-license.md
@@ -37,4 +37,4 @@ Docker Desktop is built using open-source software. For information about the li
## Open source components

Docker Desktop distributes some components that are licensed under the
GNU General Public License. Select [here](https://download.docker.com/opensource/License.tar.gz) to download the source for these components.
GNU General Public License. [Download the source code for these components here](https://download.docker.com/opensource/License.tar.gz).
4 changes: 2 additions & 2 deletions content/reference/compose-file/build.md
@@ -115,8 +115,8 @@ must be prefixed to avoid ambiguity with a `type://` prefix.
Compose warns you if the image builder does not support additional contexts and may list
the unused contexts.

Illustrative examples of how this is used in Buildx can be found
[here](https://github.com/docker/buildx/blob/master/docs/reference/buildx_build.md#-additional-build-contexts---build-context).
Refer to the reference documentation for [`docker buildx build --build-context`](https://github.com/docker/buildx/blob/master/docs/reference/buildx_build.md#-additional-build-contexts---build-context)
for example usage.

`additional_contexts` can also refer to an image built by another service.
This allows a service image to be built using another service image as a base image, and to share
2 changes: 1 addition & 1 deletion content/reference/compose-file/services.md
@@ -1799,7 +1799,7 @@ runs with environment variables `DATABASE_URL` and `DATABASE_API_KEY`.

As Compose stops the application, the `awesomecloud` binary is used to manage the `database` service tear down.

The mechanism used by Compose to delegate the service lifecycle to an external binary is described [here](https://github.com/docker/compose/tree/main/docs/extension.md).
The mechanism used by Compose to delegate the service lifecycle to an external binary is described in the [Compose extensibility documentation](https://github.com/docker/compose/tree/main/docs/extension.md).

For more information on using the `provider` attribute, see [Use provider services](/manuals/compose/how-tos/provider-services.md).
