Commit fe23317

[Docs] Fix incorrect URLs (#3297)

1 parent ead324c commit fe23317

24 files changed: +57 -57 lines changed

README.md

Lines changed: 1 addition & 1 deletion

@@ -52,7 +52,7 @@ Backends can be set up in `~/.dstack/server/config.yml` or through the [project

 For more details, see [Backends](https://dstack.ai/docs/concepts/backends).

-> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh) once the server is up.
+> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up.

 ##### Start the server
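For context, the SSH fleets referenced throughout this commit are declared with a fleet configuration along these lines (a minimal sketch; the fleet name, SSH user, key path, and host addresses are placeholders):

```yaml
type: fleet
name: on-prem-fleet          # placeholder name
# Hosts must be reachable over SSH with the given user and key
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

Applying such a file with `dstack apply -f` registers the on-prem hosts with the server so runs can be scheduled on them, which is why no backend configuration is needed in this case.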

docs/blog/posts/amd-on-tensorwave.md

Lines changed: 2 additions & 2 deletions

@@ -15,7 +15,7 @@ to orchestrate AI containers with any AI cloud vendor, whether they provide on-d

 In this tutorial, we’ll walk you through how `dstack` can be used with
 [TensorWave :material-arrow-top-right-thin:{ .external }](https://tensorwave.com/){:target="_blank"} using
-[SSH fleets](../../docs/concepts/fleets.md#ssh).
+[SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).

 <img src="https://dstack.ai/static-assets/static-assets/images/dstack-tensorwave-v2.png" width="630"/>

@@ -235,6 +235,6 @@ Want to see how it works? Check out the video below:

 <iframe width="750" height="520" src="https://www.youtube.com/embed/b1vAgm5fCfE?si=qw2gYHkMjERohdad&rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

 !!! info "What's next?"
-    1. See [SSH fleets](../../docs/concepts/fleets.md#ssh)
+    1. See [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets)
     2. Read about [dev environments](../../docs/concepts/dev-environments.md), [tasks](../../docs/concepts/tasks.md), and [services](../../docs/concepts/services.md)
     3. Join [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd)

docs/blog/posts/benchmark-amd-containers-and-partitions.md

Lines changed: 1 addition & 1 deletion

@@ -122,7 +122,7 @@ The full, reproducible steps are available in our GitHub repository. Below is a

 #### Creating a fleet

-We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh) to manage the two-node cluster.
+We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets) to manage the two-node cluster.

 ```yaml
 type: fleet

docs/blog/posts/gh200-on-lambda.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@ categories:

 # Supporting ARM and NVIDIA GH200 on Lambda

 The latest update to `dstack` introduces support for NVIDIA GH200 instances on [Lambda](../../docs/concepts/backends.md#lambda)
-and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh).
+and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).

 <img src="https://dstack.ai/static-assets/static-assets/images/dstack-arm--gh200-lambda-min.png" width="630"/>

@@ -78,7 +78,7 @@ $ dstack apply -f .dstack.yml

 !!! info "Retry policy"
     Note, if GH200s are not available at the moment, you can specify the [retry policy](../../docs/concepts/dev-environments.md#retry-policy) in your run configuration so that `dstack` can run the configuration once the GPU becomes available.

-> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh).
+> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).

 !!! info "What's next?"
     1. Sign up with [Lambda :material-arrow-top-right-thin:{ .external }](https://cloud.lambda.ai/sign-up?_gl=1*1qovk06*_gcl_au*MTg2MDc3OTAyOS4xNzQyOTA3Nzc0LjE3NDkwNTYzNTYuMTc0NTQxOTE2MS4xNzQ1NDE5MTYw*_ga*MTE2NDM5MzI0My4xNzQyOTA3Nzc0*_ga_43EZT1FM6Q*czE3NDY3MTczOTYkbzM0JGcxJHQxNzQ2NzE4MDU2JGo1NyRsMCRoMTU0Mzg1NTU1OQ..){:target="_blank"}

docs/blog/posts/gpu-health-checks.md

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ For active checks today, you can run [NCCL tests](../../examples/clusters/nccl-t

 ## Supported backends

-Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh) where DCGM is installed and configured for background checks.
+Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) where DCGM is installed and configured for background checks.

 > Fleets created before version 0.19.22 need to be recreated to enable this feature.

docs/blog/posts/instance-volumes.md

Lines changed: 2 additions & 2 deletions

@@ -41,8 +41,8 @@ resources:

 <!-- more -->

-> Instance volumes work with both [SSH fleets](../../docs/concepts/fleets.md#ssh)
-> and [cloud fleets](../../docs/concepts/fleets.md#cloud), and it is possible to mount any folders on the instance,
+> Instance volumes work with both [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets)
+> and [cloud fleets](../../docs/concepts/fleets.md#backend-fleets), and it is possible to mount any folders on the instance,
 > whether they are regular folders or NFS share mounts.

 The configuration above mounts `/root/.dstack/cache` on the instance to `/root/.cache` inside container.
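The instance-volume mount that post describes uses a `host_path:container_path` pair; a minimal run configuration using it might look like this (a sketch only; the task name and command are placeholders):

```yaml
type: task
name: cache-demo             # placeholder name
# Instance volume: path on the instance on the left,
# path inside the container on the right
volumes:
  - /root/.dstack/cache:/root/.cache
commands:
  - ls /root/.cache
```

Because the left-hand side is any folder on the instance, the same form covers regular directories and NFS share mounts alike.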

docs/blog/posts/intel-gaudi.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ machines equipped with Intel Gaudi accelerators.

 ## Create a fleet

 To manage container workloads on on-prem machines with Intel Gaudi accelerators, start by configuring an
-[SSH fleet](../../docs/concepts/fleets.md#ssh). Here’s an example configuration for your fleet:
+[SSH fleet](../../docs/concepts/fleets.md#ssh-fleets). Here’s an example configuration for your fleet:

 <div editor-title="examples/misc/fleets/gaudi.dstack.yml">

docs/blog/posts/kubernetes-beta.md

Lines changed: 1 addition & 1 deletion

@@ -299,7 +299,7 @@ VM-based backends also offer more granular control over cluster provisioning.

 ### SSH fleets vs Kubernetes backend

-If you’re using on-prem servers and Kubernetes isn’t a requirement, [SSH fleets](../../docs/concepts/fleets.md#ssh) may be simpler.
+If you’re using on-prem servers and Kubernetes isn’t a requirement, [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) may be simpler.
 They provide a lightweight and flexible alternative.

 ### AMD GPUs

docs/blog/posts/nebius.md

Lines changed: 1 addition & 1 deletion

@@ -103,7 +103,7 @@ $ dstack apply -f .dstack.yml

 The new `nebius` backend supports CPU and GPU instances, [fleets](../../docs/concepts/fleets.md),
 [distributed tasks](../../docs/concepts/tasks.md#distributed-tasks), and more.

-> Support for [network volumes](../../docs/concepts/volumes.md#network) and accelerated cluster
+> Support for [network volumes](../../docs/concepts/volumes.md#network-volumes) and accelerated cluster
 interconnects is coming soon.

 !!! info "What's next?"

docs/blog/posts/prometheus.md

Lines changed: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ For a full list of available metrics and labels, check out [Metrics](../../docs/

 ??? info "NVIDIA"
     NVIDIA DCGM metrics are automatically collected for `aws`, `azure`, `gcp`, and `oci` backends,
-    as well as for [SSH fleets](../../docs/concepts/fleets.md#ssh).
+    as well as for [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).

     To ensure NVIDIA DCGM metrics are collected from SSH fleets, ensure the `datacenter-gpu-manager-4-core`,
     `datacenter-gpu-manager-4-proprietary`, and `datacenter-gpu-manager-exporter` packages are installed on the hosts.
