bare-metal/elastic-metal/reference-content/shared-responsibility-model.mdx (1 addition, 1 deletion)

````diff
@@ -7,7 +7,7 @@ content:
 paragraph: Learn about the shared responsibility model for Scaleway Bare Metal services, outlining the roles of Scaleway and users in managing server security, backups, and compliance.
````
compute/gpu/how-to/use-gpu-with-docker.mdx (7 additions, 7 deletions)

````diff
@@ -7,7 +7,7 @@ content:
 paragraph: Learn how to efficiently access and use GPUs with Docker on Scaleway GPU Instances.
 tags: gpu docker
 dates:
-  validation: 2024-07-16
+  validation: 2025-01-20
   posted: 2022-03-25
 categories:
   - compute
````
````diff
@@ -49,7 +49,7 @@ We recommend that you map volumes from your GPU Instance to your Docker containe

 You can map directories from your GPU Instance's Local Storage to your Docker container, using the `-v <local_storage>:<container_mountpoint>` flag. See the example command below:

-```
+```bash
 docker run -it --rm -v /root/mydata/:/workspace nvidia/cuda:11.2.1-runtime-ubuntu20.04

 # use the `exit` command for exiting this docker container
````
````diff
@@ -65,7 +65,7 @@ In the above example, everything in the `/root/mydata` directory on the Instance

 In the following example, we create a directory called `my-data`, create a "Hello World" text file inside that directory, then use the `chown` command to set appropriate ownership for the directory before running the Docker container and specifying the mapped directories. The "Hello World" file is then available inside the Docker container:

-```
+```bash
 mkdir -p /root/my-data/
 echo "Hello World" > /root/my-data/hello.txt
 chown -R 1000:100 /root/my-data
````
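As a quick sanity check outside the diff, the host-side half of this bind-mount example can be reproduced on any Linux machine. The `/tmp/my-data` path below is illustrative only; the guide itself uses `/root/my-data`:

```shell
# Prepare a host directory that would later be bind-mounted with -v (illustrative path).
mkdir -p /tmp/my-data
echo "Hello World" > /tmp/my-data/hello.txt
# Ownership matters because the container process may not run as root;
# chown can itself require root, so a failure is tolerated here.
chown -R 1000:100 /tmp/my-data 2>/dev/null || true
cat /tmp/my-data/hello.txt   # prints "Hello World"
```

The same file would then appear at the container mountpoint chosen on the right-hand side of the `-v` flag.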
````diff
@@ -153,22 +153,22 @@ The possible values of the `NVIDIA_VISIBLE_DEVICES` variable are:
 ### Example commands

 * Starting a GPU-enabled CUDA container (using `--gpus`)
-```sh
+```bash
 docker run --runtime=nvidia -it --rm --gpus all nvidia/cuda:11.2.1-runtime-ubuntu20.04 nvidia-smi
 ```

 * Starting a GPU-enabled container using `NVIDIA_VISIBLE_DEVICES` and specifying the nvidia runtime
-```
+```bash
 docker run --runtime=nvidia -it --rm -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.2.1-runtime-ubuntu20.04 nvidia-smi
 ```

 * Starting a GPU-enabled [Tensorflow](https://www.tensorflow.org/) container with a Jupyter notebook using `NVIDIA_VISIBLE_DEVICES` and mapping port `8888` to access the web GUI:
-```
+```bash
 docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -it --rm -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
 ```

 * Querying the GPU UUID of the first GPU using nvidia-smi and then specifying it to the container:
````
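The diff is truncated before the command for this last bullet. On a GPU Instance, one plausible shape (an assumption, not necessarily the file's actual content) is to capture the UUID with `nvidia-smi --query-gpu=uuid --format=csv,noheader` and pass it through `NVIDIA_VISIBLE_DEVICES`. The sketch below substitutes a sample UUID so the pipeline can be shown without GPU hardware:

```shell
# On real hardware you would capture the first GPU's UUID with:
#   GPU_UUID=$(nvidia-smi --query-gpu=uuid --format=csv,noheader | head -n 1)
# A hypothetical sample value stands in here so this runs without a GPU.
GPU_UUID=$(printf 'GPU-11111111-2222-3333-4444-555555555555\n' | head -n 1)
# Passing the UUID restricts the container to that specific device via the NVIDIA runtime.
echo "docker run --runtime=nvidia -it --rm -e NVIDIA_VISIBLE_DEVICES=${GPU_UUID} nvidia/cuda:11.2.1-runtime-ubuntu20.04 nvidia-smi"
```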