`docs/articles/users/tutorials/Installation/docker-ollama-setup.md` (23 additions & 4 deletions)
@@ -18,9 +18,9 @@ This document provides instructions on how to set up and run Ollama for deployin
 - On Windows, install Windows Subsystem for Linux (WSL). The latest version is v2: [Official Microsoft Documentation](https://learn.microsoft.com/en-us/windows/wsl/install)
 - GPU Support:
   - For macOS users with Apple Silicon chips (typically the M series), you can **follow the instructions for CPU and ignore the GPU-related sections**
-  - For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. On the computer with the GPU, do:
+  - For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. We will be using the official documentation for each GPU type:
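As an aside to the GPU-support bullet above: before following a vendor's instructions, it can help to confirm which dedicated GPU the Linux/WSL environment actually sees. A minimal sketch, assuming a Debian/Ubuntu-like environment where `lspci` (from the `pciutils` package) is available:

```shell
# Sketch: list display adapters to decide between the NVIDIA and AMD paths.
# Assumes lspci is installed; the grep pattern is a heuristic, not exhaustive.
if command -v lspci >/dev/null 2>&1; then
  gpus=$(lspci | grep -iE 'vga|3d|display' || true)
else
  gpus=""
fi
echo "Detected display adapters: ${gpus:-none found (or lspci unavailable)}"
```

An NVIDIA line points you toward the NVIDIA instructions below; an AMD line toward the ROCm image mentioned later in this guide.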
@@ -33,10 +33,11 @@ This document provides instructions on how to set up and run Ollama for deployin
 - Either enable integration with your default WSL distro (arrow 4.1) OR select a specific one (arrow 4.2)
 - Click "Apply & Restart" if you make changes (arrow 5)

-3. **For NVIDIA GPU owners, set up GPU Support (nothing to do for AMD GPU owners at this stage)**:
+3. **For GPU owners, set up GPU Support**:
 - [Open a terminal](../../../appendices/open_a_terminal.md) on the computer with the GPU you want to use (for GPU servers, you likely connect through ssh)
 - On Windows, launch the Linux version that was installed via WSL and that Docker is using. For example, in the previous image, that would be `Ubuntu-24.04`; so, run `wsl -d Ubuntu-24.04` to start Ubuntu.
-- For NVIDIA GPU support, run:
+- **For NVIDIA GPU support**, run:

 ```bash
 # Add NVIDIA repository keys
@@ -55,6 +56,24 @@ This document provides instructions on how to set up and run Ollama for deployin
 > Troubleshooting: If you encounter issues with the `/dev/kfd` or `/dev/dri` devices, try running the command with the `--privileged` flag: `docker run -d --privileged --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`
+
 ### Checking that GPU support is enabled as expected

 - Go to the `Containers` tab in Docker Desktop (arrow 1) and select your Ollama container
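Besides the Docker Desktop check described in the hunk above, a command-line spot check can confirm the container actually sees the GPU. A hedged sketch: the container name `ollama` matches the `--name ollama` flag used earlier, and `nvidia-smi`/`rocm-smi` are the vendors' standard status tools:

```shell
# Sketch: run the vendor GPU status tool inside the running container.
# Swap in rocm-smi if you used the ollama/ollama:rocm image.
check="docker exec ollama nvidia-smi"
if command -v docker >/dev/null 2>&1; then
  $check || echo "GPU not visible inside the container yet"
else
  echo "docker not found here; on the GPU machine, run: $check"
fi
```

If the tool lists your GPU, the container has device access; if it errors out, revisit the GPU setup steps (or the `--privileged` troubleshooting note above).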
@@ -72,6 +75,12 @@ At this step, you will be downloading the content of Hatchling. Currently, we ar