
Commit a21938b

Merge pull request #4 from LittleCoinCoin/docs/for-v0.4.0
Fixing documentation instructions to setup Hatchling
2 parents: 4ff1c29 + e2499bd

File tree: 2 files changed (+32, -4 lines)


docs/articles/users/tutorials/Installation/docker-ollama-setup.md (23 additions, 4 deletions)
````diff
@@ -18,9 +18,9 @@ This document provides instructions on how to set up and run Ollama for deployin
 - On Windows, install Windows Subsystem for Linux (WSL). Latest version is v2: [Official Microsoft Documentation](https://learn.microsoft.com/en-us/windows/wsl/install)
 - GPU Support:
 - For MacOS users with Apple Silicon chips (typically M series), you can **follow the instructions for CPU and ignore the GPU-related sections**
-- For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. On the computer with the GPU, do:
+- For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. We will be using the official documentation for each GPU type:
 - NVIDIA GPUs: [NVIDIA Container Toolkit Installation](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-- AMD GPUs: Nothing, you can move on.
+- AMD GPUs: [AMD ROCm Installation](https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/native_linux/install-radeon.html)
 
 ## Setup with Docker Desktop
 
````
````diff
@@ -33,10 +33,11 @@ This document provides instructions on how to set up and run Ollama for deployin
 - Either enable integration with your default WSL distro (arrow 4.1) OR select a specific one (arrow 4.2)
 - Click "Apply & Restart" if you make changes (arrow 5)
 
-3. **For NVIDIA GPU owners, setup GPU Support (nothing to do for AMD GPU owners at this stage)**:
+3. **For GPU owners, setup GPU Support**:
 - [Open a terminal](../../../appendices/open_a_terminal.md) on the computer with the GPU you want to use (for GPU servers, you likely connect through ssh)
 - On Windows, launch the Linux version that was installed via WSL and that Docker is using. For example, in the previous image, that would be `Ubuntu-24.04`; so, run `wsl -d Ubuntu-24.04` to start Ubuntu.
-- For NVIDIA GPU support, run:
+
+- **For NVIDIA GPU support**, run:
 
 ```bash
 # Add NVIDIA repository keys
````
````diff
@@ -55,6 +56,24 @@ This document provides instructions on how to set up and run Ollama for deployin
 sudo nvidia-ctk runtime configure --runtime=docker
 ```
 
+- **For AMD GPU support**, run:
+
+```bash
+# Install required packages
+sudo apt install python3-setuptools python3-wheel
+
+# Download and install AMD GPU installer script (for Ubuntu 24.04)
+sudo apt update
+wget https://repo.radeon.com/amdgpu-install/6.4.2.1/ubuntu/noble/amdgpu-install_6.4.60402-1_all.deb
+sudo apt install ./amdgpu-install_6.4.60402-1_all.deb
+
+# Install graphics and ROCm support
+sudo amdgpu-install -y --usecase=graphics,rocm
+
+# Add current user to render and video groups
+sudo usermod -a -G render,video $LOGNAME
+```
+
 - Close the terminal
 - Restart Docker
 - For Docker Desktop, click on the three vertical dots icon (arrow 1), then `Restart` (arrow 2)
````
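The AMD setup above ends with `sudo usermod -a -G render,video $LOGNAME`, and group changes only take effect in a new login session. As a minimal sketch (not part of the committed docs), the following read-only check reports whether the current session already sees those groups; the exact messages are illustrative:

```shell
# Read-only check: does the current login session belong to the `render`
# and `video` groups that ROCm containers need? The `usermod` change from
# the setup steps only applies after logging out and back in.
groups_list="$(id -nG)"
for g in render video; do
  case " $groups_list " in
    *" $g "*) echo "$g: ok" ;;
    *)        echo "$g: missing (log out and back in, or re-run the usermod step)" ;;
  esac
done
```

If either group reports missing after a fresh login, the `usermod` step likely was not applied for the current user.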

docs/articles/users/tutorials/Installation/running_hatchling.md (9 additions, 0 deletions)
````diff
@@ -32,6 +32,9 @@ This section assumes you have followed the [Docker & Ollama setup](./docker-olla
 docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
 ```
 
+> [!Note]
+> Troubleshooting: If you encounter issues with the `/dev/kfd` or `/dev/dri` devices, try running the command with the `--privileged` flag: `docker run -d --privileged --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`
+
 ### Checking that GPU support is enabled as expected
 
 - Go to the `Containers` tab in Docker Desktop (arrow 1) and select your Ollama container
````
````diff
@@ -72,6 +75,12 @@ At this step, you will be downloading the content of Hatchling. Currently, we ar
 cd ./Hatchling/docker
 ```
 
+### Copy the `.env.example` file to `.env`
+
+```bash
+cp .env.example .env
+```
+
 ### Install Hatchling by building the code
 
 ```bash
````
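A plain `cp .env.example .env` will overwrite an already-edited `.env` if the setup steps are re-run. A hedged sketch of an idempotent variant, demonstrated in a scratch directory with a made-up `OLLAMA_HOST` entry (not the real contents of Hatchling's `.env.example`):

```shell
# Sketch: copy the example env file only when no .env exists yet, so
# re-running setup never clobbers local overrides. Runs in a scratch
# directory; the variable and values below are illustrative only.
set -eu
workdir="$(mktemp -d)"
cd "$workdir"
printf 'OLLAMA_HOST=localhost:11434\n' > .env.example   # stand-in example file
[ -f .env ] || cp .env.example .env                     # first run: copies
printf 'OLLAMA_HOST=gpu-server:11434\n' > .env          # user edits .env
[ -f .env ] || cp .env.example .env                     # second run: no-op
grep -q 'gpu-server' .env && echo "local edits preserved"
```

Running the guarded copy twice leaves the edited value in place and prints "local edits preserved"; where GNU coreutils is available, `cp -n` behaves similarly.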
