Commit 7ea4b73

Bulk update documentation with new image tag names
Signed-off-by: Simon Redman <[email protected]>
1 parent 1a1af07 commit 7ea4b73

5 files changed · +19 −19 lines changed

README.md

Lines changed: 7 additions & 7 deletions

@@ -43,7 +43,7 @@
 
 > :bulb: Get help - [❓FAQ](https://localai.io/faq/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [:speech_balloon: Discord](https://discord.gg/uJAeKSAGDy) [:book: Documentation website](https://localai.io/)
 >
-> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🛫 Examples](https://github.com/mudler/LocalAI-examples) Try on
+> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🛫 Examples](https://github.com/mudler/LocalAI-examples) Try on
 [![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white)](https://t.me/localaiofficial_bot)
 
 [![tests](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml)[![Build and Release](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml)[![build container images](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml)[![Bump dependencies](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml)[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/localai)](https://artifacthub.io/packages/search?repo=localai)
@@ -131,10 +131,10 @@ For more installation options, see [Installer Options](https://localai.io/instal
 Or run with docker:
 
 > **💡 Docker Run vs Docker Start**
->
+>
 > - `docker run` creates and starts a new container. If a container with the same name already exists, this command will fail.
 > - `docker start` starts an existing container that was previously created with `docker run`.
->
+>
 > If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
 
 ### CPU only image:
@@ -163,7 +163,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nv
 ### AMD GPU Images (ROCm):
 
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-amd-rocm-6
 ```
 
 ### Intel GPU Images (oneAPI):
@@ -194,7 +194,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-ai
 docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
 
 # AMD GPU version
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-amd-rocm-6
 ```
 
 For more information about the AIO images and pre-downloaded models, see [Container Documentation](https://localai.io/basics/container/).
@@ -254,7 +254,7 @@ Roadmap items: [List of issues](https://github.com/mudler/LocalAI/issues?q=is%3A
 - 🗣 [Text to Audio](https://localai.io/features/text-to-audio/)
 - 🔈 [Audio to Text](https://localai.io/features/audio-to-text/) (Audio transcription with `whisper.cpp`)
 - 🎨 [Image generation](https://localai.io/features/image-generation)
-- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
+- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
 - 🧠 [Embeddings generation for vector databases](https://localai.io/features/embeddings/)
 - ✍️ [Constrained grammars](https://localai.io/features/constrained_grammars/)
 - 🖼️ [Download Models directly from Huggingface ](https://localai.io/models/)
@@ -362,7 +362,7 @@ Other:
 - Github bot which answer on issues, with code and documentation as context https://github.com/JackBekket/GitHelper
 - Github Actions: https://github.com/marketplace/actions/start-localai
 - Examples: https://github.com/mudler/LocalAI/tree/master/examples/
-
+
 
 ### 🔗 Resources

docs/content/features/GPU-acceleration.md

Lines changed: 2 additions & 2 deletions

@@ -182,7 +182,7 @@ The following are examples of the ROCm specific configuration elements required.
 
 ```yaml
 # For full functionality select a non-'core' image, version locking the image is recommended for debug purposes.
-image: quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
+image: quay.io/go-skynet/local-ai:master-aio-gpu-amd-rocm-6
 environment:
 - DEBUG=true
 # If your gpu is not already included in the current list of default targets the following build details are required.
@@ -205,7 +205,7 @@ docker run \
 -e GPU_TARGETS=gfx906 \
 --device /dev/dri \
 --device /dev/kfd \
-quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
+quay.io/go-skynet/local-ai:master-aio-gpu-amd-rocm-6
 ```
 
 Please ensure to add all other required environment variables, port forwardings, etc to your `compose` file or `run` command.

docs/content/getting-started/container-images.md

Lines changed: 5 additions & 5 deletions

@@ -84,9 +84,9 @@ Standard container images do not have pre-installed models. Use these if you wan
 
 | Description | Quay | Docker Hub |
 | --- | --- |-------------------------------------------------------------|
-| Latest images from the branch (development) | `quay.io/go-skynet/local-ai:master-gpu-hipblas` | `localai/localai:master-gpu-hipblas` |
-| Latest tag | `quay.io/go-skynet/local-ai:latest-gpu-hipblas` | `localai/localai:latest-gpu-hipblas` |
-| Versioned image | `quay.io/go-skynet/local-ai:{{< version >}}-gpu-hipblas` | `localai/localai:{{< version >}}-gpu-hipblas` |
+| Latest images from the branch (development) | `quay.io/go-skynet/local-ai:master-gpu-amd-rocm-6` | `localai/localai:master-gpu-amd-rocm-6` |
+| Latest tag | `quay.io/go-skynet/local-ai:latest-gpu-amd-rocm-6` | `localai/localai:latest-gpu-amd-rocm-6` |
+| Versioned image | `quay.io/go-skynet/local-ai:{{< version >}}-gpu-amd-rocm-6` | `localai/localai:{{< version >}}-gpu-amd-rocm-6` |
 
 {{% /tab %}}
 
@@ -190,7 +190,7 @@ services:
 
 **Models caching**: The **AIO** image will download the needed models on the first run if not already present and store those in `/models` inside the container. The AIO models will be automatically updated with new versions of AIO images.
 
-You can change the directory inside the container by specifying a `MODELS_PATH` environment variable (or `--models-path`).
+You can change the directory inside the container by specifying a `MODELS_PATH` environment variable (or `--models-path`).
 
 If you want to use a named model or a local directory, you can mount it as a volume to `/models`:
 
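To illustrate the `MODELS_PATH` note in the hunk above (the upstream README's own mount example falls outside this hunk), here is a minimal compose sketch; the image tag, host path, and custom models path are illustrative assumptions, not values from this commit:

```yaml
services:
  local-ai:
    # Any LocalAI image works here; aio-cpu is assumed for the example
    image: localai/localai:latest-aio-cpu
    ports:
      - "8080:8080"
    environment:
      # Override the default /models directory inside the container
      - MODELS_PATH=/custom-models
    volumes:
      # Mount a host directory containing your models
      - ./models:/custom-models
```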
@@ -215,7 +215,7 @@ docker run -p 8080:8080 --name local-ai -ti -v localai-models:/models localai/lo
 | Versioned image (e.g. for CPU) | `quay.io/go-skynet/local-ai:{{< version >}}-aio-cpu` | `localai/localai:{{< version >}}-aio-cpu` |
 | Latest images for Nvidia GPU (CUDA12) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12` | `localai/localai:latest-aio-gpu-nvidia-cuda-12` |
 | Latest images for Nvidia GPU (CUDA13) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-13` | `localai/localai:latest-aio-gpu-nvidia-cuda-13` |
-| Latest images for AMD GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas` | `localai/localai:latest-aio-gpu-hipblas` |
+| Latest images for AMD GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-amd-rocm-6` | `localai/localai:latest-aio-gpu-amd-rocm-6` |
 | Latest images for Intel GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-intel` | `localai/localai:latest-aio-gpu-intel` |
 
 ### Available environment variables

docs/content/installation/docker.md

Lines changed: 3 additions & 3 deletions

@@ -70,7 +70,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gp
 
 **AMD GPU (ROCm):**
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-amd-rocm-6
 ```
 
 **Intel GPU:**
@@ -119,7 +119,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-ai
 
 **AMD GPU (ROCm):**
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-amd-rocm-6
 ```
 
 **Intel GPU:**
@@ -140,7 +140,7 @@ services:
 # image: localai/localai:latest-aio-gpu-nvidia-cuda-13
 # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
 # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
-# image: localai/localai:latest-aio-gpu-hipblas
+# image: localai/localai:latest-aio-gpu-amd-rocm-6
 # image: localai/localai:latest-aio-gpu-intel
 healthcheck:
 test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]

docs/static/install.sh

Lines changed: 2 additions & 2 deletions

@@ -702,10 +702,10 @@ install_docker() {
 $envs \
 -d -p $PORT:8080 --name local-ai localai/localai:$IMAGE_TAG $STARTCOMMAND
 elif [ "$HAS_AMD" ]; then
-IMAGE_TAG=${LOCALAI_VERSION}-gpu-hipblas
+IMAGE_TAG=${LOCALAI_VERSION}-gpu-amd-rocm-6
 # AIO
 if [ "$USE_AIO" = true ]; then
-IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-hipblas
+IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-amd-rocm-6
 fi
 
 info "Starting LocalAI Docker container..."
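The `install.sh` hunk above renames the AMD tag suffix inside the script's image-selection branch. The branch can be sketched standalone as below; the values of `LOCALAI_VERSION`, `HAS_AMD`, and `USE_AIO` are assumed for illustration (the real script derives them from hardware detection and installer flags):

```shell
#!/bin/sh
# Assumed example values; install.sh sets these itself.
LOCALAI_VERSION="latest"
HAS_AMD=1
USE_AIO=true

if [ "$HAS_AMD" ]; then
    # New tag scheme introduced by this commit: -gpu-amd-rocm-6
    IMAGE_TAG=${LOCALAI_VERSION}-gpu-amd-rocm-6
    # AIO images bundle pre-downloaded models, so they take priority.
    if [ "$USE_AIO" = true ]; then
        IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-amd-rocm-6
    fi
fi
echo "$IMAGE_TAG"
```

With these example values the script selects the AIO AMD image, `latest-aio-gpu-amd-rocm-6`.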
