Commit 7b5d493: Merge branch 'main' into ryan/spandrel-upscale

Parents: ecbff2a, c90b554

139 files changed: +1036, −1082 lines


**`.github/actions/install-frontend-deps/action.yml`** (2 additions & 2 deletions)

```diff
@@ -9,9 +9,9 @@ runs:
         node-version: '18'

     - name: setup pnpm
-      uses: pnpm/action-setup@v2
+      uses: pnpm/action-setup@v4
       with:
-        version: 8
+        version: 8.15.6
         run_install: false

     - name: get pnpm store directory
```

**`.github/pull_request_template.md`** (1 addition & 1 deletion)

```diff
@@ -8,7 +8,7 @@

 ## QA Instructions

-<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
+<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->

 ## Merge Plan

```

**`README.md`** (29 additions & 0 deletions)

````diff
@@ -49,6 +49,33 @@ Invoke is available in two editions:

 More detail, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].

+## Docker Container
+
+We publish official container images in Github Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
+
+> [!IMPORTANT]
+> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
+
+### Generate!
+
+Run the container, modifying the command as necessary:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
+
+For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
+
+### Persist your data
+
+You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
+
+### DIY
+
+Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
+
 ## Troubleshooting, FAQ and Support

 Please review our [FAQ][faq] for solutions to common installation problems and other issues.
@@ -126,3 +153,5 @@ Original portions of the software are Copyright © 2024 by respective contributo
 [latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
 [translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
 [translation status link]: https://hosted.weblate.org/engage/invokeai/
+[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
+[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
````

**`docker/.env.sample`** (3 additions & 2 deletions)

```diff
@@ -19,8 +19,9 @@
 ## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
 # INVOKEAI_PORT=9090

-## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
-# GPU_DRIVER=nvidia #| rocm
+## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
+# GPU_DRIVER=cuda #| rocm

 ## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
+## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
 # CONTAINER_UID=1000
```
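For illustration, a filled-in `.env` built from the options in this sample might look like the following. Every path and value here is hypothetical, not a shipped default:

```bash
# Hypothetical .env; adjust for your system
INVOKEAI_ROOT=/home/myuser/invokeai  # absolute path to the runtime directory
INVOKEAI_PORT=9090                   # web UI port published by the container
GPU_DRIVER=cuda                      # `cuda` or `rocm`, per the rename above
CONTAINER_UID=1000                   # match the host user's UID (`id -u`)
```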

**`docker/README.md`** (52 additions & 18 deletions)

````diff
@@ -1,51 +1,85 @@
-# InvokeAI Containerized
+# Invoke in Docker

-All commands should be run within the `docker` directory: `cd docker`
+- Ensure that Docker can use the GPU on your system
+- This documentation assumes Linux, but should work similarly under Windows with WSL2
+- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.

-## Quickstart :rocket:
+## Quickstart :lightning:

-On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+No `docker compose`, no persistence, just a simple one-liner using the official images:

-For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
+**CUDA:**

-## Detailed setup
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+**ROCm:**
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```
+
+Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
+
+> [!TIP]
+> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
+
+## Customize the container
+
+We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
+
+```bash
+cd docker
+cp .env.sample .env
+# edit .env to your liking if you need to; it is well commented.
+./run.sh
+```
+
+It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
+
+## Docker setup in detail

 #### Linux

 1. Ensure builkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
 2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
-    - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
+    - The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
 3. Ensure docker daemon is able to access the GPU.
-    - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    - [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    - [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)

 #### macOS

+> [!TIP]
+> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
+
+If you are still reading:
+
 1. Ensure Docker has at least 16GB RAM
 2. Enable VirtioFS for file sharing
 3. Enable `docker compose` V2 support

-This is done via Docker Desktop preferences
+This is done via Docker Desktop preferences.

-### Configure Invoke environment
+### Configure the Invoke Environment

-1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
-    a. the desired location of the InvokeAI runtime directory, or
-    b. an existing, v3.0.0 compatible runtime directory.
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
 1. Execute `run.sh`

 The image will be built automatically if needed.

-The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
+The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.

 ### Use a GPU

 - Linux is *recommended* for GPU support in Docker.
 - WSL2 is *required* for Windows.
 - only `x86_64` architecture is supported.

-The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
+The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.

-To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
+To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.

 ## Customize

@@ -59,10 +93,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa
 INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
 HUGGINGFACE_TOKEN=the_actual_token
 CONTAINER_UID=1000
-GPU_DRIVER=nvidia
+GPU_DRIVER=cuda
 ```

-Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
+Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.

 ## Even More Customizing!

````

**`docker/docker-compose.yml`** (1 addition & 3 deletions)

```diff
@@ -1,7 +1,5 @@
 # Copyright (c) 2023 Eugene Brodsky https://github.com/ebr

-version: '3.8'
-
 x-invokeai: &invokeai
   image: "local/invokeai:latest"
   build:
@@ -32,7 +30,7 @@ x-invokeai: &invokeai


 services:
-  invokeai-nvidia:
+  invokeai-cuda:
     <<: *invokeai
     deploy:
       resources:
```

**`docker/docker-entrypoint.sh`** (9 additions & 9 deletions)

```diff
@@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
 # but it is useful to have the full SSH server e.g. on Runpod.
 # (use SCP to copy files to/from the image, etc)
 if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
-  apt-get update
-  apt-get install -y openssh-server
-  pushd "$HOME"
-  mkdir -p .ssh
-  echo "${PUBLIC_KEY}" > .ssh/authorized_keys
-  chmod -R 700 .ssh
-  popd
-  service ssh start
+    apt-get update
+    apt-get install -y openssh-server
+    pushd "$HOME"
+    mkdir -p .ssh
+    echo "${PUBLIC_KEY}" >.ssh/authorized_keys
+    chmod -R 700 .ssh
+    popd
+    service ssh start
 fi

 mkdir -p "${INVOKEAI_ROOT}"
-chown --recursive ${USER} "${INVOKEAI_ROOT}"
+chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
 cd "${INVOKEAI_ROOT}"

 # Run the CMD as the Container User (not root).
```
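The `|| true` appended to `chown` is the behavioral change here: without it, a failing `chown` (for example, on a mounted volume the container is not allowed to modify) would abort an entrypoint running under `set -e`. A standalone sketch of the idiom, with a deliberately bogus target path:

```shell
#!/usr/bin/env bash
# Demonstrates the `|| true`-style fallback used above: the failing
# chown no longer terminates a script running with `set -e`.
set -e
status="ok"
# chown on a nonexistent path always fails; the fallback swallows the error
chown --recursive nobody /no/such/dir 2>/dev/null || status="skipped"
echo "chown: $status"  # prints "chown: skipped"; the script kept running
```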

**`docker/run.sh`** (5 additions & 1 deletion)

```diff
@@ -8,11 +8,15 @@ run() {
   local build_args=""
   local profile=""

+  # create .env file if it doesn't exist, otherwise docker compose will fail
   touch .env
+
+  # parse .env file for build args
   build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
     profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"

-  [[ -z "$profile" ]] && profile="nvidia"
+  # default to 'cuda' profile
+  [[ -z "$profile" ]] && profile="cuda"

   local service_name="invokeai-$profile"

```
**`docs/installation/040_INSTALL_DOCKER.md`** (18 additions & 41 deletions)

````diff
@@ -4,50 +4,37 @@ title: Installing with Docker

 # :fontawesome-brands-docker: Docker

-!!! warning "macOS and AMD GPU Users"
+!!! warning "macOS users"

-    We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
-    because Docker containers can not access the GPU on macOS.
-
-!!! warning "AMD GPU Users"
-
-    Container support for AMD GPUs has been reported to work by the community, but has not received
-    extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
-    use the `build.sh` script to build the image for this to take effect at build time.
+    Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.

 !!! tip "Linux and Windows Users"

-    For optimal performance, configure your Docker daemon to access your machine's GPU.
+    Configure Docker to access your machine's GPU.
     Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
-    Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.

-## Why containers?
+## TL;DR

-They provide a flexible, reliable way to build and deploy InvokeAI.
-See [Processes](https://12factor.net/processes) under the Twelve-Factor App
-methodology for details on why running applications in such a stateless fashion is important.
+Ensure your Docker setup is able to use your GPU. Then:

-The container is configured for CUDA by default, but can be built to support AMD GPUs
-by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```

-Developers on Apple silicon (M1/M2/M3): You
-[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
-and performance is reduced compared with running it directly on macOS but for
-development purposes it's fine. Once you're done with development tasks on your
-laptop you can build for the target platform and architecture and deploy to
-another environment with NVIDIA GPUs on-premises or in the cloud.
+Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.

-## TL;DR
+## Build-It-Yourself

-This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
+All the docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.

 ```bash
-# docker compose commands should be run from the `docker` directory
 cd docker
+cp .env.sample .env
 docker compose up
 ```

-## Installation in a Linux container (desktop)
+We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the docker setup to your needs.

 ### Prerequisites

@@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
 [Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
 increase Swap and Disk image size too.

-#### Get a Huggingface-Token
-
-Besides the Docker Agent you will need an Account on
-[huggingface.co](https://huggingface.co/join).
-
-After you succesfully registered your account, go to
-[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
-a token and copy it, since you will need in for the next step.
-
 ### Setup

-Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
+Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.

 Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.

@@ -103,10 +81,9 @@ Once the container starts up (and configures the InvokeAI root directory if this
 ## Troubleshooting / FAQ

 - Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
-- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
-  and you may have cloned this repository before the issue was fixed. To solve this, please change
-  the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
+- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending on how you cloned the repository.
+  To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
   (`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
   Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
   to reset the file to its most recent version.
-  For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
+  For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
````
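The CRLF failure mode described in the FAQ can be reproduced and fixed without an editor. A sketch using throwaway files (`dos2unix`, when available, does the same job as the `tr` call):

```shell
#!/usr/bin/env bash
# Reproduce the CRLF problem on a throwaway file, then strip the \r
demo=$(mktemp)
printf '#!/bin/sh\r\necho hello\r\n' >"$demo"

# detect: CRLF files carry a carriage return before each newline
grep -q $'\r' "$demo" && echo "has CRLF"

# fix: equivalent to `dos2unix "$demo"`
tr -d '\r' <"$demo" >"$demo.lf"
grep -q $'\r' "$demo.lf" || echo "now LF only"
```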

**`installer/templates/invoke.bat.in`** (1 addition & 1 deletion)

```diff
@@ -13,7 +13,7 @@ echo 2. Open the developer console
 echo 3. Command-line help
 echo Q - Quit
 echo.
-echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
+echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
 echo.
 set /P choice="Please enter 1-4, Q: [1] "
 if not defined choice set choice=1
```
