InvokeAI Version 2.1.3 - A Stable Diffusion Toolkit
Welcome! Click here to get the latest release!
Read below for the old 2.1.3 release.
Invoke AI 2.1.3
The invoke-ai team is excited to be able to share the release of InvokeAI 2.1 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
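For a concrete sense of scale, here is a hypothetical interactive CLI request for a 512x768 image of the kind referenced above (the prompt text and step count are arbitrary examples; see the CLI documentation for the full set of switches):

```
invoke> "a watercolor painting of a lighthouse at dawn" -W 512 -H 768 -s 30
```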
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit. Version 2.1 of the tool introduces multiple new features and performance enhancements.
This 14-minute YouTube video introduces you to some of the new features contained in this release. The following sections describe what's new in the Web interface (WebGUI) and the command-line interface (CLI).
Version 2.1.3 is primarily a bug fix release that improves the installation process and provides enhanced stability and usability.
Update 22 November - updated invokeAI-src-installer-mac.zip to correct an error downloading the micromamba distribution.
New features
- A choice of installer scripts that automate installation and configuration. See Installation.
- A streamlined manual installation process that works for both Conda and PIP-only installs. See Manual Installation.
- The ability to save frequently-used startup options (model to load, steps, sampler, etc.) in a `.invokeai` file. See Client. A hypothetical example appears after this list.
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
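As a rough illustration only, a `.invokeai` file holds the same switches you would otherwise pass on the command line, one or more per line. The switch names below are assumptions for the sketch; run `python scripts/invoke.py --help` or see the Client documentation for the exact options your install supports:

```
# ~/.invokeai -- frequently-used startup options (illustrative switch names)
--model stable-diffusion-1.4
--sampler_name k_lms
--steps 30
--web
```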
Installation
For those installing InvokeAI for the first time, please use this recipe:
- For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the [InvokeAI docs](https://invoke-ai.github.io/InvokeAI) will provide you with a guide to which file to download and what to do with it when you get it.
- For manual installation, download one of the "Source Code" archive files located in the Assets below.
- Unpack the file, and enter the `InvokeAI` directory that it creates.
- Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI`.
- Follow the instructions in Manual Installation. A sketch of the overall sequence appears after this list.
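As a minimal sketch only, assuming a Linux machine with an NVIDIA GPU (the environment file name and the exact conda steps depend on your platform; Manual Installation is the authoritative reference), the manual route looks roughly like this:

```bash
# Illustrative only -- consult Manual Installation for your platform.
git clone http://github.com/invoke-ai/InvokeAI
cd InvokeAI
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
conda env create                    # builds the "invokeai" conda environment from environment.yml
conda activate invokeai
python scripts/preload_models.py    # downloads the support models
```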
Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:
- Download one of the "Source Code" archive files located in the Assets below.
- Unpack the file, and enter the `InvokeAI` directory that it creates.
- Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
- Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new `environments-and-requirements` directory:
```
environment-lin-amd.yml    # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml   # Linux with an NVIDIA CUDA GPU
environment-mac.yml        # Macintoshes with MPS acceleration
environment-win-cuda.yml   # Windows with an NVIDIA CUDA GPU
```
- Important step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:
  - Macintosh and Linux, using a symbolic link:
```
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml   # replace `xxx` and `yyy` with the appropriate OS and GPU codes
```
  - Windows:
```
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
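For example, on Macintosh and Linux you can check the link with `ls` (the target shown in the output should be the environment file you selected; the name in the comment below is just a placeholder):

```
ls -l environment.yml    # should show: environment.yml -> environments-and-requirements/environment-<your choice>.yml
```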
- Now run the following commands in the InvokeAI directory:
```
conda env update
conda activate invokeai
python scripts/preload_models.py
```
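As an optional sanity check (not part of the official recipe), you can then start the application and confirm that it loads cleanly; the web interface is typically served at http://localhost:9090:

```
python scripts/invoke.py          # command-line interface
python scripts/invoke.py --web    # web interface (WebGUI)
```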
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.
The most important thing to know about contributing code is to make your pull request against the "development" branch, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical
changes.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.