@@ -122,9 +122,9 @@ experimental versions later.
     [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest),
     and look for a file named:

-    - InvokeAI-installer-v3.X.X.zip
+    - InvokeAI-installer-v4.X.X.zip

-    where "3.X.X" is the latest released version. The file is located
+    where "4.X.X" is the latest released version. The file is located
     at the very bottom of the release page, under **Assets**.

 4. **Unpack the installer**: Unpack the zip file into a convenient directory. This will create a new
@@ -199,136 +199,7 @@ experimental versions later.

     </figure>

-10. **Post-install Configuration**: After installation completes, the
-    installer will launch the configuration form, which will guide you
-    through the first-time process of adjusting some of InvokeAI's
-    startup settings. To move around this form use ctrl-N for
-    <N>ext and ctrl-P for <P>revious, or use <tab>
-    and shift-<tab> to move forward and back. Once you are in a
-    multi-checkbox field use the up and down cursor keys to select the
-    item you want, and <space> to toggle it on and off. Within
-    a directory field, pressing <tab> will provide autocomplete
-    options.
-
-    Generally the defaults are fine, and you can come back to this screen at
-    any time to tweak your system. Here are the options you can adjust:
-
-    - ***HuggingFace Access Token***
-      InvokeAI has the ability to download embedded styles and subjects
-      from the HuggingFace Concept Library on-demand. However, some of
-      the concept library files are password protected. To make download
-      smoother, you can set up an account at huggingface.co, obtain an
-      access token, and paste it into this field. Note that you paste
-      to this screen using ctrl-shift-V
-
-    - ***Free GPU memory after each generation***
-      This is useful for low-memory machines and helps minimize the
-      amount of GPU VRAM used by InvokeAI.
-
-    - ***Enable xformers support if available***
-      If the xformers library was successfully installed, this will activate
-      it to reduce memory consumption and increase rendering speed noticeably.
-      Note that xformers has the side effect of generating slightly different
-      images even when presented with the same seed and other settings.
-
-    - ***Force CPU to be used on GPU systems***
-      This will use the (slow) CPU rather than the accelerated GPU. This
-      can be used to generate images on systems that don't have a compatible
-      GPU.
-
-    - ***Precision***
-      This controls whether to use float32 or float16 arithmetic.
-      float16 uses less memory but is also slightly less accurate.
-      Ordinarily the right arithmetic is picked automatically ("auto"),
-      but you may have to use float32 to get images on certain systems
-      and graphics cards. The "autocast" option is deprecated and
-      shouldn't be used unless you are asked to by a member of the team.
-
-    - ***Size of the RAM cache used for fast model switching***
-      This allows you to keep models in memory and switch rapidly among
-      them rather than having them load from disk each time. This slider
-      controls how many models to keep loaded at once. A typical SD-1 or SD-2 model
-      uses 2-3 GB of memory. A typical SDXL model uses 6-7 GB. Providing more
-      RAM will allow more models to be co-resident.
-
-    - ***Output directory for images***
-      This is the path to a directory in which InvokeAI will store all its
-      generated images.
-
-    - ***Autoimport Folder***
-      This is the directory in which you can place models you have
-      downloaded and wish to load into InvokeAI. You can place a variety
-      of models in this directory, including diffusers folders, .ckpt files,
-      .safetensors files, as well as LoRAs, ControlNet and Textual Inversion
-      files (both folder and file versions). To help organize this folder,
-      you can create several levels of subfolders and drop your models into
-      whichever ones you want.
-
-    - ***LICENSE***
-
-      At the bottom of the screen you will see a checkbox for accepting
-      the CreativeML Responsible AI Licenses. You need to accept the license
-      in order to download Stable Diffusion models from the next screen.
-
-    _You can come back to the startup options form_ as many times as you like.
-    From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
-    this script. On the command line, it is named `invokeai-configure`.
-
-11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
-    to another screen that prompts you to download a series of starter models. The ones
-    we recommend are preselected for you, but you are encouraged to use the checkboxes to
-    pick and choose.
-    You will probably wish to download `autoencoder-840000` for use with models that
-    were trained with an older version of the Stability VAE.
-
-    <figure markdown>
-
-    </figure>
-
-    Below the preselected list of starter models is a large text field which you can use
-    to specify a series of models to import. You can specify models in a variety of formats,
-    each separated by a space or newline. The formats accepted are:
-
-    - The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
-      the file browser to the textfield to automatically paste the path. Be sure to remove
-      extraneous quotation marks and other things that come along for the ride.
-
-    - The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
-      The directory will be scanned from top to bottom (including subfolders) and any
-      file that can be imported will be.
-
-    - A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
-      and paste directly from a web page, or simply drag the link from the web page
-      or navigation bar. (You can also use ctrl-shift-V to paste into this field)
-      The file will be downloaded and installed.
-
-    - The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
-      the format _author_name/model_name_, as in `andite/anything-v4.0`
-
-    - The path to a local directory containing a `diffusers`
-      model. These directories always have the file `model_index.json`
-      at their top level.
-
-    _Select a directory for models to import_ You may select a local
-    directory for autoimporting at startup time. If you select this
-    option, the directory you choose will be scanned for new
-    .ckpt/.safetensors files each time InvokeAI starts up, and any new
-    files will be automatically imported and made available for your
-    use.
-
-    _Convert imported models into diffusers_ When legacy checkpoint
-    files are imported, you may select to use them unmodified (the
-    default) or to convert them into `diffusers` models. The latter
-    load much faster and have slightly better rendering performance,
-    but not all checkpoint files can be converted. Note that Stable Diffusion
-    Version 2.X files are **only** supported in `diffusers` format and will
-    be converted regardless.
-
-    _You can come back to the model install form_ as many times as you like.
-    From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
-    this script. On the command line, it is named `invokeai-model-install`.
-
-12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
+10. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
     for the directory `invokeai` installed in the location you chose at the
     beginning of the install session. Look for a shell script named `invoke.sh`
     (Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
@@ -349,14 +220,14 @@ experimental versions later.
     http://localhost:9090. Click on this link to open up a browser
     and start exploring InvokeAI's features.

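As a concrete illustration of that first launch from a terminal (a sketch only: it assumes you accepted the installer's default directory name `invokeai` in your home folder; substitute the location you actually chose):

```sh
# Linux/Mac: change into the install directory and run the launcher script
cd ~/invokeai
./invoke.sh

# Windows: open a command prompt in the install directory and run invoke.bat,
# or simply double-click invoke.bat in File Explorer
```

Once the server is running, point your browser at http://localhost:9090 as described above.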
-12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
-    customize its behavior. For example, you can change the location of the
+12. **InvokeAI Options**: You can configure InvokeAI using the `invokeai.yaml` config file.
+    For example, you can change the location of the
     image output directory or balance memory usage vs performance. See
     [Configuration](../features/CONFIGURATION.md) for a full list of the options.

     - To set defaults that will take effect every time you launch InvokeAI,
       use a text editor (e.g. Notepad) to edit the file
-      `invokeai\invokeai.init`. It contains a variety of examples that you can
+      `invokeai\invokeai.yaml`. It contains a variety of examples that you can
       follow to add and modify launch options.
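For illustration, here is a minimal sketch of what such edits to `invokeai.yaml` might look like. The key names below are assumptions made for the example; consult [Configuration](../features/CONFIGURATION.md) for the authoritative list of settings and their exact spelling before copying anything:

```yaml
# invokeai.yaml -- example overrides (key names are illustrative; see the
# Configuration documentation for the settings your version actually supports)
host: 127.0.0.1        # address the web server listens on
port: 9090             # port for the web UI
precision: float16     # trade a little accuracy for lower VRAM use
ram: 12                # size (GB) of the RAM model cache
outputs_dir: outputs   # where generated images are written
```

Settings you do not list keep their defaults, so it is usually enough to add only the handful of lines you want to change.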

     - The launcher script also offers you an option labeled "open the developer
@@ -394,7 +265,6 @@ rm .\.venv -r -force
 python -mvenv .venv
 .\.venv\Scripts\activate
 pip install invokeai
-invokeai-configure --yes --root .
 ```

 If you see anything marked as an error during this process please stop
@@ -426,16 +296,10 @@ error messages:
 This failure mode occurs when there is a network glitch during
 downloading the very large SDXL model.

-To address this, first go to the Web Model Manager and delete the
-Stable-Diffusion-XL-base-1.X model. Then navigate to HuggingFace and
-manually download the .safetensors version of the model. The 1.0
-version is located at
-https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
-and the file is named `sd_xl_base_1.0.safetensors`.
-
-Save this file to disk and then reenter the Model Manager. Navigate to
-Import Models->Add Model, then type (or drag-and-drop) the path to the
-.safetensors file. Press "Add Model".
+To address this, first go to the Model Manager and delete the
+Stable-Diffusion-XL-base-1.X model. Then, click the HuggingFace tab,
+paste the Repo ID `stabilityai/stable-diffusion-xl-base-1.0` and install
+the model.

 ### _Package dependency conflicts_

@@ -488,15 +352,7 @@ download models, etc), but this doesn't fix the problem.

 This issue is often caused by a misconfigured configuration directive in the
 `invokeai\invokeai.init` initialization file that contains startup settings. The
-easiest way to fix the problem is to move the file out of the way and re-run
-`invokeai-configure`. Enter the developer's console (option 3 of the launcher
-script) and run this command:
-
-```cmd
-invokeai-configure --root=.
-```
-
-Note the dot (.) after `--root`. It is part of the command.
+easiest way to fix the problem is to move the file out of the way and restart the app.
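If it helps, here is one hedged way to do that from a terminal (the path and filename are examples: use your actual root directory and the settings file named above for your version; on Windows you can simply rename the file in File Explorer):

```sh
# Move the settings file out of the way, keeping a backup copy
cd ~/invokeai                          # your InvokeAI root directory
mv invokeai.yaml invokeai.yaml.bak     # or invokeai.init, depending on your version
# then restart the app as usual with ./invoke.sh (Linux/Mac) or invoke.bat (Windows)
```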

 _If none of these maneuvers fixes the problem_ then please report the problem to
 the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
@@ -565,16 +421,4 @@ This distribution is changing rapidly, and we add new features
 regularly. Releases are announced at
 http://github.com/invoke-ai/InvokeAI/releases, and at
 https://pypi.org/project/InvokeAI/ To update to the latest released
-version (recommended), follow these steps:
-
-1. Start the `invoke.sh`/`invoke.bat` launch script from within the
-   `invokeai` root directory.
-
-2. Choose menu item (10) "Update InvokeAI".
-
-3. This will launch a menu that gives you the option of:
-
-    1. Updating to the latest official release;
-    2. Updating to the bleeding-edge development version; or
-    3. Manually entering the tag or branch name of a version of
-       InvokeAI you wish to try out.
+version (recommended), download the latest release and run the installer.