Commit c54c1f6

Merge branch 'main' into bugfix/set-vram-on-macs
2 parents: c965d3e + dfbcb77

118 files changed: +3364, -7585 lines


.github/CODEOWNERS

Lines changed: 18 additions & 18 deletions
@@ -1,34 +1,34 @@
 # continuous integration
-/.github/workflows/ @lstein @blessedcoolant
+/.github/workflows/ @lstein @blessedcoolant @hipsterusername
 
 # documentation
 /docs/ @lstein @blessedcoolant @hipsterusername @Millu
-/mkdocs.yml @lstein @blessedcoolant
+/mkdocs.yml @lstein @blessedcoolant @hipsterusername @Millu
 
 # nodes
-/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising
+/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername
 
 # installation and configuration
-/pyproject.toml @lstein @blessedcoolant
-/docker/ @lstein @blessedcoolant
-/scripts/ @ebr @lstein
-/installer/ @lstein @ebr
-/invokeai/assets @lstein @ebr
-/invokeai/configs @lstein
-/invokeai/version @lstein @blessedcoolant
+/pyproject.toml @lstein @blessedcoolant @hipsterusername
+/docker/ @lstein @blessedcoolant @hipsterusername
+/scripts/ @ebr @lstein @hipsterusername
+/installer/ @lstein @ebr @hipsterusername
+/invokeai/assets @lstein @ebr @hipsterusername
+/invokeai/configs @lstein @hipsterusername
+/invokeai/version @lstein @blessedcoolant @hipsterusername
 
 # web ui
-/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp
-/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp
+/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
+/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
 
 # generation, model management, postprocessing
-/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick
+/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername
 
 # front ends
-/invokeai/frontend/CLI @lstein
-/invokeai/frontend/install @lstein @ebr
-/invokeai/frontend/merge @lstein @blessedcoolant
-/invokeai/frontend/training @lstein @blessedcoolant
-/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp
+/invokeai/frontend/CLI @lstein @hipsterusername
+/invokeai/frontend/install @lstein @ebr @hipsterusername
+/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
+/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
+/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername

docs/contributing/INVOCATIONS.md

Lines changed: 5 additions & 3 deletions
@@ -244,8 +244,12 @@ copy-paste the template above.
 We can use the `@invocation` decorator to provide some additional info to the
 UI, like a custom title, tags and category.
 
+We also encourage providing a version. This must be a
+[semver](https://semver.org/) version string ("$MAJOR.$MINOR.$PATCH"). The UI
+will let users know if their workflow is using a mismatched version of the node.
+
 ```python
-@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations")
+@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations", version="1.0.0")
 class ResizeInvocation(BaseInvocation):
     """Resizes an image"""
 
@@ -279,8 +283,6 @@ take a look a at our [contributing nodes overview](contributingNodes).
 
 ## Advanced
 
--->
-
 ### Custom Output Types
 
 Like with custom inputs, sometimes you might find yourself needing custom

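For orientation, here is a minimal sketch of what a complete node carrying the new `version` argument could look like. The decorator arguments mirror the diff above; the import paths and field helpers (`InputField`, `ImageField`, `ImageOutput`, `InvocationContext`) are assumptions based on the surrounding nodes documentation and may differ from the exact API in this commit.

```python
# Sketch only: import paths and helper names are assumed, not verified against this commit.
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput


@invocation(
    "resize",
    title="My Resizer",
    tags=["resize", "image"],
    category="My Invocations",
    version="1.0.0",  # semver string surfaced to the UI for mismatch warnings
)
class ResizeInvocation(BaseInvocation):
    """Resizes an image"""

    image: ImageField = InputField(description="The image to resize")
    width: int = InputField(default=512, description="Target width in pixels")
    height: int = InputField(default=512, description="Target height in pixels")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Resizing logic omitted; see the full walkthrough earlier in INVOCATIONS.md.
        raise NotImplementedError
```

With a version declared, a workflow saved against 1.0.0 of this node can be flagged by the UI if the installed node later reports a different version, rather than failing silently.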
docs/nodes/communityNodes.md

Lines changed: 99 additions & 0 deletions
@@ -22,12 +22,26 @@ To use a community node graph, download the the `.json` node graph file and load
 ![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
 ![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
 
+--------------------------------
 ### Ideal Size
 
 **Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
 
 **Node Link:** https://github.com/JPPhoto/ideal-size-node
 
+--------------------------------
+### Film Grain
+
+**Description:** This node adds a film grain effect to the input image based on the weights, seeds, and blur radii parameters. It works with RGB input images only.
+
+**Node Link:** https://github.com/JPPhoto/film-grain-node
+
+--------------------------------
+### Image Picker
+
+**Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
+
+**Node Link:** https://github.com/JPPhoto/image-picker-node
 
 --------------------------------
 ### Retroize

@@ -95,6 +109,91 @@ a Text-Generation-Webui instance (might work remotely too, but I never tried it)
 
 This node works best with SDXL models, especially as the style can be described independantly of the LLM's output.
 
+--------------------------------
+### Depth Map from Wavefront OBJ
+
+**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
+
+To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
+
+**Node Link:** https://github.com/dwringer/depth-from-obj-node
+
+**Example Usage:**
+![depth from obj usage graph](https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg)
+
+--------------------------------
+### Enhance Image (simple adjustments)
+
+**Description:** Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
+
+Color inversion is toggled with a simple switch, while each of the four enhancer modes are activated by entering a value other than 1 in each corresponding input field. Values less than 1 will reduce the corresponding property, while values greater than 1 will enhance it.
+
+**Node Link:** https://github.com/dwringer/image-enhance-node
+
+**Example Usage:**
+![enhance image usage graph](https://raw.githubusercontent.com/dwringer/image-enhance-node/main/image_enhance_usage.jpg)
+
+--------------------------------
+### Generative Grammar-Based Prompt Nodes
+
+**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no more nonterminal terms remain in the string.
+
+This includes 3 Nodes:
+- *Lookup Table from File* - loads a YAML file "prompt" section (or of a whole folder of YAML's) into a JSON-ified dictionary (Lookups output)
+- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
+- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
+
+**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes
+
+**Example Usage:**
+![lookups usage example graph](https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg)
+
+--------------------------------
+### Image and Mask Composition Pack
+
+**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
+
+This includes 4 Nodes:
+- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
+- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
+- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
+- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
+
+**Node Link:** https://github.com/dwringer/composition-nodes
+
+**Example Usage:**
+![composition nodes usage graph](https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_nodes_usage.jpg)
+
+--------------------------------
+### Size Stepper Nodes
+
+**Description:** This is a set of nodes for calculating the necessary size increments for doing upscaling workflows. Use the *Final Size & Orientation* node to enter your full size dimensions and orientation (portrait/landscape/random), then plug that and your initial generation dimensions into the *Ideal Size Stepper* and get 1, 2, or 3 intermediate pairs of dimensions for upscaling. Note this does not output the initial size or full size dimensions: the 1, 2, or 3 outputs of this node are only the intermediate sizes.
+
+A third node is included, *Random Switch (Integers)*, which is just a generic version of Final Size with no orientation selection.
+
+**Node Link:** https://github.com/dwringer/size-stepper-nodes
+
+**Example Usage:**
+![size stepper usage graph](https://raw.githubusercontent.com/dwringer/size-stepper-nodes/main/size_nodes_usage.jpg)
+
+--------------------------------
+
+### Text font to Image
+
+**Description:** text font to text image node for InvokeAI, download a font to use (or if in font cache uses it from there), the text is always resized to the image size, but can control that with padding, optional 2nd line
+
+**Node Link:** https://github.com/mickr777/textfontimage
+
+**Output Examples**
+
+![a3609d48-d9b7-41f0-b280-063d857986fb](https://github.com/mickr777/InvokeAI/assets/115216705/c21b0af3-d9c6-4c16-9152-846a23effd36)
+
+Results after using the depth controlnet
+
+![9133eabb-bcda-4326-831e-1b641228b178](https://github.com/mickr777/InvokeAI/assets/115216705/915f1a53-968e-43eb-aa61-07cd8f1a733a)
+![4f9a3fa8-9be9-4236-8a3e-fcec66decd2a](https://github.com/mickr777/InvokeAI/assets/115216705/821ef89e-8a60-44f5-b94e-471a9d8690cc)
+![babd69c4-9d60-4a55-a834-5e8397f62610](https://github.com/mickr777/InvokeAI/assets/115216705/2befcb6d-49f4-4bfd-b5fc-1fee19274f89)
+
 --------------------------------
 
 ### Example Node Template

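The Enhance Image entry added above describes a thin wrapper around Pillow's `ImageEnhance` module. As a rough, generic illustration of that kind of adjustment (not the community node's actual code), each enhancer takes a factor where 1.0 is a no-op, values below 1 reduce the property, and values above 1 boost it; the file path in the usage example is hypothetical.

```python
# Generic Pillow sketch of saturation/contrast/brightness/sharpness adjustment
# plus color inversion; illustrative only, not the community node's implementation.
from PIL import Image, ImageEnhance, ImageOps


def enhance(
    image: Image.Image,
    color: float = 1.0,
    contrast: float = 1.0,
    brightness: float = 1.0,
    sharpness: float = 1.0,
    invert: bool = False,
) -> Image.Image:
    # A factor of 1.0 leaves the image unchanged.
    image = ImageEnhance.Color(image).enhance(color)
    image = ImageEnhance.Contrast(image).enhance(contrast)
    image = ImageEnhance.Brightness(image).enhance(brightness)
    image = ImageEnhance.Sharpness(image).enhance(sharpness)
    if invert:
        image = ImageOps.invert(image.convert("RGB"))
    return image


# Example: mildly boost saturation and contrast on a local file (path is hypothetical).
result = enhance(Image.open("input.png").convert("RGB"), color=1.2, contrast=1.1)
result.save("enhanced.png")
```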
docs/nodes/defaultNodes.md

Lines changed: 2 additions & 2 deletions
@@ -35,13 +35,13 @@ The table below contains a list of the default nodes shipped with InvokeAI and t
 |Inverse Lerp Image | Inverse linear interpolation of all pixels of an image|
 |Image Primitive | An image primitive value|
 |Lerp Image | Linear interpolation of all pixels of an image|
-|Image Luminosity Adjustment | Adjusts the Luminosity (Value) of an image.|
+|Offset Image Channel | Add to or subtract from an image color channel by a uniform value.|
+|Multiply Image Channel | Multiply or Invert an image color channel by a scalar value.|
 |Multiply Images | Multiplies two images together using `PIL.ImageChops.multiply()`.|
 |Blur NSFW Image | Add blur to NSFW-flagged images|
 |Paste Image | Pastes an image into another image.|
 |ImageProcessor | Base class for invocations that preprocess images for ControlNet|
 |Resize Image | Resizes an image to specific dimensions|
-|Image Saturation Adjustment | Adjusts the Saturation of an image.|
 |Scale Image | Scales an image by a factor|
 |Image to Latents | Encodes an image into latents.|
 |Add Invisible Watermark | Add an invisible watermark to an image|

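The two rows added above (Offset Image Channel, Multiply Image Channel) describe uniform per-channel arithmetic. A minimal NumPy/Pillow sketch of what those operations amount to, as a generic illustration rather than the InvokeAI nodes' implementation:

```python
# Generic illustration of the channel operations named in the table above;
# not the InvokeAI nodes' code.
import numpy as np
from PIL import Image


def offset_channel(image: Image.Image, channel: int, offset: int) -> Image.Image:
    """Add `offset` to one RGB channel (0=R, 1=G, 2=B), clamping to 0-255."""
    arr = np.array(image.convert("RGB"), dtype=np.int16)
    arr[..., channel] = np.clip(arr[..., channel] + offset, 0, 255)
    return Image.fromarray(arr.astype(np.uint8))


def multiply_channel(image: Image.Image, channel: int, scale: float, invert: bool = False) -> Image.Image:
    """Scale one RGB channel by `scale`, optionally inverting it first."""
    arr = np.array(image.convert("RGB"), dtype=np.float32)
    chan = 255.0 - arr[..., channel] if invert else arr[..., channel]
    arr[..., channel] = np.clip(chan * scale, 0.0, 255.0)
    return Image.fromarray(arr.astype(np.uint8))
```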
invokeai/app/api/routers/app_info.py

Lines changed: 6 additions & 6 deletions
@@ -1,19 +1,19 @@
 import typing
 from enum import Enum
+from pathlib import Path
+
 from fastapi import Body
 from fastapi.routing import APIRouter
-from pathlib import Path
 from pydantic import BaseModel, Field
 
+from invokeai.app.invocations.upscale import ESRGAN_MODELS
+from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.patchmatch import PatchMatch
 from invokeai.backend.image_util.safety_checker import SafetyChecker
-from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
-from invokeai.app.invocations.upscale import ESRGAN_MODELS
-
+from invokeai.backend.util.logging import logging
 from invokeai.version import __version__
 
 from ..dependencies import ApiDependencies
-from invokeai.backend.util.logging import logging
 
 
 class LogLevel(int, Enum):

@@ -55,7 +55,7 @@ async def get_version() -> AppVersion:
 
 @app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
 async def get_config() -> AppConfig:
-    infill_methods = ["tile", "lama"]
+    infill_methods = ["tile", "lama", "cv2"]
     if PatchMatch.patchmatch_available():
         infill_methods.append("patchmatch")
 
