Commit eb90ea4

Merge branch 'main' into textfontimage
2 parents 945b9e3 + 42ee95e commit eb90ea4

87 files changed (+3233, -1204 lines)


docs/contributing/INVOCATIONS.md

Lines changed: 5 additions & 3 deletions

@@ -244,8 +244,12 @@ copy-paste the template above.
 We can use the `@invocation` decorator to provide some additional info to the
 UI, like a custom title, tags and category.
 
+We also encourage providing a version. This must be a
+[semver](https://semver.org/) version string ("$MAJOR.$MINOR.$PATCH"). The UI
+will let users know if their workflow is using a mismatched version of the node.
+
 ```python
-@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations")
+@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations", version="1.0.0")
 class ResizeInvocation(BaseInvocation):
     """Resizes an image"""
 
@@ -279,8 +283,6 @@ take a look a at our [contributing nodes overview](contributingNodes).
 
 ## Advanced
 
--->
-
 ### Custom Output Types
 
 Like with custom inputs, sometimes you might find yourself needing custom
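The docs change above says the UI warns users when a workflow's saved node version differs from the installed one. At its core, that check is a comparison of two "$MAJOR.$MINOR.$PATCH" strings. A minimal stdlib-only sketch of such a comparison (the function names here are illustrative, not the actual InvokeAI implementation):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a "$MAJOR.$MINOR.$PATCH" string into an integer triple."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))


def is_mismatched(workflow_version: str, installed_version: str) -> bool:
    """True when the workflow was saved against a different node version."""
    return parse_semver(workflow_version) != parse_semver(installed_version)


print(is_mismatched("1.0.0", "1.0.0"))  # False
print(is_mismatched("1.0.0", "2.0.0"))  # True
```

Parsing into integer tuples (rather than comparing strings) also gives correct ordering, e.g. `(1, 10, 0) > (1, 9, 0)`, which naive string comparison would get wrong.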

docs/nodes/communityNodes.md

Lines changed: 67 additions & 0 deletions

@@ -109,6 +109,73 @@ a Text-Generation-Webui instance (might work remotely too, but I never tried it)
 
 This node works best with SDXL models, especially as the style can be described independantly of the LLM's output.
 
+--------------------------------
+### Depth Map from Wavefront OBJ
+
+**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
+
+To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
+
+**Node Link:** https://github.com/dwringer/depth-from-obj-node
+
+**Example Usage:**
+![depth from obj usage graph](https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg)
+
+--------------------------------
+### Enhance Image (simple adjustments)
+
+**Description:** Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
+
+Color inversion is toggled with a simple switch, while each of the four enhancer modes are activated by entering a value other than 1 in each corresponding input field. Values less than 1 will reduce the corresponding property, while values greater than 1 will enhance it.
+
+**Node Link:** https://github.com/dwringer/image-enhance-node
+
+**Example Usage:**
+![enhance image usage graph](https://raw.githubusercontent.com/dwringer/image-enhance-node/main/image_enhance_usage.jpg)
+
+--------------------------------
+### Generative Grammar-Based Prompt Nodes
+
+**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no more nonterminal terms remain in the string.
+
+This includes 3 Nodes:
+- *Lookup Table from File* - loads a YAML file "prompt" section (or of a whole folder of YAML's) into a JSON-ified dictionary (Lookups output)
+- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
+- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
+
+**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes
+
+**Example Usage:**
+![lookups usage example graph](https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg)
+
+--------------------------------
+### Image and Mask Composition Pack
+
+**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
+
+This includes 4 Nodes:
+- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
+- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
+- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
+- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
+
+**Node Link:** https://github.com/dwringer/composition-nodes
+
+**Example Usage:**
+![composition nodes usage graph](https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_nodes_usage.jpg)
+
+--------------------------------
+### Size Stepper Nodes
+
+**Description:** This is a set of nodes for calculating the necessary size increments for doing upscaling workflows. Use the *Final Size & Orientation* node to enter your full size dimensions and orientation (portrait/landscape/random), then plug that and your initial generation dimensions into the *Ideal Size Stepper* and get 1, 2, or 3 intermediate pairs of dimensions for upscaling. Note this does not output the initial size or full size dimensions: the 1, 2, or 3 outputs of this node are only the intermediate sizes.
+
+A third node is included, *Random Switch (Integers)*, which is just a generic version of Final Size with no orientation selection.
+
+**Node Link:** https://github.com/dwringer/size-stepper-nodes
+
+**Example Usage:**
+![size stepper usage graph](https://raw.githubusercontent.com/dwringer/size-stepper-nodes/main/size_nodes_usage.jpg)
+
 --------------------------------
 
 ### Text font to Image

docs/nodes/defaultNodes.md

Lines changed: 2 additions & 2 deletions

@@ -35,13 +35,13 @@ The table below contains a list of the default nodes shipped with InvokeAI and t
 |Inverse Lerp Image | Inverse linear interpolation of all pixels of an image|
 |Image Primitive | An image primitive value|
 |Lerp Image | Linear interpolation of all pixels of an image|
-|Image Luminosity Adjustment | Adjusts the Luminosity (Value) of an image.|
+|Offset Image Channel | Add to or subtract from an image color channel by a uniform value.|
+|Multiply Image Channel | Multiply or Invert an image color channel by a scalar value.|
 |Multiply Images | Multiplies two images together using `PIL.ImageChops.multiply()`.|
 |Blur NSFW Image | Add blur to NSFW-flagged images|
 |Paste Image | Pastes an image into another image.|
 |ImageProcessor | Base class for invocations that preprocess images for ControlNet|
 |Resize Image | Resizes an image to specific dimensions|
-|Image Saturation Adjustment | Adjusts the Saturation of an image.|
 |Scale Image | Scales an image by a factor|
 |Image to Latents | Encodes an image into latents.|
 |Add Invisible Watermark | Add an invisible watermark to an image|

invokeai/app/api/routers/app_info.py

Lines changed: 6 additions & 6 deletions

@@ -1,19 +1,19 @@
 import typing
 from enum import Enum
+from pathlib import Path
+
 from fastapi import Body
 from fastapi.routing import APIRouter
-from pathlib import Path
 from pydantic import BaseModel, Field
 
+from invokeai.app.invocations.upscale import ESRGAN_MODELS
+from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.patchmatch import PatchMatch
 from invokeai.backend.image_util.safety_checker import SafetyChecker
-from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
-from invokeai.app.invocations.upscale import ESRGAN_MODELS
-
+from invokeai.backend.util.logging import logging
 from invokeai.version import __version__
 
 from ..dependencies import ApiDependencies
-from invokeai.backend.util.logging import logging
 
 
 class LogLevel(int, Enum):

@@ -55,7 +55,7 @@ async def get_version() -> AppVersion:
 
 @app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
 async def get_config() -> AppConfig:
-    infill_methods = ["tile", "lama"]
+    infill_methods = ["tile", "lama", "cv2"]
     if PatchMatch.patchmatch_available():
         infill_methods.append("patchmatch")
 
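The `get_config` change above follows a common capability-probing pattern: start with the methods that are always available and append optional ones only when their dependency is importable. A self-contained sketch of that pattern (the `PatchMatch` class here is a stand-in stub; the real check lives in `invokeai.backend.image_util.patchmatch`):

```python
class PatchMatch:
    """Stub standing in for the real optional-dependency probe."""

    @staticmethod
    def patchmatch_available() -> bool:
        # Pretend the optional patchmatch dependency is not installed.
        return False


# Methods that ship with the app are always listed.
infill_methods = ["tile", "lama", "cv2"]

# Optional methods are appended only when their backend is usable.
if PatchMatch.patchmatch_available():
    infill_methods.append("patchmatch")

print(infill_methods)  # ["tile", "lama", "cv2"]
```

Probing at request time (rather than import time) lets the config endpoint report exactly what the running installation supports.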

invokeai/app/invocations/baseinvocation.py

Lines changed: 58 additions & 13 deletions

@@ -26,11 +26,16 @@
 from pydantic import BaseModel, Field, validator
 from pydantic.fields import Undefined, ModelField
 from pydantic.typing import NoArgAnyCallable
+import semver
 
 if TYPE_CHECKING:
     from ..services.invocation_services import InvocationServices
 
 
+class InvalidVersionError(ValueError):
+    pass
+
+
 class FieldDescriptions:
     denoising_start = "When to start denoising, expressed a percentage of total steps"
     denoising_end = "When to stop denoising, expressed a percentage of total steps"

@@ -105,24 +110,39 @@ class UIType(str, Enum):
     """
 
     # region Primitives
-    Integer = "integer"
-    Float = "float"
     Boolean = "boolean"
-    String = "string"
-    Array = "array"
-    Image = "ImageField"
-    Latents = "LatentsField"
+    Color = "ColorField"
     Conditioning = "ConditioningField"
     Control = "ControlField"
-    Color = "ColorField"
-    ImageCollection = "ImageCollection"
-    ConditioningCollection = "ConditioningCollection"
+    Float = "float"
+    Image = "ImageField"
+    Integer = "integer"
+    Latents = "LatentsField"
+    String = "string"
+    # endregion
+
+    # region Collection Primitives
+    BooleanCollection = "BooleanCollection"
     ColorCollection = "ColorCollection"
-    LatentsCollection = "LatentsCollection"
-    IntegerCollection = "IntegerCollection"
+    ConditioningCollection = "ConditioningCollection"
+    ControlCollection = "ControlCollection"
     FloatCollection = "FloatCollection"
+    ImageCollection = "ImageCollection"
+    IntegerCollection = "IntegerCollection"
+    LatentsCollection = "LatentsCollection"
     StringCollection = "StringCollection"
-    BooleanCollection = "BooleanCollection"
+    # endregion
+
+    # region Polymorphic Primitives
+    BooleanPolymorphic = "BooleanPolymorphic"
+    ColorPolymorphic = "ColorPolymorphic"
+    ConditioningPolymorphic = "ConditioningPolymorphic"
+    ControlPolymorphic = "ControlPolymorphic"
+    FloatPolymorphic = "FloatPolymorphic"
+    ImagePolymorphic = "ImagePolymorphic"
+    IntegerPolymorphic = "IntegerPolymorphic"
+    LatentsPolymorphic = "LatentsPolymorphic"
+    StringPolymorphic = "StringPolymorphic"
     # endregion
 
     # region Models

@@ -176,6 +196,7 @@ class _InputField(BaseModel):
     ui_type: Optional[UIType]
     ui_component: Optional[UIComponent]
     ui_order: Optional[int]
+    item_default: Optional[Any]
 
 
 class _OutputField(BaseModel):

@@ -223,6 +244,7 @@ def InputField(
     ui_component: Optional[UIComponent] = None,
     ui_hidden: bool = False,
     ui_order: Optional[int] = None,
+    item_default: Optional[Any] = None,
     **kwargs: Any,
 ) -> Any:
     """

@@ -249,6 +271,11 @@ def InputField(
     For this case, you could provide `UIComponent.Textarea`.
 
     : param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
+
+    : param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
+
+    : param bool item_default: [None] Specifies the default item value, if this is a collection input. \
+      Ignored for non-collection fields..
     """
     return Field(
         *args,

@@ -282,6 +309,7 @@ def InputField(
         ui_component=ui_component,
         ui_hidden=ui_hidden,
         ui_order=ui_order,
+        item_default=item_default,
         **kwargs,
     )
 

@@ -332,6 +360,8 @@ def OutputField(
     `UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
 
     : param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI. \
+
+    : param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
     """
     return Field(
         *args,

@@ -376,6 +406,9 @@ class UIConfigBase(BaseModel):
     tags: Optional[list[str]] = Field(default_factory=None, description="The node's tags")
     title: Optional[str] = Field(default=None, description="The node's display name")
     category: Optional[str] = Field(default=None, description="The node's category")
+    version: Optional[str] = Field(
+        default=None, description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".'
+    )
 
 
 class InvocationContext:

@@ -474,6 +507,8 @@ def schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
                 schema["tags"] = uiconfig.tags
             if uiconfig and hasattr(uiconfig, "category"):
                 schema["category"] = uiconfig.category
+            if uiconfig and hasattr(uiconfig, "version"):
+                schema["version"] = uiconfig.version
             if "required" not in schema or not isinstance(schema["required"], list):
                 schema["required"] = list()
             schema["required"].extend(["type", "id"])

@@ -542,7 +577,11 @@ def validate_workflow_is_json(cls, v):
 
 
 def invocation(
-    invocation_type: str, title: Optional[str] = None, tags: Optional[list[str]] = None, category: Optional[str] = None
+    invocation_type: str,
+    title: Optional[str] = None,
+    tags: Optional[list[str]] = None,
+    category: Optional[str] = None,
+    version: Optional[str] = None,
 ) -> Callable[[Type[GenericBaseInvocation]], Type[GenericBaseInvocation]]:
     """
     Adds metadata to an invocation.

@@ -569,6 +608,12 @@ def wrapper(cls: Type[GenericBaseInvocation]) -> Type[GenericBaseInvocation]:
             cls.UIConfig.tags = tags
         if category is not None:
             cls.UIConfig.category = category
+        if version is not None:
+            try:
+                semver.Version.parse(version)
+            except ValueError as e:
+                raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
+            cls.UIConfig.version = version
 
         # Add the invocation type to the pydantic model of the invocation
         invocation_type_annotation = Literal[invocation_type]  # type: ignore
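The validation added to `invocation()` rejects malformed version strings at class-definition time, so a bad `version="banana"` fails fast rather than surfacing later in the UI. The commit delegates parsing to the `semver` package; a rough stdlib-only stand-in using a deliberately simplified regex (the real `semver.Version.parse` also accepts prerelease and build suffixes such as `1.0.0-rc.1+build.5`):

```python
import re


class InvalidVersionError(ValueError):
    pass


# Simplified "$MAJOR.$MINOR.$PATCH" check; a stand-in for semver.Version.parse,
# which accepts a richer grammar (prerelease and build metadata).
_SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")


def validate_version(invocation_type: str, version: str) -> str:
    """Raise InvalidVersionError for strings that are not plain semver triples."""
    if not _SEMVER_RE.match(version):
        raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"')
    return version


validate_version("resize", "1.0.0")  # accepted
# validate_version("resize", "banana")  # would raise InvalidVersionError
```

Raising a dedicated `InvalidVersionError` subclass of `ValueError`, as the commit does, lets callers catch version problems specifically while generic `ValueError` handlers still work.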

invokeai/app/invocations/collections.py

Lines changed: 5 additions & 1 deletion

@@ -10,7 +10,9 @@
 from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
 
 
-@invocation("range", title="Integer Range", tags=["collection", "integer", "range"], category="collections")
+@invocation(
+    "range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
+)
 class RangeInvocation(BaseInvocation):
     """Creates a range of numbers from start to stop with step"""
 

@@ -33,6 +35,7 @@ def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
     title="Integer Range of Size",
     tags=["collection", "integer", "size", "range"],
     category="collections",
+    version="1.0.0",
 )
 class RangeOfSizeInvocation(BaseInvocation):
     """Creates a range from start to start + size with step"""

@@ -50,6 +53,7 @@ def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
     title="Random Range",
     tags=["range", "integer", "random", "collection"],
     category="collections",
+    version="1.0.0",
 )
 class RandomRangeInvocation(BaseInvocation):
     """Creates a collection of random numbers"""

invokeai/app/invocations/compel.py

Lines changed: 4 additions & 2 deletions

@@ -44,7 +44,7 @@ class ConditioningFieldData:
 # PerpNeg = "perp_neg"
 
 
-@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning")
+@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning", version="1.0.0")
 class CompelInvocation(BaseInvocation):
     """Parse prompt using compel package to conditioning."""
 

@@ -267,6 +267,7 @@ def _lora_loader():
     title="SDXL Prompt",
     tags=["sdxl", "compel", "prompt"],
     category="conditioning",
+    version="1.0.0",
 )
 class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
     """Parse prompt using compel package to conditioning."""

@@ -351,6 +352,7 @@ def invoke(self, context: InvocationContext) -> ConditioningOutput:
     title="SDXL Refiner Prompt",
     tags=["sdxl", "compel", "prompt"],
     category="conditioning",
+    version="1.0.0",
 )
 class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
     """Parse prompt using compel package to conditioning."""

@@ -403,7 +405,7 @@ class ClipSkipInvocationOutput(BaseInvocationOutput):
     clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
 
 
-@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning")
+@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning", version="1.0.0")
 class ClipSkipInvocation(BaseInvocation):
     """Skip layers in clip text_encoder model."""
 
