95 commits
3f3f941
feat(mm): add UnknownModelConfig
psychedelicious Sep 18, 2025
b68871a
refactor(ui): move model categorisation-ish logic to central location…
psychedelicious Sep 18, 2025
bd893cf
refactor(ui): more cleanup of model categories
psychedelicious Sep 18, 2025
fa47e23
refactor(ui): remove unused excludeSubmodels
psychedelicious Sep 18, 2025
7f9022e
feat(nodes): add unknown as model base
psychedelicious Sep 18, 2025
a87fcfd
chore(ui): typegen
psychedelicious Sep 18, 2025
e348105
feat(ui): add unknown model base support in ui
psychedelicious Sep 18, 2025
3f82c38
feat(ui): allow changing model type in MM, fix up base and variant se…
psychedelicious Sep 18, 2025
c9dd115
feat(mm): omit model description instead of making it "base type file…
psychedelicious Sep 18, 2025
57787e3
feat(app): add setting to allow unknown models
psychedelicious Sep 18, 2025
82409d1
feat(ui): allow changing model format in MM
psychedelicious Sep 18, 2025
39bb60a
feat(app): add the installed model config to install complete events
psychedelicious Sep 18, 2025
d6b72a3
chore(ui): typegen
psychedelicious Sep 18, 2025
b18916d
feat(ui): toast warning when installed model is unidentified
psychedelicious Sep 18, 2025
0159634
docs: update config docstrings
psychedelicious Sep 18, 2025
15e5c9a
chore(ui): typegen
psychedelicious Sep 18, 2025
4070f26
tests(mm): fix test for MM, leave the UnknownModelConfig class in the…
psychedelicious Sep 18, 2025
73bed0d
tidy(ui): prefer types from zod schemas for model attrs
psychedelicious Sep 18, 2025
b9c7c6a
chore(ui): lint
psychedelicious Sep 18, 2025
7f3e5ce
fix(ui): wrong translation string
psychedelicious Sep 18, 2025
abecb6d
feat(mm): normalized model storage
psychedelicious Sep 18, 2025
e42d9c9
feat(mm): add migration to flat model storage
psychedelicious Sep 18, 2025
af25393
fix(mm): normalized multi-file/diffusers model installation no worky
psychedelicious Sep 19, 2025
12d6a69
refactor: port MM probes to new api
psychedelicious Sep 23, 2025
9d41833
feat(mm): port TIs to new API
psychedelicious Sep 23, 2025
91b5f79
tidy(mm): remove unused probes
psychedelicious Sep 23, 2025
02380fc
feat(mm): port spandrel to new API
psychedelicious Sep 23, 2025
b2d33b0
fix(mm): parsing for spandrel
psychedelicious Sep 23, 2025
d683a55
fix(mm): loader for clip embed
psychedelicious Sep 23, 2025
8bfe152
fix(mm): tis use existing weight_files method
psychedelicious Sep 23, 2025
5aea3ab
feat(mm): port vae to new API
psychedelicious Sep 23, 2025
3a94315
fix(mm): vae class inheritance and config_path
psychedelicious Sep 23, 2025
f81c1dc
tidy(mm): patcher types and import paths
psychedelicious Sep 23, 2025
0648aa5
feat(mm): better errors when invalid model config found in db
psychedelicious Sep 23, 2025
d9ce393
feat(mm): port t5 to new API
psychedelicious Sep 23, 2025
ae25948
feat(mm): make config_path optional
psychedelicious Sep 23, 2025
ba839ce
refactor(mm): simplify model classification process
psychedelicious Sep 24, 2025
640d2e7
refactor(mm): remove unused methods in config.py
psychedelicious Sep 24, 2025
c483ce0
refactor(mm): add model config parsing utils
psychedelicious Sep 24, 2025
1b6bd5e
fix(mm): abstractmethod bork
psychedelicious Sep 24, 2025
6dd87c7
tidy(mm): clarify that model id utils are private
psychedelicious Sep 24, 2025
8c3a1f3
fix(mm): fall back to UnknownModelConfig correctly
psychedelicious Sep 24, 2025
7507be1
feat(mm): port CLIPVisionDiffusersConfig to new api
psychedelicious Sep 24, 2025
f644866
feat(mm): port SigLIPDiffusersConfig to new api
psychedelicious Sep 24, 2025
d30a826
feat(mm): make match helpers more succint
psychedelicious Sep 24, 2025
d03131c
feat(mm): port flux redux to new api
psychedelicious Sep 24, 2025
12519f1
feat(mm): port ip adapter to new api
psychedelicious Sep 24, 2025
b5269f6
tidy(mm): skip optimistic override handling for now
psychedelicious Sep 24, 2025
b6a4e63
refactor(mm): continue iterating on config
psychedelicious Sep 25, 2025
5b14492
feat(mm): port flux "control lora" and t2i adapter to new api
psychedelicious Sep 25, 2025
b9a9ce5
tidy(ui): use Extract to get model config types
psychedelicious Sep 25, 2025
161ef9c
fix(mm): t2i base determination
psychedelicious Sep 25, 2025
b413a12
feat(mm): port cnet to new api
psychedelicious Sep 25, 2025
eddc0a4
refactor(mm): add config validation utils, make it all consistent and…
psychedelicious Sep 25, 2025
c5eda48
feat(mm): wip port of main models to new api
psychedelicious Sep 25, 2025
0687546
feat(mm): wip port of main models to new api
psychedelicious Sep 25, 2025
38f0024
feat(mm): wip port of main models to new api
psychedelicious Sep 25, 2025
98b29ad
docs(mm): add todos
psychedelicious Sep 26, 2025
b86a876
tidy(mm): removed unused model merge class
psychedelicious Sep 29, 2025
a500280
feat(mm): wip port main models to new api
psychedelicious Sep 29, 2025
e6f2f6c
tidy(mm): clean up model heuristic utils
psychedelicious Oct 1, 2025
cf44bfa
tidy(mm): clean up ModelOnDisk caching
psychedelicious Oct 1, 2025
9b843ef
tidy(mm): flux lora format util
psychedelicious Oct 1, 2025
97ce406
refactor(mm): make config classes narrow
psychedelicious Oct 1, 2025
596d85e
refactor(mm): diffusers loras
psychedelicious Oct 1, 2025
3e8520f
feat(mm): consistent naming for all model config classes
psychedelicious Oct 1, 2025
10b9064
fix(mm): tag generation & scattered probe fixes
psychedelicious Oct 1, 2025
0426153
tidy(mm): consistent class names
psychedelicious Oct 2, 2025
c1ae605
refactor(mm): split configs into separate files
psychedelicious Oct 3, 2025
9c5d0a0
docs(mm): add comments for identification utils
psychedelicious Oct 6, 2025
6c248bd
chore(ui): typegen
psychedelicious Oct 6, 2025
ab8af54
refactor(mm): remove legacy probe, new configs dir structure, update …
psychedelicious Oct 7, 2025
248db55
fix(mm): inverted condition
psychedelicious Oct 7, 2025
40570de
docs(mm): update docsstrings in factory.py
psychedelicious Oct 7, 2025
8121032
docs(mm): document flux variant attr
psychedelicious Oct 7, 2025
4003456
feat(mm): add helper method for legacy configs
psychedelicious Oct 7, 2025
4f1c8e6
feat(mm): satisfy type checker in flux denoise
psychedelicious Oct 7, 2025
05957e5
docs(mm): remove extraneous comment
psychedelicious Oct 7, 2025
64dbf23
fix(mm): ensure unknown model configs get unknown attrs
psychedelicious Oct 7, 2025
ead62ed
fix(mm): t5 identification
psychedelicious Oct 7, 2025
50fc362
fix(mm): sdxl ip adapter identification
psychedelicious Oct 7, 2025
e4c0aa0
feat(mm): more flexible config matching utils
psychedelicious Oct 7, 2025
582df91
fix(mm): clip vision identification
psychedelicious Oct 7, 2025
48e7240
feat(mm): add sanity checks before probing paths
psychedelicious Oct 7, 2025
04cac15
docs(mm): add reminder for self for field migrations
psychedelicious Oct 7, 2025
3054d18
feat(mm): clearer naming for main config class hierarchy
psychedelicious Oct 8, 2025
0f63937
feat(mm): fix clip vision starter model bases, add ref to actual models
psychedelicious Oct 8, 2025
fe816bc
feat(mm): add model config schema migration logic
psychedelicious Oct 8, 2025
9fae842
fix(mm): duplicate import
psychedelicious Oct 8, 2025
2ac30ea
refactor(mm): split big migration into 3
psychedelicious Oct 8, 2025
37d2c37
fix(mm): pop base/type/format when creating unknown model config
psychedelicious Oct 8, 2025
116de0a
fix(db): migration 22 insert only real cols
psychedelicious Oct 8, 2025
f35d1dd
fix(db): migration 23 fall back to unknown model when config change f…
psychedelicious Oct 8, 2025
2d36e6d
feat(db): run migrations 23 and 24
psychedelicious Oct 8, 2025
05cc3b1
wip
psychedelicious Oct 8, 2025
13 changes: 13 additions & 0 deletions invokeai/app/api/dependencies.py
@@ -2,8 +2,10 @@

import asyncio
from logging import Logger
from pathlib import Path

import torch
import json

from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from invokeai.app.services.board_images.board_images_default import BoardImagesService
@@ -187,6 +189,17 @@ def initialize(
)

ApiDependencies.invoker = Invoker(services)
all_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
for m in all_models:
path = Path(m.path)
if path.is_absolute():
continue

metadata_path = config.models_path / m.key / "__metadata__.json"
print(f"Writing metadata for model {m.name} to {metadata_path}")
content = {"source": m.source, "expected_config_attrs": m.model_dump(), "notes": ""}
content_json = json.dumps(content, indent=2)
metadata_path.write_text(content_json)
db.clean()

@staticmethod
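The startup hook above writes a `__metadata__.json` sidecar beside every model whose path is stored relative to the models directory, recording the model's source, its expected config attributes, and an empty notes field. A minimal sketch of reading such a sidecar back, assuming the layout produced by the loop (the helper name is illustrative, not part of this PR):

```python
import json
from pathlib import Path


def read_model_sidecar(models_path: Path, model_key: str) -> dict:
    """Load the __metadata__.json sidecar written at startup for a normalized model.

    Returns a dict with "source", "expected_config_attrs" and "notes" keys, or an
    empty dict if no sidecar exists. Illustrative helper only, not part of the PR.
    """
    sidecar = models_path / model_key / "__metadata__.json"
    if not sidecar.exists():
        return {}
    return json.loads(sidecar.read_text())
```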
30 changes: 20 additions & 10 deletions invokeai/app/api/routers/model_manager.py
@@ -28,10 +28,12 @@
UnknownModelException,
)
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager import BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.config import (
AnyModelConfig,
MainCheckpointConfig,
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.configs.main import (
Main_Checkpoint_SD1_Config,
Main_Checkpoint_SD2_Config,
Main_Checkpoint_SDXL_Config,
Main_Checkpoint_SDXLRefiner_Config,
)
from invokeai.backend.model_manager.load.model_cache.cache_stats import CacheStats
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
@@ -44,6 +46,7 @@
StarterModelBundle,
StarterModelWithoutDependencies,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType

model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])

@@ -297,10 +300,8 @@ async def update_model_record(
"""Update a model's config."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_manager.store
installer = ApiDependencies.invoker.services.model_manager.install
try:
record_store.update_model(key, changes=changes)
config = installer.sync_model_path(key)
config = record_store.update_model(key, changes=changes)
config = add_cover_image_to_model_config(config, ApiDependencies)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
@@ -743,9 +744,18 @@ async def convert_model(
logger.error(str(e))
raise HTTPException(status_code=424, detail=str(e))

if not isinstance(model_config, MainCheckpointConfig):
logger.error(f"The model with key {key} is not a main checkpoint model.")
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
if not isinstance(
model_config,
(
Main_Checkpoint_SD1_Config,
Main_Checkpoint_SD2_Config,
Main_Checkpoint_SDXL_Config,
Main_Checkpoint_SDXLRefiner_Config,
),
):
msg = f"The model with key {key} is not a main SD 1/2/XL checkpoint model."
logger.error(msg)
raise HTTPException(400, msg)

with TemporaryDirectory(dir=ApiDependencies.invoker.services.configuration.models_path) as tmpdir:
convert_path = pathlib.Path(tmpdir) / pathlib.Path(model_config.path).stem
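With `MainCheckpointConfig` removed, the convert route now has to enumerate the concrete main-checkpoint config classes it accepts. A small sketch of factoring that check into a reusable alias, assuming the four classes shown in the diff are the complete set of convertible configs (the alias and helper names are hypothetical):

```python
from invokeai.backend.model_manager.configs.main import (
    Main_Checkpoint_SD1_Config,
    Main_Checkpoint_SD2_Config,
    Main_Checkpoint_SDXL_Config,
    Main_Checkpoint_SDXLRefiner_Config,
)

# Hypothetical alias; the PR inlines this tuple directly in the isinstance() check.
CONVERTIBLE_MAIN_CHECKPOINT_CONFIGS = (
    Main_Checkpoint_SD1_Config,
    Main_Checkpoint_SD2_Config,
    Main_Checkpoint_SDXL_Config,
    Main_Checkpoint_SDXLRefiner_Config,
)


def is_convertible_main_checkpoint(config: object) -> bool:
    """True if the config is a main SD 1/2/XL checkpoint eligible for conversion."""
    return isinstance(config, CONVERTIBLE_MAIN_CHECKPOINT_CONFIGS)
```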
2 changes: 1 addition & 1 deletion invokeai/app/invocations/cogview4_denoise.py
@@ -22,7 +22,7 @@
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import CogView4ConditioningInfo
3 changes: 1 addition & 2 deletions invokeai/app/invocations/cogview4_model_loader.py
@@ -13,8 +13,7 @@
VAEField,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import SubModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType


@invocation_output("cogview4_model_loader_output")
11 changes: 5 additions & 6 deletions invokeai/app/invocations/create_gradient_mask.py
@@ -20,9 +20,7 @@
from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
from invokeai.app.invocations.model import UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.config import MainConfigBase
from invokeai.backend.model_manager.taxonomy import ModelVariantType
from invokeai.backend.model_manager.taxonomy import FluxVariantType, ModelType, ModelVariantType
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor


@@ -182,10 +180,11 @@ def invoke(self, context: InvocationContext) -> GradientMaskOutput:
if self.unet is not None and self.vae is not None and self.image is not None:
# all three fields must be present at the same time
main_model_config = context.models.get_config(self.unet.unet.key)
assert isinstance(main_model_config, MainConfigBase)
if main_model_config.variant is ModelVariantType.Inpaint:
assert main_model_config.type is ModelType.Main
variant = getattr(main_model_config, "variant", None)
if variant is ModelVariantType.Inpaint or variant is FluxVariantType.DevFill:
mask = dilated_mask_tensor
vae_info: LoadedModel = context.models.load(self.vae.vae)
vae_info = context.models.load(self.vae.vae)
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
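With the shared `MainConfigBase` import gone, the gradient-mask node now reads `variant` defensively via `getattr` and treats both SD inpaint models and FLUX Fill as inpainting variants. The same test as a standalone sketch, assuming only the enum members shown in the diff (the helper itself is illustrative):

```python
from invokeai.backend.model_manager.taxonomy import FluxVariantType, ModelVariantType


def is_inpaint_variant(config: object) -> bool:
    """True if a main-model config represents an inpainting variant (SD inpaint or FLUX Fill)."""
    variant = getattr(config, "variant", None)
    return variant is ModelVariantType.Inpaint or variant is FluxVariantType.DevFill
```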
2 changes: 1 addition & 1 deletion invokeai/app/invocations/denoise_latents.py
@@ -39,7 +39,7 @@
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.patches.layer_patcher import LayerPatcher
7 changes: 4 additions & 3 deletions invokeai/app/invocations/flux_denoise.py
@@ -48,7 +48,7 @@
unpack,
)
from invokeai.backend.flux.text_conditioning import FluxReduxConditioning, FluxTextConditioning
from invokeai.backend.model_manager.taxonomy import ModelFormat, ModelVariantType
from invokeai.backend.model_manager.taxonomy import BaseModelType, FluxVariantType, ModelFormat, ModelType
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -232,7 +232,8 @@ def _run_diffusion(
)

transformer_config = context.models.get_config(self.transformer.transformer)
is_schnell = "schnell" in getattr(transformer_config, "config_path", "")
assert transformer_config.base is BaseModelType.Flux and transformer_config.type is ModelType.Main
is_schnell = transformer_config.variant is FluxVariantType.Schnell

# Calculate the timestep schedule.
timesteps = get_schedule(
@@ -277,7 +278,7 @@

# Prepare the extra image conditioning tensor (img_cond) for either FLUX structural control or FLUX Fill.
img_cond: torch.Tensor | None = None
is_flux_fill = transformer_config.variant == ModelVariantType.Inpaint # type: ignore
is_flux_fill = transformer_config.variant is FluxVariantType.DevFill
if is_flux_fill:
img_cond = self._prep_flux_fill_img_cond(
context, device=TorchDevice.choose_torch_device(), dtype=inference_dtype
7 changes: 2 additions & 5 deletions invokeai/app/invocations/flux_ip_adapter.py
@@ -16,10 +16,7 @@
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
)
from invokeai.backend.model_manager.configs.ip_adapter import IPAdapter_Checkpoint_FLUX_Config
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType


@@ -68,7 +65,7 @@ def validate_begin_end_step_percent(self) -> Self:
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
assert isinstance(ip_adapter_info, IPAdapter_Checkpoint_FLUX_Config)

# Note: There is a IPAdapterInvokeAIConfig.image_encoder_model_id field, but it isn't trustworthy.
image_encoder_starter_model = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
10 changes: 4 additions & 6 deletions invokeai/app/invocations/flux_model_loader.py
@@ -13,10 +13,8 @@
preprocess_t5_encoder_model_identifier,
preprocess_t5_tokenizer_model_identifier,
)
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
)
from invokeai.backend.flux.util import get_flux_max_seq_length
from invokeai.backend.model_manager.configs.base import Checkpoint_Config_Base
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType


@@ -87,12 +85,12 @@ def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
t5_encoder = preprocess_t5_encoder_model_identifier(self.t5_encoder_model)

transformer_config = context.models.get_config(transformer)
assert isinstance(transformer_config, CheckpointConfigBase)
assert isinstance(transformer_config, Checkpoint_Config_Base)

return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder, loras=[]),
vae=VAEField(vae=vae),
max_seq_len=max_seq_lengths[transformer_config.config_path],
max_seq_len=get_flux_max_seq_length(transformer_config.variant),
)
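`get_flux_max_seq_length` replaces the old `max_seq_lengths[config_path]` lookup, deriving the T5 sequence length from the model's variant instead of its legacy config path. A plausible shape for the helper, assuming schnell uses a 256-token sequence and the dev-family variants use 512 (the actual implementation in `invokeai/backend/flux/util.py` may differ):

```python
from invokeai.backend.model_manager.taxonomy import FluxVariantType


def get_flux_max_seq_length(variant: FluxVariantType) -> int:
    """Map a FLUX variant to its T5 max sequence length (sketch; values assumed)."""
    if variant is FluxVariantType.Schnell:
        return 256
    # Dev, Fill and other dev-derived variants are assumed to share the longer schedule.
    return 512
```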
4 changes: 2 additions & 2 deletions invokeai/app/invocations/flux_redux.py
@@ -24,9 +24,9 @@
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.redux.flux_redux_model import FluxReduxModel
from invokeai.backend.model_manager import BaseModelType, ModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.starter_models import siglip
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
from invokeai.backend.util.devices import TorchDevice

2 changes: 1 addition & 1 deletion invokeai/app/invocations/flux_text_encoder.py
@@ -17,7 +17,7 @@
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.conditioner import HFEncoder
from invokeai.backend.model_manager import ModelFormat
from invokeai.backend.model_manager.taxonomy import ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX, FLUX_LORA_T5_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
2 changes: 1 addition & 1 deletion invokeai/app/invocations/flux_vae_encode.py
@@ -12,7 +12,7 @@
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.vae_working_memory import estimate_vae_working_memory_flux
2 changes: 1 addition & 1 deletion invokeai/app/invocations/image_to_latents.py
@@ -23,7 +23,7 @@
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
12 changes: 6 additions & 6 deletions invokeai/app/invocations/ip_adapter.py
@@ -11,10 +11,10 @@
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
AnyModelConfig,
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.configs.ip_adapter import (
IPAdapter_Checkpoint_Config_Base,
IPAdapter_InvokeAI_Config_Base,
)
from invokeai.backend.model_manager.starter_models import (
StarterModel,
@@ -123,9 +123,9 @@ def validate_begin_end_step_percent(self) -> Self:
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
assert isinstance(ip_adapter_info, (IPAdapter_InvokeAI_Config_Base, IPAdapter_Checkpoint_Config_Base))

if isinstance(ip_adapter_info, IPAdapterInvokeAIConfig):
if isinstance(ip_adapter_info, IPAdapter_InvokeAI_Config_Base):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
else:
9 changes: 4 additions & 5 deletions invokeai/app/invocations/model.py
@@ -12,9 +12,7 @@
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import (
AnyModelConfig,
)
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType


@@ -24,8 +22,9 @@ class ModelIdentifierField(BaseModel):
name: str = Field(description="The model's name")
base: BaseModelType = Field(description="The model's base model type")
type: ModelType = Field(description="The model's type")
submodel_type: Optional[SubModelType] = Field(
description="The submodel to load, if this is a main model", default=None
submodel_type: SubModelType | None = Field(
description="The submodel to load, if this is a main model",
default=None,
)

@classmethod
2 changes: 1 addition & 1 deletion invokeai/app/invocations/sd3_denoise.py
@@ -23,7 +23,7 @@
from invokeai.app.invocations.sd3_text_encoder import SD3_T5_MAX_SEQ_LEN
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.rectified_flow.rectified_flow_inpaint_extension import RectifiedFlowInpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import SD3ConditioningInfo
2 changes: 2 additions & 0 deletions invokeai/app/services/config/config_default.py
@@ -108,6 +108,7 @@ class InvokeAIAppConfig(BaseSettings):
remote_api_tokens: List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided in as a Bearer token.
scan_models_on_startup: Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.
unsafe_disable_picklescan: UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.
allow_unknown_models: Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.
"""

_root: Optional[Path] = PrivateAttr(default=None)
@@ -198,6 +199,7 @@ class InvokeAIAppConfig(BaseSettings):
remote_api_tokens: Optional[list[URLRegexTokenPair]] = Field(default=None, description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided in as a Bearer token.")
scan_models_on_startup: bool = Field(default=False, description="Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.")
unsafe_disable_picklescan: bool = Field(default=False, description="UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.")
allow_unknown_models: bool = Field(default=True, description="Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.")

# fmt: on

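The new `allow_unknown_models` setting defaults to `True`, so models the probe cannot identify are still installed and marked as `unknown` in the database rather than rejected. A minimal sketch of disabling that behaviour programmatically, assuming the standard `InvokeAIAppConfig` settings flow (in practice the value would normally be set in `invokeai.yaml`):

```python
from invokeai.app.services.config.config_default import InvokeAIAppConfig

# Reject unidentifiable models at install time instead of registering them as `unknown`.
config = InvokeAIAppConfig(allow_unknown_models=False)
assert config.allow_unknown_models is False
```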
4 changes: 2 additions & 2 deletions invokeai/app/services/events/events_base.py
@@ -44,8 +44,8 @@
SessionQueueItem,
SessionQueueStatus,
)
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.configs.factory import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import SubModelType


class EventServiceBase: