
Conversation

@Pfannkuchensack (Collaborator)

Summary

Add a new conditioning node for Z-Image models that injects seed-based noise into text embeddings to increase visual variation between seeds.

Z-Image-Turbo can produce relatively similar images across different seeds, making it harder to explore variations of a prompt. This feature injects seed-based noise into the text embeddings to increase visual variation between seeds while keeping results reproducible.

Backend:

  • New invocation: z_image_seed_variance_enhancer.py
  • Parameters: strength (0-2), randomize_percent (1-100%), seed
  • Auto-calibrates noise intensity relative to the embedding's standard deviation (see the sketch below)
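
A minimal sketch of the noise-injection math described above, in plain Python/PyTorch. The helper name apply_seed_variance and the element-wise masking strategy are assumptions for illustration; only the parameter ranges, the seed-driven generator, and the [-1, 1) uniform noise come from this PR.

import torch

def apply_seed_variance(
    prompt_embeds: torch.Tensor,
    seed: int,
    strength: float = 0.3,            # PR range: 0-2
    randomize_percent: float = 50.0,  # PR range: 1-100 (% of elements to perturb)
) -> torch.Tensor:
    # A seeded generator keeps the injected noise reproducible per seed.
    generator = torch.Generator(device=prompt_embeds.device).manual_seed(seed)

    # Uniform noise scaled to [-1, 1), matching the snippet reviewed below.
    noise = torch.rand(
        prompt_embeds.shape, generator=generator, device=prompt_embeds.device, dtype=prompt_embeds.dtype
    )
    noise = noise * 2 - 1

    # Perturb only a random subset of embedding elements (assumed interpretation
    # of randomize_percent).
    mask = (
        torch.rand(prompt_embeds.shape, generator=generator, device=prompt_embeds.device)
        < randomize_percent / 100.0
    ).to(prompt_embeds.dtype)

    # Auto-calibrate: scale the noise to the embedding's own standard deviation
    # so that `strength` behaves consistently across prompts and models.
    scale = strength * prompt_embeds.float().std().item()

    return prompt_embeds + noise * mask * scale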

Frontend:

  • State management in paramsSlice with selectors and reducers
  • UI components in SeedVariance/ folder with toggle and sliders
  • Integration in GenerationSettingsAccordion (Advanced Options)
  • Graph builder integration in buildZImageGraph.ts
  • Metadata recall handlers for remix functionality
  • Translations and tooltip descriptions

Based on: https://github.com/Pfannkuchensack/invokeai-z-image-seed-variance-enhancer

Related Issues / Discussions

Integrates a community node as a core feature for Z-Image models.

QA Instructions

  1. Select a Z-Image model (e.g., Z-Image-Turbo)
  2. Open "Advanced Options" in the Generation Settings
  3. Enable "Seed Variance Enhancer"
  4. Adjust Strength (0.1 = subtle, 0.5 = strong) and Randomize Percent
  5. Generate images with different seeds; they should show more variation
  6. Verify metadata recall works: generate an image, then use "Recall Parameters" on it

Workflow Editor:

  • The node is available under Conditioning → "Seed Variance Enhancer - Z-Image"
  • Connect between Z-Image Text Encoder and Z-Image Denoise

Merge Plan

No special merge requirements. Schema will be auto-generated on backend start.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Changes to a redux slice have a corresponding migration (uses existing migration pattern)
  • Documentation added / updated (if applicable) (tooltips added)
  • Updated What's New copy (if doing a release after this PR)

@github-actions bot added the python, invocations, and frontend labels on Jan 10, 2026.
noise = torch.rand(
    prompt_embeds.shape, generator=generator, device=prompt_embeds.device, dtype=prompt_embeds.dtype
)
noise = noise * 2 - 1  # Scale to [-1, 1]
Collaborator

Technically this is [-1, 1) as 1 will not be generated by the distribution.
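
A quick check of this point: torch.rand draws from the half-open interval [0, 1), so the affine rescale gives [-1, 1). The snippet below is illustrative only.

import torch

g = torch.Generator().manual_seed(0)
x = torch.rand(1_000_000, generator=g)
print(x.min() >= 0, x.max() < 1)   # tensor(True) tensor(True): samples lie in [0, 1)

y = x * 2 - 1
print(y.min() >= -1, y.max() < 1)  # tensor(True) tensor(True): rescaled to [-1, 1)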

@JPPhoto (Collaborator) left a comment

Python code looks good. Needs a frontend review prior to merging.
