Description
Hi,
Thanks again for this awesome tool. I can see it becoming more and more powerful with every update :)
I hope the following suggestions are rather easy to implement (they probably aren't though - things that look simple can be very tricky under the hood).
Feature 1:
Generating at a specific time without jumping to the first frame:
Hitting "Generate" should not cause the animation to jump to the first frame. If the scene is on a specific keyframe, generation should happen at that keyframe.
StableGen Aspects that are affected:
- [x] Scene Setup (Cameras, Object preparation)
Additional context:
Currently the only way to put an object into a different state and render it with StableGen is to export the model in the desired keyframed state as FBX into a new scene, or to lock/delete keyframes and copy the desired keyframe to frame 1.
Both workarounds are quite disruptive to the current scene.
This feature would tremendously enhance generation capabilities.
For example, say you have a straight setup of a jacket at frame 1 and a wrinkled one at frame 20.
Rendering it at frames 1 and 20 gives you the ability to interpolate both textures with RIFE video frame interpolation (VFI) and have an animated texture transition on the model.
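As a rough illustration (a minimal sketch, not StableGen's actual code), the generation step could evaluate meshes at the user's current frame instead of resetting the timeline to frame 1:

```python
import bpy

def generate_at_current_frame() -> None:
    scene = bpy.context.scene
    # Keep the user's frame instead of jumping to frame 1, so keyframed
    # deformation (e.g. the jacket wrinkled at frame 20) is what gets textured.
    scene.frame_set(scene.frame_current)  # force depsgraph evaluation here
    depsgraph = bpy.context.evaluated_depsgraph_get()
    for obj in scene.objects:
        if obj.type != 'MESH':
            continue
        eval_obj = obj.evaluated_get(depsgraph)
        mesh = eval_obj.to_mesh()  # mesh deformed to the current frame's state
        print(obj.name, len(mesh.vertices))  # placeholder for the texture pass
        eval_obj.to_mesh_clear()
```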
Feature 2:
Disabling one or more cameras (hidden or disabled) in the viewport:
When a camera is hidden or disabled in the viewport, it would be removed from the camera array and not considered during generation.
StableGen Aspects that are affected:
- [x] Scene Setup (Cameras, Object preparation)
Additional context:
Sometimes the result from a particular camera needs readjustment, or a certain view needs to be re-rendered as an Img2Img variant. Currently the only way to achieve this is to bake the solution, keep the needed cameras, and re-render.
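A minimal sketch of how the camera collection step could honor viewport visibility, assuming StableGen gathers cameras from the scene (the function name here is made up):

```python
import bpy

def visible_cameras(scene: bpy.types.Scene) -> list[bpy.types.Object]:
    """Return only cameras the user has not hidden or disabled."""
    return [
        obj for obj in scene.objects
        if obj.type == 'CAMERA'
        and not obj.hide_get()      # the per-view-layer "eye" toggle
        and not obj.hide_viewport   # the "disable in viewports" toggle
    ]
```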
Additional / Alternative approach idea via a paint mask:
This is probably an obscure and cumbersome solution, but I just thought I'd give some ideas.
What if we were able to set up a black-and-white texture mask in which black represents 0% denoise and white 100% denoise? Then masked parts of a model could be re-rendered at various strengths.
Another thing to consider is how hard this would be to manage from an end-user perspective. Probably not everyone would feel the need to paint a texture mask in Texture Paint mode in order to control denoise levels on the model. The inpaint/camera blending solution in its current state already does a great job.
Even though the mask would be very simple, with just a blotch here and there, it could be too much for many users, since managing textures in Blender is not very straightforward.
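For illustration only, a sketch of how such a painted mask could be read back into per-pixel denoise strengths (the image name is hypothetical):

```python
import bpy

def mask_to_denoise(image_name: str = "denoise_mask") -> list[float]:
    # Map painted mask values to denoise strength:
    # black (0.0) = 0% denoise, white (1.0) = 100% denoise.
    img = bpy.data.images[image_name]
    rgba = img.pixels[:]  # flat RGBA float array, 4 values per pixel
    # Use the red channel; a grayscale mask has R == G == B.
    return [rgba[i] for i in range(0, len(rgba), 4)]
```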
An automated idea building on this approach:
Setting a distance-field parameter on the camera to control how strong the denoise is at a certain distance.
Similar to a view distance beyond which an object becomes invisible, the denoise strength would fall off as the distance grows.
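A sketch of such a falloff, with hypothetical near/far parameters standing in for the distance field (full denoise up close, none beyond the far limit):

```python
import bpy
from mathutils import Vector

def denoise_at(camera: bpy.types.Object, point: Vector,
               near: float = 2.0, far: float = 10.0) -> float:
    # Linear falloff: 100% denoise within `near`, 0% beyond `far`,
    # analogous to a clip range on the camera.
    dist = (point - camera.matrix_world.translation).length
    if dist <= near:
        return 1.0
    if dist >= far:
        return 0.0
    return 1.0 - (dist - near) / (far - near)
```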
I hope this helps in finding good ideas for future updates.
^^