RFInversionFluxPipeline, small fix for enable_model_cpu_offload & enable_sequential_cpu_offload compatibility #10480
Use `self._execution_device` instead of `self.device` when selecting a device for the input image tensor in `RFInversionFluxPipeline.encode_image`. This allows for compatibility with `enable_model_cpu_offload` & `enable_sequential_cpu_offload`.

Since this is in a method copied from another pipeline, the same fix may be needed elsewhere.
What does this PR do?
Allows turning on the VRAM optimizations `enable_model_cpu_offload` and `enable_sequential_cpu_offload` without encountering meta-tensor copy errors or device mismatches. A minimal sketch of the underlying issue follows.
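The sketch below is illustrative, not the actual diffusers implementation: the class name `OffloadAwarePipeline` and its method bodies are hypothetical stand-ins for the relevant parts of `DiffusionPipeline`. It shows why `self.device` is the wrong place to stage inputs once CPU offloading is enabled, and why `_execution_device` works.

```python
# Hypothetical sketch of the device-selection problem this PR fixes.
# Names and bodies are illustrative, not the real diffusers code.
import torch


class OffloadAwarePipeline:
    def __init__(self):
        # Set by the offloading machinery when
        # enable_model_cpu_offload / enable_sequential_cpu_offload is called.
        self._offload_device: torch.device | None = None

    @property
    def device(self) -> torch.device:
        # With CPU offloading enabled, the modules themselves live on the
        # CPU (or on "meta") between forward passes, so this property no
        # longer reflects where computation will actually run.
        return torch.device("cpu")

    @property
    def _execution_device(self) -> torch.device:
        # Resolves to the device the offload hooks move modules to for the
        # forward pass, falling back to self.device when offloading is off.
        return self._offload_device or self.device

    def encode_image(self, image: torch.Tensor) -> torch.Tensor:
        # The fix: stage the input on the execution device rather than
        # self.device, so it lands where the offloaded modules will run.
        return image.to(self._execution_device)
```

Without the fix, `image.to(self.device)` places the tensor on the CPU while the offload hooks move the encoder to the GPU for its forward pass, producing the device-mismatch / meta-tensor copy errors described above.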
Before submitting
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@linoytsaban