Schedulers in C

Claude's analysis of PyTorch usage in the schedulers used in diffusers, to understand if they can be ported to C++ without major dependencies.

Summary: HIGHLY PORTABLE TO C++

The vast majority of schedulers use only simple element-wise and small-tensor operations that can easily be reimplemented in C++ without any major dependencies.


Key Findings by Category:

1. Common Schedulers (DDIM, DDPM, Euler variants) - 95% of use cases

Torch operations used:

  • Element-wise arithmetic: +, -, *, /, ** (power)
  • Math functions on scalars/small tensors: sqrt(), exp(), log(), clamp()
  • Indexing: Array indexing like alphas_cumprod[timestep]
  • Shape manipulation: unsqueeze(), flatten(), reshape()
  • Tensor creation: torch.tensor(), torch.linspace(), torch.from_numpy()

Operations on large tensors (the actual image latents):

# Typical pattern in step() functions:
prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction
noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise

These are pure element-wise operations - no matrix multiplications, no convolutions, no complex algorithms.
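
For illustration, the add-noise formula above becomes a plain loop in C++ (a minimal sketch; the flat float-array layout and parameter names are assumptions, not the diffusers API):

#include <cstddef>

// Hypothetical C++ sketch of the add_noise pattern above.
// sqrt_alpha_prod and sqrt_one_minus_alpha_prod are scalars looked up
// from the precomputed alphas_cumprod table for the current timestep.
void add_noise(const float* original_samples, const float* noise,
               float sqrt_alpha_prod, float sqrt_one_minus_alpha_prod,
               float* noisy_samples, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        noisy_samples[i] = sqrt_alpha_prod * original_samples[i]
                         + sqrt_one_minus_alpha_prod * noise[i];
    }
}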

2. DPM-Solver Family

Similar to above, with additional operations:

  • torch.expm1() (exp(x) - 1, for numerical stability)
  • More complex formulas but still element-wise on scalars and image tensors
  • Example from DPMSolver:
x_t = (sigma_t / sigma_s) * sample - (alpha_t * (torch.exp(-h) - 1.0)) * model_output
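
A minimal C++ sketch of that first-order update, assuming the scalars sigma_t, sigma_s, alpha_t and h have already been computed for the current step (std::expm1 provides the numerically stable exp(x) - 1 mentioned above):

#include <cmath>
#include <cstddef>

// Sketch of the first-order DPM-Solver update shown above, applied
// element-wise over the latent array; std::expm1(-h) is exp(-h) - 1.0.
void dpm_solver_step(const float* sample, const float* model_output,
                     float sigma_t, float sigma_s, float alpha_t, float h,
                     float* x_t, size_t n) {
    const float sample_coeff = sigma_t / sigma_s;
    const float model_coeff = alpha_t * std::expm1(-h);
    for (size_t i = 0; i < n; ++i) {
        x_t[i] = sample_coeff * sample[i] - model_coeff * model_output[i];
    }
}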

3. Flow Matching Schedulers (Euler, Heun)

Even simpler:

prev_sample = sample + dt * model_output  # Flow matching is just ODE integration
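
In C++ that is one multiply-add per element; a minimal sketch, assuming dt is the scalar step size between consecutive sigmas:

#include <cstddef>

// Euler ODE integration step for flow matching: prev = sample + dt * velocity.
void flow_match_euler_step(const float* sample, const float* model_output,
                           float dt, float* prev_sample, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        prev_sample[i] = sample[i] + dt * model_output[i];
    }
}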

4. UniPC Scheduler - ⚠️ The ONLY Exception

Uses small matrix operations for high-order polynomial solvers:

  • torch.linalg.solve(R[:-1, :-1], b[:-1]) - Solving small linear systems (order 2-4)
  • torch.einsum("k,bkc...->bc...", rhos_p, D1s) - Weighted sum across solver orders

  • Matrix sizes: typically 2x2 to 4x4 (for solver orders 2-4)
  • Complexity: O(n³) where n ≤ 4, completely negligible
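
The einsum is nothing more than a weighted sum over the k stored difference tensors; a rough sketch, assuming a flat contiguous layout for D1s:

#include <cstddef>

// Weighted sum across solver orders: out = sum_k rhos_p[k] * D1s[k].
// D1s holds k difference tensors of n elements each, stored contiguously.
void weighted_sum(const float* rhos_p, const float* D1s,
                  size_t k, size_t n, float* out) {
    for (size_t i = 0; i < n; ++i) out[i] = 0.0f;
    for (size_t j = 0; j < k; ++j) {
        for (size_t i = 0; i < n; ++i) {
            out[i] += rhos_p[j] * D1s[j * n + i];
        }
    }
}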

5. Optional Features (rarely used)

  • torch.quantile() - Used in dynamic thresholding (optional feature, rarely enabled)
  • torch.cumprod() - Only used in initialization, not per-step
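
Even the quantile has a standard-library counterpart; a rough sketch using std::nth_element (it skips the linear interpolation torch.quantile performs, which is fine for thresholding):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Approximate q-th quantile (0 <= q <= 1) of the absolute latent values
// via partial sorting; close enough for dynamic thresholding.
float abs_quantile(const float* data, size_t n, float q) {
    std::vector<float> abs_vals(n);
    for (size_t i = 0; i < n; ++i) abs_vals[i] = std::fabs(data[i]);
    size_t k = static_cast<size_t>(q * (n - 1));
    std::nth_element(abs_vals.begin(), abs_vals.begin() + k, abs_vals.end());
    return abs_vals[k];
}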

Torch Operations Breakdown:

In initialization (one-time):

  • torch.linspace() - Creating timestep schedules
  • torch.cumprod() - Computing cumulative products of alphas
  • torch.log(), torch.sqrt() - Computing noise schedules
  • NumPy operations for schedule generation
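
All of this one-time setup reduces to simple loops; a minimal sketch of a linear beta schedule and its cumulative alpha products (diffusers also supports scaled-linear and other schedules, omitted here):

#include <cstddef>
#include <vector>

// One-time schedule setup: a linear beta schedule, then a running product
// to get alphas_cumprod (the table step() later indexes by timestep).
std::vector<float> make_alphas_cumprod(float beta_start, float beta_end,
                                       size_t num_train_timesteps) {
    std::vector<float> alphas_cumprod(num_train_timesteps);
    float running = 1.0f;
    for (size_t t = 0; t < num_train_timesteps; ++t) {
        float beta = beta_start + (beta_end - beta_start) * t
                     / static_cast<float>(num_train_timesteps - 1);
        running *= 1.0f - beta;  // cumulative product of alphas
        alphas_cumprod[t] = running;
    }
    return alphas_cumprod;
}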

In step() function (per diffusion step):

  • Scalar operations: indexing self.alphas_cumprod[timestep]
  • Element-wise ops: **0.5, *, +, -, / on image tensors
  • Broadcasting: scalar * tensor operations
  • Random noise: randn_tensor() (calls torch.randn())
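
The only non-arithmetic call, randn_tensor(), maps directly onto the standard library; a minimal sketch:

#include <cstddef>
#include <random>

// Standard-library replacement for randn_tensor(): fill a buffer with
// standard normal noise from a seeded generator.
void fill_gaussian_noise(float* out, size_t n, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<float> dist(0.0f, 1.0f);
    for (size_t i = 0; i < n; ++i) {
        out[i] = dist(rng);
    }
}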

C++ Portability Assessment:

Easy to Port (95% of schedulers):

Schedulers: DDIM, DDPM, Euler, Euler Ancestral, DPMSolver++, DPMSolver, Heun, LCM, TCD, Flow Match, SASolver, K-DPM

C++ Implementation needs:

  1. Basic math library (std::sqrt, std::exp, std::log, std::pow)
  2. Random number generation (std::normal_distribution)
  3. Array operations (can use raw arrays or std::vector)
  4. No external dependencies needed!

Example translation:

# PyTorch:
prev_sample = alpha_prod_t_prev ** 0.5 * pred_original_sample + pred_sample_direction

// C++:
for (size_t i = 0; i < sample_size; ++i) {
    prev_sample[i] = std::sqrt(alpha_prod_t_prev) * pred_original_sample[i] 
                   + pred_sample_direction[i];
}

⚠️ Slightly More Complex (1 scheduler):

UniPC requires:

  • Small matrix linear system solver (can use simple Gaussian elimination)
  • Small einsum (just a weighted sum, trivial to implement)

Complexity: Still easily doable in C++. You could:

  1. Implement a tiny 4x4 matrix solver (a few dozen lines; see the sketch after this list)
  2. Use a lightweight library like Eigen (header-only)
  3. Or skip UniPC entirely (not commonly used)
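
For reference, option 1 really is only a few dozen lines; a minimal Gaussian-elimination sketch for the small (order ≤ 4) UniPC system, with partial pivoting for robustness:

#include <cmath>
#include <cstddef>
#include <utility>

// Solve A * x = b for a tiny n x n system (n <= 4 for UniPC).
// A is row-major and modified in place; b is overwritten with the solution.
void solve_small(float* A, float* b, size_t n) {
    for (size_t col = 0; col < n; ++col) {
        // Partial pivoting: pick the largest remaining entry in this column.
        size_t pivot = col;
        for (size_t r = col + 1; r < n; ++r)
            if (std::fabs(A[r * n + col]) > std::fabs(A[pivot * n + col])) pivot = r;
        for (size_t c = 0; c < n; ++c) std::swap(A[col * n + c], A[pivot * n + c]);
        std::swap(b[col], b[pivot]);
        // Eliminate entries below the pivot.
        for (size_t r = col + 1; r < n; ++r) {
            float f = A[r * n + col] / A[col * n + col];
            for (size_t c = col; c < n; ++c) A[r * n + c] -= f * A[col * n + c];
            b[r] -= f * b[col];
        }
    }
    // Back substitution.
    for (size_t col = n; col-- > 0;) {
        float sum = b[col];
        for (size_t c = col + 1; c < n; ++c) sum -= A[col * n + c] * b[c];
        b[col] = sum / A[col * n + col];
    }
}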

Performance Considerations:

Are these operations actually expensive?

No! The tensor operations are on:

  1. Scalars: Single float values (sigma, alpha, timestep)
  2. Small tensors: Arrays of ~50-1000 timesteps/sigmas
  3. Image latents: The actual computation per pixel is trivial (multiply-add)

For a 512x512 image with 4 channels (latent space):

  • Tensor size: 4 × 64 × 64 = 16,384 floats
  • Operations: Simple multiply-adds
  • This is negligible compared to the UNet inference (which runs the actual neural network)

Where the time is actually spent:

  • 99%+ of inference time: Running the UNet model
  • <0.1%: Scheduler step() operations

Recommendations:

For C++ Port:

  1. Start with the simplest: Euler, DDIM

    • ~50-100 lines of C++ code
    • Just arithmetic operations
  2. Add DPMSolver++ next: Most popular advanced scheduler

    • ~200-300 lines of C++
    • Still just math functions
  3. Skip or do last: UniPC

    • Rarely used in practice
    • Requires small matrix solver

Dependencies Needed:

#include <cmath>      // sqrt, exp, log, pow
#include <random>     // normal_distribution
#include <vector>     // or use raw arrays
#include <algorithm>  // optional, for utility functions

No PyTorch/LibTorch needed! These are trivial operations.

Structure Suggestion:

class Scheduler {
public:
    std::vector<float> alphas_cumprod;
    std::vector<float> timesteps;

    void step(const float* sample, const float* model_output,
              int timestep, float* output, size_t sample_size);
};
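
For context, a hypothetical sketch of how such a class would slot into a sampling loop (run_unet is a placeholder for whatever inference backend produces model_output):

#include <cstddef>
#include <vector>

// Placeholder for the actual neural-network inference (the expensive part).
void run_unet(const std::vector<float>& latents, float t,
              std::vector<float>& model_output);

// Hypothetical denoising loop: the scheduler only does cheap element-wise
// math between the expensive UNet calls.
void sample(Scheduler& scheduler, std::vector<float>& latents) {
    std::vector<float> model_output(latents.size());
    std::vector<float> prev(latents.size());
    for (float t : scheduler.timesteps) {
        run_unet(latents, t, model_output);
        scheduler.step(latents.data(), model_output.data(),
                       static_cast<int>(t), prev.data(), latents.size());
        latents.swap(prev);
    }
}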

Conclusion:

The diffusers schedulers are PERFECT candidates for C++ porting! They use torch primarily for:

  1. Convenient array syntax
  2. Automatic GPU tensor management (not needed for schedulers)
  3. Element-wise operations that are trivial in C++

The "torch" dependency is more for convenience than necessity. You can reimplement everything with basic C++ standard library, and the performance will be identical (or better, without Python overhead).

The only scheduler with any complexity is UniPC (with its small matrix solve), and even that is manageable with ~50 lines of C++ or a tiny dependency like Eigen.