
Switch MTK default from FullSpecialize to AutoSpecialize#4335

Merged
AayushSabharwal merged 10 commits into SciML:master from ChrisRackauckas-Claude:autospecialize-mtk-default
Feb 24, 2026
Conversation

@ChrisRackauckas-Claude

Summary

Switches ModelingToolkit's default specialization level from FullSpecialize to AutoSpecialize, enabling compilation reuse across different MTK models with the same structure:

  • @fallback_iip_specialize macro: Changed the generated fallback from ODEProblem{iip, FullSpecialize}(...) to ODEProblem{iip, AutoSpecialize}(...) in problem_utils.jl. This is the single line that controls the default specialization for all MTK-generated problems.

  • Precompile workload: Updated precompile.jl to precompile the FunctionWrappersWrapper construction path that DiffEqBase.promote_f performs at solve time for AutoSpecialize ODEProblems. This ensures the first solve() call after loading MTK doesn't pay the FWW compilation cost. Note: the full solve path (OrdinaryDiffEq dispatch) cannot be precompiled here since ModelingToolkitBase doesn't depend on any ODE solver package.
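As a quick illustration of what the new default means at the call site, a plain `ODEProblem` constructor call on an MTK system now lowers to the `AutoSpecialize` fallback. This is a sketch; the tiny decay model is illustrative, not from the PR:

```julia
using ModelingToolkit, SciMLBase
using ModelingToolkit: t_nounits as t, D_nounits as D

@variables x(t)
@parameters a
@mtkbuild sys = System([D(x) ~ -a * x], t)

# The plain call now uses the AutoSpecialize fallback ...
prob = ODEProblem(sys, [x => 1.0], (0.0, 1.0), [a => 2.0])
# ... which is equivalent to requesting it explicitly:
prob_auto = ODEProblem{true, SciMLBase.AutoSpecialize}(
    sys, [x => 1.0], (0.0, 1.0), [a => 2.0])
# The old default remains available by opting in:
prob_full = ODEProblem{true, SciMLBase.FullSpecialize}(
    sys, [x => 1.0], (0.0, 1.0), [a => 2.0])
```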

Context

With the companion PRs to SciMLBase and DiffEqBase, AutoSpecialize erases function types in ODEFunction so that different models sharing the same structure (same number of states, same solver) reuse compiled code. Currently each MTK model triggers full recompilation of the solver dispatch, costing 3-5s per model. With AutoSpecialize, the second and subsequent models should see significant speedups.

The remaining compilation bottleneck after this PR is the initialization system inside OverrideInitData, which contains RuntimeGeneratedFunction types unique per model and accounts for ~50-60% of solve time. This is a separate optimization target.

Dependencies

Test plan

  • Package loads successfully with the changes
  • CI tests pass
  • End-to-end benchmark with multiple MTK models

🤖 Generated with Claude Code

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>

ChrisRackauckas and others added 3 commits February 20, 2026 22:53
…ecialize

The @fallback_iip_specialize macro in ModelingToolkitBase was hardcoding
SciMLBase.FullSpecialize as the default specialization for all problem
constructors (ODEProblem, SDEProblem, etc.). This meant MTK-generated
functions never entered the AutoSpecialize path, bypassing the
FunctionWrappersWrapper dispatch that enables compilation cache reuse
across different ODE functions.

This change makes AutoSpecialize the default, matching SciMLBase's own
DEFAULT_SPECIALIZATION. The precompile workload now also warms up the
FunctionWrappersWrapper wrapping path (DiffEqBase.wrapfun_iip) so that
the first solve() call after loading MTK does not pay this cost.

Note: The full solve path cannot be precompiled in ModelingToolkitBase
because it does not depend on any ODE solver. A downstream package or
extension would need to add a solve() call to its own @compile_workload
to fully precompile the solver dispatch.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Required for AutoSpecialize support in unwrapped_f and remake.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
Required for tgrad/jac wrapping in promote_f and mass matrix AutoSpecialize support.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
…tions

When constructing an AutoSpecialize ODEFunction from a System, erase the
concrete initialization_data and nlstep_data type parameters to their
abstract upper bounds (Union{Nothing, OverrideInitData} and
Union{Nothing, ODENLStepData}). This ensures all AutoSpecialize
ODEFunctions share the same type for these parameters regardless of the
model, preventing recompilation of promote_f and solver code for each new
model.

Benchmark results show ~50% reduction in per-model compilation time:
- FullSpecialize M2: 4.3s -> AutoSpecialize M2: 2.1s
- FullSpecialize M3: 4.0s -> AutoSpecialize M3: 2.2s

The type erasure is preserved through SciMLBase's remake pathway via
_has_type_erased_params and _rebuild_preserving_type_erasure.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
@ChrisRackauckas-Claude
Author

Update: OverrideInitData type erasure for AutoSpecialize

Added _erase_init_data_type to ModelingToolkitBase/src/problems/odeproblem.jl that erases the concrete OverrideInitData and ODENLStepData type parameters to their abstract upper bounds (Union{Nothing, OverrideInitData} and Union{Nothing, ODENLStepData}) when constructing AutoSpecialize ODEFunctions.

Why this is needed

Each MTK model generates unique OverrideInitData{IProb, UIProb, IProbMap, IProbPmap, M, OOP} types with 6 type parameters containing model-specific RuntimeGeneratedFunction types. Without erasure, these become part of the ODEFunction type, causing promote_f and solver code to recompile for every new model.

How it works

  1. After maybe_codegen_scimlfn constructs the ODEFunction, _erase_init_data_type reconstructs it with the abstract union type bounds for the last two type parameters
  2. SciMLBase's remake pathway preserves the erasure via _has_type_erased_params and _rebuild_preserving_type_erasure (already in SciMLBase master)
  3. The only remaining type difference between models is f.f (the GeneratedFunctionWrapper), which is already handled by FWW wrapping in unwrapped_f
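The reconstruction step can be pictured with a small sketch. Hedged: the real helper is `_erase_init_data_type` in odeproblem.jl; the positional-parameter indexing and the rebuild-by-fields constructor call here are illustrative, not the actual implementation:

```julia
# Sketch: widen the last two type parameters of an ODEFunction
# (initialization_data, nlstep_data) to their abstract upper bounds.
function _erase_init_data_type_sketch(f::SciMLBase.ODEFunction)
    params = collect(typeof(f).parameters)
    params[end - 1] = Union{Nothing, SciMLBase.OverrideInitData}
    params[end]     = Union{Nothing, SciMLBase.ODENLStepData}
    widened = SciMLBase.ODEFunction{params...}
    # Rebuild with the same field values at the widened type
    widened(ntuple(i -> getfield(f, i), fieldcount(typeof(f)))...)
end
```

Because the widened parameters are identical for every model, the solver code compiled against one such `ODEFunction` type applies to all of them.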

Benchmark results (4 MTK models + Rodas5P)

| Test | M1 | M2 | M3 | M4 | Total |
| --- | --- | --- | --- | --- | --- |
| A. FullSpecialize | 5.3s | 4.3s | 4.0s | 4.7s | 18.3s |
| B. AutoSpecialize | 6.3s | 2.1s | 2.2s | 2.2s | 12.8s |
| C. Default (warm) | 0.01s | 0.009s | 0.011s | - | 0.03s |

Per-model savings: ~50% reduction (4.3s → 2.1s for M2, 4.7s → 2.2s for M4).

Test C shows the default now uses AutoSpecialize, and after the first model compiles, subsequent models are essentially free (0.01s each) due to precompilation and type uniformity.

Required for `_rebuild_preserving_type_erasure` which preserves the
erased OverrideInitData/ODENLStepData type parameters through `remake`
calls (SciMLBase PR SciML#1242).

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
`nlstep_data` type parameters. This ensures all AutoSpecialize ODEFunctions have identical
types, preventing recompilation of `promote_f` and solver code for each model.
"""
function _erase_init_data_type(f::SciMLBase.ODEFunction)
Member:

Doesn't this break if we ever add a field to ODEFunction?

Replace the fragile _erase_init_data_type function that manually listed
all ODEFunction type parameters with SciMLBase.widen_bounded_type_params,
which automatically detects and widens bounded type parameters.

Addresses review comment about breaking when fields are added.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
@ChrisRackauckas-Claude
Author

Addressed @AayushSabharwal comment about _erase_init_data_type breaking if fields are added to ODEFunction.

Moved the logic to SciMLBase as widen_bounded_type_params — a @generated function that walks the UnionAll chain to find TypeVars with non-Any upper bounds and widens those automatically. No manual listing of type parameters or fields, so it handles any future additions to ODEFunction. See SciML/SciMLBase.jl#1243.
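Conceptually, the generic version works like the following sketch. Hedged: this is a non-`@generated` illustration of the idea, not the SciMLBase implementation, which is more careful about `UnionAll` nesting and reconstruction:

```julia
# Sketch: widen every type parameter whose TypeVar has a non-Any upper bound.
function widen_bounded_type_params_sketch(x)
    T = typeof(x)
    wrapper = Base.typename(T).wrapper      # the UnionAll, e.g. ODEFunction
    # Walk the UnionAll chain collecting each TypeVar's upper bound,
    # in declaration order (matching T.parameters)
    bounds = Any[]
    U = wrapper
    while U isa UnionAll
        push!(bounds, U.var.ub)
        U = U.body
    end
    # Keep parameters whose bound is Any; widen the rest to their bound
    params = [ub === Any ? p : ub for (p, ub) in zip(T.parameters, bounds)]
    widened = wrapper{params...}
    widened(ntuple(i -> getfield(x, i), fieldcount(T))...)
end
```

Since nothing is hardcoded about which parameters exist, adding a field to `ODEFunction` (with or without a bounded TypeVar) is handled automatically.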

MTKBase now just calls SciMLBase.widen_bounded_type_params(odefn) instead of the local _erase_init_data_type.

@ChrisRackauckas-Claude
Author

TTFX Benchmark Results

Tested locally with dev'd SciMLBase (v2.144.0, includes widen_bounded_type_params from SciML/SciMLBase.jl#1245) and DiffEqBase.

Benchmark: Two different 2-variable ODE models (Lotka-Volterra and a linear exchange system), measuring the time for the second solve call (different model, same solver):

using ModelingToolkit, OrdinaryDiffEq, SciMLBase
using ModelingToolkit: t_nounits as t, D_nounits as D

# Model 1: Lotka-Volterra
@variables x(t) y(t)
@parameters α β γ δ
eqs1 = [D(x) ~ α*x - β*x*y, D(y) ~ δ*x*y - γ*y]
@named sys1 = ODESystem(eqs1, t)
prob1 = ODEProblem(structural_simplify(sys1), [x => 1.0, y => 1.0], (0.0, 10.0),
    [α => 1.5, β => 1.0, γ => 3.0, δ => 1.0])

# Model 2: Linear exchange
@variables z(t) w(t)
@parameters k1 k2
eqs2 = [D(z) ~ -k1*z + k2*w, D(w) ~ k1*z - k2*w]
@named sys2 = ODESystem(eqs2, t)
prob2 = ODEProblem(structural_simplify(sys2), [z => 2.0, w => 0.5], (0.0, 5.0),
    [k1 => 0.3, k2 => 0.1])

# Warm up on model 1, then time model 2
solve(prob1, Tsit5())
@elapsed solve(prob2, Tsit5())  # <-- this is what we're measuring

Results

| Solver | Baseline (master, FullSpecialize) | This PR (AutoSpecialize + type erasure) | Speedup |
| --- | --- | --- | --- |
| Tsit5 (non-stiff) | 2.049s | 0.817s | 2.5x |
| Rodas5 (stiff) | 3.273s | 0.672s | 4.9x |

Why it works

On master, each MTK model produces an ODEFunction with unique type parameters for initialization_data (concrete OverrideInitData{...} vs Nothing), forcing the solver to recompile promote_f, unwrapped_f, and all downstream code for every new model.

This PR widens initialization_data and nlstep_data type parameters to their upper bounds (Union{Nothing, OverrideInitData} and Union{Nothing, ODENLStepData}), making all AutoSpecialize ODEFunctions share identical types for these parameters. Combined with the existing AutoSpecialize FunctionWrapper machinery, this lets the solver reuse compiled code across different models.

Type analysis

# Baseline (master): types differ per model → recompilation
prob1.f specialization: FullSpecialize
prob1 ID param: OverrideInitData{NonlinearProblem{...}, ...}  # concrete, model-specific
prob2 ID param: OverrideInitData{NonlinearProblem{...}, ...}  # different concrete type

# This PR: types match → compilation reuse
prob1.f specialization: AutoSpecialize
prob1 ID param: Union{Nothing, OverrideInitData}  # widened, same for all models
prob2 ID param: Union{Nothing, OverrideInitData}  # identical

@ChrisRackauckas-Claude
Author

TTFX Breakdown: init vs solve!

Detailed breakdown of where time goes for AutoSpecialize MTK ODEProblems:

First model (Tsit5)

| Phase | Time |
| --- | --- |
| init (promote_f + integrator construction) | 3.08s |
| solve! (stepping) | 1.07s |
| Total | 4.15s |

Second model, same structure (Tsit5)

| Phase | Time |
| --- | --- |
| init | 0.84s |
| solve! | 0.0s |
| Total | 0.84s |

First model (Rodas5P)

| Phase | Time |
| --- | --- |
| init | 1.86s |
| solve! | 1.91s |
| Total | 3.77s |

Second model (Rodas5P)

| Phase | Time |
| --- | --- |
| init | 0.57s |
| solve! | 0.03s |
| Total | 0.60s |

What AutoSpecialize gives us

  • solve! drops to ~0s for second model — the solver stepping code is fully shared via FunctionWrappersWrapper. This is the core AutoSpecialize win.
  • init still costs ~0.8s per new model because the init path sees the full ODEFunction{..., GFW{hash}, ...} type before promote_f wraps it. Each model has a unique GeneratedFunctionWrapper hash baked into the type, so init recompiles per model.
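The phase split in the tables above can be reproduced with the standard `init`/`solve!` API (sketch; `prob` stands for any AutoSpecialize MTK `ODEProblem`):

```julia
using OrdinaryDiffEq

# Split one solve into its two phases and time each separately
t_init = @elapsed integ = init(prob, Tsit5())  # promote_f + integrator construction
t_step = @elapsed solve!(integ)                # stepping only
println("init: $(t_init)s, solve!: $(t_step)s")
```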

What a @compile_workload with solve() would add

If a downstream package (or MTK extension) added a full solve() to its precompile workload:

  • First-solve solve! cost (1.07s Tsit5, 1.91s Rodas5P) → ~0s (saved to sysimage)
  • First-solve init cost (3.08s Tsit5, 1.86s Rodas5P) → partially precompiled, but still model-specific due to unique F type
  • Estimated first-solve: ~3.1s Tsit5 (down from 4.1s), ~1.9s Rodas5P (down from 3.7s)
  • Second-model solve unchanged: ~0.84s Tsit5, ~0.6s Rodas5P

Remaining bottleneck

The init path is the dominant remaining cost. The only differing type parameter between same-structure models is F (param 3) — the GeneratedFunctionWrapper with model-specific RuntimeGeneratedFunction hashes. MTKParameters, u0, and tspan types are all identical across models. To further reduce second-model TTFX below ~0.8s, promote_f wrapping would need to happen before init (i.e., at ODEProblem construction time in MTK), so init only ever sees the uniform FunctionWrappersWrapper type.

Note: the wrapfun_iip precompile currently in precompile.jl (lines 118-126) was measured to have negligible impact (~0.1s within noise), likely because wrapfun_iip alone is cheap and the real cost is in the broader init dispatch chain.

Bump SciMLBase lower bound to 2.144.0 for `widen_bounded_type_params`.
Remove dead comment block about downstream precompile workload.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
_u0 = prob_precompile.u0
_p = prob_precompile.p
_t = prob_precompile.tspan[1]
DiffEqBase.wrapfun_iip(_f_unwrapped, (_u0, _u0, _p, _t))
Member:
wrapfun_iip won't handle dual p right? Is there a hook to make that work?

Member:
dual p is fine?

Member:
What if someone calls remake where p has duals?

Member:
it rewraps for duals

Member:
At least SciMLBase/DiffEqBase has a bunch of auto-unwrap / rewrap stuff. Worth testing on MTK generated codes because it wraps a bit earlier.

@ChrisRackauckas-Claude
Author

remake with duals on AutoSpecialize MTK problems — verified working

Tested the concern from the review thread about whether remake properly handles dual parameters with AutoSpecialize.

Test results

| Test | Result |
| --- | --- |
| Basic solve (AutoSpecialize) | Pass |
| ForwardDiffSensitivity gradient | Pass — identical to FullSpecialize |
| SciMLStructures tunable repack path | Pass |
| Manual remake(prob; p = Dual(...)) + solve | Pass — duals propagate correctly |
| Raw ForwardDiff (no sensealg) | Pass — identical to FullSpecialize |

Gradients match exactly between AutoSpecialize and FullSpecialize: [2.207, -5.119, 0.740, -1.455]

Why it works

With AutoSpecialize, function wrapping is deferred to solve time via promote_f, not done at problem construction:

  1. remake(prob; p = dual_p): specialization(f) === AutoSpecialize (not FunctionWrapperSpecialize), so the rewrap block at remake.jl:259 does NOT trigger. This is correct because f.f is still the unwrapped GeneratedFunctionWrapper.

  2. solve(remade_prob): Calls promote_f(f, Val(AutoSpecialize), u0, dual_p, t, ...) → wrapfun_iip(f.f, (u0, u0, dual_p, t), Val(CS)). This creates FunctionWrappers where T3 (the parameter type, Vector{Dual{...}}) is used in all 4 arglist variants. Dual variants for du/u/t are generated on top of that for Jacobian computation.

  3. No rewrap needed in remake because the wrapping hasn't happened yet — it happens fresh at each solve call with the correct types.

Test script

using ModelingToolkit, OrdinaryDiffEq, SciMLBase, ForwardDiff, SciMLSensitivity, SciMLStructures
using ModelingToolkit: t_nounits as t, D_nounits as D

@variables x(t) y(t)
@parameters α β δ γ
eqs = [D(x) ~ α*x - β*x*y, D(y) ~ δ*x*y - γ*y]
@mtkbuild sys = System(eqs, t)

prob = ODEProblem{true, SciMLBase.AutoSpecialize}(
    sys, [x => 1.0, y => 1.0], (0.0, 1.0), [α => 1.5, β => 1.0, δ => 3.0, γ => 1.0])

# ForwardDiff through remake + solve
function loss(p_vals)
    _prob = remake(prob; p = [α => p_vals[1], β => p_vals[2], δ => p_vals[3], γ => p_vals[4]])
    sol = solve(_prob, Tsit5())
    return sum(sol[end])
end
grad = ForwardDiff.gradient(loss, [1.5, 1.0, 3.0, 1.0])  # works, matches FullSpecialize

# Direct dual remake
tunable, repack, _ = SciMLStructures.canonicalize(SciMLStructures.Tunable(), prob.p)
dual_p = repack(ForwardDiff.Dual.(tunable, 1.0))
prob_dual = remake(prob; p = dual_p)
sol_dual = solve(prob_dual, Tsit5())  # works, duals propagate correctly

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>

@AayushSabharwal
Member

One of the tests in ModelingToolkit/InterfaceI (specifically, when running lib/ModelingToolkitBase/test/odesystem.jl) errors with

LoadError: No matching function wrapper was found!

A test in ModelingToolkit/Initialization also fails. The failing test checks the type-stability of remake.

@ChrisRackauckas-Claude
Author

Investigating reported test failures

@AayushSabharwal Thanks for the report. I've been investigating both failures.

Failure 2: @inferred remake — Reproduced and root-caused

The @inferred remake(prob; u0 = ..., p = ...) test in initializationsystem.jl:1720-1728 fails because widen_bounded_type_params introduces abstract type parameters (Union{Nothing, OverrideInitData}, Union{Nothing, ODENLStepData}) at problem construction time. Julia's type inference can't resolve remake's return type through the keyword constructor → _reconstruct_as_type chain.

Fix plan: Move type erasure from ODEFunction construction time to unwrapped_f in SciMLBase. The original problem keeps concrete types (remake stays type-stable), and type erasure happens only at promote_f/unwrapped_f time (solver still shares compiled code). Specifically:

  1. Remove widen_bounded_type_params call from odeproblem.jl:98-100
  2. Modify unwrapped_f(f::ODEFunction, newf) in SciMLBase to call widen_bounded_type_params on the result when specialization(f) === AutoSpecialize

This preserves the compilation caching benefit while keeping remake type-stable.

Failure 1: "No matching function wrapper" — Could not reproduce

I tested the following cases with the PR branch + latest SciMLBase and all passed:

  • tofloat=false with k2::Int parameter
  • Float32 static arrays (OOP)
  • Standard IIP Lorenz
  • Function-valued parameters with wrap_gfw
  • Mass matrix system with Rodas5P

Could you share the specific test that errors, or the full stacktrace? That would help me reproduce it. Is it possible it's a transient issue or specific to a different DiffEqBase version?

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>

@AayushSabharwal
Member

This preserves the compilation caching benefit while keeping remake type-stable.

Benchmark the changes that you suggest will fix the @inferred remake issue. If there is a performance regression, this test could be made to use FullSpecialize. It would be better to have remake always infer properly.

Could you share the specific test that errors, or the full stacktrace? That would help me reproduce it. Is it possible it's a transient issue or specific to a different DiffEqBase version?

https://github.com/SciML/ModelingToolkit.jl/blob/master/lib/ModelingToolkitBase/test/odesystem.jl#L335

This is a link to the failing test. Attempt to reproduce the error, identify the root cause, and fix it.

The type widening broke `@inferred remake` and `@inferred solve` because
Union type parameters (Union{Nothing, OverrideInitData}, Union{Nothing,
ODENLStepData}) are not inferrable. Without widening, the concrete types
flow through unwrapped_f/promote_f naturally, and both `@inferred remake`
and `@inferred solve` work correctly with AutoSpecialize.

The tradeoff is that different models with different initialization_data
types will produce different ODEFunction types after promote_f, but the
RHS function is still type-erased through FunctionWrappersWrapper which
is the primary goal of AutoSpecialize.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@ChrisRackauckas-Claude
Author

Fix for @inferred remake and @inferred solve

Pushed commit f736327 that removes widen_bounded_type_params from the ODEFunction constructor.

Root cause

The widen_bounded_type_params call converted concrete type parameters (OverrideInitData{...}, Nothing) into abstract Union types (Union{Nothing, OverrideInitData}, Union{Nothing, ODENLStepData}). These Union types are not inferrable by the compiler, causing both @inferred remake and @inferred solve to fail.

Fix

Simply don't widen. The concrete types flow through unwrapped_f/promote_f naturally, and the compiler can infer everything.

Test results (all pass)

Specialization: SciMLBase.AutoSpecialize

=== Test @inferred remake ===
PASS

=== Test @inferred solve(prob) [no algorithm] ===
PASS

=== Test @inferred solve(prob, Tsit5()) ===
PASS

=== Test ForwardDiff gradient ===
Gradient: [1.301168949761946, -1.3011689497619456, -0.38177314497444953, 0.38177314497444914]
PASS

Tradeoff

Without type widening, different models with different initialization_data types produce different ODEFunction types after promote_f. This means the solver method will be specialized per-model-init-data-type, rather than sharing a single compiled version. However, the RHS function is still type-erased through FunctionWrappersWrapper, which is the primary compilation-saving goal of AutoSpecialize.

Note on the "No matching function wrapper" error

I still cannot reproduce the FunctionWrapper error @AayushSabharwal reported. Could you share a minimal reproducer or the full test output?

@AayushSabharwal
Member

Without type widening, different models with different initialization_data types produce different ODEFunction types after promote_f. This means the solver method will be specialized per-model-init-data-type, rather than sharing a single compiled version. However, the RHS function is still type-erased through FunctionWrappersWrapper, which is the primary compilation-saving goal of AutoSpecialize.

In this case it's probably best to change the test so that it checks @inferred remake on a FullSpecialized ODEProblem @ChrisRackauckas? Given that the initialization data types can easily change even with a change in operating point, not doing the union is basically erasing the benefits. @ChrisRackauckas-Claude please run benchmarks to verify.

I still cannot reproduce the FunctionWrapper error @AayushSabharwal reported. Could you share a minimal reproducer or the full test output?

Try

@parameters k₁ k₂ k₃ κ
@variables y₁(t) y₂(t) y₃(t)
# reorder the system just to be a little spicier
eqs = [
    D(y₁) ~ -k₁ * y₁ + k₃ * y₂ * y₃,
    0 ~ y₁ + y₂ + y₃ - 1,
    D(y₂) ~ k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃ * κ,
]
@named sys = System(eqs, t, initial_conditions = [k₁ => 100, k₂ => 3.0e7, y₁ => 1.0])
sys = complete(sys)
u0 = Pair[]
push!(u0, y₂ => 0.0)
push!(u0, y₃ => 0.0)
p = [
    k₁ => 0.04,
    k₃ => 1.0e4,
]
p2 = [
    k₁ => 0.04,
    k₂ => 3.0e7,
    k₃ => 1.0e4,
]
tspan = (0.0, 100000.0)
prob1 = ODEProblem(sys, [u0; p], tspan)
prob2 = ODEProblem(sys, [u0; p], tspan, jac = true)
prob3 = ODEProblem(sys, [u0; p], tspan, jac = true, sparsity = true)
for (prob, atol) in [(prob1, 1.0e-12), (prob2, 1.0e-12), (prob3, 1.0e-12)]
    local sol
    sol = solve(prob, Rodas5())
    @test all(x -> isapprox(sum(x), 1.0, atol = atol), sol.u)
end

Does this reproduce the error?

Benchmarks show that removing widen_bounded_type_params causes 3x slower
second-model solves (2.3s vs 0.8s for Tsit5, 3.1s vs 0.7s for Rodas5)
because different models produce different ODEFunction types, preventing
compilation sharing across models.

Keep widen_bounded_type_params for AutoSpecialize to preserve compilation
sharing. Change the @inferred remake/solve test to use FullSpecialize,
since Union type parameters in AutoSpecialize are by design.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
@ChrisRackauckas-Claude
Author

Benchmark results and updated approach

@AayushSabharwal was right — removing widen_bounded_type_params breaks compilation sharing between different models. Benchmarks comparing second-model solve times with Tsit5 and Rodas5:

| Solve | Without widen_bounded_type_params | With widen_bounded_type_params |
| --- | --- | --- |
| 2nd model, Tsit5 | 2.3s (99.9% compilation) | 0.8s (99.4% compilation) |
| 3rd model, Tsit5 | 2.4s (99.9% compilation) | 0.9s (99.4% compilation) |
| 2nd model, Rodas5 | 3.1s (99.9% compilation) | 0.8s (99.1% compilation) |
| 3rd model, Rodas5 | 3.3s (99.9% compilation) | 0.6s (98.9% compilation) |

Without widening, different models produce different ODEFunction types (due to different `initialization_data` types), preventing the solver from reusing compiled code.

Changes in this PR (commit f0eef69):

  1. Restored widen_bounded_type_params — keeps compilation sharing for AutoSpecialize
  2. Changed @inferred test to use FullSpecialize — since AutoSpecialize uses Union types by design, type inference tests should use FullSpecialize

Companion PR: DiffEqBase.jl#1288

Fixes the "No matching function wrapper was found!" error for jac=true, sparsity=true with Rodas5(). Root cause: promote_f only wrapped the Jacobian with a dense Matrix{Float64} signature, but build_J_W in OrdinaryDiffEqDifferentiation creates a SparseMatrixCSC J from the sparsity pattern when using AutoSparse. The fix creates dual-signature FunctionWrappers (both dense and sparse) using Base.promote_op to compute types without allocating.

SciML/DiffEqBase.jl#1288
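A hedged sketch of the dual-signature idea (the function and variable names here are illustrative, not DiffEqBase's actual code): wrap the Jacobian once for the dense `Matrix{Float64}` case and once for the `SparseMatrixCSC` that `build_J_W` may pass, with `Base.promote_op` computing the sparse type without allocating it:

```julia
using SparseArrays, FunctionWrappers
const FW = FunctionWrappers.FunctionWrapper

# Sketch: build FunctionWrappers for both J signatures a solver might use.
function wrap_jac_dual_signature(jac!, u::Vector{Float64}, p, t,
                                 sparsity::SparseMatrixCSC)
    P, T = typeof(p), typeof(t)
    # Dense signature: J::Matrix{Float64}
    dense_fw = FW{Nothing, Tuple{Matrix{Float64}, Vector{Float64}, P, T}}(
        (J, u, p, t) -> (jac!(J, u, p, t); nothing))
    # Sparse signature: infer the Float64 sparse matrix type without allocating
    JS = Base.promote_op(similar, typeof(sparsity), Type{Float64})
    sparse_fw = FW{Nothing, Tuple{JS, Vector{Float64}, P, T}}(
        (J, u, p, t) -> (jac!(J, u, p, t); nothing))
    return (dense_fw, sparse_fw)
end
```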

Requires the sparse Jacobian FunctionWrapper fix from DiffEqBase#1288
which handles AutoSpecialize with jac=true, sparsity=true.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
@AayushSabharwal
Member

Somehow MTK/InterfaceII timed out, both on 1 and lts. I don't necessarily see anything problematic in the PR, so I'll rerun the testset in case it was one-off.

@AayushSabharwal merged commit 3c35b84 into SciML:master on Feb 24, 2026
58 of 113 checks passed
ChrisRackauckas-Claude pushed a commit to ChrisRackauckas-Claude/ModelingToolkit.jl that referenced this pull request Feb 27, 2026
Add package extensions for OrdinaryDiffEqDefault, OrdinaryDiffEqRosenbrock,
and OrdinaryDiffEqBDF that precompile the full solve() path for standard
MTK ODE problems using PrecompileTools.@compile_workload.

Each extension builds a simple 1-state ODE system via mtkcompile and
solves it with the respective solver (default, Rodas5P, FBDF) during
precompilation. This caches the compiled native code so that the first
solve() call at runtime avoids the multi-second compilation cost.

This follows the pattern suggested in PR SciML#4335, where the precompile
workload in ModelingToolkitBase was limited to ODEProblem construction
and wrapfun_iip because it has no solver dependency.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
cmhyett pushed a commit to cmhyett/ModelingToolkit.jl that referenced this pull request Feb 27, 2026