optimize lagrangian implementation in oop dispatch for autosparse backends #134
Conversation
If multiple dispatch is an option, you could add a method that evaluates … If MD is not an option, you could use the …
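A toy sketch of the multiple-dispatch idea. `ToyTracer` and `select_branch` are invented stand-ins for illustration (not the real SparseConnectivityTracer types): a generic method skips the cost term when σ == 0, while a tracer-specific method always takes the full path so the detected sparsity pattern stays complete.

```julia
# Hypothetical stand-in for an SCT tracer type (illustration only)
struct ToyTracer end

# Generic method: with plain numeric inputs, skip the cost term when σ == 0
select_branch(θ::AbstractVector{<:Real}, σ) = iszero(σ) ? :constraints_only : :full

# Tracer method: always evaluate the full Lagrangian so the sparsity pattern is complete
select_branch(θ::AbstractVector{ToyTracer}, σ) = :full
```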
Julia will also evaluate both branches, negating the optimization.
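A minimal demonstration of that point: `ifelse` is an ordinary function, so both of its arguments are evaluated before the selection happens, whereas an `if` block short-circuits.

```julia
calls = Ref(0)                       # counts evaluations of the "expensive" term
expensive() = (calls[] += 1; 1.0)

σ = 0.0

# `if` short-circuits: expensive() is never called when σ == 0
a = if !iszero(σ)
    σ * expensive()
else
    0.0
end
# calls[] is still 0 here

# `ifelse` is a plain function call: both arguments are evaluated first
b = ifelse(!iszero(σ), σ * expensive(), 0.0)
# calls[] is now 1, even though the false branch was selected
```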
Is sigma related to the input here? Like, if x was a dual number, would sigma be one too? Or is it more of a fixed external parameter? Because depending on the answer, the …
not sure why I closed
It's up to the optimizer, so we cannot tell in general. I guess we could do something like

```julia
function lagrangian(θ, σ, λ, p)
    if eltype(θ) <: SCT.TracerType || !iszero(σ)
        return σ * f.f(θ, p) + dot(λ, cons_oop(θ))
    else
        return dot(λ, cons_oop(θ))
    end
end
```
That should be sufficient.
This branch exists for most cases, but it looks like the oop case with autosparse was missed, so I have changed the title to reflect that.
Do we need a compat bound on something based on the test failure?
The new title isn't correct; it's also optimizing the non-autosparse cases...
No, we just need to keep pinging @vchuravy until the type piracy of Base goes away 😅
If you're pointing to this PR, then yes, it does the non-sparse case...
σ = 0 is a common special case and it makes sense to optimize for it by not calling the cost function in this case.
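A self-contained sketch of that shortcut, using toy cost and constraint functions invented for illustration (not the package's real ones), with a counter showing that the cost function is never called when σ == 0:

```julia
using LinearAlgebra

ncalls = Ref(0)
cost(θ, p) = (ncalls[] += 1; sum(abs2, θ))   # toy objective that counts its own calls
cons_oop(θ) = [θ[1] * θ[2]]                  # toy out-of-place constraints

# Sketch of the σ = 0 shortcut: skip the cost term entirely when σ == 0
function lagrangian(θ, σ, λ, p)
    if iszero(σ)
        return dot(λ, cons_oop(θ))
    else
        return σ * cost(θ, p) + dot(λ, cons_oop(θ))
    end
end

θ, λ = [1.0, 2.0], [3.0]
L0 = lagrangian(θ, 0.0, λ, nothing)   # cost(θ, p) is never evaluated here
L1 = lagrangian(θ, 1.0, λ, nothing)   # full Lagrangian: cost evaluated once
```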