const RECOMPILE_BY_DEFAULT = true

"""
$(TYPEDEF)

Supertype for the specialization types. Controls the compilation and
function specialization behavior of SciMLFunctions, ultimately controlling
the runtime vs compile-time trade-off.
"""
abstract type AbstractSpecialization end

"""
$(TYPEDEF)

The default specialization level for problem functions. `AutoSpecialize`
works by applying a function wrap just-in-time before the solve process
to disable just-in-time re-specialization of the solver to the specific
choice of model `f` and thus allow for using a cached solver compilation
from a different `f`. This wrapping process can lead to a small decrease in
runtime performance with the benefit of a greatly decreased compile time.

## Note About Benchmarking and Runtime Optimality

It is recommended that `AutoSpecialize` is not used in any benchmarking
due to the potential effect of function wrapping on runtimes. `AutoSpecialize`'s
use case is targeted at decreased latency for REPL performance and
not for cases where top runtime performance is required (such as in
optimization loops). Generally, for non-stiff equations the cost will be minimal
and potentially not even measurable. For stiff equations, function wrapping
has the limitation that only chunk size 1 Dual numbers are allowed, which
can decrease Jacobian construction performance.

## Limitations of `AutoSpecialize`

The following limitations are not fundamental to the implementation of `AutoSpecialize`,
but are instead chosen as a compromise between default precompilation times and
ease of maintenance. Please open an issue to discuss lifting any potential
limitations.

* `AutoSpecialize` is only set up to wrap the functions from in-place ODEs. Other
  cases are excluded for the time being due to time limitations.
* `AutoSpecialize` will only lead to compilation reuse if the ODEFunction's other
  functions (such as `jac` and `tgrad`) are the default `nothing`. These could be
  JIT wrapped as well in a future version.
* `AutoSpecialize`'d functions are only compatible with Jacobian calculations
  performed with chunk size 1, and only with tag `DiffEqBase.OrdinaryDiffEqTag()`.
  Thus ODE solvers written on the common interface must be careful to detect
  the `AutoSpecialize` case and perform differentiation under these constraints,
  use finite differencing, or manually unwrap before solving. This will lead
  to decreased runtime performance for sufficiently large Jacobians.
* `AutoSpecialize` only wraps on Julia v1.8 and higher.
* `AutoSpecialize` does not handle cases with units. If unitful values are detected,
  wrapping is automatically disabled.
* `AutoSpecialize` only wraps cases for which `promote_rule` is defined between `u0`
  and dual numbers, `u0` and `t`, and for which `ArrayInterface.promote_eltype`
  is defined on `u0` to dual numbers.
* `AutoSpecialize` only wraps cases for which `f.mass_matrix isa UniformScaling`, the
  default.
* `AutoSpecialize` does not wrap cases where `f isa AbstractSciMLOperator`.
* By default, only the `u0 isa Vector{Float64}`, `eltype(tspan) isa Float64`, and
  `typeof(p) isa Union{Vector{Float64},SciMLBase.NullParameters}` cases are specialized
  by the solver libraries. Other forms can be specialized with
  `AutoSpecialize`, but this must be done in the precompilation of downstream libraries.
* `AutoSpecialize`'d functions are manually unwrapped in adjoint methods in
  SciMLSensitivity.jl in order to allow compiler support for automatic differentiation.
  Improved versions of adjoints which decrease the recompilation surface will come
  in non-breaking updates.

Cases where automatic wrapping is disabled are equivalent to `FullSpecialize`.

## Example

```julia
f(du,u,p,t) = (du .= u)

# Note this is the same as ODEProblem(f, [1.0], (0.0,1.0))
# if no preferences are set
ODEProblem{true, SciMLBase.AutoSpecialize}(f, [1.0], (0.0,1.0))
```
"""
struct AutoSpecialize <: AbstractSpecialization end

"""
$(TYPEDEF)

`NoSpecialize` forces SciMLFunctions to not specialize on the types
of functions wrapped within it. This ultimately contributes to a
form such that every `prob.f` type is the same, meaning compilation
caches are fully reused, with the downside of losing runtime performance.
`NoSpecialize` is the form that most fully trades off runtime for compile
time. Unlike `AutoSpecialize`, `NoSpecialize` can be used with any
`SciMLFunction`.

## Example

```julia
f(du,u,p,t) = (du .= u)
ODEProblem{true, SciMLBase.NoSpecialize}(f, [1.0], (0.0,1.0))
```
"""
struct NoSpecialize <: AbstractSpecialization end

"""
$(TYPEDEF)

`FunctionWrapperSpecialize` is an eager wrapping choice which
performs the function wrapping during the `ODEProblem` construction.
This performs the function wrapping at the earliest possible point,
giving the best compile-time vs runtime performance, but with the
difficulty that any usage of `prob.f` needs to account for the
function wrapper's presence. While optimal in a performance sense,
this method has many usability issues with nonstandard solvers
and analyses, as it requires unwrapping before re-wrapping for any
type changes. Thus this method is not used by default. Given that
the compile-time difference from `AutoSpecialize` is almost undetectable,
this method is mostly used as a "speed of light" benchmarking reference
for `AutoSpecialize`.

## Limitations of `FunctionWrapperSpecialize`

`FunctionWrapperSpecialize` has all of the limitations of `AutoSpecialize`,
but also includes the following limitations:

* `prob.f` is directly specialized to the types of `(u,p,t)`, and any usage
  of `prob.f` on other types first requires using
  `SciMLBase.unwrapped_f(prob.f)` to remove the function wrapper.
* `FunctionWrapperSpecialize` can only be used by the `ODEProblem` constructor.
  If an `ODEFunction` is being constructed, the user must manually use
  `DiffEqBase.wrap_iip` on `f` before calling
  `ODEFunction{true,FunctionWrapperSpecialize}(f)`. This is a fundamental
  limitation of the approach, as the types of `(u,p,t)` are required in the
  construction process and are not accessible in the `AbstractSciMLFunction` constructors.

## Example

```julia
f(du,u,p,t) = (du .= u)
ODEProblem{true, SciMLBase.FunctionWrapperSpecialize}(f, [1.0], (0.0,1.0))
```
"""
struct FunctionWrapperSpecialize <: AbstractSpecialization end

"""
$(TYPEDEF)

`FullSpecialize` is an eager specialization choice which
directly types the `AbstractSciMLFunction` struct to match the type
of the model `f`. This forces recompilation of the solver on each
new function type `f`, leading to the longest compile times with the
benefit of having the best runtime performance.

`FullSpecialize` should be used in all cases where top runtime performance
is required, such as in long-running simulations and benchmarking.

## Example

```julia
f(du,u,p,t) = (du .= u)
ODEProblem{true, SciMLBase.FullSpecialize}(f, [1.0], (0.0,1.0))
```
"""
struct FullSpecialize <: AbstractSpecialization end

specstring = Preferences.@load_preference("SpecializationLevel", "AutoSpecialize")
if specstring ∉
   ("NoSpecialize", "FullSpecialize", "AutoSpecialize", "FunctionWrapperSpecialize")
    error("SpecializationLevel preference $specstring is not in the allowed set of choices (NoSpecialize, FullSpecialize, AutoSpecialize, FunctionWrapperSpecialize).")
end
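
# For reference, downstream users can change this default via Preferences.jl.
# A sketch (assumes the standard Preferences.jl API; the value must be one of
# the four strings checked above, and a Julia restart is required to take effect):
#
# ```julia
# using Preferences, SciMLBase
#
# # Persist the "SpecializationLevel" preference for SciMLBase in
# # LocalPreferences.toml, then restart Julia to pick it up.
# set_preferences!(SciMLBase, "SpecializationLevel" => "NoSpecialize")
# ```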

const DEFAULT_SPECIALIZATION = getproperty(SciMLBase, Symbol(specstring))

function DEFAULT_OBSERVED(sym, u, p, t)
    error("Indexing symbol $sym is unknown.")
end

function DEFAULT_OBSERVED_NO_TIME(sym, u, p)
    error("Indexing symbol $sym is unknown.")
end

function Base.summary(io::IO, prob::AbstractSciMLFunction)
    type_color, no_color = get_colorizers(io)
    print(io,
        type_color, nameof(typeof(prob)),
        no_color, ". In-place: ",
        type_color, isinplace(prob),
        no_color)
end

const NONCONFORMING_FUNCTIONS_ERROR_MESSAGE = """
Nonconforming functions detected. If a model function `f` is defined
as in-place, then all constituent functions like `jac` and `paramjac`
must be in-place (and vice versa with out-of-place). Detected that
some overloads did not conform to the same convention as `f`.
"""

struct NonconformingFunctionsError <: Exception
    nonconforming::Vector{String}
end

function Base.showerror(io::IO, e::NonconformingFunctionsError)
    println(io, NONCONFORMING_FUNCTIONS_ERROR_MESSAGE)
    print(io, "Nonconforming functions: ")
    printstyled(io, e.nonconforming; bold = true, color = :red)
end

const INTEGRAND_MISMATCH_FUNCTIONS_ERROR_MESSAGE = """
Nonconforming functions detected. If an integrand function `f` is defined
as out-of-place (`f(u,p)`), then no `integrand_prototype` can be passed into the
function constructor. Likewise, if `f` is defined as in-place (`f(out,u,p)`), then
an `integrand_prototype` is required. Either change the use of the function
constructor or define the appropriate dispatch for `f`.
"""

struct IntegrandMismatchFunctionError <: Exception
    iip::Bool
    integrand_passed::Bool
end

function Base.showerror(io::IO, e::IntegrandMismatchFunctionError)
    println(io, INTEGRAND_MISMATCH_FUNCTIONS_ERROR_MESSAGE)
    print(io, "Mismatch: IIP=")
    printstyled(io, e.iip; bold = true, color = :red)
    print(io, ", Integrand passed=")
    printstyled(io, e.integrand_passed; bold = true, color = :red)
end

"""
$(TYPEDEF)
"""
abstract type AbstractODEFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of an ODE function `f`, defined by:

```math
M \frac{du}{dt} = f(u,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
ODEFunction{iip,specialize}(f;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the function `f` itself is required. This function should
be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.

All of the remaining functions are optional, for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required; see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an `AbstractArray` or an `AbstractSciMLOperator`.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT = tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``.
- `jac(J,u,p,t)` or `J = jac(u,p,t)`: returns ``\frac{df}{du}``.
- `jvp(Jv,v,u,p,t)` or `Jv = jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``.
- `vjp(Jv,v,u,p,t)` or `Jv = vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``.
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype, and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

`iip` is the optional boolean for determining whether a given function is written to
be used in-place or out-of-place. In-place functions are `f!(du,u,p,t)`, where the return
value is ignored and the result is expected to be mutated into the value of `du`. Out-of-place
functions are `du = f(u,p,t)`.

Normally, this is determined automatically by looking at the method table for `f` and seeing
the maximum number of arguments in available dispatches. For this reason, the constructor
`ODEFunction(f)` generally works (but is type-unstable). However, for type-stability or
to enforce correctness, this option is passed via `ODEFunction{true}(f)`.

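For instance, the same model can be declared either way (a minimal sketch):

```julia
f_iip(du,u,p,t) = (du .= u)  # in-place: mutates du, return value ignored
f_oop(u,p,t) = u             # out-of-place: returns the derivative

ODEFunction(f_iip)         # iip detected from the method table (type-unstable)
ODEFunction{true}(f_iip)   # explicitly in-place, type-stable
ODEFunction{false}(f_oop)  # explicitly out-of-place
```
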
## specialize: Controlling Compilation and Specialization

The `specialize` parameter controls the specialization level of the `ODEFunction`
on the function `f`. This allows for a trade-off between compile-time and runtime performance.
The available specialization levels are:

* `SciMLBase.AutoSpecialize`: this form performs a lazy function wrapping on the
  functions of the ODE in order to stop recompilation of the ODE solver, while allowing
  `prob.f` to stay unwrapped for normal usage. This is the default specialization
  level and strikes a balance in compile-time vs runtime performance.
* `SciMLBase.FullSpecialize`: this form fully specializes the `ODEFunction` on the
  constituent functions that make up its fields. As such, each `ODEFunction` in this
  form is uniquely typed, requiring re-specialization and compilation for each new
  ODE definition. This form has the highest compile time, with the benefit of being the
  most optimal in runtime. This form should be preferred for long-running calculations
  (such as within optimization loops) and for benchmarking.
* `SciMLBase.NoSpecialize`: this form fully unspecializes the function types in the `ODEFunction`
  definition by using an `Any` type declaration. As a result, it can result in reduced runtime
  performance, but is the form that induces the least compile time.
* `SciMLBase.FunctionWrapperSpecialize`: this is an eager function wrapping form. It is
  unsafe with many solvers, and thus is mostly used for development testing.

For more details, see the
[specialization levels section of the SciMLBase documentation](https://docs.sciml.ai/SciMLBase/stable/interfaces/Problems/#Specialization-Levels).

## Fields

The fields of the `ODEFunction` type directly match the names of the inputs.

## More Details on Jacobians

The following example creates an in-place `ODEFunction` whose Jacobian is a `Diagonal`:

```julia
using LinearAlgebra
f = (du,u,p,t) -> du .= t .* u
jac = (J,u,p,t) -> (J[1,1] = t; J[2,2] = t; J)
jp = Diagonal(zeros(2))
fun = ODEFunction(f; jac=jac, jac_prototype=jp)
```

Note that the integrators will always make a deep copy of `fun.jac_prototype`, so
there's no worry of aliasing.

In general, the Jacobian prototype can be anything that has `mul!` defined, in
particular sparse matrices or custom lazy types that support `mul!`. A special case
is when the `jac_prototype` is an `AbstractSciMLOperator`, in which case you
do not need to supply `jac`, as it is automatically set to `update_coefficients!`.
Refer to the AbstractSciMLOperators documentation for more information
on setting up time/parameter dependent operators.

## Examples

### Declaring Explicit Jacobians for ODEs

The most standard case, declaring a function for a Jacobian, is done by overloading
the function `f(du,u,p,t)` with an in-place updating function for the Jacobian:
`f_jac(J,u,p,t)` where the value type is used for dispatch. For example,
take the Lotka-Volterra model:

```julia
function f(du,u,p,t)
    du[1] = 2.0 * u[1] - 1.2 * u[1]*u[2]
    du[2] = -3 * u[2] + u[1]*u[2]
end
```

To declare the Jacobian, we simply add the dispatch:

```julia
function f_jac(J,u,p,t)
    J[1,1] = 2.0 - 1.2 * u[2]
    J[1,2] = -1.2 * u[1]
    J[2,1] = 1 * u[2]
    J[2,2] = -3 + u[1]
    nothing
end
```

Then we can supply the Jacobian with our ODE as:

```julia
ff = ODEFunction(f; jac=f_jac)
```

and use this in an `ODEProblem`:

```julia
prob = ODEProblem(ff, ones(2), (0.0,10.0))
```

## Symbolically Generating the Functions

See the `modelingtoolkitize` function from
[ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
automatically symbolically generating the Jacobian and more from the
numerically-defined functions.
"""
struct ODEFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, WP, TPJ,
    O, TCV,
    SYS, IProb, IProbMap} <: AbstractODEFunction{iip}
    f::F
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    W_prototype::WP
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
    initializeprob::IProb
    initializeprobmap::IProbMap
end

@doc doc"""
$(TYPEDEF)

A representation of a split ODE function `f`, defined by:

```math
M \frac{du}{dt} = f_1(u,p,t) + f_2(u,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

Generally, for ODE integrators the `f_1` portion should be considered the
"stiff portion of the model", with larger timescale separation, while the
`f_2` portion should be considered the "non-stiff portion". This interpretation
is directly used in integrators like IMEX (implicit-explicit) integrators
and exponential integrators.

## Constructor

```julia
SplitFunction{iip,specialize}(f1,f2;
    mass_matrix = __has_mass_matrix(f1) ? f1.mass_matrix : I,
    analytic = __has_analytic(f1) ? f1.analytic : nothing,
    tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
    jac = __has_jac(f1) ? f1.jac : nothing,
    jvp = __has_jvp(f1) ? f1.jvp : nothing,
    vjp = __has_vjp(f1) ? f1.vjp : nothing,
    jac_prototype = __has_jac_prototype(f1) ? f1.jac_prototype : nothing,
    sparsity = __has_sparsity(f1) ? f1.sparsity : jac_prototype,
    paramjac = __has_paramjac(f1) ? f1.paramjac : nothing,
    colorvec = __has_colorvec(f1) ? f1.colorvec : nothing,
    sys = __has_sys(f1) ? f1.sys : nothing)
```

Note that only the functions `f_i` themselves are required. These functions should
be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.

All of the remaining functions are optional, for improving or accelerating
the usage of the `SplitFunction`. These include:

- `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required; see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an `AbstractArray` or an `AbstractSciMLOperator`.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT = tgrad(u,p,t)`: returns ``\frac{\partial f_1(u,p,t)}{\partial t}``.
- `jac(J,u,p,t)` or `J = jac(u,p,t)`: returns ``\frac{df_1}{du}``.
- `jvp(Jv,v,u,p,t)` or `Jv = jvp(v,u,p,t)`: returns the directional derivative ``\frac{df_1}{du} v``.
- `vjp(Jv,v,u,p,t)` or `Jv = vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df_1}{du}^\ast v``.
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype, and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df_1}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## Note on the Derivative Definition

The derivatives, such as the Jacobian, are only defined on the `f1` portion of the split ODE.
This is used to treat the `f1` portion implicitly while keeping the `f2` portion explicit.

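To make this convention concrete, here is a hedged sketch (a hypothetical scalar model; the keyword-constructor form shown in the Constructor section above is assumed):

```julia
# Split du/dt = -100u + sin(t): the stiff linear part f1 is treated
# implicitly (and carries the Jacobian), the mild forcing f2 explicitly.
f1!(du,u,p,t) = (du .= -100.0 .* u)
f2!(du,u,p,t) = (du .= sin(t))
jac1!(J,u,p,t) = (J[1,1] = -100.0; J)  # Jacobian of f1 only

sf = SplitFunction{true}(f1!, f2!; jac = jac1!)
```
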
## iip: In-Place vs Out-Of-Place

For more details on this argument, see the `ODEFunction` documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the `ODEFunction` documentation.

## Fields

The fields of the `SplitFunction` type directly match the names of the inputs.

## Symbolically Generating the Functions

See the `modelingtoolkitize` function from
[ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
automatically symbolically generating the Jacobian and more from the
numerically-defined functions. See `ModelingToolkit.SplitODEProblem` for
information on generating the `SplitFunction` from this symbolic engine.
"""
struct SplitFunction{
    iip, specialize, F1, F2, TMM, C, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt,
    TPJ, O,
    TCV, SYS} <: AbstractODEFunction{iip}
    f1::F1
    f2::F2
    mass_matrix::TMM
    cache::C
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

@doc doc"""
$(TYPEDEF)

A representation of an ODE function `f`, defined by:

```math
M \frac{du}{dt} = f(u,p,t)
```

as a partitioned ODE:

```math
M_1 \frac{du}{dt} = f_1(u,p,t)
M_2 \frac{du}{dt} = f_2(u,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DynamicalODEFunction{iip,specialize}(f1,f2;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the functions `f_i` themselves are required. These functions should
be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.

All of the remaining functions are optional, for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrices `M_i` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required; see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an `AbstractArray` or an `AbstractSciMLOperator`. Should be given as a tuple
  of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2,
  respectively.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT = tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``.
- `jac(J,u,p,t)` or `J = jac(u,p,t)`: returns ``\frac{df}{du}``.
- `jvp(Jv,v,u,p,t)` or `Jv = jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``.
- `vjp(Jv,v,u,p,t)` or `Jv = vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``.
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype, and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the `ODEFunction` documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the `ODEFunction` documentation.

## Fields

The fields of the `DynamicalODEFunction` type directly match the names of the inputs.
"""
struct DynamicalODEFunction{iip, specialize, F1, F2, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW,
    TWt, TPJ,
    O, TCV, SYS} <: AbstractODEFunction{iip}
    f1::F1
    f2::F2
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

"""
$(TYPEDEF)
"""
abstract type AbstractDDEFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of a DDE function `f`, defined by:

```math
M \frac{du}{dt} = f(u,h,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DDEFunction{iip,specialize}(f;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the function `f` itself is required. This function should
be given as `f!(du,u,h,p,t)` or `du = f(u,h,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling. The history function
`h` acts as an interpolator over time, i.e. `h(t)`, with options matching
the solution interface, i.e. `h(t; save_idxs = 2)`.
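
As a sketch of this convention (a hypothetical scalar model with a constant lag of `1.0`; the `h(t)` call form follows the interface described above):

```julia
# In-place DDE right-hand side: du/dt = -u(t - 1).
# h interpolates the past solution, so h(t - 1.0) is the state one time unit ago.
function f!(du, u, h, p, t)
    du[1] = -h(t - 1.0)[1]
end

fun = DDEFunction{true}(f!)
```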
684
685 All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required, see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an AbstractArray or an AbstractSciMLOperator.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
- `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
- `jvp(Jv,v,h,u,p,t)` or `Jv=jvp(v,h,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
- `vjp(Jv,v,h,u,p,t)` or `Jv=vjp(v,h,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,h,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the DDEFunction type directly match the names of the inputs.
"""
struct DDEFunction{
    iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV, SYS
} <: AbstractDDEFunction{iip}
    f::F
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

@doc doc"""
$(TYPEDEF)

A representation of a DDE function `f`, defined by:

```math
M \frac{du}{dt} = f(u,h,p,t)
```

as a partitioned ODE:

```math
M_1 \frac{du}{dt} = f_1(u,h,p,t)
M_2 \frac{du}{dt} = f_2(u,h,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DynamicalDDEFunction{iip,specialize}(f1,f2;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the functions `f_i` themselves are required. These functions should
be given as `f_i!(du,u,h,p,t)` or `du = f_i(u,h,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling. The history function
`h` acts as an interpolator over time, i.e. `h(t)`, with options matching
the solution interface, i.e. `h(t; save_idxs = 2)`.
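
As a minimal sketch of the `f_i!(du,u,h,p,t)` convention described above: the scalar
delay model and the constant history function `hist` below are illustrative
assumptions, not from this library.

```julia
# f1 depends on the delayed state via the history function h(p, t - tau);
# tau is taken from the parameter p (an illustrative choice).
function f1!(du, u, h, p, t)
    tau = p
    du[1] = -h(p, t - tau)[1]
    return nothing
end

# f2 depends only on the current state.
function f2!(du, u, h, p, t)
    du[1] = u[1]
    return nothing
end

# A history function with the interpolator-style calling convention h(p, t);
# here it is simply constant before the initial time.
hist(p, t) = [1.0]

du = zeros(1)
f1!(du, [2.0], hist, 0.5, 0.0)   # delayed term: du[1] == -1.0
```

The same model written out-of-place would be `f1(u, h, p, t) = [-h(p, t - p)[1]]`,
matching the `du = f_i(u,h,p,t)` form.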

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrices `M_i` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required, see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an AbstractArray or an AbstractSciMLOperator. Should be given as a tuple
  of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2
  respectively.
- `analytic(u0,h,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
- `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
- `jvp(Jv,v,u,h,p,t)` or `Jv=jvp(v,u,h,p,t)`: returns the directional derivative ``\frac{df}{du} v``
- `vjp(Jv,v,u,h,p,t)` or `Jv=vjp(v,u,h,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,h,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the DynamicalDDEFunction type directly match the names of the inputs.
"""
struct DynamicalDDEFunction{iip, specialize, F1, F2, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW,
    TWt, TPJ, O, TCV, SYS} <: AbstractDDEFunction{iip}
    f1::F1
    f2::F2
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

"""
$(TYPEDEF)
"""
abstract type AbstractDiscreteFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of a discrete dynamical system `f`, defined by:

```math
u_{n+1} = f(u,p,t_{n+1})
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DiscreteFunction{iip,specialize}(f;
    analytic = __has_analytic(f) ? f.analytic : nothing)
```

Note that only the function `f` itself is required. This function should
be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.
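
As a minimal sketch of the two calling conventions described above, using an
illustrative logistic-map update (an assumption, not from this library):

```julia
# Out-of-place: return the next state from the current one.
f(u, p, t) = p .* u .* (1 .- u)

# In-place: write the next state into du.
function f!(du, u, p, t)
    @. du = p * u * (1 - u)
    return nothing
end

u0 = [0.5]
unext = f(u0, 3.0, 0)             # next state, [0.75]
du = similar(u0)
f!(du, u0, 3.0, 0)                # same result, written in place
```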

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the DiscreteFunction type directly match the names of the inputs.
"""
struct DiscreteFunction{iip, specialize, F, Ta, O, SYS} <:
       AbstractDiscreteFunction{iip}
    f::F
    analytic::Ta
    observed::O
    sys::SYS
end

@doc doc"""
$(TYPEDEF)

A representation of an implicit discrete dynamical system `f`, defined by:

```math
0 = f(u_{n+1}, u_{n}, p, t_{n+1}, integ)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.
`integ` contains the fields:
```julia
dt: the time step
```

## Constructor

```julia
ImplicitDiscreteFunction{iip,specialize}(f;
    analytic = __has_analytic(f) ? f.analytic : nothing)
```

Note that only the function `f` itself is required. This function should
be given as `f!(residual, u_next, u, p, t)` or `residual = f(u_next, u, p, t)`.
See the section on `iip` for more details on in-place vs out-of-place handling.
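
As a minimal sketch of the residual form `f!(residual, u_next, u, p, t)` described
above: the backward-Euler-style step and the hardcoded `dt` (standing in for
`integ.dt`) are illustrative assumptions, not from this library.

```julia
# Residual of an implicit update 0 = u_next - u - dt * p * u_next; a solver
# would drive this residual to zero to find u_next.
function f!(residual, u_next, u, p, t)
    dt = 0.1                      # stands in for integ.dt in this sketch
    @. residual = u_next - u - dt * p * u_next
    return nothing
end

res = zeros(1)
f!(res, [1.0], [1.0], 2.0, 0.0)   # guess u_next = u leaves a nonzero residual
```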

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the ImplicitDiscreteFunction type directly match the names of the inputs.
"""
struct ImplicitDiscreteFunction{iip, specialize, F, Ta, O, SYS} <:
       AbstractDiscreteFunction{iip}
    f::F
    analytic::Ta
    observed::O
    sys::SYS
end


"""
$(TYPEDEF)
"""
abstract type AbstractSDEFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of an SDE function `f`, defined by:

```math
M du = f(u,p,t) dt + g(u,p,t) dW
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
SDEFunction{iip,specialize}(f,g;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    ggprime = nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that both functions `f` and `g` are required. These functions should
be given as `f!(du,u,p,t)` or `du = f(u,p,t)`, and similarly for `g`. See the
section on `iip` for more details on in-place vs out-of-place handling.
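
As a minimal sketch of the paired drift/diffusion signatures described above: the
geometric-Brownian-motion terms `du = mu*u dt + sigma*u dW` are an illustrative
assumption, not from this library.

```julia
# Drift term f of the SDE, written in place.
function f!(du, u, p, t)
    mu, sigma = p
    @. du = mu * u
    return nothing
end

# Diffusion coefficient g multiplying dW, with the same signature as f.
function g!(du, u, p, t)
    mu, sigma = p
    @. du = sigma * u
    return nothing
end

du = zeros(1)
f!(du, [2.0], (0.5, 0.1), 0.0)    # drift:     du[1] == 1.0
g!(du, [2.0], (0.5, 0.1), 0.0)    # diffusion: du[1] == 0.2
```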

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required, see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an AbstractArray or an AbstractSciMLOperator.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
- `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
- `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
- `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
- `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
  ``\frac{dg(u,p,t)}{du} g(u,p,t)``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the SDEFunction type directly match the names of the inputs.
"""
struct SDEFunction{iip, specialize, F, G, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ,
    GG, O, TCV, SYS} <: AbstractSDEFunction{iip}
    f::F
    g::G
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    ggprime::GG
    observed::O
    colorvec::TCV
    sys::SYS
end

@doc doc"""
$(TYPEDEF)

A representation of a split SDE function `f`, defined by:

```math
M du = (f_1(u,p,t) + f_2(u,p,t)) dt + g(u,p,t) dW
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

Generally, for SDE integrators the `f_1` portion should be considered the
"stiff portion of the model" with larger timescale separation, while the
`f_2` portion should be considered the "non-stiff portion". This interpretation
is directly used in integrators like IMEX (implicit-explicit integrators)
and exponential integrators.

## Constructor

```julia
SplitSDEFunction{iip,specialize}(f1,f2,g;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    ggprime = nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the functions `f1`, `f2`, and `g` themselves are required. All of the
remaining functions are optional for improving or accelerating the usage of `f`. These include:

- `mass_matrix`: the mass matrix `M` represented in the SDE function. Can be used
  to determine that the equation is actually a stochastic differential-algebraic equation (SDAE)
  if `M` is singular. Note that in this case special solvers are required, see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/sdae_solve/.
  Must be an AbstractArray or an AbstractSciMLOperator.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f_1(u,p,t)}{\partial t}``
- `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df_1}{du}``
- `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df_1}{du} v``
- `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df_1}{du}^\ast v``
- `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
  ``\frac{dg(u,p,t)}{du} g(u,p,t)``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df_1}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## Note on the Derivative Definition

The derivatives, such as the Jacobian, are only defined on the `f1` portion of the split SDE.
This is used to treat the `f1` portion implicitly while keeping the `f2` portion explicit.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the SplitSDEFunction type directly match the names of the inputs.
"""
struct SplitSDEFunction{iip, specialize, F1, F2, G, TMM, C, Ta, Tt, TJ, JVP, VJP, JP, SP,
    TW, TWt, TPJ, O, TCV, SYS} <: AbstractSDEFunction{iip}
    f1::F1
    f2::F2
    g::G
    mass_matrix::TMM
    cache::C
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

@doc doc"""
$(TYPEDEF)

A representation of an SDE function `f` and `g`, defined by:

```math
M du = f(u,p,t) dt + g(u,p,t) dW_t
```

as a partitioned ODE:

```math
M_1 du = f_1(u,p,t) dt + g(u,p,t) dW_t
M_2 du = f_2(u,p,t) dt + g(u,p,t) dW_t
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DynamicalSDEFunction{iip,specialize}(f1,f2;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    ggprime = nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the functions `f_i` themselves are required. These functions should
be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrices `M_i` represented in the ODE function. Can be used
  to determine that the equation is actually a differential-algebraic equation (DAE)
  if `M` is singular. Note that in this case special solvers are required, see the
  DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
  Must be an AbstractArray or an AbstractSciMLOperator. Should be given as a tuple
  of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2
  respectively.
- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
- `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
- `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
- `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
- `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
  ``\frac{dg(u,p,t)}{du} g(u,p,t)``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the DynamicalSDEFunction type directly match the names of the inputs.
"""
struct DynamicalSDEFunction{iip, specialize, F1, F2, G, TMM, C, Ta, Tt, TJ, JVP, VJP, JP,
    SP, TW, TWt, TPJ, O, TCV, SYS} <: AbstractSDEFunction{iip}
    # This is a direct copy of the SplitSDEFunction, maybe it's not necessary and the above can be used instead.
    f1::F1
    f2::F2
    g::G
    mass_matrix::TMM
    cache::C
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
end

"""
$(TYPEDEF)
"""
abstract type AbstractRODEFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of a RODE function `f`, defined by:

```math
M \frac{du}{dt} = f(u,p,t,W)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
RODEFunction{iip,specialize}(f;
    mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
    analytic = __has_analytic(f) ? f.analytic : nothing,
    tgrad = __has_tgrad(f) ? f.tgrad : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    paramjac = __has_paramjac(f) ? f.paramjac : nothing,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing,
    analytic_full = __has_analytic_full(f) ? f.analytic_full : false)
```

Note that only the function `f` itself is required. This function should
be given as `f!(du,u,p,t,W)` or `du = f(u,p,t,W)`. See the section on `iip`
for more details on in-place vs out-of-place handling.
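
As a minimal sketch of the `f!(du,u,p,t,W)` signature described above, where `W` is
the current value of the noise process: the linear model with additive noise is an
illustrative assumption, not from this library.

```julia
# RODE right-hand side: the noise value W enters as an ordinary argument,
# unlike the separate diffusion function g of an SDEFunction.
function f!(du, u, p, t, W)
    @. du = -p * u + W
    return nothing
end

du = zeros(1)
f!(du, [1.0], 2.0, 0.0, 0.5)   # du[1] == -2.0 * 1.0 + 0.5 == -1.5
```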

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `mass_matrix`: the mass matrix `M` represented in the RODE function. Can be used
  to determine that the equation is actually a random differential-algebraic equation (RDAE)
  if `M` is singular.
- `analytic(u0,p,t,W)` or `analytic(sol)`: used to pass an analytical solution function for the
  analytical solution of the RODE. Generally only used for testing and development of the solvers.
  The exact form depends on the field `analytic_full`.
- `analytic_full`: a boolean to indicate whether to use the form `analytic(u0,p,t,W)` (if `false`)
  or the form `analytic(sol)` (if `true`). The former is expected to return the solution `u(t)` of
  the equation, given the initial condition `u0`, the parameters `p`, the current time `t`, and the
  value `W=W(t)` of the noise at the given time `t`. The latter case is useful when the solution
  of the RODE depends on the whole history of the noise, which is available in `sol.W.W` at
  times `sol.W.t`. In this case, `analytic(sol)` must explicitly mutate the field `sol.u_analytic`
  with the corresponding expected solution at `sol.W.t` or `sol.t`.
- `tgrad(dT,u,p,t,W)` or `dT=tgrad(u,p,t,W)`: returns ``\frac{\partial f(u,p,t,W)}{\partial t}``
- `jac(J,u,p,t,W)` or `J=jac(u,p,t,W)`: returns ``\frac{df}{du}``
- `jvp(Jv,v,u,p,t,W)` or `Jv=jvp(v,u,p,t,W)`: returns the directional derivative ``\frac{df}{du} v``
- `vjp(Jv,v,u,p,t,W)` or `Jv=vjp(v,u,p,t,W)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `paramjac(pJ,u,p,t,W)`: returns the parameter Jacobian ``\frac{df}{dp}``.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the RODEFunction type directly match the names of the inputs.
"""
struct RODEFunction{
    iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV, SYS
} <: AbstractRODEFunction{iip}
    f::F
    mass_matrix::TMM
    analytic::Ta
    tgrad::Tt
    jac::TJ
    jvp::JVP
    vjp::VJP
    jac_prototype::JP
    sparsity::SP
    Wfact::TW
    Wfact_t::TWt
    paramjac::TPJ
    observed::O
    colorvec::TCV
    sys::SYS
    analytic_full::Bool
end

"""
$(TYPEDEF)
"""
abstract type AbstractDAEFunction{iip} <: AbstractDiffEqFunction{iip} end

@doc doc"""
$(TYPEDEF)

A representation of an implicit DAE function `f`, defined by:

```math
0 = f(\frac{du}{dt},u,p,t)
```

and all of its related functions, such as the Jacobian of `f`, its gradient
with respect to time, and more. For all cases, `u0` is the initial condition,
`p` are the parameters, and `t` is the independent variable.

## Constructor

```julia
DAEFunction{iip,specialize}(f;
    analytic = __has_analytic(f) ? f.analytic : nothing,
    jac = __has_jac(f) ? f.jac : nothing,
    jvp = __has_jvp(f) ? f.jvp : nothing,
    vjp = __has_vjp(f) ? f.vjp : nothing,
    jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
    sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
    colorvec = __has_colorvec(f) ? f.colorvec : nothing,
    sys = __has_sys(f) ? f.sys : nothing)
```

Note that only the function `f` itself is required. This function should
be given as `f!(out,du,u,p,t)` or `out = f(du,u,p,t)`. See the section on `iip`
for more details on in-place vs out-of-place handling.

All of the remaining functions are optional for improving or accelerating
the usage of `f`. These include:

- `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
  solution of the ODE. Generally only used for testing and development of the solvers.
- `jac(J,du,u,p,gamma,t)` or `J=jac(du,u,p,gamma,t)`: returns the implicit DAE Jacobian
  defined as ``gamma \frac{dG}{d(du)} + \frac{dG}{du}``
- `jvp(Jv,v,du,u,p,gamma,t)` or `Jv=jvp(v,du,u,p,gamma,t)`: returns the directional
  derivative ``\frac{df}{du} v``
- `vjp(Jv,v,du,u,p,gamma,t)` or `Jv=vjp(v,du,u,p,gamma,t)`: returns the adjoint
  derivative ``\frac{df}{du}^\ast v``
- `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
  if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
  as the prototype and integrators will specialize on this structure where possible. Non-structured
  sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
  The default is `nothing`, which means a dense Jacobian.
- `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
  pattern of the `jac_prototype`. This specializes the Jacobian construction when using
  finite differences and automatic differentiation to be computed in an accelerated manner
  based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
  internally computed on demand when required. The cost of this operation is highly dependent
  on the sparsity pattern.

## iip: In-Place vs Out-Of-Place

For more details on this argument, see the ODEFunction documentation.

## specialize: Controlling Compilation and Specialization

For more details on this argument, see the ODEFunction documentation.

## Fields

The fields of the DAEFunction type directly match the names of the inputs.

## Examples

1462 ### Declaring Explicit Jacobians for DAEs
1463
1464 For fully implicit ODEs (`DAEProblem`s), a slightly different Jacobian function
1465 is necessary. For the DAE
1466
1467 ```math
1468 G(du,u,p,t) = res
1469 ```
1470
1471 The Jacobian should be given in the form `gamma*dG/d(du) + dG/du ` where `gamma`
1472 is given by the solver. This means that the signature is:
1473
1474 ```julia
1475 f(J,du,u,p,gamma,t)
1476 ```
1477
1478 For example, for the equation
1479
1480 ```julia
1481 function testjac(res,du,u,p,t)
1482 res[1] = du[1] - 2.0 * u[1] + 1.2 * u[1] * u[2]
1483 res[2] = du[2] - 3 * u[2] - u[1] * u[2]
1484 end
1485 ```
1486
1487 we would define the Jacobian as:
1488
1489 ```julia
1490 function testjac_jac(J,du,u,p,gamma,t)
1491 J[1,1] = gamma - 2.0 + 1.2 * u[2]
1492 J[1,2] = 1.2 * u[1]
1493 J[2,1] = -u[2]
1494 J[2,2] = gamma - 3 - u[1]
1495 nothing
1496 end
1497 ```
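
A hand-written DAE Jacobian can be checked numerically against finite differences of the residual. The following is a self-contained plain-Julia sketch (the residual and Jacobian mirror the example above; all names are illustrative):

```julia
# Residual G(du,u,p,t) and hand-written Jacobian gamma*dG/d(du) + dG/du.
function dae_res!(res, du, u, p, t)
    res[1] = du[1] - 2.0 * u[1] + 1.2 * u[1] * u[2]
    res[2] = du[2] - 3 * u[2] - u[1] * u[2]
end

function dae_jac!(J, du, u, p, gamma, t)
    J[1, 1] = gamma - 2.0 + 1.2 * u[2]
    J[1, 2] = 1.2 * u[1]
    J[2, 1] = -u[2]
    J[2, 2] = gamma - 3 - u[1]
    nothing
end

u = [1.0, 2.0]; du = [0.5, -0.5]; gamma = 0.1; eps_fd = 1e-6
J = zeros(2, 2)
dae_jac!(J, du, u, nothing, gamma, 0.0)

# Finite-difference approximation of gamma*dG/d(du) + dG/du, column by column.
Jfd = zeros(2, 2)
r0 = zeros(2); rp = zeros(2)
dae_res!(r0, du, u, nothing, 0.0)
for j in 1:2
    up = copy(u); up[j] += eps_fd
    dae_res!(rp, du, up, nothing, 0.0)
    Jfd[:, j] .= (rp .- r0) ./ eps_fd            # dG/du column
    dup = copy(du); dup[j] += eps_fd
    dae_res!(rp, dup, u, nothing, 0.0)
    Jfd[:, j] .+= gamma .* (rp .- r0) ./ eps_fd  # + gamma * dG/d(du) column
end
maximum(abs.(J .- Jfd))  # small: the analytic Jacobian matches
```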
1498
1499 ## Symbolically Generating the Functions
1500
1501 See the `modelingtoolkitize` function from
1502 [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
1503 automatically symbolically generating the Jacobian and more from the
1504 numerically-defined functions.
1505 """
1506 struct DAEFunction{iip, specialize, F, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV,
1507 SYS, IProb, IProbMap} <:
1508 AbstractDAEFunction{iip}
1509 f::F
1510 analytic::Ta
1511 tgrad::Tt
1512 jac::TJ
1513 jvp::JVP
1514 vjp::VJP
1515 jac_prototype::JP
1516 sparsity::SP
1517 Wfact::TW
1518 Wfact_t::TWt
1519 paramjac::TPJ
1520 observed::O
1521 colorvec::TCV
1522 sys::SYS
1523 initializeprob::IProb
1524 initializeprobmap::IProbMap
1525 end
1526
1527 """
1528 $(TYPEDEF)
1529 """
1530 abstract type AbstractSDDEFunction{iip} <: AbstractDiffEqFunction{iip} end
1531
1532 @doc doc"""
1533 $(TYPEDEF)
1534
1535 A representation of an SDDE function `f`, defined by:
1536
1537 ```math
1538 M du = f(u,h,p,t) dt + g(u,h,p,t) dW_t
1539 ```
1540
1541 and all of its related functions, such as the Jacobian of `f`, its gradient
1542 with respect to time, and more. For all cases, `u0` is the initial condition,
1543 `p` are the parameters, and `t` is the independent variable.
1544
1545 ## Constructor
1546
1547 ```julia
1548 SDDEFunction{iip,specialize}(f,g;
1549 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1550 analytic = __has_analytic(f) ? f.analytic : nothing,
1551 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
1552 jac = __has_jac(f) ? f.jac : nothing,
1553 jvp = __has_jvp(f) ? f.jvp : nothing,
1554 vjp = __has_vjp(f) ? f.vjp : nothing,
1555 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1556 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1557 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1558 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1559 sys = __has_sys(f) ? f.sys : nothing)
1560 ```
1561
1562 Note that only the function `f` itself is required. This function should
1563 be given as `f!(du,u,h,p,t)` or `du = f(u,h,p,t)`. See the section on `iip`
1564 for more details on in-place vs out-of-place handling. The history function
1565 `h` acts as an interpolator over time, i.e. `h(t)` with options matching
1566 the solution interface, i.e. `h(t; save_idxs = 2)`.
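
As a concrete sketch (plain Julia; the constant-lag model and all names are illustrative), an in-place drift `f` and diffusion `g` with a delayed feedback term might look like:

```julia
# In-place SDDE drift and diffusion with a constant lag tau = p[1].
# The history function h returns past states, following the h(t) convention above.
function sdde_f!(du, u, h, p, t)
    tau = p[1]
    du[1] = -u[1] + h(t - tau)[1]  # delayed linear feedback
end

function sdde_g!(du, u, h, p, t)
    du[1] = 0.1 * u[1]  # multiplicative noise
end

# Assumed usage once the SciML ecosystem is loaded:
# SDDEFunction(sdde_f!, sdde_g!)
```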
1567
1568 All of the remaining functions are optional for improving or accelerating
1569 the usage of `f`. These include:
1570
1571 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
1572 to determine that the equation is actually a differential-algebraic equation (DAE)
1573 if `M` is singular. Note that in this case special solvers are required, see the
1574 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
1575 Must be an AbstractArray or an AbstractSciMLOperator.
1576 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1577 solution of the ODE. Generally only used for testing and development of the solvers.
1578 - `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
1579 - `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
1580 - `jvp(Jv,v,h,u,p,t)` or `Jv=jvp(v,h,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
1581 - `vjp(Jv,v,h,u,p,t)` or `Jv=vjp(v,h,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1582 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
1583 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1584 as the prototype and integrators will specialize on this structure where possible. Non-structured
1585 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1586 The default is `nothing`, which means a dense Jacobian.
1587 - `paramjac(pJ,h,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1588 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1589 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1590 finite differences and automatic differentiation to be computed in an accelerated manner
1591 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1592 internally computed on demand when required. The cost of this operation is highly dependent
1593 on the sparsity pattern.
1594
1595 ## iip: In-Place vs Out-Of-Place
1596
1597 For more details on this argument, see the ODEFunction documentation.
1598
1599 ## specialize: Controlling Compilation and Specialization
1600
1601 For more details on this argument, see the ODEFunction documentation.
1602
1603 ## Fields
1604
1605 The fields of the SDDEFunction type directly match the names of the inputs.
1606 """
1607 struct SDDEFunction{iip, specialize, F, G, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ,
1608 GG, O,
1609 TCV, SYS} <: AbstractSDDEFunction{iip}
1610 f::F
1611 g::G
1612 mass_matrix::TMM
1613 analytic::Ta
1614 tgrad::Tt
1615 jac::TJ
1616 jvp::JVP
1617 vjp::VJP
1618 jac_prototype::JP
1619 sparsity::SP
1620 Wfact::TW
1621 Wfact_t::TWt
1622 paramjac::TPJ
1623 ggprime::GG
1624 observed::O
1625 colorvec::TCV
1626 sys::SYS
1627 end
1628
1629 """
1630 $(TYPEDEF)
1631 """
1632 abstract type AbstractNonlinearFunction{iip} <: AbstractSciMLFunction{iip} end
1633
1634 @doc doc"""
1635 $(TYPEDEF)
1636
1637 A representation of a nonlinear system of equations `f`, defined by:
1638
1639 ```math
1640 0 = f(u,p)
1641 ```
1642
1643 and all of its related functions, such as the Jacobian of `f` and more.
1644 For all cases, `u0` is the initial guess for the root and `p` are the
1645 parameters.
1646
1647 ## Constructor
1648
1649 ```julia
1650 NonlinearFunction{iip, specialize}(f;
1651 analytic = __has_analytic(f) ? f.analytic : nothing,
1652 jac = __has_jac(f) ? f.jac : nothing,
1653 jvp = __has_jvp(f) ? f.jvp : nothing,
1654 vjp = __has_vjp(f) ? f.vjp : nothing,
1655 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1656 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1657 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1658 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1659 sys = __has_sys(f) ? f.sys : nothing)
1660 ```
1661
1662 Note that only the function `f` itself is required. This function should
1663 be given as `f!(du,u,p)` or `du = f(u,p)`. See the section on `iip`
1664 for more details on in-place vs out-of-place handling.
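
For instance (a plain-Julia sketch; all names are illustrative), an in-place residual and its hand-written Jacobian for a two-equation system, together with a few Newton steps showing how a solver would consume them:

```julia
using LinearAlgebra

# In-place residual for 0 = f(u, p) and its Jacobian.
function nlres!(du, u, p)
    du[1] = u[1]^2 + u[2] - p[1]
    du[2] = u[1] - u[2]
end

function nljac!(J, u, p)
    J[1, 1] = 2 * u[1]; J[1, 2] = 1.0
    J[2, 1] = 1.0;      J[2, 2] = -1.0
    nothing
end

# A few Newton iterations, as a solver might perform internally.
u = [1.5, 1.5]; p = [2.0]
r = zeros(2); J = zeros(2, 2)
for _ in 1:10
    nlres!(r, u, p)
    nljac!(J, u, p)
    u .-= J \ r
end
# u converges to [1.0, 1.0], the root of u1^2 + u2 = 2, u1 = u2.

# Assumed usage once SciMLBase is loaded:
# NonlinearFunction(nlres!; jac = nljac!)
```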
1665
1666 All of the remaining functions are optional for improving or accelerating
1667 the usage of `f`. These include:
1668
1669 - `analytic(u0,p)`: used to pass an analytical solution function for the analytical
1670 solution of the nonlinear system. Generally only used for testing and development of the solvers.
1671 - `jac(J,u,p)` or `J=jac(u,p)`: returns ``\frac{df}{du}``
1672 - `jvp(Jv,v,u,p)` or `Jv=jvp(v,u,p)`: returns the directional derivative ``\frac{df}{du} v``
1673 - `vjp(Jv,v,u,p)` or `Jv=vjp(v,u,p)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1674 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
1675 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1676 as the prototype and integrators will specialize on this structure where possible. Non-structured
1677 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1678 The default is `nothing`, which means a dense Jacobian.
1679 - `paramjac(pJ,u,p)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1680 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1681 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1682 finite differences and automatic differentiation to be computed in an accelerated manner
1683 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1684 internally computed on demand when required. The cost of this operation is highly dependent
1685 on the sparsity pattern.
1686
1687 ## iip: In-Place vs Out-Of-Place
1688
1689 For more details on this argument, see the ODEFunction documentation.
1690
1691 ## specialize: Controlling Compilation and Specialization
1692
1693 For more details on this argument, see the ODEFunction documentation.
1694
1695 ## Fields
1696
1697 The fields of the NonlinearFunction type directly match the names of the inputs.
1698 """
1699 struct NonlinearFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt,
1700 TPJ, O, TCV, SYS, RP} <: AbstractNonlinearFunction{iip}
1701 f::F
1702 mass_matrix::TMM
1703 analytic::Ta
1704 tgrad::Tt
1705 jac::TJ
1706 jvp::JVP
1707 vjp::VJP
1708 jac_prototype::JP
1709 sparsity::SP
1710 Wfact::TW
1711 Wfact_t::TWt
1712 paramjac::TPJ
1713 observed::O
1714 colorvec::TCV
1715 sys::SYS
1716 resid_prototype::RP
1717 end
1718
1719 """
1720 $(TYPEDEF)
1721 """
1722 abstract type AbstractIntervalNonlinearFunction{iip} <: AbstractSciMLFunction{iip} end
1723
1724 @doc doc"""
1725 $(TYPEDEF)
1726
1727 A representation of an interval nonlinear system of equations `f`, defined by:
1728
1729 ```math
1730 f(t,p) = 0
1731 ```
1732
1733 and all of its related functions. For all cases, `p` are the parameters and `t` is the
1734 interval variable.
1735
1736 ## Constructor
1737
1738 ```julia
1739 IntervalNonlinearFunction{iip, specialize}(f;
1740 analytic = __has_analytic(f) ? f.analytic : nothing,
1741 sys = __has_sys(f) ? f.sys : nothing)
1742 ```
1743
1744 Note that only the function `f` itself is required. This function should
1745 be given as `f!(u,t,p)` or `u = f(t,p)`. See the section on `iip`
1746 for more details on in-place vs out-of-place handling.
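
For example (a plain-Julia sketch; all names are illustrative), a scalar out-of-place `f(t, p)` together with a minimal bisection loop, standing in for the bracketing solvers that consume an `IntervalNonlinearFunction`:

```julia
# Scalar equation f(t, p) = 0 solved for t in a bracketing interval.
f(t, p) = t^2 - p   # root at sqrt(p)

# Minimal bisection; real solvers use more robust bracketing methods.
# Assumed usage with the SciML interface: IntervalNonlinearFunction(f).
function bisect(f, a, b, p; iters = 60)
    fa = f(a, p)
    for _ in 1:iters
        m = (a + b) / 2
        fm = f(m, p)
        if sign(fm) == sign(fa)
            a, fa = m, fm
        else
            b = m
        end
    end
    (a + b) / 2
end

bisect(f, 0.0, 2.0, 2.0)  # ≈ sqrt(2)
```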
1747
1748 All of the remaining functions are optional for improving or accelerating
1749 the usage of `f`. These include:
1750
1751 - `analytic(p)`: used to pass an analytical solution function for the analytical
1752 solution of the interval nonlinear problem. Generally only used for testing and development of the solvers.
1753
1754 ## iip: In-Place vs Out-Of-Place
1755
1756 For more details on this argument, see the ODEFunction documentation.
1757
1758 ## specialize: Controlling Compilation and Specialization
1759
1760 For more details on this argument, see the ODEFunction documentation.
1761
1762 ## Fields
1763
1764 The fields of the IntervalNonlinearFunction type directly match the names of the inputs.
1765 """
1766 struct IntervalNonlinearFunction{iip, specialize, F, Ta,
1767 O, SYS
1768 } <: AbstractIntervalNonlinearFunction{iip}
1769 f::F
1770 analytic::Ta
1771 observed::O
1772 sys::SYS
1773 end
1774
1775 """
1776 $(TYPEDEF)
1777
1778 A representation of an objective function `f`, defined by:
1779
1780 ```math
1781 \\min_{u} f(u,p)
1782 ```
1783
1784 and all of its related functions, such as the gradient of `f`, its Hessian,
1785 and more. For all cases, `u` is the state and `p` are the parameters.
1786
1787 ## Constructor
1788
1789 ```julia
1790 OptimizationFunction{iip}(f, adtype::AbstractADType = NoAD();
1791 grad = nothing, hess = nothing, hv = nothing,
1792 cons = nothing, cons_j = nothing, cons_h = nothing,
1793 hess_prototype = nothing,
1794 cons_jac_prototype = nothing,
1795 cons_hess_prototype = nothing,
1796 observed = __has_observed(f) ? f.observed : DEFAULT_OBSERVED_NO_TIME,
1797 lag_h = nothing,
1798 hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1799 cons_jac_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1800 cons_hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1801 lag_hess_colorvec = nothing,
1802 sys = __has_sys(f) ? f.sys : nothing)
1803 ```
1804
1805 ## Positional Arguments
1806
1807 - `f(u,p,args...)`: the function to optimize. `u` are the optimization variables and `p` are the parameters used in the definition of
1808 the objective; even if no such parameters are used in the objective, `p` should still be an argument of the function. This can also take
1809 any additional arguments that are relevant to the objective function, for example minibatches used in machine learning;
1810 see the minibatching tutorial [here](https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/). This should return
1811 a scalar, the loss value, as the first output; if any additional outputs are returned, they will be passed to the `callback`
1812 function described in [Callback Functions](https://docs.sciml.ai/Optimization/stable/API/solve/#Common-Solver-Options-(Solve-Keyword-Arguments)).
1813 - `adtype`: see the Defining Optimization Functions via AD section below.
1814
1815 ## Keyword Arguments
1816
1817 - `grad(G,u,p)` or `G=grad(u,p)`: the gradient of `f` with respect to `u`. If `f` takes additional arguments
1818 then `grad(G,u,p,args...)` or `G=grad(u,p,args...)` should be used.
1819 - `hess(H,u,p)` or `H=hess(u,p)`: the Hessian of `f` with respect to `u`. If `f` takes additional arguments
1820 then `hess(H,u,p,args...)` or `H=hess(u,p,args...)` should be used.
1821 - `hv(Hv,u,v,p)` or `Hv=hv(u,v,p)`: the Hessian-vector product ``(d^2 f / du^2) v``. If `f` takes additional arguments
1822 then `hv(Hv,u,v,p,args...)` or `Hv=hv(u,v,p, args...)` should be used.
1823 - `cons(res,x,p)` or `res=cons(x,p)`: the constraints function, which should mutate the passed `res` array
1824 with the value of the `i`th constraint, evaluated at the current values of the variables
1825 inside the optimization routine. This takes just the function evaluations;
1826 the equality or inequality assertion is applied by the solver based on the constraint
1827 bounds passed as `lcons` and `ucons` to [`OptimizationProblem`](@ref). For equality
1828 constraints, `lcons` and `ucons` should be passed equal values.
1829 - `cons_j(J,x,p)` or `J=cons_j(x,p)`: the Jacobian of the constraints.
1830 - `cons_h(H,x,p)` or `H=cons_h(x,p)`: the Hessian of the constraints, provided as
1831 an array of Hessians with `res[i]` being the Hessian with respect to the `i`th output on `cons`.
1832 - `hess_prototype`: a prototype matrix matching the type of the Hessian. For example,
1833 if the Hessian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1834 as the prototype and optimization solvers will specialize on this structure where possible. Non-structured
1835 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Hessian.
1836 The default is `nothing`, which means a dense Hessian.
1837 - `cons_jac_prototype`: a prototype matrix matching the type of the constraint Jacobian.
1838 The default is `nothing`, which means a dense constraint Jacobian.
1839 - `cons_hess_prototype`: a prototype matrix matching the type of the constraint Hessian.
1840 This is defined as an array of matrices, where `hess[i]` is the Hessian w.r.t. the `i`th output.
1841 For example, if the Hessian is sparse, then `hess` is a `Vector{SparseMatrixCSC}`.
1842 The default is `nothing`, which means a dense constraint Hessian.
1843 - `lag_h(res,x,sigma,mu,p)` or `res=lag_h(x,sigma,mu,p)`: the Hessian of the Lagrangian,
1844 where `sigma` is a multiplier of the cost function and `mu` are the Lagrange multipliers
1845 multiplying the constraints. This can be provided instead of `hess` and `cons_h`
1846 to solvers that directly use the Hessian of the Lagrangian.
1847 - `hess_colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1848 pattern of the `hess_prototype`. This specializes the Hessian construction when using
1849 finite differences and automatic differentiation to be computed in an accelerated manner
1850 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1851 internally computed on demand when required. The cost of this operation is highly dependent
1852 on the sparsity pattern.
1853 - `cons_jac_colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1854 pattern of the `cons_jac_prototype`.
1855 - `cons_hess_colorvec`: an array of color vectors according to the SparseDiffTools.jl definition for
1856 the sparsity pattern of the `cons_hess_prototype`.
1857
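As an illustration of the `grad(G,u,p)` convention (a plain-Julia sketch; the Rosenbrock objective and all names are illustrative):

```julia
# Objective f(u, p) and a hand-coded in-place gradient.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

function rosenbrock_grad!(G, u, p)
    G[1] = -2 * (p[1] - u[1]) - 4 * p[2] * u[1] * (u[2] - u[1]^2)
    G[2] = 2 * p[2] * (u[2] - u[1]^2)
    nothing
end

# Assumed usage once Optimization.jl is loaded:
# OptimizationFunction(rosenbrock; grad = rosenbrock_grad!)

G = zeros(2)
rosenbrock_grad!(G, [1.0, 1.0], [1.0, 100.0])
G  # zero gradient at the minimizer u = [1, 1]
```
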
1858 When the [Symbolic Problem Building with ModelingToolkit](https://docs.sciml.ai/Optimization/stable/tutorials/symbolic/) interface is used, the following arguments are also relevant:
1859
1860 - `observed`: an algebraic combination of optimization variables that is of interest to the user
1861 which will be available in the solution. This can be single or multiple expressions.
1862 - `sys`: field that stores the `OptimizationSystem`.
1863
1864 ## Defining Optimization Functions via AD
1865
1866 While using the keyword arguments gives the user control over defining
1867 all of the possible functions, the simplest way to handle the generation
1868 of an `OptimizationFunction` is by specifying an option from ADTypes.jl
1869 which lets the user choose the Automatic Differentiation backend to use
1870 for automatically filling in all of the extra functions. For example,
1871
1872 ```julia
1873 OptimizationFunction(f,AutoForwardDiff())
1874 ```
1875
1876 will use [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) to define
1877 all of the necessary functions. Note that if any functions are defined
1878 directly, the auto-AD definition does not overwrite the user's choice.
1879
1880 Each of the AD-based constructors are documented separately via their
1881 own dispatches below in the [Automatic Differentiation Construction Choice Recommendations](@ref ad) section.
1882
1883 ## iip: In-Place vs Out-Of-Place
1884
1885 For more details on this argument, see the ODEFunction documentation.
1886
1887 ## specialize: Controlling Compilation and Specialization
1888
1889 For more details on this argument, see the ODEFunction documentation.
1890
1891 ## Fields
1892
1893 The fields of the OptimizationFunction type directly match the names of the inputs.
1894 """
1895 struct OptimizationFunction{iip, AD, F, G, H, HV, C, CJ, CH, HP, CJP, CHP, O,
1896 EX, CEX, SYS, LH, LHP, HCV, CJCV, CHCV, LHCV} <:
1897 AbstractOptimizationFunction{iip}
1898 f::F
1899 adtype::AD
1900 grad::G
1901 hess::H
1902 hv::HV
1903 cons::C
1904 cons_j::CJ
1905 cons_h::CH
1906 hess_prototype::HP
1907 cons_jac_prototype::CJP
1908 cons_hess_prototype::CHP
1909 observed::O
1910 expr::EX
1911 cons_expr::CEX
1912 sys::SYS
1913 lag_h::LH
1914 lag_hess_prototype::LHP
1915 hess_colorvec::HCV
1916 cons_jac_colorvec::CJCV
1917 cons_hess_colorvec::CHCV
1918 lag_hess_colorvec::LHCV
1919 end
1920
1921 """
1922 $(TYPEDEF)
1923 """
1924 abstract type AbstractBVPFunction{iip, twopoint} <: AbstractDiffEqFunction{iip} end
1925
1926 @doc doc"""
1927 $(TYPEDEF)
1928
1929 A representation of a BVP function `f`, defined by:
1930
1931 ```math
1932 \frac{du}{dt} = f(u, p, t)
1933 ```
1934
1935 and the constraints:
1936
1937 ```math
1938 g(u, p, t) = 0
1939 ```
1940
1941 If the size of `g(u, p, t)` is different from the size of `u`, then the constraints are
1942 interpreted as a least squares problem, i.e. the objective function is:
1943
1944 ```math
1945 \min_{u} \| g(u, p, t) \|^2
1946 ```
1947
1948 and all of its related functions, such as the Jacobian of `f`, its gradient
1949 with respect to time, and more. For all cases, `u0` is the initial condition,
1950 `p` are the parameters, and `t` is the independent variable.
1951
1952 ```julia
1953 BVPFunction{iip, specialize}(f, bc;
1954 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1955 analytic = __has_analytic(f) ? f.analytic : nothing,
1956 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
1957 jac = __has_jac(f) ? f.jac : nothing,
1958 bcjac = __has_jac(bc) ? bc.jac : nothing,
1959 jvp = __has_jvp(f) ? f.jvp : nothing,
1960 vjp = __has_vjp(f) ? f.vjp : nothing,
1961 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1962 bcjac_prototype = __has_jac_prototype(bc) ? bc.jac_prototype : nothing,
1963 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1964 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1965 syms = nothing,
1966 indepsym = nothing,
1967 paramsyms = nothing,
1968 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1969 bccolorvec = __has_colorvec(bc) ? bc.colorvec : nothing,
1970 sys = __has_sys(f) ? f.sys : nothing,
1971 twopoint::Union{Val, Bool} = Val(false))
1972 ```
1973
1974 Note that both the function `f` and boundary condition `bc` are required. `f` should
1975 be given as `f(du,u,p,t)` or `out = f(u,p,t)`. `bc` should be given as `bc(res, u, p, t)`.
1976 See the section on `iip` for more details on in-place vs out-of-place handling.
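
A concrete sketch (plain Julia; assumes the BoundaryValueDiffEq.jl convention in which `bc` receives the solution object, indexable at mesh points; all names and targets are illustrative):

```julia
# Harmonic-oscillator ODE and a boundary condition residual pinning u1 at
# both ends. `sol[1]` / `sol[end]` give the state at the first/last mesh point
# (assumed solver convention).
function bvp_f!(du, u, p, t)
    du[1] = u[2]
    du[2] = -u[1]
end

function bvp_bc!(res, sol, p, t)
    res[1] = sol[1][1]          # u1 at the left endpoint = 0
    res[2] = sol[end][1] - 1.0  # u1 at the right endpoint = 1 (illustrative)
end

# Assumed usage once the SciML ecosystem is loaded:
# BVPFunction(bvp_f!, bvp_bc!)
```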
1977
1978 All of the remaining functions are optional for improving or accelerating
1979 the usage of `f` and `bc`. These include:
1980
1981 - `mass_matrix`: the mass matrix `M` represented in the BVP function. Can be used
1982 to determine that the equation is actually a BVP for a differential-algebraic equation (DAE)
1983 if `M` is singular.
1984 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1985 solution of the BVP. Generally only used for testing and development of the solvers.
1986 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
1987 - `jac(J,du,u,p,gamma,t)` or `J=jac(du,u,p,gamma,t)`: returns ``\frac{df}{du}``
1988 - `bcjac(J,du,u,p,gamma,t)` or `J=bcjac(du,u,p,gamma,t)`: returns ``\frac{dbc}{du}``
1989 - `jvp(Jv,v,du,u,p,gamma,t)` or `Jv=jvp(v,du,u,p,gamma,t)`: returns the directional
1990 derivative ``\frac{df}{du} v``
1991 - `vjp(Jv,v,du,u,p,gamma,t)` or `Jv=vjp(v,du,u,p,gamma,t)`: returns the adjoint
1992 derivative ``\frac{df}{du}^\ast v``
1993 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
1994 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1995 as the prototype and integrators will specialize on this structure where possible. Non-structured
1996 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1997 The default is `nothing`, which means a dense Jacobian.
1998 - `bcjac_prototype`: a prototype matrix matching the type of the boundary condition Jacobian. For example,
1999 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
2000 as the prototype and integrators will specialize on this structure where possible. Non-structured
2001 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
2002 The default is `nothing`, which means a dense Jacobian.
2003 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
2004 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2005 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
2006 finite differences and automatic differentiation to be computed in an accelerated manner
2007 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
2008 internally computed on demand when required. The cost of this operation is highly dependent
2009 on the sparsity pattern.
2010 - `bccolorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2011 pattern of the `bcjac_prototype`. This specializes the Jacobian construction when using
2012 finite differences and automatic differentiation to be computed in an accelerated manner
2013 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
2014 internally computed on demand when required. The cost of this operation is highly dependent
2015 on the sparsity pattern.
2016
2017 Additional Options:
2018
2019 - `twopoint`: Specify that the BVP is a two-point boundary value problem. Use `Val(true)` or
2020 `Val(false)` for type stability.
2021
2022 ## iip: In-Place vs Out-Of-Place
2023
2024 For more details on this argument, see the ODEFunction documentation.
2025
2026 ## specialize: Controlling Compilation and Specialization
2027
2028 For more details on this argument, see the ODEFunction documentation.
2029
2030 ## Fields
2031
2032 The fields of the BVPFunction type directly match the names of the inputs.
2033 """
2034 struct BVPFunction{iip, specialize, twopoint, F, BF, TMM, Ta, Tt, TJ, BCTJ, JVP, VJP,
2035 JP, BCJP, BCRP, SP, TW, TWt, TPJ, O, TCV, BCTCV,
2036 SYS} <: AbstractBVPFunction{iip, twopoint}
2037 f::F
2038 bc::BF
2039 mass_matrix::TMM
2040 analytic::Ta
2041 tgrad::Tt
2042 jac::TJ
2043 bcjac::BCTJ
2044 jvp::JVP
2045 vjp::VJP
2046 jac_prototype::JP
2047 bcjac_prototype::BCJP
2048 bcresid_prototype::BCRP
2049 sparsity::SP
2050 Wfact::TW
2051 Wfact_t::TWt
2052 paramjac::TPJ
2053 observed::O
2054 colorvec::TCV
2055 bccolorvec::BCTCV
2056 sys::SYS
2057 end
2058
2059 @doc doc"""
2060 IntegralFunction{iip,specialize,F,T} <: AbstractIntegralFunction{iip}
2061
2062 A representation of an integrand `f` defined by:
2063
2064 ```math
2065 f(u, p)
2066 ```
2067
2068 For an in-place form of `f` see the `iip` section below for details on in-place or
2069 out-of-place handling.
2070
2071 ```julia
2072 IntegralFunction{iip,specialize}(f, [integrand_prototype])
2073 ```
2074
2075 Note that only `f` is required; in the case of in-place integrands, a mutable array
2076 `integrand_prototype` must also be given to store the result of the integrand. If `integrand_prototype` is
2077 present, for either in-place or out-of-place integrands, it is used to infer the return type
2078 of the integrand.
2079
2080 ## iip: In-Place vs Out-Of-Place
2081
2082 Out-of-place functions must be of the form ``y = f(u, p)`` and in-place functions of the form
2083 ``f(y, u, p)``, where `y` is a number or array containing the output. Since `f` is allowed to return any type (e.g. real or complex numbers or
2084 arrays), in-place functions must provide a container `integrand_prototype` that is of the
2085 right type and size for the variable ``y``, and the result is written to this container in-place.
2086 When in-place forms are used, in-place array operations, i.e. broadcasting, may be used by
2087 algorithms to reduce allocations. If `integrand_prototype` is not provided, `f` is assumed
2088 to be out-of-place.
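
For example (a plain-Julia sketch; all names are illustrative), the same integrand written in both conventions:

```julia
# Out-of-place: returns the value directly.
integrand_oop(u, p) = p * sin(u)

# In-place: writes into a container shaped like integrand_prototype.
function integrand_iip!(y, u, p)
    y[1] = p * sin(u)
    nothing
end
integrand_prototype = zeros(1)

# Assumed usage once the SciML ecosystem is loaded:
# IntegralFunction(integrand_oop)                        # iip = false
# IntegralFunction(integrand_iip!, integrand_prototype)  # iip = true
```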
2089
2090 ## specialize
2091
2092 This field is currently unused.
2093
2094 ## Fields
2095
2096 The fields of the IntegralFunction type directly match the names of the inputs.
2097 """
2098 struct IntegralFunction{iip, specialize, F, T} <:
2099 AbstractIntegralFunction{iip}
2100 f::F
2101 integrand_prototype::T
2102 end
2103
2104 @doc doc"""
2105 BatchIntegralFunction{iip,specialize,F,T} <: AbstractIntegralFunction{iip}
2106
2107 A batched representation of a (non-batched) integrand `f(u, p)` that can be
2108 evaluated at multiple points simultaneously using threads, the GPU, or
2109 distributed memory defined by:
2110
2111 ```math
2112 by = bf(bu, p)
2113 ```
2114
2115 Here we prefix variables with `b` to indicate they are batched variables, which
2116 implies that they are arrays whose **last** dimension is reserved for batching
2117 different evaluation points, or function values, and may be of a variable
2118 length. ``bu`` is an array whose elements correspond to distinct evaluation
2119 points to `f`, and `bf` is a function to evaluate `f` 'point-wise' so that
2120 `f(bu[..., i], p) == bf(bu, p)[..., i]`. For example, a simple batching implementation
2121 of a scalar, univariate function is via broadcasting: `bf(bu, p) = f.(bu, Ref(p))`,
2122 although this interface exists in order to allow user parallelization.
2123 In general, the integration algorithm is allowed to vary the number of
2124 evaluation points between subsequent calls to `bf`.
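
The broadcasting construction mentioned above, written out (plain Julia; all names are illustrative):

```julia
# A scalar integrand and its batched form over the last (only) dimension,
# satisfying integrand(bu[i], p) == batched(bu, p)[i].
integrand(u, p) = exp(-p * u^2)
batched(bu, p) = integrand.(bu, Ref(p))

bu = [0.0, 0.5, 1.0]
batched(bu, 1.0)  # element i equals integrand(bu[i], 1.0)
```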
2125
2126 For an in-place form of `bf` see the `iip` section below for details on in-place
2127 or out-of-place handling.
2128
2129 ```julia
2130 BatchIntegralFunction{iip,specialize}(bf, [integrand_prototype];
2131 max_batch=typemax(Int))
2132 ```
2133 Note that only `bf` is required; in the case of in-place integrands, a mutable
2134 array `integrand_prototype` must also be given to store a batch of integrand evaluations, with
2135 a last "batching" dimension.
2136
2137 The keyword `max_batch` is used to set a soft limit on the number of points to
2138 batch at the same time so that memory usage is controlled.
2139
2140 If `integrand_prototype` is present for either in-place or out-of-place integrands it is
2141 used to infer the return type of the integrand.
2142
2143 ## iip: In-Place vs Out-Of-Place
2144
2145 Out-of-place functions must be of the form `by = bf(bu, p)` and in-place
2146 functions of the form `bf(by, bu, p)` where `by` is a batch array containing the
2147 output. Since the algorithm may vary the number of points to batch, the batching
2148 dimension can be of any length, including zero, and since `bf` is allowed to
2149 return arrays of any type (e.g. real or complex) or size, in-place functions
2150 must provide a container `integrand_prototype` of the desired type and size for
2151 `by`. If `integrand_prototype` is not provided, `bf` is assumed to be
2152 out-of-place.
2153
2154 In the out-of-place case, we require `f(bu[..., i], p) == bf(bu, p)[..., i]`,
2155 and certain algorithms, such as those implemented in C, may infer the type or
2156 shape of `by` by calling `bf` with an empty array of input points, i.e. `bu`
2157 with `size(bu)[end] == 0`. Then it is expected for the resulting `by` to have
2158 the same type and `size(by)[begin:end-1]` for all subsequent calls.
2159
2160 When the in-place form is used, we require `f(by[..., i], bu[..., i], p) ==
2161 bf(by, bu, p)[..., i]` and `size(by)[begin:end-1] ==
2162 size(integrand_prototype)[begin:end-1]`. The algorithm should always pass the
2163 integrand `by` arrays that are `similar` to `integrand_prototype`, and may use
2164 views and in-place array operations to reduce allocations.
2165
2166 ## specialize
2167
2168 This field is currently unused.
2169
2170 ## Fields
2171
2172 The fields of the BatchIntegralFunction type are `f`, corresponding to `bf`
2173 above, and `integrand_prototype`.
2174 """
2175 struct BatchIntegralFunction{iip, specialize, F, T} <:
2176 AbstractIntegralFunction{iip}
2177 f::F
2178 integrand_prototype::T
2179 max_batch::Int
2180 end
2181
2182 ######### Backwards Compatibility Overloads
2183
2184 (f::ODEFunction)(args...) = f.f(args...)
2185 (f::NonlinearFunction)(args...) = f.f(args...)
2186 (f::IntervalNonlinearFunction)(args...) = f.f(args...)
2187 (f::IntegralFunction)(args...) = f.f(args...)
2188 (f::BatchIntegralFunction)(args...) = f.f(args...)
2189
2190 function (f::DynamicalODEFunction)(u, p, t)
2191 ArrayPartition(f.f1(u.x[1], u.x[2], p, t), f.f2(u.x[1], u.x[2], p, t))
2192 end
2193 function (f::DynamicalODEFunction)(du, u, p, t)
2194 f.f1(du.x[1], u.x[1], u.x[2], p, t)
2195 f.f2(du.x[2], u.x[1], u.x[2], p, t)
2196 end
2197
2198 (f::SplitFunction)(u, p, t) = f.f1(u, p, t) + f.f2(u, p, t)
2199 function (f::SplitFunction)(du, u, p, t)
2200 f.f1(f.cache, u, p, t)
2201 f.f2(du, u, p, t)
2202 du .+= f.cache
2203 end
2204
2205 (f::DiscreteFunction)(args...) = f.f(args...)
2206 (f::ImplicitDiscreteFunction)(args...) = f.f(args...)
2207 48 (83 %) (f::DAEFunction)(args...) = f.f(args...)
2208 (f::DDEFunction)(args...) = f.f(args...)
2209
2210 function (f::DynamicalDDEFunction)(u, h, p, t)
2211 ArrayPartition(f.f1(u.x[1], u.x[2], h, p, t), f.f2(u.x[1], u.x[2], h, p, t))
2212 end
2213 function (f::DynamicalDDEFunction)(du, u, h, p, t)
2214 f.f1(du.x[1], u.x[1], u.x[2], h, p, t)
2215 f.f2(du.x[2], u.x[1], u.x[2], h, p, t)
2216 end
2217 function Base.getproperty(f::DynamicalDDEFunction, name::Symbol)
2218 if name === :f
2219 # Use the f property as an alias for calling the function itself, so DynamicalDDEFunction fits the same interface as DDEFunction as expected by the ODEFunctionWrapper in DelayDiffEq.jl.
2220 return f
2221 end
2222 return getfield(f, name)
2223 end
2224
2225 (f::SDEFunction)(args...) = f.f(args...)
2226 (f::SDDEFunction)(args...) = f.f(args...)
2227 (f::SplitSDEFunction)(u, p, t) = f.f1(u, p, t) + f.f2(u, p, t)
2228
2229 function (f::SplitSDEFunction)(du, u, p, t)
2230 f.f1(f.cache, u, p, t)
2231 f.f2(du, u, p, t)
2232 du .+= f.cache
2233 end
2234
2235 (f::RODEFunction)(args...) = f.f(args...)
2236
2237 (f::BVPFunction)(args...) = f.f(args...)
2238
2239 ######### Basic Constructor
2240
2241 function ODEFunction{iip, specialize}(f;
2242 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
2243 I,
2244 analytic = __has_analytic(f) ? f.analytic : nothing,
2245 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
2246 jac = __has_jac(f) ? f.jac : nothing,
2247 jvp = __has_jvp(f) ? f.jvp : nothing,
2248 vjp = __has_vjp(f) ? f.vjp : nothing,
2249 jac_prototype = __has_jac_prototype(f) ?
2250 f.jac_prototype :
2251 nothing,
2252 sparsity = __has_sparsity(f) ? f.sparsity :
2253 jac_prototype,
2254 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
2255 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
2256 W_prototype = __has_W_prototype(f) ? f.W_prototype : nothing,
2257 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
2258 syms = nothing,
2259 indepsym = nothing,
2260 paramsyms = nothing,
2261 observed = __has_observed(f) ? f.observed :
2262 DEFAULT_OBSERVED,
2263 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
2264 sys = __has_sys(f) ? f.sys : nothing,
2265 initializeprob = __has_initializeprob(f) ? f.initializeprob : nothing,
2266 initializeprobmap = __has_initializeprobmap(f) ? f.initializeprobmap : nothing
2267 ) where {iip,
2268 specialize
2269 }
2270 if mass_matrix === I && f isa Tuple
2271 mass_matrix = ((I for i in 1:length(f))...,)
2272 end
2273
2274 if (specialize === FunctionWrapperSpecialize) &&
2275 !(f isa FunctionWrappersWrappers.FunctionWrappersWrapper)
2276 error("FunctionWrapperSpecialize must be used on the problem constructor for access to u0, p, and t types!")
2277 end
2278
2279 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
2280 if iip
2281 jac = update_coefficients! #(J,u,p,t)
2282 else
2283 jac = (u, p, t) -> update_coefficients(deepcopy(jac_prototype), u, p, t)
2284 end
2285 end
2286
2287 if jac_prototype !== nothing && colorvec === nothing &&
2288 ArrayInterface.fast_matrix_colors(jac_prototype)
2289 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
2290 else
2291 _colorvec = colorvec
2292 end
2293
2294 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
2295 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
2296 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
2297 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
2298 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
2299 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
2300 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
2301
2302 nonconforming = (jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
2303 paramjaciip) .!= iip
2304 if any(nonconforming)
2305 nonconforming = findall(nonconforming)
2306 functions = ["jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
2307 throw(NonconformingFunctionsError(functions))
2308 end
2309
2310 _f = prepare_function(f)
2311
2312 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2313
2314 @assert typeof(initializeprob) <:
2315 Union{Nothing, NonlinearProblem, NonlinearLeastSquaresProblem}
2316
2317 if specialize === NoSpecialize
2318 ODEFunction{iip, specialize,
2319 Any, Any, Any, Any,
2320 Any, Any, Any, typeof(jac_prototype),
2321 typeof(sparsity), Any, Any, typeof(W_prototype), Any,
2322 Any,
2323 typeof(_colorvec),
2324 typeof(sys), Any, Any}(_f, mass_matrix, analytic, tgrad, jac,
2325 jvp, vjp, jac_prototype, sparsity, Wfact,
2326 Wfact_t, W_prototype, paramjac,
2327 observed, _colorvec, sys, initializeprob, initializeprobmap)
2328 elseif specialize === false
2329 ODEFunction{iip, FunctionWrapperSpecialize,
2330 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2331 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2332 typeof(sparsity), typeof(Wfact), typeof(Wfact_t), typeof(W_prototype),
2333 typeof(paramjac),
2334 typeof(observed),
2335 typeof(_colorvec),
2336 typeof(sys), typeof(initializeprob),
2337 typeof(initializeprobmap)}(_f, mass_matrix, analytic, tgrad, jac,
2338 jvp, vjp, jac_prototype, sparsity, Wfact,
2339 Wfact_t, W_prototype, paramjac,
2340 observed, _colorvec, sys, initializeprob, initializeprobmap)
2341 else
2342 ODEFunction{iip, specialize,
2343 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2344 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2345 typeof(sparsity), typeof(Wfact), typeof(Wfact_t), typeof(W_prototype),
2346 typeof(paramjac),
2347 typeof(observed),
2348 typeof(_colorvec),
2349 typeof(sys), typeof(initializeprob),
2350 typeof(initializeprobmap)}(_f, mass_matrix, analytic, tgrad, jac,
2351 jvp, vjp, jac_prototype, sparsity, Wfact,
2352 Wfact_t, W_prototype, paramjac,
2353 observed, _colorvec, sys, initializeprob, initializeprobmap)
2354 end
2355 end
2356
2357 function ODEFunction{iip}(f; kwargs...) where {iip}
2358 ODEFunction{iip, FullSpecialize}(f; kwargs...)
2359 end
2360 ODEFunction{iip}(f::ODEFunction; kwargs...) where {iip} = f
2361 ODEFunction(f; kwargs...) = ODEFunction{isinplace(f, 4), FullSpecialize}(f; kwargs...)
2362 ODEFunction(f::ODEFunction; kwargs...) = f
2363
2364 function unwrapped_f(f::ODEFunction, newf = unwrapped_f(f.f))
2365 if specialization(f) === NoSpecialize
2366 ODEFunction{isinplace(f), specialization(f), Any, Any, Any,
2367 Any, Any, Any, Any, typeof(f.jac_prototype),
2368 typeof(f.sparsity), Any, Any, Any,
2369 Any, typeof(f.colorvec),
2370 typeof(f.sys), Any, Any}(newf, f.mass_matrix, f.analytic, f.tgrad, f.jac,
2371 f.jvp, f.vjp, f.jac_prototype, f.sparsity, f.Wfact,
2372 f.Wfact_t, f.W_prototype, f.paramjac,
2373 f.observed, f.colorvec, f.sys, f.initializeprob, f.initializeprobmap)
2374 else
2375 ODEFunction{isinplace(f), specialization(f), typeof(newf), typeof(f.mass_matrix),
2376 typeof(f.analytic), typeof(f.tgrad),
2377 typeof(f.jac), typeof(f.jvp), typeof(f.vjp), typeof(f.jac_prototype),
2378 typeof(f.sparsity), typeof(f.Wfact), typeof(f.Wfact_t), typeof(f.W_prototype),
2379 typeof(f.paramjac),
2380 typeof(f.observed), typeof(f.colorvec),
2381 typeof(f.sys), typeof(f.initializeprob),
2382 typeof(f.initializeprobmap)}(newf, f.mass_matrix, f.analytic, f.tgrad, f.jac,
2383 f.jvp, f.vjp, f.jac_prototype, f.sparsity, f.Wfact,
2384 f.Wfact_t, f.W_prototype, f.paramjac,
2385 f.observed, f.colorvec, f.sys, f.initializeprob,
2386 f.initializeprobmap)
2387 end
2388 end
2389
2390 """
2391 $(SIGNATURES)
2392
2393 Converts a NonlinearFunction into an ODEFunction.
2394 """
2395 function ODEFunction(f::NonlinearFunction)
2396 iip = isinplace(f)
2397 ODEFunction{iip}(f)
2398 end
2399
2400 function ODEFunction{iip}(f::NonlinearFunction) where {iip}
2401 _f = iip ? (du, u, p, t) -> (f.f(du, u, p); nothing) : (u, p, t) -> f.f(u, p)
2402 if f.analytic !== nothing
2403 _analytic = (u0, p, t) -> f.analytic(u0, p)
2404 else
2405 _analytic = nothing
2406 end
2407 if f.jac !== nothing
2408 _jac = iip ? (J, u, p, t) -> (f.jac(J, u, p); nothing) : (u, p, t) -> f.jac(u, p)
2409 else
2410 _jac = nothing
2411 end
2412 if f.jvp !== nothing
2413 _jvp = iip ? (Jv, u, p, t) -> (f.jvp(Jv, u, p); nothing) : (u, p, t) -> f.jvp(u, p)
2414 else
2415 _jvp = nothing
2416 end
2417 if f.vjp !== nothing
2418 _vjp = iip ? (vJ, u, p, t) -> (f.vjp(vJ, u, p); nothing) : (u, p, t) -> f.vjp(u, p)
2419 else
2420 _vjp = nothing
2421 end
2422
2423 ODEFunction{iip, specialization(f)}(_f;
2424 mass_matrix = f.mass_matrix,
2425 analytic = _analytic,
2426 jac = _jac,
2427 jvp = _jvp,
2428 vjp = _vjp,
2429 jac_prototype = f.jac_prototype,
2430 sparsity = f.sparsity,
2431 paramjac = f.paramjac,
2432 sys = f.sys,
2433 observed = f.observed,
2434 colorvec = f.colorvec)
2435 end
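For instance, with a hypothetical residual (assuming SciMLBase is loaded), the wrapper produced by this converter simply accepts and ignores `t`:

```julia
using SciMLBase

nf = NonlinearFunction((u, p) -> u .^ 2 .- p)  # residual f(u, p)
of = ODEFunction(nf)                           # now callable as f(u, p, t)
of([2.0], [4.0], 0.0)                          # [2.0^2 - 4.0] = [0.0]; t is ignored
```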
2436
2437 """
2438 $(SIGNATURES)
2439
2440 Converts an ODEFunction into a NonlinearFunction.
2441 """
2442 function NonlinearFunction(f::ODEFunction)
2443 iip = isinplace(f)
2444 NonlinearFunction{iip}(f)
2445 end
2446
2447 function NonlinearFunction{iip}(f::ODEFunction) where {iip}
2448 _f = iip ? (du, u, p) -> (f.f(du, u, p, Inf); nothing) : (u, p) -> f.f(u, p, Inf)
2449 if f.analytic !== nothing
2450 _analytic = (u0, p) -> f.analytic(u0, p, Inf)
2451 else
2452 _analytic = nothing
2453 end
2454 if f.jac !== nothing
2455 _jac = iip ? (J, u, p) -> (f.jac(J, u, p, Inf); nothing) :
2456 (u, p) -> f.jac(u, p, Inf)
2457 else
2458 _jac = nothing
2459 end
2460 if f.jvp !== nothing
2461 _jvp = iip ? (Jv, u, p) -> (f.jvp(Jv, u, p, Inf); nothing) :
2462 (u, p) -> f.jvp(u, p, Inf)
2463 else
2464 _jvp = nothing
2465 end
2466 if f.vjp !== nothing
2467 _vjp = iip ? (vJ, u, p) -> (f.vjp(vJ, u, p, Inf); nothing) :
2468 (u, p) -> f.vjp(u, p, Inf)
2469 else
2470 _vjp = nothing
2471 end
2472
2473 NonlinearFunction{iip, specialization(f)}(_f;
2474 analytic = _analytic,
2475 jac = _jac,
2476 jvp = _jvp,
2477 vjp = _vjp,
2478 jac_prototype = f.jac_prototype,
2479 sparsity = f.sparsity,
2480 paramjac = f.paramjac,
2481 sys = f.sys,
2482 observed = f.observed,
2483 colorvec = f.colorvec)
2484 end
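The reverse direction fixes `t = Inf`, the natural choice for steady-state problems. A hypothetical linear relaxation RHS illustrates this (assuming SciMLBase is loaded):

```julia
using SciMLBase

of = ODEFunction((u, p, t) -> p .- u)  # du/dt = p - u, steady state at u = p
nf = NonlinearFunction(of)             # residual evaluated at t = Inf
nf([3.0], [3.0])                       # [3.0 - 3.0] = [0.0] at the steady state
```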
2485
2486 @add_kwonly function SplitFunction(f1, f2, mass_matrix, cache, analytic, tgrad, jac, jvp,
2487 vjp, jac_prototype, sparsity, Wfact, Wfact_t, paramjac,
2488 observed, colorvec, sys)
2489 f1 = ODEFunction(f1)
2490 f2 = ODEFunction(f2)
2491
2492 if !(f1 isa AbstractSciMLOperator || f1.f isa AbstractSciMLOperator) &&
2493 isinplace(f1) != isinplace(f2)
2494 throw(NonconformingFunctionsError(["f2"]))
2495 end
2496
2497 SplitFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
2498 typeof(mass_matrix),
2499 typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
2500 typeof(vjp), typeof(jac_prototype), typeof(sparsity),
2501 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed), typeof(colorvec),
2502 typeof(sys)}(f1, f2, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
2503 jac_prototype, sparsity, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
2504 end
2505 function SplitFunction{iip, specialize}(f1, f2;
2506 mass_matrix = __has_mass_matrix(f1) ?
2507 f1.mass_matrix : I,
2508 _func_cache = nothing,
2509 analytic = __has_analytic(f1) ? f1.analytic :
2510 nothing,
2511 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
2512 jac = __has_jac(f1) ? f1.jac : nothing,
2513 jvp = __has_jvp(f1) ? f1.jvp : nothing,
2514 vjp = __has_vjp(f1) ? f1.vjp : nothing,
2515 jac_prototype = __has_jac_prototype(f1) ?
2516 f1.jac_prototype :
2517 nothing,
2518 sparsity = __has_sparsity(f1) ? f1.sparsity :
2519 jac_prototype,
2520 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
2521 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t : nothing,
2522 paramjac = __has_paramjac(f1) ? f1.paramjac :
2523 nothing,
2524 syms = nothing,
2525 indepsym = nothing,
2526 paramsyms = nothing,
2527 observed = __has_observed(f1) ? f1.observed :
2528 DEFAULT_OBSERVED,
2529 colorvec = __has_colorvec(f1) ? f1.colorvec :
2530 nothing,
2531 sys = __has_sys(f1) ? f1.sys : nothing) where {iip,
2532 specialize
2533 }
2534 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2535 if specialize === NoSpecialize
2536 SplitFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any,
2537 Any, Any, Any, Any, Any,
2538 Any, Any, Any}(f1, f2, mass_matrix, _func_cache,
2539 analytic,
2540 tgrad, jac, jvp, vjp, jac_prototype,
2541 sparsity, Wfact, Wfact_t, paramjac,
2542 observed, colorvec, sys)
2543 else
2544 SplitFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
2545 typeof(_func_cache), typeof(analytic),
2546 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
2547 typeof(jac_prototype), typeof(sparsity),
2548 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2549 typeof(colorvec),
2550 typeof(sys)}(f1, f2, mass_matrix, _func_cache, analytic, tgrad, jac,
2551 jvp, vjp, jac_prototype,
2552 sparsity, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
2553 end
2554 end
2555
2556 SplitFunction(f1, f2; kwargs...) = SplitFunction{isinplace(f2, 4)}(f1, f2; kwargs...)
2557 function SplitFunction{iip}(f1, f2; kwargs...) where {iip}
2558 SplitFunction{iip, FullSpecialize}(ODEFunction(f1), ODEFunction{iip}(f2);
2559 kwargs...)
2560 end
2561 SplitFunction(f::SplitFunction; kwargs...) = f
2562
2563 @add_kwonly function DynamicalODEFunction{iip}(f1, f2, mass_matrix, analytic, tgrad, jac,
2564 jvp, vjp, jac_prototype, sparsity, Wfact,
2565 Wfact_t, paramjac,
2566 observed, colorvec, sys) where {iip}
2567 f1 = f1 isa AbstractSciMLOperator ? f1 : ODEFunction(f1)
2568 f2 = ODEFunction(f2)
2569
2570 if isinplace(f1) != isinplace(f2)
2571 throw(NonconformingFunctionsError(["f2"]))
2572 end
2573 DynamicalODEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
2574 typeof(mass_matrix),
2575 typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
2576 typeof(vjp),
2577 typeof(jac_prototype),
2578 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2579 typeof(colorvec),
2580 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
2581 vjp, jac_prototype, sparsity, Wfact, Wfact_t,
2582 paramjac, observed,
2583 colorvec, sys)
2584 end
2585
2586 function DynamicalODEFunction{iip, specialize}(f1, f2;
2587 mass_matrix = __has_mass_matrix(f1) ?
2588 f1.mass_matrix : I,
2589 analytic = __has_analytic(f1) ? f1.analytic :
2590 nothing,
2591 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
2592 jac = __has_jac(f1) ? f1.jac : nothing,
2593 jvp = __has_jvp(f1) ? f1.jvp : nothing,
2594 vjp = __has_vjp(f1) ? f1.vjp : nothing,
2595 jac_prototype = __has_jac_prototype(f1) ?
2596 f1.jac_prototype : nothing,
2597 sparsity = __has_sparsity(f1) ? f1.sparsity :
2598 jac_prototype,
2599 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
2600 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
2601 nothing,
2602 paramjac = __has_paramjac(f1) ? f1.paramjac :
2603 nothing,
2604 syms = nothing,
2605 indepsym = nothing,
2606 paramsyms = nothing,
2607 observed = __has_observed(f1) ? f1.observed :
2608 DEFAULT_OBSERVED,
2609 colorvec = __has_colorvec(f1) ? f1.colorvec :
2610 nothing,
2611 sys = __has_sys(f1) ? f1.sys : nothing) where {
2612 iip,
2613 specialize
2614 }
2615 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2616
2617 if specialize === NoSpecialize
2618 DynamicalODEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any,
2619 Any, Any, Any, Any, Any,
2620 Any, Any, Any, Any}(f1, f2, mass_matrix,
2621 analytic,
2622 tgrad,
2623 jac, jvp, vjp,
2624 jac_prototype,
2625 sparsity,
2626 Wfact, Wfact_t, paramjac,
2627 observed, colorvec, sys)
2628 else
2629 DynamicalODEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
2630 typeof(analytic),
2631 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
2632 typeof(jac_prototype), typeof(sparsity),
2633 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2634 typeof(colorvec),
2635 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
2636 vjp, jac_prototype, sparsity,
2637 Wfact, Wfact_t, paramjac, observed,
2638 colorvec, sys)
2639 end
2640 end
2641
2642 function DynamicalODEFunction(f1, f2 = nothing; kwargs...)
2643 DynamicalODEFunction{isinplace(f1, 5)}(f1, f2; kwargs...)
2644 end
2645 function DynamicalODEFunction{iip}(f1, f2; kwargs...) where {iip}
2646 DynamicalODEFunction{iip, FullSpecialize}(ODEFunction{iip}(f1),
2647 ODEFunction{iip}(f2); kwargs...)
2648 end
2649 DynamicalODEFunction(f::DynamicalODEFunction; kwargs...) = f
2650
2651 function DiscreteFunction{iip, specialize}(f;
2652 analytic = __has_analytic(f) ? f.analytic :
2653 nothing,
2654 syms = nothing,
2655 indepsym = nothing,
2656 paramsyms = nothing,
2657 observed = __has_observed(f) ? f.observed :
2658 DEFAULT_OBSERVED,
2659 sys = __has_sys(f) ? f.sys : nothing) where {iip,
2660 specialize
2661 }
2662 _f = prepare_function(f)
2663 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2664
2665 if specialize === NoSpecialize
2666 DiscreteFunction{iip, specialize, Any, Any, Any, Any}(_f, analytic,
2667 observed, sys)
2668 else
2669 DiscreteFunction{iip, specialize, typeof(_f), typeof(analytic),
2670 typeof(observed), typeof(sys)}(_f, analytic, observed, sys)
2671 end
2672 end
2673
2674 function DiscreteFunction{iip}(f; kwargs...) where {iip}
2675 DiscreteFunction{iip, FullSpecialize}(f; kwargs...)
2676 end
2677 DiscreteFunction{iip}(f::DiscreteFunction; kwargs...) where {iip} = f
2678 function DiscreteFunction(f; kwargs...)
2679 DiscreteFunction{isinplace(f, 4), FullSpecialize}(f; kwargs...)
2680 end
2681 DiscreteFunction(f::DiscreteFunction; kwargs...) = f
2682
2683 function unwrapped_f(f::DiscreteFunction, newf = unwrapped_f(f.f))
2684 specialize = specialization(f)
2685
2686 if specialize === NoSpecialize
2687 DiscreteFunction{isinplace(f), specialize, Any, Any,
2688 Any, Any}(newf, f.analytic, f.observed, f.sys)
2689 else
2690 DiscreteFunction{isinplace(f), specialize, typeof(newf), typeof(f.analytic),
2691 typeof(f.observed), typeof(f.sys)}(newf, f.analytic,
2692 f.observed, f.sys)
2693 end
2694 end
2695
2696 function ImplicitDiscreteFunction{iip, specialize}(f;
2697 analytic = __has_analytic(f) ?
2698 f.analytic :
2699 nothing,
2700 syms = nothing,
2701 indepsym = nothing,
2702 paramsyms = nothing,
2703 observed = __has_observed(f) ?
2704 f.observed :
2705 DEFAULT_OBSERVED,
2706 sys = __has_sys(f) ? f.sys : nothing) where {
2707 iip,
2708 specialize
2709 }
2710 _f = prepare_function(f)
2711 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2712
2713 if specialize === NoSpecialize
2714 ImplicitDiscreteFunction{iip, specialize, Any, Any, Any, Any}(_f,
2715 analytic,
2716 observed,
2717 sys)
2718 else
2719 ImplicitDiscreteFunction{
2720 iip, specialize, typeof(_f), typeof(analytic), typeof(observed), typeof(sys)}(
2721 _f, analytic, observed, sys)
2722 end
2723 end
2724
2725 function ImplicitDiscreteFunction{iip}(f; kwargs...) where {iip}
2726 ImplicitDiscreteFunction{iip, FullSpecialize}(f; kwargs...)
2727 end
2728 ImplicitDiscreteFunction{iip}(f::ImplicitDiscreteFunction; kwargs...) where {iip} = f
2729 function ImplicitDiscreteFunction(f; kwargs...)
2730 ImplicitDiscreteFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
2731 end
2732 ImplicitDiscreteFunction(f::ImplicitDiscreteFunction; kwargs...) = f
2733
2734 function unwrapped_f(f::ImplicitDiscreteFunction, newf = unwrapped_f(f.f))
2735 specialize = specialization(f)
2736
2737 if specialize === NoSpecialize
2738 ImplicitDiscreteFunction{isinplace(f, 6), specialize, Any, Any,
2739 Any, Any}(newf, f.analytic, f.observed, f.sys)
2740 else
2741 ImplicitDiscreteFunction{isinplace(f, 6), specialize, typeof(newf),
2742 typeof(f.analytic),
2743 typeof(f.observed), typeof(f.sys)}(newf, f.analytic,
2744 f.observed, f.sys)
2745 end
2746 end
2747
2748 function SDEFunction{iip, specialize}(f, g;
2749 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
2750 I,
2751 analytic = __has_analytic(f) ? f.analytic : nothing,
2752 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
2753 jac = __has_jac(f) ? f.jac : nothing,
2754 jvp = __has_jvp(f) ? f.jvp : nothing,
2755 vjp = __has_vjp(f) ? f.vjp : nothing,
2756 jac_prototype = __has_jac_prototype(f) ?
2757 f.jac_prototype :
2758 nothing,
2759 sparsity = __has_sparsity(f) ? f.sparsity :
2760 jac_prototype,
2761 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
2762 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
2763 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
2764 ggprime = nothing,
2765 syms = nothing,
2766 indepsym = nothing,
2767 paramsyms = nothing,
2768 observed = __has_observed(f) ? f.observed :
2769 DEFAULT_OBSERVED,
2770 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
2771 sys = __has_sys(f) ? f.sys : nothing) where {iip,
2772 specialize
2773 }
2774 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
2775 if iip
2776 jac = update_coefficients! #(J,u,p,t)
2777 else
2778 jac = (u, p, t) -> update_coefficients(deepcopy(jac_prototype), u, p, t)
2779 end
2780 end
2781
2782 if jac_prototype !== nothing && colorvec === nothing &&
2783 ArrayInterface.fast_matrix_colors(jac_prototype)
2784 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
2785 else
2786 _colorvec = colorvec
2787 end
2788
2789 giip = isinplace(g, 4, "g", iip)
2790 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
2791 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
2792 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
2793 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
2794 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
2795 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
2796 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
2797
2798 nonconforming = (giip, jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
2799 paramjaciip) .!= iip
2800 if any(nonconforming)
2801 nonconforming = findall(nonconforming)
2802 functions = ["g", "jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
2803 throw(NonconformingFunctionsError(functions))
2804 end
2805
2806 _f = prepare_function(f)
2807 _g = prepare_function(g)
2808
2809 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2810
2811 if specialize === NoSpecialize
2812 SDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any,
2813 Any, Any, Any, Any, Any,
2814 typeof(_colorvec), typeof(sys)}(_f, _g, mass_matrix, analytic,
2815 tgrad, jac, jvp, vjp,
2816 jac_prototype, sparsity,
2817 Wfact, Wfact_t, paramjac, ggprime, observed,
2818 _colorvec, sys)
2819 else
2820 SDEFunction{iip, specialize, typeof(_f), typeof(_g),
2821 typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2822 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2823 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
2824 typeof(paramjac), typeof(ggprime), typeof(observed), typeof(_colorvec), typeof(sys)}(
2825 _f, _g, mass_matrix,
2826 analytic, tgrad, jac,
2827 jvp, vjp,
2828 jac_prototype,
2829 sparsity, Wfact,
2830 Wfact_t,
2831 paramjac, ggprime,
2832 observed, _colorvec,
2833 sys)
2834 end
2835 end
2836
2837 function unwrapped_f(f::SDEFunction, newf = unwrapped_f(f.f),
2838 newg = unwrapped_f(f.g))
2839 specialize = specialization(f)
2840
2841 if specialize === NoSpecialize
2842 SDEFunction{isinplace(f), specialize, Any, Any,
2843 typeof(f.mass_matrix), Any, Any,
2844 Any, Any, Any, typeof(f.jac_prototype),
2845 typeof(f.sparsity), Any, Any,
2846 Any, Any,
2847 typeof(f.observed), typeof(f.colorvec), typeof(f.sys)}(newf, newg,
2848 f.mass_matrix,
2849 f.analytic,
2850 f.tgrad, f.jac,
2851 f.jvp, f.vjp,
2852 f.jac_prototype,
2853 f.sparsity,
2854 f.Wfact,
2855 f.Wfact_t,
2856 f.paramjac,
2857 f.ggprime,
2858 f.observed,
2859 f.colorvec,
2860 f.sys)
2861 else
2862 SDEFunction{isinplace(f), specialize, typeof(newf), typeof(newg),
2863 typeof(f.mass_matrix), typeof(f.analytic), typeof(f.tgrad),
2864 typeof(f.jac), typeof(f.jvp), typeof(f.vjp), typeof(f.jac_prototype),
2865 typeof(f.sparsity), typeof(f.Wfact), typeof(f.Wfact_t),
2866 typeof(f.paramjac), typeof(f.ggprime),
2867 typeof(f.observed), typeof(f.colorvec), typeof(f.sys)}(newf, newg,
2868 f.mass_matrix,
2869 f.analytic,
2870 f.tgrad, f.jac,
2871 f.jvp, f.vjp,
2872 f.jac_prototype,
2873 f.sparsity,
2874 f.Wfact,
2875 f.Wfact_t,
2876 f.paramjac,
2877 f.ggprime,
2878 f.observed,
2879 f.colorvec,
2880 f.sys)
2881 end
2882 end
2883
2884 function SDEFunction{iip}(f, g; kwargs...) where {iip}
2885 SDEFunction{iip, FullSpecialize}(f, g; kwargs...)
2886 end
2887 SDEFunction{iip}(f::SDEFunction, g; kwargs...) where {iip} = f
2888 function SDEFunction(f, g; kwargs...)
2889 SDEFunction{isinplace(f, 4), FullSpecialize}(f, g; kwargs...)
2890 end
2891 SDEFunction(f::SDEFunction; kwargs...) = f
2892
2893 @add_kwonly function SplitSDEFunction(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac,
2894 jvp, vjp,
2895 jac_prototype, Wfact, Wfact_t, paramjac, observed,
2896 colorvec, sys)
2897 f1 = f1 isa AbstractSciMLOperator ? f1 : SDEFunction(f1)
2898 f2 = SDEFunction(f2)
2899
2900 SplitSDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2), typeof(g),
2901 typeof(mass_matrix), typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac),
2902 typeof(jvp), typeof(vjp), typeof(jac_prototype), typeof(jac_prototype),
2903 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2904 typeof(colorvec),
2905 typeof(sys)}(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
2906 jac_prototype, jac_prototype, Wfact, Wfact_t, paramjac, observed, colorvec, sys) # sparsity defaults to jac_prototype
2907 end
2908
2909 function SplitSDEFunction{iip, specialize}(f1, f2, g;
2910 mass_matrix = __has_mass_matrix(f1) ?
2911 f1.mass_matrix :
2912 I,
2913 _func_cache = nothing,
2914 analytic = __has_analytic(f1) ? f1.analytic :
2915 nothing,
2916 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
2917 jac = __has_jac(f1) ? f1.jac : nothing,
2918 jac_prototype = __has_jac_prototype(f1) ?
2919 f1.jac_prototype : nothing,
2920 sparsity = __has_sparsity(f1) ? f1.sparsity :
2921 jac_prototype,
2922 jvp = __has_jvp(f1) ? f1.jvp : nothing,
2923 vjp = __has_vjp(f1) ? f1.vjp : nothing,
2924 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
2925 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
2926 nothing,
2927 paramjac = __has_paramjac(f1) ? f1.paramjac :
2928 nothing,
2929 syms = nothing,
2930 indepsym = nothing,
2931 paramsyms = nothing,
2932 observed = __has_observed(f1) ? f1.observed :
2933 DEFAULT_OBSERVED,
2934 colorvec = __has_colorvec(f1) ? f1.colorvec :
2935 nothing,
2936 sys = __has_sys(f1) ? f1.sys : nothing) where {
2937 iip,
2938 specialize
2939 }
2940 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
2941
2942 if specialize === NoSpecialize
2943 SplitSDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any,
2944 Any, Any, Any, Any, Any, Any, Any, Any, Any,
2945 Any, Any, Any}(f1, f2, g, mass_matrix, _func_cache,
2946 analytic,
2947 tgrad, jac, jvp, vjp, jac_prototype,
2948 sparsity,
2949 Wfact, Wfact_t, paramjac, observed,
2950 colorvec, sys)
2951 else
2952 SplitSDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(g),
2953 typeof(mass_matrix), typeof(_func_cache),
2954 typeof(analytic),
2955 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
2956 typeof(jac_prototype), typeof(sparsity),
2957 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2958 typeof(colorvec),
2959 typeof(sys)}(f1, f2, g, mass_matrix, _func_cache, analytic,
2960 tgrad, jac, jvp, vjp, jac_prototype, sparsity,
2961 Wfact, Wfact_t, paramjac,
2962 observed, colorvec, sys)
2963 end
2964 end
2965
2966 function SplitSDEFunction(f1, f2, g; kwargs...)
2967 SplitSDEFunction{isinplace(f2, 4)}(f1, f2, g; kwargs...)
2968 end
2969 function SplitSDEFunction{iip}(f1, f2, g; kwargs...) where {iip}
2970 SplitSDEFunction{iip, FullSpecialize}(SDEFunction(f1, g), SDEFunction{iip}(f2, g),
2971 g; kwargs...)
2972 end
2973 SplitSDEFunction(f::SplitSDEFunction; kwargs...) = f
2974
2975 @add_kwonly function DynamicalSDEFunction(f1, f2, g, mass_matrix, cache, analytic, tgrad,
2976 jac, jvp, vjp,
2977 jac_prototype, Wfact, Wfact_t, paramjac,
2978 observed, colorvec,
2979 sys)
2980 f1 = f1 isa AbstractSciMLOperator ? f1 : SDEFunction(f1)
2981 f2 = SDEFunction(f2)
2982
2983 DynamicalSDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2), typeof(g),
2984 typeof(mass_matrix),
2985 typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac),
2986 typeof(jvp), typeof(vjp), typeof(jac_prototype), typeof(jac_prototype),
2987 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2988 typeof(colorvec),
2989 typeof(sys)}(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
2990 jac_prototype, jac_prototype, Wfact, Wfact_t, paramjac, observed, colorvec, sys) # sparsity defaults to jac_prototype
2991 end
2992
2993 function DynamicalSDEFunction{iip, specialize}(f1, f2, g;
2994 mass_matrix = __has_mass_matrix(f1) ?
2995 f1.mass_matrix : I,
2996 _func_cache = nothing,
2997 analytic = __has_analytic(f1) ? f1.analytic :
2998 nothing,
2999 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
3000 jac = __has_jac(f1) ? f1.jac : nothing,
3001 jac_prototype = __has_jac_prototype(f1) ?
3002 f1.jac_prototype : nothing,
3003 sparsity = __has_sparsity(f1) ? f1.sparsity :
3004 jac_prototype,
3005 jvp = __has_jvp(f1) ? f1.jvp : nothing,
3006 vjp = __has_vjp(f1) ? f1.vjp : nothing,
3007 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
3008 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
3009 nothing,
3010 paramjac = __has_paramjac(f1) ? f1.paramjac :
3011 nothing,
3012 syms = nothing,
3013 indepsym = nothing,
3014 paramsyms = nothing,
3015 observed = __has_observed(f1) ? f1.observed :
3016 DEFAULT_OBSERVED,
3017 colorvec = __has_colorvec(f1) ? f1.colorvec :
3018 nothing,
3019 sys = __has_sys(f1) ? f1.sys : nothing) where {
3020 iip,
3021 specialize
3022 }
3023 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3024
3025 if specialize === NoSpecialize
3026 DynamicalSDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any,
3027 Any, Any, Any, Any, Any, Any,
3028 Any, Any, Any, Any, Any, Any}(f1, f2, g, mass_matrix,
3029 _func_cache,
3030 analytic, tgrad, jac, jvp, vjp,
3031 jac_prototype, sparsity,
3032 Wfact, Wfact_t, paramjac, observed,
3033 colorvec, sys)
3034 else
3035 DynamicalSDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(g),
3036 typeof(mass_matrix), typeof(_func_cache),
3037 typeof(analytic),
3038 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
3039 typeof(jac_prototype), typeof(sparsity),
3040 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3041 typeof(colorvec),
3042 typeof(sys)}(f1, f2, g, mass_matrix, _func_cache, analytic,
3043 tgrad, jac, jvp, vjp, jac_prototype, sparsity,
3044 Wfact, Wfact_t, paramjac, observed, colorvec, sys)
3045 end
3046 end
3047
3048 function DynamicalSDEFunction(f1, f2, g; kwargs...)
3049 DynamicalSDEFunction{isinplace(f2, 5)}(f1, f2, g; kwargs...)
3050 end
3051 function DynamicalSDEFunction{iip}(f1, f2, g; kwargs...) where {iip}
3052 DynamicalSDEFunction{iip, FullSpecialize}(SDEFunction{iip}(f1, g),
3053 SDEFunction{iip}(f2, g), g; kwargs...)
3054 end
3055 DynamicalSDEFunction(f::DynamicalSDEFunction; kwargs...) = f
3056
3057 function RODEFunction{iip, specialize}(f;
3058 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3059 I,
3060 analytic = __has_analytic(f) ? f.analytic : nothing,
3061 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3062 jac = __has_jac(f) ? f.jac : nothing,
3063 jvp = __has_jvp(f) ? f.jvp : nothing,
3064 vjp = __has_vjp(f) ? f.vjp : nothing,
3065 jac_prototype = __has_jac_prototype(f) ?
3066 f.jac_prototype :
3067 nothing,
3068 sparsity = __has_sparsity(f) ? f.sparsity :
3069 jac_prototype,
3070 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3071 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3072 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3073 syms = nothing,
3074 indepsym = nothing,
3075 paramsyms = nothing,
3076 observed = __has_observed(f) ? f.observed :
3077 DEFAULT_OBSERVED,
3078 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3079 sys = __has_sys(f) ? f.sys : nothing,
3080 analytic_full = __has_analytic_full(f) ?
3081 f.analytic_full : false) where {iip,
3082 specialize
3083 }
3084 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3085 if iip
3086 jac = update_coefficients! #(J,u,p,t)
3087 else
3088 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3089 end
3090 end
3091
3092 if jac_prototype !== nothing && colorvec === nothing &&
3093 ArrayInterface.fast_matrix_colors(jac_prototype)
3094 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3095 else
3096 _colorvec = colorvec
3097 end
3098
3099 # The in-place conformity checks below are disabled until the RODE interface is finalized by downstream integrators.
3100
3101 #=
3102 jaciip = jac !== nothing ? isinplace(jac,4,"jac",iip) : iip
3103 tgradiip = tgrad !== nothing ? isinplace(tgrad,4,"tgrad",iip) : iip
3104 jvpiip = jvp !== nothing ? isinplace(jvp,5,"jvp",iip) : iip
3105 vjpiip = vjp !== nothing ? isinplace(vjp,5,"vjp",iip) : iip
3106 Wfactiip = Wfact !== nothing ? isinplace(Wfact,4,"Wfact",iip) : iip
3107 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t,4,"Wfact_t",iip) : iip
3108 paramjaciip = paramjac !== nothing ? isinplace(paramjac,4,"paramjac",iip) : iip
3109
3110 nonconforming = (jaciip,tgradiip,jvpiip,vjpiip,Wfactiip,Wfact_tiip,paramjaciip) .!= iip
3111 if any(nonconforming)
3112 nonconforming = findall(nonconforming)
3113 functions = ["jac","tgrad","jvp","vjp","Wfact","Wfact_t","paramjac"][nonconforming]
3114 throw(NonconformingFunctionsError(functions))
3115 end
3116 =#
3117
3118 _f = prepare_function(f)
3119 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3120
3121 if specialize === NoSpecialize
3122 RODEFunction{iip, specialize, Any, Any, Any, Any, Any,
3123 Any, Any, Any, Any, Any, Any, Any,
3124 Any,
3125 typeof(_colorvec), Any}(_f, mass_matrix, analytic,
3126 tgrad,
3127 jac, jvp, vjp,
3128 jac_prototype,
3129 sparsity, Wfact, Wfact_t,
3130 paramjac, observed,
3131 _colorvec, sys,
3132 analytic_full)
3133 else
3134 RODEFunction{iip, specialize, typeof(_f), typeof(mass_matrix),
3135 typeof(analytic), typeof(tgrad),
3136 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3137 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3138 typeof(paramjac),
3139 typeof(observed), typeof(_colorvec),
3140 typeof(sys)}(_f, mass_matrix, analytic, tgrad,
3141 jac, jvp, vjp, jac_prototype, sparsity,
3142 Wfact, Wfact_t, paramjac,
3143 observed, _colorvec, sys, analytic_full)
3144 end
3145 end
3146
3147 function RODEFunction{iip}(f; kwargs...) where {iip}
3148 RODEFunction{iip, FullSpecialize}(f; kwargs...)
3149 end
3150 RODEFunction{iip}(f::RODEFunction; kwargs...) where {iip} = f
3151 function RODEFunction(f; kwargs...)
3152 RODEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3153 end
3154 RODEFunction(f::RODEFunction; kwargs...) = f
3155
3156 function DAEFunction{iip, specialize}(f;
3157 analytic = __has_analytic(f) ? f.analytic : nothing,
3158 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3159 jac = __has_jac(f) ? f.jac : nothing,
3160 jvp = __has_jvp(f) ? f.jvp : nothing,
3161 vjp = __has_vjp(f) ? f.vjp : nothing,
3162 jac_prototype = __has_jac_prototype(f) ?
3163 f.jac_prototype :
3164 nothing,
3165 sparsity = __has_sparsity(f) ? f.sparsity :
3166 jac_prototype,
3167 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3168 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3169 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3170 syms = nothing,
3171 indepsym = nothing,
3172 paramsyms = nothing,
3173 observed = __has_observed(f) ? f.observed :
3174 DEFAULT_OBSERVED,
3175 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3176 sys = __has_sys(f) ? f.sys : nothing,
3177 initializeprob = __has_initializeprob(f) ? f.initializeprob : nothing,
3178 initializeprobmap = __has_initializeprobmap(f) ? f.initializeprobmap : nothing) where {
3179 iip,
3180 specialize
3181 }
3182 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3183 if iip
3184 jac = update_coefficients! #(J,u,p,t)
3185 else
3186 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3187 end
3188 end
3189
3190 if jac_prototype !== nothing && colorvec === nothing &&
3191 ArrayInterface.fast_matrix_colors(jac_prototype)
3192 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3193 else
3194 _colorvec = colorvec
3195 end
3196
3197 jaciip = jac !== nothing ? isinplace(jac, 6, "jac", iip) : iip
3198 jvpiip = jvp !== nothing ? isinplace(jvp, 7, "jvp", iip) : iip
3199 vjpiip = vjp !== nothing ? isinplace(vjp, 7, "vjp", iip) : iip
3200
3201 nonconforming = (jaciip, jvpiip, vjpiip) .!= iip
3202 if any(nonconforming)
3203 nonconforming = findall(nonconforming)
3204 functions = ["jac", "jvp", "vjp"][nonconforming]
3205 throw(NonconformingFunctionsError(functions))
3206 end
3207
3208 _f = prepare_function(f)
3209 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3210
3211 @assert typeof(initializeprob) <:
3212 Union{Nothing, NonlinearProblem, NonlinearLeastSquaresProblem}
3213
3214 if specialize === NoSpecialize
3215 DAEFunction{iip, specialize, Any, Any, Any,
3216 Any, Any, Any, Any, Any,
3217 Any, Any, Any,
3218 Any, typeof(_colorvec), Any, Any, Any}(_f, analytic, tgrad, jac, jvp,
3219 vjp, jac_prototype, sparsity,
3220 Wfact, Wfact_t, paramjac, observed,
3221 _colorvec, sys, initializeprob, initializeprobmap)
3222 else
3223 DAEFunction{iip, specialize, typeof(_f), typeof(analytic), typeof(tgrad),
3224 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3225 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3226 typeof(paramjac),
3227 typeof(observed), typeof(_colorvec),
3228 typeof(sys), typeof(initializeprob), typeof(initializeprobmap)}(
3229 _f, analytic, tgrad, jac, jvp, vjp,
3230 jac_prototype, sparsity, Wfact, Wfact_t,
3231 paramjac, observed,
3232 _colorvec, sys, initializeprob, initializeprobmap)
3233 end
3234 end
3235
3236 function DAEFunction{iip}(f; kwargs...) where {iip}
3237 DAEFunction{iip, FullSpecialize}(f; kwargs...)
3238 end
3239 DAEFunction{iip}(f::DAEFunction; kwargs...) where {iip} = f
3240 DAEFunction(f; kwargs...) = DAEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3241 DAEFunction(f::DAEFunction; kwargs...) = f
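An editorial usage sketch for the `DAEFunction` constructors above (the closure name is illustrative, not part of this file): the convenience constructor infers in-placeness from the arity of the implicit residual `f(du, u, p, t)` through `isinplace(f, 5)`.

```julia
using SciMLBase

# Implicit residual F(du, u, p, t) = 0. Four positional arguments, so
# isinplace(f, 5) reports false and an out-of-place DAEFunction is built.
f_dae = (du, u, p, t) -> du .- p .* u
dae = DAEFunction(f_dae)   # same as DAEFunction{false, FullSpecialize}(f_dae)
```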
3242
3243 function DDEFunction{iip, specialize}(f;
3244 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3245 I,
3246 analytic = __has_analytic(f) ? f.analytic : nothing,
3247 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3248 jac = __has_jac(f) ? f.jac : nothing,
3249 jvp = __has_jvp(f) ? f.jvp : nothing,
3250 vjp = __has_vjp(f) ? f.vjp : nothing,
3251 jac_prototype = __has_jac_prototype(f) ?
3252 f.jac_prototype :
3253 nothing,
3254 sparsity = __has_sparsity(f) ? f.sparsity :
3255 jac_prototype,
3256 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3257 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3258 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3259 syms = nothing,
3260 indepsym = nothing,
3261 paramsyms = nothing,
3262 observed = __has_observed(f) ? f.observed :
3263 DEFAULT_OBSERVED,
3264 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3265 sys = __has_sys(f) ? f.sys : nothing) where {iip,
3266 specialize
3267 }
3268 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3269 if iip
3270 jac = update_coefficients! #(J,u,p,t)
3271 else
3272 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3273 end
3274 end
3275
3276 if jac_prototype !== nothing && colorvec === nothing &&
3277 ArrayInterface.fast_matrix_colors(jac_prototype)
3278 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3279 else
3280 _colorvec = colorvec
3281 end
3282
3283 jaciip = jac !== nothing ? isinplace(jac, 5, "jac", iip) : iip
3284 tgradiip = tgrad !== nothing ? isinplace(tgrad, 5, "tgrad", iip) : iip
3285 jvpiip = jvp !== nothing ? isinplace(jvp, 6, "jvp", iip) : iip
3286 vjpiip = vjp !== nothing ? isinplace(vjp, 6, "vjp", iip) : iip
3287 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 6, "Wfact", iip) : iip
3288 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 6, "Wfact_t", iip) : iip
3289 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 5, "paramjac", iip) : iip
3290
3291 nonconforming = (jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
3292 paramjaciip) .!= iip
3293 if any(nonconforming)
3294 nonconforming = findall(nonconforming)
3295 functions = ["jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
3296 throw(NonconformingFunctionsError(functions))
3297 end
3298
3299 _f = prepare_function(f)
3300 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3301
3302 if specialize === NoSpecialize
3303 DDEFunction{iip, specialize, Any, Any, Any, Any,
3304 Any, Any, Any, Any, Any, Any, Any,
3305 Any,
3306 Any, typeof(_colorvec), Any}(_f, mass_matrix,
3307 analytic,
3308 tgrad,
3309 jac, jvp, vjp,
3310 jac_prototype,
3311 sparsity, Wfact,
3312 Wfact_t,
3313 paramjac,
3314 observed,
3315 _colorvec, sys)
3316 else
3317 DDEFunction{iip, specialize, typeof(_f), typeof(mass_matrix), typeof(analytic),
3318 typeof(tgrad),
3319 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3320 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3321 typeof(paramjac),
3322 typeof(observed),
3323 typeof(_colorvec), typeof(sys)}(_f, mass_matrix, analytic,
3324 tgrad, jac, jvp, vjp,
3325 jac_prototype, sparsity,
3326 Wfact, Wfact_t, paramjac,
3327 observed,
3328 _colorvec, sys)
3329 end
3330 end
3331
3332 function DDEFunction{iip}(f; kwargs...) where {iip}
3333 DDEFunction{iip, FullSpecialize}(f; kwargs...)
3334 end
3335 DDEFunction{iip}(f::DDEFunction; kwargs...) where {iip} = f
3336 DDEFunction(f; kwargs...) = DDEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3337 DDEFunction(f::DDEFunction; kwargs...) = f
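A brief illustrative sketch of the `DDEFunction` constructors above (the right-hand-side name is hypothetical): the history function `h` is an extra positional argument, so an out-of-place DDE right-hand side takes four arguments and `isinplace(f, 5)` returns false.

```julia
using SciMLBase

# DDE right-hand side f(u, h, p, t); the history is queried as h(p, t - tau).
# Four positional arguments => out-of-place.
f_dde = (u, h, p, t) -> -h(p, t - 1.0)
dde = DDEFunction(f_dde)
```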
3338
3339 @add_kwonly function DynamicalDDEFunction{iip}(f1, f2, mass_matrix, analytic, tgrad, jac,
3340 jvp, vjp,
3341 jac_prototype, sparsity, Wfact, Wfact_t,
3342 paramjac,
3343 observed,
3344 colorvec, sys) where {iip}
3345 f1 = f1 isa AbstractSciMLOperator ? f1 : DDEFunction(f1)
3346 f2 = DDEFunction(f2)
3347
3348 DynamicalDDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
3349 typeof(mass_matrix),
3350 typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
3351 typeof(vjp),
3352 typeof(jac_prototype), typeof(sparsity),
3353 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3354 typeof(colorvec),
3355 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
3356 vjp, jac_prototype, sparsity, Wfact, Wfact_t,
3357 paramjac, observed,
3358 colorvec, sys)
3359 end
3360 function DynamicalDDEFunction{iip, specialize}(f1, f2;
3361 mass_matrix = __has_mass_matrix(f1) ?
3362 f1.mass_matrix : I,
3363 analytic = __has_analytic(f1) ? f1.analytic :
3364 nothing,
3365 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
3366 jac = __has_jac(f1) ? f1.jac : nothing,
3367 jvp = __has_jvp(f1) ? f1.jvp : nothing,
3368 vjp = __has_vjp(f1) ? f1.vjp : nothing,
3369 jac_prototype = __has_jac_prototype(f1) ?
3370 f1.jac_prototype : nothing,
3371 sparsity = __has_sparsity(f1) ? f1.sparsity :
3372 jac_prototype,
3373 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
3374 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
3375 nothing,
3376 paramjac = __has_paramjac(f1) ? f1.paramjac :
3377 nothing,
3378 syms = nothing,
3379 indepsym = nothing,
3380 paramsyms = nothing,
3381 observed = __has_observed(f1) ? f1.observed :
3382 DEFAULT_OBSERVED,
3383 colorvec = __has_colorvec(f1) ? f1.colorvec :
3384 nothing,
3385 sys = __has_sys(f1) ? f1.sys : nothing) where {
3386 iip,
3387 specialize
3388 }
3389 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3390
3391 if specialize === NoSpecialize
3392 DynamicalDDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any,
3393 Any, Any, Any, Any, Any, Any, Any}(f1, f2, mass_matrix,
3394 analytic,
3395 tgrad,
3396 jac, jvp, vjp,
3397 jac_prototype,
3398 sparsity,
3399 Wfact, Wfact_t,
3400 paramjac,
3401 observed, colorvec,
3402 sys)
3403 else
3404 DynamicalDDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
3405 typeof(analytic),
3406 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
3407 typeof(jac_prototype), typeof(sparsity),
3408 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3409 typeof(colorvec),
3410 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
3411 vjp, jac_prototype, sparsity,
3412 Wfact, Wfact_t, paramjac, observed,
3413 colorvec, sys)
3414 end
3415 end
3416
3417 function DynamicalDDEFunction(f1, f2 = nothing; kwargs...)
3418 DynamicalDDEFunction{isinplace(f1, 6)}(f1, f2; kwargs...)
3419 end
3420 function DynamicalDDEFunction{iip}(f1, f2; kwargs...) where {iip}
3421 DynamicalDDEFunction{iip, FullSpecialize}(DDEFunction{iip}(f1),
3422 DDEFunction{iip}(f2); kwargs...)
3423 end
3424 DynamicalDDEFunction(f::DynamicalDDEFunction; kwargs...) = f
3425
3426 function SDDEFunction{iip, specialize}(f, g;
3427 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3428 I,
3429 analytic = __has_analytic(f) ? f.analytic : nothing,
3430 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3431 jac = __has_jac(f) ? f.jac : nothing,
3432 jvp = __has_jvp(f) ? f.jvp : nothing,
3433 vjp = __has_vjp(f) ? f.vjp : nothing,
3434 jac_prototype = __has_jac_prototype(f) ?
3435 f.jac_prototype :
3436 nothing,
3437 sparsity = __has_sparsity(f) ? f.sparsity :
3438 jac_prototype,
3439 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3440 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3441 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3442 ggprime = nothing,
3443 syms = nothing,
3444 indepsym = nothing,
3445 paramsyms = nothing,
3446 observed = __has_observed(f) ? f.observed :
3447 DEFAULT_OBSERVED,
3448 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3449 sys = __has_sys(f) ? f.sys : nothing) where {iip,
3450 specialize
3451 }
3452 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3453 if iip
3454 jac = update_coefficients! #(J,u,p,t)
3455 else
3456 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3457 end
3458 end
3459
3460 if jac_prototype !== nothing && colorvec === nothing &&
3461 ArrayInterface.fast_matrix_colors(jac_prototype)
3462 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3463 else
3464 _colorvec = colorvec
3465 end
3466
3467 _f = prepare_function(f)
3468 _g = prepare_function(g)
3469 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3470
3471 if specialize === NoSpecialize
3472 SDDEFunction{iip, specialize, Any, Any, Any, Any, Any,
3473 Any, Any, Any, Any, Any, Any, Any,
3474 Any, Any,
3475 Any, typeof(_colorvec), Any}(_f, _g, mass_matrix,
3476 analytic, tgrad,
3477 jac,
3478 jvp,
3479 vjp,
3480 jac_prototype,
3481 sparsity, Wfact,
3482 Wfact_t,
3483 paramjac, ggprime,
3484 observed,
3485 _colorvec,
3486 sys)
3487 else
3488 SDDEFunction{iip, specialize, typeof(_f), typeof(_g),
3489 typeof(mass_matrix), typeof(analytic), typeof(tgrad),
3490 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3491 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3492 typeof(paramjac), typeof(ggprime), typeof(observed),
3493 typeof(_colorvec), typeof(sys)}(_f, _g, mass_matrix,
3494 analytic, tgrad, jac,
3495 jvp, vjp, jac_prototype,
3496 sparsity, Wfact,
3497 Wfact_t,
3498 paramjac, ggprime,
3499 observed, _colorvec, sys)
3500 end
3501 end
3502
3503 function SDDEFunction{iip}(f, g; kwargs...) where {iip}
3504 SDDEFunction{iip, FullSpecialize}(f, g; kwargs...)
3505 end
3506 SDDEFunction{iip}(f::SDDEFunction, g; kwargs...) where {iip} = f
3507 function SDDEFunction(f, g; kwargs...)
3508 SDDEFunction{isinplace(f, 5), FullSpecialize}(f, g; kwargs...)
3509 end
3510 SDDEFunction(f::SDDEFunction; kwargs...) = f
3511
3512 function NonlinearFunction{iip, specialize}(f;
3513 mass_matrix = __has_mass_matrix(f) ?
3514 f.mass_matrix :
3515 I,
3516 analytic = __has_analytic(f) ? f.analytic :
3517 nothing,
3518 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3519 jac = __has_jac(f) ? f.jac : nothing,
3520 jvp = __has_jvp(f) ? f.jvp : nothing,
3521 vjp = __has_vjp(f) ? f.vjp : nothing,
3522 jac_prototype = __has_jac_prototype(f) ?
3523 f.jac_prototype : nothing,
3524 sparsity = __has_sparsity(f) ? f.sparsity :
3525 jac_prototype,
3526 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3527 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t :
3528 nothing,
3529 paramjac = __has_paramjac(f) ? f.paramjac :
3530 nothing,
3531 syms = nothing,
3532 paramsyms = nothing,
3533 observed = __has_observed(f) ? f.observed :
3534 DEFAULT_OBSERVED_NO_TIME,
3535 colorvec = __has_colorvec(f) ? f.colorvec :
3536 nothing,
3537 sys = __has_sys(f) ? f.sys : nothing,
3538 resid_prototype = __has_resid_prototype(f) ? f.resid_prototype : nothing) where {
3539 iip, specialize}
3540 if mass_matrix === I && f isa Tuple
3541 mass_matrix = ((I for i in 1:length(f))...,)
3542 end
3543
3544 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3545 if iip
3546 jac = update_coefficients! #(J,u,p,t)
3547 else
3548 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3549 end
3550 end
3551
3552 if jac_prototype !== nothing && colorvec === nothing &&
3553 ArrayInterface.fast_matrix_colors(jac_prototype)
3554 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3555 else
3556 _colorvec = colorvec
3557 end
3558
3559 jaciip = jac !== nothing ? isinplace(jac, 3, "jac", iip) : iip
3560 jvpiip = jvp !== nothing ? isinplace(jvp, 4, "jvp", iip) : iip
3561 vjpiip = vjp !== nothing ? isinplace(vjp, 4, "vjp", iip) : iip
3562
3563 nonconforming = (jaciip, jvpiip, vjpiip) .!= iip
3564 if any(nonconforming)
3565 nonconforming = findall(nonconforming)
3566 functions = ["jac", "jvp", "vjp"][nonconforming]
3567 throw(NonconformingFunctionsError(functions))
3568 end
3569
3570 _f = prepare_function(f)
3571 sys = sys_or_symbolcache(sys, syms, paramsyms)
3572 if specialize === NoSpecialize
3573 NonlinearFunction{iip, specialize,
3574 Any, Any, Any, Any, Any,
3575 Any, Any, Any, Any, Any,
3576 Any, Any, Any,
3577 typeof(_colorvec), Any, Any}(_f, mass_matrix,
3578 analytic, tgrad, jac,
3579 jvp, vjp,
3580 jac_prototype,
3581 sparsity, Wfact,
3582 Wfact_t, paramjac,
3583 observed,
3584 _colorvec, sys, resid_prototype)
3585 else
3586 NonlinearFunction{iip, specialize,
3587 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
3588 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3589 typeof(sparsity), typeof(Wfact),
3590 typeof(Wfact_t), typeof(paramjac),
3591 typeof(observed),
3592 typeof(_colorvec), typeof(sys), typeof(resid_prototype)}(_f, mass_matrix,
3593 analytic, tgrad, jac,
3594 jvp, vjp, jac_prototype, sparsity,
3595 Wfact,
3596 Wfact_t, paramjac,
3597 observed, _colorvec, sys, resid_prototype)
3598 end
3599 end
3600
3601 function NonlinearFunction{iip}(f; kwargs...) where {iip}
3602 NonlinearFunction{iip, FullSpecialize}(f; kwargs...)
3603 end
3604 NonlinearFunction{iip}(f::NonlinearFunction; kwargs...) where {iip} = f
3605 function NonlinearFunction(f; kwargs...)
3606 NonlinearFunction{isinplace(f, 3), FullSpecialize}(f; kwargs...)
3607 end
3608 NonlinearFunction(f::NonlinearFunction; kwargs...) = f
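An illustrative sketch of the `NonlinearFunction` constructors above (the residual and Jacobian names are invented for the example): when a user supplies `jac`, the constructor verifies its in-placeness against the residual via `isinplace(jac, 3, "jac", iip)` and throws a `NonconformingFunctionsError` on mismatch.

```julia
using SciMLBase, LinearAlgebra

# Out-of-place residual f(u, p) paired with an out-of-place Jacobian jac(u, p).
f_nl(u, p) = u .^ 2 .- p
jac_nl(u, p) = Diagonal(2 .* u)    # exact Jacobian of this diagonal system
nf = NonlinearFunction{false}(f_nl; jac = jac_nl)
```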
3609
3610 function IntervalNonlinearFunction{iip, specialize}(f;
3611 analytic = __has_analytic(f) ?
3612 f.analytic :
3613 nothing,
3614 syms = nothing,
3615 paramsyms = nothing,
3616 observed = __has_observed(f) ?
3617 f.observed :
3618 DEFAULT_OBSERVED_NO_TIME,
3619 sys = __has_sys(f) ? f.sys : nothing) where {
3620 iip,
3621 specialize
3622 }
3623 _f = prepare_function(f)
3624 sys = sys_or_symbolcache(sys, syms, paramsyms)
3625
3626 if specialize === NoSpecialize
3627 IntervalNonlinearFunction{iip, specialize,
3628 Any, Any, Any, Any}(_f, analytic, observed, sys)
3629 else
3630 IntervalNonlinearFunction{iip, specialize,
3631 typeof(_f), typeof(analytic),
3632 typeof(observed),
3633 typeof(sys)}(_f, analytic,
3634 observed, sys)
3635 end
3636 end
3637
3638 function IntervalNonlinearFunction{iip}(f; kwargs...) where {iip}
3639 IntervalNonlinearFunction{iip, FullSpecialize}(f; kwargs...)
3640 end
3641 IntervalNonlinearFunction{iip}(f::IntervalNonlinearFunction; kwargs...) where {iip} = f
3642 function IntervalNonlinearFunction(f; kwargs...)
3643 IntervalNonlinearFunction{isinplace(f, 3), FullSpecialize}(f; kwargs...)
3644 end
3645 IntervalNonlinearFunction(f::IntervalNonlinearFunction; kwargs...) = f
3646
3647 struct NoAD <: AbstractADType end
3648
3649 (f::OptimizationFunction)(args...) = f.f(args...)
3650 OptimizationFunction(args...; kwargs...) = OptimizationFunction{true}(args...; kwargs...)
3651
3652 function OptimizationFunction{iip}(f, adtype::AbstractADType = NoAD();
3653 grad = nothing, hess = nothing, hv = nothing,
3654 cons = nothing, cons_j = nothing, cons_h = nothing,
3655 hess_prototype = nothing,
3656 cons_jac_prototype = __has_jac_prototype(f) ?
3657 f.jac_prototype : nothing,
3658 cons_hess_prototype = nothing,
3659 syms = nothing,
3660 paramsyms = nothing,
3661 observed = __has_observed(f) ? f.observed :
3662 DEFAULT_OBSERVED_NO_TIME,
3663 expr = nothing, cons_expr = nothing,
3664 sys = __has_sys(f) ? f.sys : nothing,
3665 lag_h = nothing, lag_hess_prototype = nothing,
3666 hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3667 cons_jac_colorvec = __has_colorvec(f) ? f.colorvec :
3668 nothing,
3669 cons_hess_colorvec = __has_colorvec(f) ? f.colorvec :
3670 nothing,
3671 lag_hess_colorvec = nothing) where {iip}
3672 isinplace(f, 2; has_two_dispatches = false, isoptimization = true)
3673 sys = sys_or_symbolcache(sys, syms, paramsyms)
3674 OptimizationFunction{iip, typeof(adtype), typeof(f), typeof(grad), typeof(hess),
3675 typeof(hv),
3676 typeof(cons), typeof(cons_j), typeof(cons_h),
3677 typeof(hess_prototype),
3678 typeof(cons_jac_prototype), typeof(cons_hess_prototype),
3679 typeof(observed),
3680 typeof(expr), typeof(cons_expr), typeof(sys), typeof(lag_h),
3681 typeof(lag_hess_prototype), typeof(hess_colorvec),
3682 typeof(cons_jac_colorvec), typeof(cons_hess_colorvec),
3683 typeof(lag_hess_colorvec)
3684 }(f, adtype, grad, hess,
3685 hv, cons, cons_j, cons_h,
3686 hess_prototype, cons_jac_prototype,
3687 cons_hess_prototype, observed, expr, cons_expr, sys,
3688 lag_h, lag_hess_prototype, hess_colorvec, cons_jac_colorvec,
3689 cons_hess_colorvec, lag_hess_colorvec)
3690 end
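A short sketch of the `OptimizationFunction` constructor above (the objective is the standard Rosenbrock example, not part of this file): with no AD backend supplied, `adtype` defaults to `NoAD()`, so derivative fields stay `nothing` unless passed explicitly.

```julia
using SciMLBase

# Objective f(u, p); adtype defaults to NoAD(), so grad/hess remain nothing.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
of = OptimizationFunction(rosenbrock)
```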
3691
3692 function BVPFunction{iip, specialize, twopoint}(f, bc;
3693 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
3694 analytic = __has_analytic(f) ? f.analytic : nothing,
3695 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3696 jac = __has_jac(f) ? f.jac : nothing,
3697 bcjac = __has_jac(bc) ? bc.jac : nothing,
3698 jvp = __has_jvp(f) ? f.jvp : nothing,
3699 vjp = __has_vjp(f) ? f.vjp : nothing,
3700 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
3701 bcjac_prototype = __has_jac_prototype(bc) ? bc.jac_prototype : nothing,
3702 bcresid_prototype = nothing,
3703 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
3704 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3705 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3706 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3707 syms = nothing,
3708 indepsym = nothing,
3709 paramsyms = nothing,
3710 observed = __has_observed(f) ? f.observed : DEFAULT_OBSERVED,
3711 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3712 bccolorvec = __has_colorvec(bc) ? bc.colorvec : nothing,
3713 sys = __has_sys(f) ? f.sys : nothing) where {iip, specialize, twopoint}
3714 if mass_matrix === I && f isa Tuple
3715 mass_matrix = ((I for i in 1:length(f))...,)
3716 end
3717
3718 if (specialize === FunctionWrapperSpecialize) &&
3719 !(f isa FunctionWrappersWrappers.FunctionWrappersWrapper)
3720 error("FunctionWrapperSpecialize must be used on the problem constructor for access to u0, p, and t types!")
3721 end
3722
3723 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3724 if iip
3725 jac = update_coefficients! #(J,u,p,t)
3726 else
3727 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3728 end
3729 end
3730
3731 if bcjac === nothing && isa(bcjac_prototype, AbstractSciMLOperator)
3732 if iip
3733 bcjac = update_coefficients! #(J,u,p,t)
3734 else
3735 bcjac = (u, p, t) -> update_coefficients!(deepcopy(bcjac_prototype), u, p, t)
3736 end
3737 end
3738
3739 if jac_prototype !== nothing && colorvec === nothing &&
3740 ArrayInterface.fast_matrix_colors(jac_prototype)
3741 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3742 else
3743 _colorvec = colorvec
3744 end
3745
3746 if bcjac_prototype !== nothing && bccolorvec === nothing &&
3747 ArrayInterface.fast_matrix_colors(bcjac_prototype)
3748 _bccolorvec = ArrayInterface.matrix_colors(bcjac_prototype)
3749 else
3750 _bccolorvec = bccolorvec
3751 end
3752
3753 bciip = if !twopoint
3754 isinplace(bc, 4, "bc", iip)
3755 else
3756 @assert length(bc) == 2
3757 bc = Tuple(bc)
3758 if isinplace(first(bc), 3, "bc", iip) != isinplace(last(bc), 3, "bc", iip)
3759 throw(NonconformingFunctionsError(["bc[1]", "bc[2]"]))
3760 end
3761 isinplace(first(bc), 3, "bc", iip)
3762 end
3763 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
3764 bcjaciip = if bcjac !== nothing
3765 if !twopoint
3766 isinplace(bcjac, 4, "bcjac", bciip)
3767 else
3768 @assert length(bcjac) == 2
3769 bcjac = Tuple(bcjac)
3770 if isinplace(first(bcjac), 3, "bcjac", bciip) !=
3771 isinplace(last(bcjac), 3, "bcjac", bciip)
3772 throw(NonconformingFunctionsError(["bcjac[1]", "bcjac[2]"]))
3773 end
3774 isinplace(first(bcjac), 3, "bcjac", bciip)
3775 end
3776 else
3777 bciip
3778 end
3779 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
3780 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
3781 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
3782 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
3783 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
3784 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
3785
3786 nonconforming = (bciip, jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
3787 paramjaciip) .!= iip
3788 bc_nonconforming = bcjaciip .!= bciip
3789 if any(nonconforming)
3790 nonconforming = findall(nonconforming)
3791 functions = ["bc", "jac", "bcjac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t",
3792 "paramjac"][nonconforming]
3793 throw(NonconformingFunctionsError(functions))
3794 end
3795
3796 if twopoint
3797 if iip && (bcresid_prototype === nothing || length(bcresid_prototype) != 2)
3798 error("bcresid_prototype must be a tuple / indexable collection of length 2 for an in-place TwoPointBVPFunction")
3799 end
3800 if bcresid_prototype !== nothing && length(bcresid_prototype) == 2
3801 bcresid_prototype = ArrayPartition(first(bcresid_prototype),
3802 last(bcresid_prototype))
3803 end
3804
3805 bccolorvec !== nothing && length(bccolorvec) == 2 &&
3806 (bccolorvec = Tuple(bccolorvec))
3807
3808 bcjac_prototype !== nothing && length(bcjac_prototype) == 2 &&
3809 (bcjac_prototype = Tuple(bcjac_prototype))
3810 end
3811
3812 if any(bc_nonconforming)
3813 bc_nonconforming = findall(bc_nonconforming)
3814 functions = ["bcjac"][bc_nonconforming]
3815 throw(NonconformingFunctionsError(functions))
3816 end
3817
3818 _f = prepare_function(f)
3819
3820 sys = sys_or_symbolcache(sys, syms, paramsyms, indepsym)
3821
3822 if specialize === NoSpecialize
3823 BVPFunction{iip, specialize, twopoint, Any, Any, Any, Any, Any,
3824 Any, Any, Any, Any, Any, Any, Any, Any, Any, Any,
3825 Any,
3826 Any, typeof(_colorvec), typeof(_bccolorvec), Any}(_f, bc, mass_matrix,
3827 analytic, tgrad, jac, bcjac, jvp, vjp, jac_prototype,
3828 bcjac_prototype, bcresid_prototype,
3829 sparsity, Wfact, Wfact_t, paramjac, observed,
3830 _colorvec, _bccolorvec, sys)
3831 else
3832 BVPFunction{iip, specialize, twopoint, typeof(_f), typeof(bc),
3833 typeof(mass_matrix), typeof(analytic), typeof(tgrad), typeof(jac),
3834 typeof(bcjac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3835 typeof(bcjac_prototype), typeof(bcresid_prototype), typeof(sparsity),
3836 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3837 typeof(_colorvec), typeof(_bccolorvec), typeof(sys)}(
3838 _f, bc, mass_matrix, analytic,
3839 tgrad, jac, bcjac, jvp, vjp,
3840 jac_prototype, bcjac_prototype, bcresid_prototype, sparsity,
3841 Wfact, Wfact_t, paramjac,
3842 observed,
3843 _colorvec, _bccolorvec, sys)
3844 end
3845 end
3846
3847 function BVPFunction{iip}(f, bc; twopoint::Union{Val, Bool} = Val(false),
3848 kwargs...) where {iip}
3849 BVPFunction{iip, FullSpecialize, _unwrap_val(twopoint)}(f, bc; kwargs...)
3850 end
3851 BVPFunction{iip}(f::BVPFunction, bc; kwargs...) where {iip} = f
3852 function BVPFunction(f, bc; twopoint::Union{Val, Bool} = Val(false), kwargs...)
3853 BVPFunction{isinplace(f, 4), FullSpecialize, _unwrap_val(twopoint)}(f, bc; kwargs...)
3854 end
3855 BVPFunction(f::BVPFunction; kwargs...) = f
3856
3857 function IntegralFunction{iip, specialize}(f, integrand_prototype) where {iip, specialize}
3858 _f = prepare_function(f)
3859 IntegralFunction{iip, specialize, typeof(_f), typeof(integrand_prototype)}(_f,
3860 integrand_prototype)
3861 end
3862
3863 function IntegralFunction{iip}(f, integrand_prototype) where {iip}
3864 IntegralFunction{iip, FullSpecialize}(f, integrand_prototype)
3865 end
3866 function IntegralFunction(f)
3867 calculated_iip = isinplace(f, 3, "integral", true)
3868 if calculated_iip
3869 throw(IntegrandMismatchFunctionError(calculated_iip, false))
3870 end
3871 IntegralFunction{false}(f, nothing)
3872 end
3873 function IntegralFunction(f, integrand_prototype)
3874 calculated_iip = isinplace(f, 3, "integral", true)
3875 IntegralFunction{calculated_iip}(f, integrand_prototype)
3876 end
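An illustrative sketch of the `IntegralFunction` convenience constructors above (integrands invented for the example): arity decides the form. A two-argument `f(u, p)` is out-of-place and needs no prototype; a three-argument `f!(y, u, p)` is in-place and must be paired with an `integrand_prototype` so the output shape and element type are known.

```julia
using SciMLBase

# Out-of-place integrand: isinplace(f, 3, "integral", true) returns false.
oop = IntegralFunction((u, p) -> sin(p * u))

# In-place integrand writes into y; the zeros(3) prototype fixes y's shape.
iip = IntegralFunction((y, u, p) -> (y .= sin.(p .* u)), zeros(3))
```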
3877
3878 function BatchIntegralFunction{iip, specialize}(f, integrand_prototype;
3879 max_batch::Integer = typemax(Int)) where {iip, specialize}
3880 _f = prepare_function(f)
3881 BatchIntegralFunction{
3882 iip,
3883 specialize,
3884 typeof(_f),
3885 typeof(integrand_prototype)
3886 }(_f,
3887 integrand_prototype,
3888 max_batch)
3889 end
3890
3891 function BatchIntegralFunction{iip}(f,
3892 integrand_prototype;
3893 kwargs...) where {iip}
3894 return BatchIntegralFunction{iip, FullSpecialize}(f,
3895 integrand_prototype;
3896 kwargs...)
3897 end
3898
3899 function BatchIntegralFunction(f; kwargs...)
3900 calculated_iip = isinplace(f, 3, "batchintegral", true)
3901 if calculated_iip
3902 throw(IntegrandMismatchFunctionError(calculated_iip, false))
3903 end
3904 BatchIntegralFunction{false}(f, nothing; kwargs...)
3905 end
3906 function BatchIntegralFunction(f, integrand_prototype; kwargs...)
3907 calculated_iip = isinplace(f, 3, "batchintegral", true)
3908 BatchIntegralFunction{calculated_iip}(f, integrand_prototype; kwargs...)
3909 end
3910
3911 ########## Utility functions
3912
3913 function sys_or_symbolcache(sys, syms, paramsyms, indepsym = nothing)
3914 if sys === nothing &&
3915 (syms !== nothing || paramsyms !== nothing || indepsym !== nothing)
3916 Base.depwarn(
3917 "The use of keyword arguments `syms`, `paramsyms` and `indepsym` for `SciMLFunction`s is deprecated. Pass `sys = SymbolCache(syms, paramsyms, indepsym)` instead.",
3918 :syms)
3919 sys = SymbolCache(syms, paramsyms, indepsym)
3920 end
3921 return sys
3922 end
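A sketch of the non-deprecated spelling that `sys_or_symbolcache` steers users toward (symbol names are illustrative): construct the `SymbolCache` directly and pass it as `sys` instead of using the deprecated `syms`/`paramsyms`/`indepsym` keywords.

```julia
using SymbolicIndexingInterface: SymbolCache

# Variables, parameters, and independent variable, in that order.
sys = SymbolCache([:x, :y], [:a, :b], :t)
```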
3923
########## Existence Functions

# Check that field/property exists (may be nothing)
__has_jac(f) = isdefined(f, :jac)
__has_jvp(f) = isdefined(f, :jvp)
__has_vjp(f) = isdefined(f, :vjp)
__has_tgrad(f) = isdefined(f, :tgrad)
__has_Wfact(f) = isdefined(f, :Wfact)
__has_Wfact_t(f) = isdefined(f, :Wfact_t)
__has_W_prototype(f) = isdefined(f, :W_prototype)
__has_paramjac(f) = isdefined(f, :paramjac)
__has_jac_prototype(f) = isdefined(f, :jac_prototype)
__has_sparsity(f) = isdefined(f, :sparsity)
__has_mass_matrix(f) = isdefined(f, :mass_matrix)
__has_syms(f) = isdefined(f, :syms)
__has_indepsym(f) = isdefined(f, :indepsym)
__has_paramsyms(f) = isdefined(f, :paramsyms)
__has_observed(f) = isdefined(f, :observed)
__has_analytic(f) = isdefined(f, :analytic)
__has_colorvec(f) = isdefined(f, :colorvec)
__has_sys(f) = isdefined(f, :sys)
__has_analytic_full(f) = isdefined(f, :analytic_full)
__has_resid_prototype(f) = isdefined(f, :resid_prototype)
__has_initializeprob(f) = isdefined(f, :initializeprob)
__has_initializeprobmap(f) = isdefined(f, :initializeprobmap)

# compatibility
has_invW(f::AbstractSciMLFunction) = false
has_analytic(f::AbstractSciMLFunction) = __has_analytic(f) && f.analytic !== nothing
has_jac(f::AbstractSciMLFunction) = __has_jac(f) && f.jac !== nothing
has_jvp(f::AbstractSciMLFunction) = __has_jvp(f) && f.jvp !== nothing
has_vjp(f::AbstractSciMLFunction) = __has_vjp(f) && f.vjp !== nothing
has_tgrad(f::AbstractSciMLFunction) = __has_tgrad(f) && f.tgrad !== nothing
has_Wfact(f::AbstractSciMLFunction) = __has_Wfact(f) && f.Wfact !== nothing
has_Wfact_t(f::AbstractSciMLFunction) = __has_Wfact_t(f) && f.Wfact_t !== nothing
has_paramjac(f::AbstractSciMLFunction) = __has_paramjac(f) && f.paramjac !== nothing
has_sys(f::AbstractSciMLFunction) = __has_sys(f) && f.sys !== nothing
function has_initializeprob(f::AbstractSciMLFunction)
    __has_initializeprob(f) && f.initializeprob !== nothing
end
function has_initializeprobmap(f::AbstractSciMLFunction)
    __has_initializeprobmap(f) && f.initializeprobmap !== nothing
end

function has_syms(f::AbstractSciMLFunction)
    if __has_syms(f)
        f.syms !== nothing
    else
        !isempty(variable_symbols(f))
    end
end
function has_indepsym(f::AbstractSciMLFunction)
    if __has_indepsym(f)
        f.indepsym !== nothing
    else
        !isempty(independent_variable_symbols(f))
    end
end
function has_paramsyms(f::AbstractSciMLFunction)
    if __has_paramsyms(f)
        f.paramsyms !== nothing
    else
        !isempty(parameter_symbols(f))
    end
end
function has_observed(f::AbstractSciMLFunction)
    __has_observed(f) && f.observed !== DEFAULT_OBSERVED && f.observed !== nothing
end
has_colorvec(f::AbstractSciMLFunction) = __has_colorvec(f) && f.colorvec !== nothing

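Taken together, each `has_X` trait reports `true` only when the field both exists on the struct and is non-`nothing`, so optional derivative information can be queried uniformly across function types. A hedged sketch (assumes SciMLBase is loaded; `f!` and `jac!` are illustrative names):

```julia
using SciMLBase

f!(du, u, p, t) = (du .= -u)
jac!(J, u, p, t) = (J .= -one(eltype(J)))

fun = ODEFunction(f!; jac = jac!)

SciMLBase.has_jac(fun)    # true:  the :jac field is set and non-nothing
SciMLBase.has_tgrad(fun)  # false: the :tgrad field exists but is nothing
```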
# TODO: find an appropriate way to check `has_*`
has_jac(f::Union{SplitFunction, SplitSDEFunction}) = has_jac(f.f1)
has_jvp(f::Union{SplitFunction, SplitSDEFunction}) = has_jvp(f.f1)
has_vjp(f::Union{SplitFunction, SplitSDEFunction}) = has_vjp(f.f1)
has_tgrad(f::Union{SplitFunction, SplitSDEFunction}) = has_tgrad(f.f1)
has_Wfact(f::Union{SplitFunction, SplitSDEFunction}) = has_Wfact(f.f1)
has_Wfact_t(f::Union{SplitFunction, SplitSDEFunction}) = has_Wfact_t(f.f1)
has_paramjac(f::Union{SplitFunction, SplitSDEFunction}) = has_paramjac(f.f1)
has_colorvec(f::Union{SplitFunction, SplitSDEFunction}) = has_colorvec(f.f1)

has_jac(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_jac(f.f1)
has_jvp(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_jvp(f.f1)
has_vjp(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_vjp(f.f1)
has_tgrad(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_tgrad(f.f1)
has_Wfact(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_Wfact(f.f1)
has_Wfact_t(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_Wfact_t(f.f1)
has_paramjac(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_paramjac(f.f1)
has_colorvec(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_colorvec(f.f1)

has_jac(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_jac(f.f)
has_jvp(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_jvp(f.f)
has_vjp(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_vjp(f.f)
has_tgrad(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_tgrad(f.f)
has_Wfact(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_Wfact(f.f)
has_Wfact_t(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_Wfact_t(f.f)
has_paramjac(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_paramjac(f.f)
has_colorvec(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_colorvec(f.f)

has_jac(f::JacobianWrapper) = has_jac(f.f)
has_jvp(f::JacobianWrapper) = has_jvp(f.f)
has_vjp(f::JacobianWrapper) = has_vjp(f.f)
has_tgrad(f::JacobianWrapper) = has_tgrad(f.f)
has_Wfact(f::JacobianWrapper) = has_Wfact(f.f)
has_Wfact_t(f::JacobianWrapper) = has_Wfact_t(f.f)
has_paramjac(f::JacobianWrapper) = has_paramjac(f.f)
has_colorvec(f::JacobianWrapper) = has_colorvec(f.f)

########## Additional traits

islinear(::AbstractDiffEqFunction) = false
islinear(f::ODEFunction) = islinear(f.f)
islinear(f::SplitFunction) = islinear(f.f1)

struct IncrementingODEFunction{iip, specialize, F} <: AbstractODEFunction{iip}
    f::F
end

function IncrementingODEFunction{iip, specialize}(f) where {iip, specialize}
    _f = prepare_function(f)
    IncrementingODEFunction{iip, specialize, typeof(_f)}(_f)
end

function IncrementingODEFunction{iip}(f) where {iip}
    IncrementingODEFunction{iip, FullSpecialize}(f)
end
function IncrementingODEFunction(f)
    IncrementingODEFunction{isinplace(f, 7), FullSpecialize}(f)
end

(f::IncrementingODEFunction)(args...; kwargs...) = f.f(args...; kwargs...)

for S in [:ODEFunction
          :DiscreteFunction
          :DAEFunction
          :DDEFunction
          :SDEFunction
          :RODEFunction
          :SDDEFunction
          :NonlinearFunction
          :IntervalNonlinearFunction
          :IncrementingODEFunction
          :BVPFunction
          :IntegralFunction
          :BatchIntegralFunction]
    @eval begin
        function ConstructionBase.constructorof(::Type{<:$S{iip}}) where {iip}
            (args...) -> $S{iip, FullSpecialize, map(typeof, args)...}(args...)
        end
    end
end

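The loop above gives every listed function type a `ConstructionBase.constructorof` method that preserves the `iip` type parameter while recomputing the field-type parameters from the new field values, pinning the specialization to `FullSpecialize`. This is what lets ConstructionBase-based tooling such as Accessors.jl rebuild a SciMLFunction with one field swapped. A sketch under that assumption (not a definitive usage pattern from this file):

```julia
using SciMLBase, Accessors

fun = ODEFunction{true}((du, u, p, t) -> (du .= u))

# @set lowers to ConstructionBase.setproperties, which uses the
# constructorof method defined above to rebuild the iip-parameterized type.
fun2 = @set fun.jac = (J, u, p, t) -> (J .= 1)
```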
function SymbolicIndexingInterface.symbolic_container(fn::AbstractSciMLFunction)
    has_sys(fn) ? fn.sys : SymbolCache()
end

function SymbolicIndexingInterface.is_observed(fn::AbstractSciMLFunction, sym)
    has_sys(fn) ? is_observed(fn.sys, sym) : has_observed(fn)
end

function SymbolicIndexingInterface.observed(fn::AbstractSciMLFunction, sym)
    if has_observed(fn)
        if hasmethod(fn.observed, Tuple{Any})
            return fn.observed(sym)
        else
            return (args...) -> fn.observed(sym, args...)
        end
    end
    error("SciMLFunction does not have observed")
end

function SymbolicIndexingInterface.observed(fn::AbstractSciMLFunction, sym::Symbol)
    return SymbolicIndexingInterface.observed(fn, getproperty(fn.sys, sym))
end

SymbolicIndexingInterface.constant_structure(::AbstractSciMLFunction) = true
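
These methods plug SciMLFunctions into the SymbolicIndexingInterface: the attached `sys` acts as the symbolic container (falling back to an empty `SymbolCache` when none is set), and `observed` resolves either eagerly or lazily depending on the arity of the stored observed function. A sketch, where the `sys` is an assumed plain `SymbolCache` rather than a full modeling-system object:

```julia
using SciMLBase, SymbolicIndexingInterface

fun = ODEFunction((u, p, t) -> -u; sys = SymbolCache([:x], [:a], :t))

variable_symbols(fun)   # delegated through symbolic_container → [:x]
parameter_symbols(fun)  # → [:a]
```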