32 commits
3bf32f1
Bump actions/checkout from 4 to 5
dependabot[bot] Aug 12, 2025
1161d80
Merge pull request #200 from JuliaDiff/dependabot/github_actions/acti…
ChrisRackauckas Aug 12, 2025
bdfcc99
Add comprehensive docstrings for utility and internal functions
ChrisRackauckas Aug 13, 2025
5f53259
Apply SciMLStyle formatting to all documented source files
ChrisRackauckas Aug 13, 2025
07a1c78
Update src/epsilons.jl
ChrisRackauckas Aug 13, 2025
18e838e
Apply suggestions from code review
ChrisRackauckas Aug 13, 2025
410c999
Add JVP functions to API documentation
ChrisRackauckas Aug 13, 2025
5409eb4
Restructure API documentation into separate pages
ChrisRackauckas Aug 13, 2025
9e3f51d
Restructure documentation to remove API reference level
ChrisRackauckas Aug 13, 2025
436cc25
Remove API reference section completely
ChrisRackauckas Aug 13, 2025
422200c
Add dedicated epsilon page and reorganize internal utilities
ChrisRackauckas Aug 13, 2025
262b06f
Merge pull request #201 from ChrisRackauckas-Claude/add-comprehensive…
ChrisRackauckas Aug 13, 2025
83ff37e
Update Project.toml
ChrisRackauckas Aug 13, 2025
1f15677
Fix repeated evaluation of fx0 in forward gradient computation
ChrisRackauckas Aug 16, 2025
fdf769d
Update src/gradients.jl
ChrisRackauckas Aug 16, 2025
5ce3014
Update src/gradients.jl
ChrisRackauckas Aug 16, 2025
d79e161
Update ordinarydiffeq_tridiagonal_solve.jl
ChrisRackauckas Aug 16, 2025
62990cb
Update runtests.jl
ChrisRackauckas Aug 16, 2025
72f5d07
Merge pull request #203 from ChrisRackauckas-Claude/fix-repeated-fx0-…
ChrisRackauckas Aug 16, 2025
3e64a97
Update Project.toml
ChrisRackauckas Aug 16, 2025
cfa0c68
Update Docs: Replace `SparseDiffTools.jl` with `SparseConnectivityTra…
DanielDoehring Aug 26, 2025
1f79574
eg
DanielDoehring Aug 26, 2025
9d2d9a7
Merge pull request #204 from DanielDoehring/UpdateDocs_Sparsity
ChrisRackauckas Aug 26, 2025
42ae452
Use Documenter.jl v1 for docs/
abhro Oct 11, 2025
c7f3b13
Use `[sources]` attribute in Project.toml
abhro Oct 11, 2025
b90b7b8
Update spacing and output for code block examples
abhro Oct 11, 2025
fc614fd
Update syntax highlight tag in code block example
abhro Oct 11, 2025
7af1dd6
Update method signatures in docstrings
abhro Oct 11, 2025
ff841bb
Merge pull request #205 from abhro/update-docs
ChrisRackauckas Oct 12, 2025
c49cdd2
Update Project.toml
ChrisRackauckas Oct 12, 2025
0c05618
Update finitedifftests.jl for non-allocating
ChrisRackauckas Oct 12, 2025
3a8c3d8
Merge pull request #206 from JuliaDiff/ChrisRackauckas-patch-1
ChrisRackauckas Oct 12, 2025
2 changes: 1 addition & 1 deletion .github/workflows/CI.yml
@@ -18,7 +18,7 @@ jobs:
version:
- '1'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
2 changes: 1 addition & 1 deletion .github/workflows/Documentation.yml
@@ -12,7 +12,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: julia-actions/setup-julia@latest
with:
version: '1'
4 changes: 2 additions & 2 deletions .github/workflows/Downstream.yml
@@ -19,14 +19,14 @@ jobs:
package:
- {user: SciML, repo: OrdinaryDiffEq.jl, group: InterfaceII}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.julia-version }}
arch: x64
- uses: julia-actions/julia-buildpkg@latest
- name: Clone Downstream
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
repository: ${{ matrix.package.user }}/${{ matrix.package.repo }}
path: downstream
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "FiniteDiff"
uuid = "6a86dc24-6348-571c-b903-95158fe2bd41"
version = "2.27.0"
version = "2.29.0"

[deps]
ArrayInterface = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9"
5 changes: 2 additions & 3 deletions README.md
@@ -71,9 +71,8 @@ Coloring vectors are allowed to be supplied to the Jacobian routines, and these
the directional derivatives for constructing the Jacobian. For example, an accurate
NxN tridiagonal Jacobian can be computed in just 4 `f` calls by using
`colorvec=repeat(1:3,N÷3)`. For information on automatically generating coloring
vectors of sparse matrices, see [SparseDiffTools.jl](https://github.com/JuliaDiff/SparseDiffTools.jl).

Hessian coloring support is coming soon!
vectors of sparse matrices, see [SparseMatrixColorings.jl](https://github.com/gdalle/SparseMatrixColorings.jl) and
the now deprecated [SparseDiffTools.jl](https://github.com/JuliaDiff/SparseDiffTools.jl).

## Contributing

5 changes: 4 additions & 1 deletion docs/Project.toml
@@ -3,4 +3,7 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"

[compat]
Documenter = "0.27"
Documenter = "1"

[sources]
FiniteDiff = {path = ".."}
2 changes: 2 additions & 0 deletions docs/make.jl
@@ -23,6 +23,7 @@ open(joinpath(@__DIR__, "src", "index.md"), "w") do io
for line in eachline(joinpath(dirname(@__DIR__), "README.md"))
println(io, line)
end

for line in eachline(joinpath(@__DIR__, "src", "reproducibility.md"))
println(io, line)
end
@@ -37,6 +38,7 @@ makedocs(sitename="FiniteDiff.jl",
doctest=false,
format=Documenter.HTML(assets=["assets/favicon.ico"],
canonical="https://docs.sciml.ai/FiniteDiff/stable/"),
warnonly=[:missing_docs],
pages=pages)

deploydocs(repo="github.com/JuliaDiff/FiniteDiff.jl.git"; push_preview=true)
12 changes: 11 additions & 1 deletion docs/pages.jl
@@ -1,3 +1,13 @@
# Put in a separate page so it can be used by SciMLDocs.jl

pages = ["Home" => "index.md", "tutorials.md", "api.md"]
pages = [
"Home" => "index.md",
"Tutorials" => "tutorials.md",
"Derivatives" => "derivatives.md",
"Gradients" => "gradients.md",
"Jacobians" => "jacobians.md",
"Hessians" => "hessians.md",
"Jacobian-Vector Products" => "jvp.md",
"Step Size Selection" => "epsilons.md",
"Internal Utilities" => "utilities.md"
]
58 changes: 0 additions & 58 deletions docs/src/api.md

This file was deleted.

5 changes: 4 additions & 1 deletion docs/src/assets/Project.toml
@@ -3,4 +3,7 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"

[compat]
Documenter = "0.27"
Documenter = "1"

[sources]
FiniteDiff = {path = ".."}
26 changes: 26 additions & 0 deletions docs/src/derivatives.md
@@ -0,0 +1,26 @@
# Derivatives

Functions for computing derivatives of scalar-valued functions.

## Overview

Derivatives are computed for scalar→scalar maps `f(x)` where `x` can be a single point or a collection of points. The derivative functions support:

- **Forward differences**: `O(1)` function evaluation per point, `O(h)` accuracy
- **Central differences**: `O(2)` function evaluations per point, `O(h²)` accuracy
- **Complex step**: `O(1)` function evaluation per point, machine precision accuracy

For optimal performance with repeated computations, use the cached versions with `DerivativeCache`.
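
As a minimal sketch of the out-of-place form (the `sin`/`cos` check here is an illustrative assumption, not taken from the package tests):

```julia
using FiniteDiff

# Derivative of sin at x = 1.0; the exact value is cos(1.0)
d_forward = FiniteDiff.finite_difference_derivative(sin, 1.0, Val(:forward))
d_central = FiniteDiff.finite_difference_derivative(sin, 1.0, Val(:central))
d_complex = FiniteDiff.finite_difference_derivative(sin, 1.0, Val(:complex))

d_central ≈ cos(1.0)  # true: central differences give roughly O(h²) ≈ 1e-11 error here
```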

## Functions

```@docs
FiniteDiff.finite_difference_derivative
FiniteDiff.finite_difference_derivative!
```

## Cache

```@docs
FiniteDiff.DerivativeCache
```
71 changes: 71 additions & 0 deletions docs/src/epsilons.md
@@ -0,0 +1,71 @@
# Step Size Selection (Epsilons)

Functions and theory for computing optimal step sizes in finite difference approximations.

## Theory

The choice of step size (epsilon) in finite difference methods is critical for accuracy. Too large a step leads to truncation error, while too small a step leads to round-off error. The optimal step size balances these two sources of error.

### Error Analysis

For a function `f` with bounded derivatives, the total error in finite difference approximations consists of:

1. **Truncation Error**: Comes from the finite difference approximation itself
- Forward differences: `O(h)` where `h` is the step size
- Central differences: `O(h²)`
- Hessian central differences: `O(h²)` for second derivatives

2. **Round-off Error**: Comes from floating-point arithmetic
- Forward differences: `O(eps/h)` where `eps` is machine epsilon
- Central differences: `O(eps/h)`

### Optimal Step Sizes

Minimizing the total error `truncation + round-off` gives optimal step sizes:

- **Forward differences**: `h* = sqrt(eps)` - balances `O(h)` truncation with `O(eps/h)` round-off
- **Central differences**: `h* = eps^(1/3)` - balances `O(h²)` truncation with `O(eps/h)` round-off
- **Hessian central**: `h* = eps^(1/4)` - balances `O(h²)` truncation for mixed derivatives
- **Complex step**: `h* = eps` - no subtractive cancellation, only limited by machine precision
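
These scalings are easy to check numerically; a quick sketch in plain Julia, assuming `Float64` precision:

```julia
ϵ = eps(Float64)      # ≈ 2.2e-16

h_forward = sqrt(ϵ)   # ≈ 1.5e-8, optimal for forward differences
h_central = cbrt(ϵ)   # ≈ 6.1e-6, optimal for central differences
h_hessian = ϵ^(1/4)   # ≈ 1.2e-4, optimal for Hessian central differences
```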

## Adaptive Step Sizing

The step size computation uses both relative and absolute components:

```julia
epsilon = max(relstep * abs(x), absstep) * dir
```

This ensures:
- **Large values**: Use relative step `relstep * |x|` for scale-invariant accuracy
- **Small values**: Use absolute step `absstep` to avoid underflow
- **Direction**: Multiply by `dir` (±1) for forward differences

## Implementation

The step size computation is handled by internal functions:

- **`compute_epsilon(fdtype, x, relstep, absstep, dir)`**: Computes the actual step size for a given finite difference method and input value
- **`default_relstep(fdtype, T)`**: Returns the optimal relative step size for a given method and numeric type

These functions are called automatically by all finite difference routines, but understanding their behavior can help with custom implementations or debugging numerical issues.
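
As a sketch of how these helpers behave (they are internal and unexported, so exact argument types may vary between versions; the `Val`-based calls below assume the signatures listed above):

```julia
using FiniteDiff

relstep = FiniteDiff.default_relstep(Val(:forward), Float64)   # ≈ sqrt(eps(Float64)) ≈ 1.5e-8

# Step actually taken at a large and a tiny input; absstep acts as the floor near zero
FiniteDiff.compute_epsilon(Val(:forward), 1e3, relstep, relstep, 1)    # ≈ 1.5e-5 (relative step dominates)
FiniteDiff.compute_epsilon(Val(:forward), 1e-12, relstep, relstep, 1)  # ≈ 1.5e-8 (absolute step dominates)
```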

## Special Cases

### Complex Step Differentiation

For complex step differentiation, the step size is simply machine epsilon since this method avoids subtractive cancellation entirely:

⚠️ **Important**: The function `f` must be complex analytic when the input is complex!
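
A minimal sketch of the underlying identity for an analytic function such as `sin` (an illustrative example, not from the package docs): the derivative is read off from the imaginary part of a complex-perturbed evaluation, so no nearby values are ever subtracted.

```julia
x = 1.0
h = eps(Float64)

dfdx = imag(sin(x + im*h)) / h   # ≈ cos(1.0) to machine precision
```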

### Sparse Jacobians

When computing sparse Jacobians with graph coloring, the step size is computed based on the norm of the perturbation vector to ensure balanced accuracy across all columns in the same color group.

## Practical Considerations

- **Default step sizes** are optimal for most smooth functions
- **Custom step sizes** may be needed for functions with unusual scaling or near-discontinuities
- **Relative steps** should scale with the magnitude of the input
- **Absolute steps** provide a fallback for inputs near zero
- **Direction parameter** allows for one-sided differences when needed (e.g., at domain boundaries)
38 changes: 38 additions & 0 deletions docs/src/gradients.md
@@ -0,0 +1,38 @@
# Gradients

Functions for computing gradients of scalar-valued functions with respect to vector inputs.

## Function Types

Gradients support two types of function mappings:

- **Vector→scalar**: `f(x)` where `x` is a vector and `f` returns a scalar
- **Scalar→vector**: `f(fx, x)` for in-place evaluation or `fx = f(x)` for out-of-place

## Performance Notes

- **Forward differences**: `O(n)` function evaluations, `O(h)` accuracy
- **Central differences**: `O(2n)` function evaluations, `O(h²)` accuracy
- **Complex step**: `O(n)` function evaluations, machine precision accuracy

## Cache Management

When using `GradientCache` with pre-computed function values:

- If you provide `fx`, then `fx` will be used in forward differencing to skip a function call
- You must update `cache.fx` before each call to `finite_difference_gradient!`
- For immutable types (scalars, `StaticArray`), use `@set` from [Setfield.jl](https://github.com/jw3126/Setfield.jl)
- Consider aliasing existing arrays into the cache for memory efficiency
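
A minimal sketch of the cached, in-place workflow (the quadratic test function is an assumed example):

```julia
using FiniteDiff

f(x) = sum(abs2, x)   # vector → scalar
x = rand(3)
df = similar(x)

cache = FiniteDiff.GradientCache(df, x)                  # build once, reuse across calls
FiniteDiff.finite_difference_gradient!(df, f, x, cache)

df ≈ 2 .* x   # analytic gradient of sum(abs2, x)
```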

## Functions

```@docs
FiniteDiff.finite_difference_gradient
FiniteDiff.finite_difference_gradient!
```

## Cache

```@docs
FiniteDiff.GradientCache
```
45 changes: 45 additions & 0 deletions docs/src/hessians.md
@@ -0,0 +1,45 @@
# Hessians

Functions for computing Hessian matrices of scalar-valued functions.

## Function Requirements

Hessian functions are designed for scalar-valued functions `f(x)` where:

- `x` is a vector of parameters
- `f(x)` returns a scalar value
- The Hessian `H[i,j] = ∂²f/(∂x[i]∂x[j])` is automatically symmetrized

## Mathematical Background

For a scalar function `f: ℝⁿ → ℝ`, the Hessian central difference approximation is:

```
H[i,j] ≈ (f(x + eᵢhᵢ + eⱼhⱼ) - f(x + eᵢhᵢ - eⱼhⱼ) - f(x - eᵢhᵢ + eⱼhⱼ) + f(x - eᵢhᵢ - eⱼhⱼ)) / (4hᵢhⱼ)
```

where `eᵢ` is the i-th unit vector and `hᵢ` is the step size in dimension i.
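
For example, a minimal sketch with a small quadratic (an assumed test function, chosen so the analytic Hessian is easy to check):

```julia
using FiniteDiff

f(x) = x[1]^2 + 3x[1]*x[2] + x[2]^2
x = [1.0, 2.0]

H = FiniteDiff.finite_difference_hessian(f, x)
# ≈ [2.0 3.0; 3.0 2.0], returned wrapped as a Symmetric matrix
```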

## Performance Considerations

- **Complexity**: Requires `O(n²)` function evaluations for an n-dimensional input
- **Accuracy**: Central differences provide `O(h²)` accuracy for second derivatives
- **Memory**: The result is returned as a `Symmetric` matrix view
- **Alternative**: For large problems, consider computing the gradient twice instead

## StaticArrays Support

The cache constructor automatically detects `StaticArray` types and adjusts the `inplace` parameter accordingly for optimal performance.

## Functions

```@docs
FiniteDiff.finite_difference_hessian
FiniteDiff.finite_difference_hessian!
```

## Cache

```@docs
FiniteDiff.HessianCache
```
42 changes: 42 additions & 0 deletions docs/src/jacobians.md
@@ -0,0 +1,42 @@
# Jacobians

Functions for computing Jacobian matrices of vector-valued functions.

## Function Types

Jacobians support the following function signatures:

- **Out-of-place**: `fx = f(x)` where both `x` and `fx` are vectors
- **In-place**: `f!(fx, x)` where `f!` modifies `fx` in-place
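
A minimal sketch of both call styles (the two-component test function is an assumed example):

```julia
using FiniteDiff

# Out-of-place: fx = f(x)
f(x) = [x[1]^2 + x[2], 3x[1] * x[2]]
x = [1.0, 2.0]
J = FiniteDiff.finite_difference_jacobian(f, x)   # ≈ [2.0 1.0; 6.0 3.0]

# In-place: f!(fx, x), with a reusable cache
function f!(fx, x)
    fx[1] = x[1]^2 + x[2]
    fx[2] = 3x[1] * x[2]
    return nothing
end
Jbuf = zeros(2, 2)
cache = FiniteDiff.JacobianCache(x)
FiniteDiff.finite_difference_jacobian!(Jbuf, f!, x, cache)
```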

## Sparse Jacobians

FiniteDiff.jl provides efficient sparse Jacobian computation using graph coloring:

- Pass a `colorvec` of matrix colors to enable column compression
- Provide `sparsity` as a sparse matrix (e.g. the default `SparseMatrixCSC`) or a structured matrix (`Tridiagonal`, `Banded`, etc.)
- Supports automatic sparsity pattern detection via ArrayInterfaceCore.jl
- Results are automatically decompressed unless `sparsity=nothing`
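
A sketch of the colored, structured-Jacobian path, assuming the `colorvec`/`sparsity` keywords of `JacobianCache`; the tridiagonal stencil `f!` below is an assumed example mirroring the README's tridiagonal illustration:

```julia
using FiniteDiff, LinearAlgebra

# Each output depends only on neighboring inputs, so the Jacobian is tridiagonal
function f!(fx, x)
    N = length(x)
    fx[1] = -2x[1] + x[2]
    for i in 2:N-1
        fx[i] = x[i-1] - 2x[i] + x[i+1]
    end
    fx[N] = x[N-1] - 2x[N]
    return nothing
end

N = 30
x = rand(N)
colorvec = repeat(1:3, N ÷ 3)   # 3 colors cover a tridiagonal pattern
sparsity = Tridiagonal(ones(N - 1), ones(N), ones(N - 1))
J = Tridiagonal(zeros(N - 1), zeros(N), zeros(N - 1))

cache = FiniteDiff.JacobianCache(x; colorvec = colorvec, sparsity = sparsity)
FiniteDiff.finite_difference_jacobian!(J, f!, x, cache)   # only a handful of f! calls instead of N
```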

## Performance Notes

- **Forward differences**: `O(n)` function evaluations, `O(h)` accuracy
- **Central differences**: `O(2n)` function evaluations, `O(h²)` accuracy
- **Complex step**: `O(n)` function evaluations, machine precision accuracy
- **Sparse Jacobians**: Use graph coloring to reduce function evaluations significantly

For non-square Jacobians, specify the output vector `fx` when creating the cache to ensure proper sizing.
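
For instance, a minimal sketch of a rectangular (2-output, 3-input) case, assuming the `JacobianCache(x, fx)` form for supplying the output prototype:

```julia
using FiniteDiff

function g!(fx, x)          # ℝ³ → ℝ²
    fx[1] = x[1] + 2x[2]
    fx[2] = x[2] * x[3]
    return nothing
end

x = rand(3)
fx = zeros(2)
cache = FiniteDiff.JacobianCache(x, fx)   # fx fixes the output dimension
J = zeros(2, 3)
FiniteDiff.finite_difference_jacobian!(J, g!, x, cache)
```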

## Functions

```@docs
FiniteDiff.finite_difference_jacobian
FiniteDiff.finite_difference_jacobian!
```

## Cache

```@docs
FiniteDiff.JacobianCache
```