@@ -7,15 +7,17 @@ using ImageCore.MappedArrays: of_eltype
"""
Although stored as an array, image can also be viewed as a function from discrete grid space
- Zᴺ to continuous space R (or C if it is complex value). This module provides the discrete
+ Zᴺ to continuous space R if it is a grayscale image, to C if it is a complex-valued image
+ (e.g., MRI raw data), to Rᴺ if it is a colorant image, etc.
+ This module provides the discrete
version of gradient-related operators by viewing image arrays as functions.

This module provides:

- forward/backward difference [`fdiff`](@ref) are the Images-flavor of `Base.diff`
- gradient operator [`fgradient`](@ref) and its adjoint via keyword `adjoint=true`.
- - divergence operator [`fdiv`](@ref) is the negative sum of the adjoint gradient operator of
-   given vector fields.
+ - divergence operator [`fdiv`](@ref) computes the sum of discrete derivatives of vector
+   fields.
- laplacian operator [`flaplacian`](@ref) is the divergence of the gradient fields.

Every function in this module has its in-place version.
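The forward/backward difference at the core of these operators can be sketched in a few lines. The following NumPy snippet is an illustrative sketch only, assuming a periodic (wrap-around) boundary so the output keeps the input's size; it is not necessarily the boundary convention `fdiff` uses, and the `rev` keyword mirrors the backward-difference switch described above:

```python
import numpy as np

def fdiff(x, axis=0, rev=False):
    """Finite difference that keeps the array size, wrapping at the boundary.

    rev=False: forward difference  out[i] = x[i+1] - x[i]
    rev=True:  backward difference out[i] = x[i] - x[i-1]
    """
    if rev:
        return x - np.roll(x, 1, axis=axis)
    return np.roll(x, -1, axis=axis) - x

x = np.array([2.0, 4.0, 8.0, 3.0])
print(fdiff(x))            # forward difference
print(fdiff(x, rev=True))  # backward difference
```

Unlike `Base.diff`, the output here has the same size as the input, which is what makes composing gradient, divergence, and laplacian on the same grid well defined.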
@@ -184,10 +186,11 @@ flaplacian(X::AbstractArray) = flaplacian!(similar(X, maybe_floattype(eltype(X))
The in-place version of the laplacian operator [`flaplacian`](@ref).

- !!! tips Non-allocating
-     This function will allocate a new set of memories to store the intermediate
-     gradient fields `∇X`, if you pre-allcoate the memory for `∇X`, then this function
-     will use it and is thus non-allcating.
+ !!! tip "Avoiding allocations"
+     The two-argument method allocates memory to store the intermediate
+     gradient fields `∇X`. If you call this repeatedly on images of consistent size and
+     type, consider the three-argument form with pre-allocated memory for `∇X`,
+     which eliminates the allocation in this function.
"""
flaplacian!(out, X::AbstractArray) = fdiv!(out, fgradient(X))
flaplacian!(out, ∇X::Tuple, X::AbstractArray) = fdiv!(out, fgradient!(∇X, X))
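The pre-allocation advice can be illustrated with a small NumPy sketch (hypothetical helper names, periodic boundary assumed): the laplacian is computed as the divergence of the forward-difference gradient fields, where the divergence is a sum of backward differences, and the caller-provided buffers are reused across calls:

```python
import numpy as np

def fgradient_inplace(grad, x):
    # Fill the pre-allocated tuple `grad` with forward differences of x,
    # one array per dimension (periodic boundary).
    for i, g in enumerate(grad):
        np.subtract(np.roll(x, -1, axis=i), x, out=g)
    return grad

def flaplacian_inplace(out, grad, x):
    # laplacian = divergence of the gradient fields:
    # sum over i of the backward difference of grad[i] along axis i.
    fgradient_inplace(grad, x)
    out.fill(0.0)
    for i, g in enumerate(grad):
        out += g - np.roll(g, 1, axis=i)
    return out

x = np.arange(9.0).reshape(3, 3)
grad = tuple(np.empty_like(x) for _ in range(x.ndim))  # allocated once, reused
out = np.empty_like(x)
flaplacian_inplace(out, grad, x)
```

Allocating `grad` and `out` once and reusing them across repeated calls is the same trade the three-argument `flaplacian!` method offers.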
@@ -206,7 +209,7 @@ Mathematically, the adjoint operator ∂ᵢ' of ∂ᵢ is defined as `<∂ᵢu,
See also the in-place version [`fgradient!(X)`](@ref) to reuse the allocated memory.
"""
- function fgradient(X::AbstractArray{T,N}; adjoint=false) where {T,N}
+ function fgradient(X::AbstractArray{T,N}; adjoint::Bool=false) where {T,N}
    fgradient!(ntuple(i->similar(X, maybe_floattype(T)), N), X; adjoint=adjoint)
end
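The adjoint relation `<∂ᵢu, v> = <u, ∂ᵢ'v>` mentioned above can be checked numerically. In this NumPy sketch (illustrative names `fwd`/`adj`, periodic boundary assumed), the adjoint of the forward difference is the negated backward difference:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(8)
v = rng.standard_normal(8)

def fwd(x):
    # forward difference, periodic: (Dx)[i] = x[i+1] - x[i]
    return np.roll(x, -1) - x

def adj(x):
    # adjoint of fwd: the negated backward difference, (D'x)[i] = x[i-1] - x[i]
    return np.roll(x, 1) - x

# <Du, v> and <u, D'v> agree up to floating-point roundoff
lhs = np.dot(fwd(u), v)
rhs = np.dot(u, adj(v))
print(lhs, rhs)
```

This identity is why the implementation below can compute the adjoint gradient with a backward difference followed by a sign flip.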
@@ -218,12 +221,15 @@ The in-place version of (adjoint) gradient operator [`fgradient`](@ref).
The input `∇X = (∂₁X, ∂₂X, ..., ∂ₙX)` is a tuple of arrays that are similar to `X`, i.e.,
`eltype(∂ᵢX) == eltype(X)` and `axes(∂ᵢX) == axes(X)` for all `i`.
"""
- function fgradient!(∇X::NTuple{N,<:AbstractArray}, X; adjoint=false) where N
+ function fgradient!(∇X::NTuple{N,<:AbstractArray}, X; adjoint::Bool=false) where N
    all(v->axes(v) == axes(X), ∇X) || throw(ArgumentError("All axes of vector fields ∇X and X should be the same."))
    for i in 1:N
        if adjoint
            # the negative adjoint of gradient operator for forward difference is the backward difference
+           # see also
+           # Getreuer, Pascal. "Rudin-Osher-Fatemi total variation denoising using split Bregman." _Image Processing On Line_ 2 (2012): 74-95.
            fdiff!(∇X[i], X, dims=i, rev=true)
+           # TODO (johnnychen94): ideally we can avoid flipping the signs for better performance.
            @. ∇X[i] = -∇X[i]
        else
            fdiff!(∇X[i], X, dims=i)