---
description: How a function of functions changes with respect to its functions
---
# Introduction
Many people are familiar with the concept of the derivative of a function. Fewer are familiar with the derivative of a function of functions, that is, the derivative of a functional.
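
For reference, the functional derivative $$\frac{\delta F}{\delta u}$$ of a functional $$F[u]$$ can be defined through the first variation:

$$
\left. \frac{d}{d\epsilon} F[u + \epsilon \phi] \right|_{\epsilon = 0} = \int \frac{\delta F}{\delta u}(x) \, \phi(x) \, dx
$$

for every test function $$\phi$$.
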
BLABLABLA
# Remark
Show that the functional derivative of a linear differential operator (such as the gradient) is the operator itself. (cin notebook, 11/02/25)
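
A one-line sketch of the argument, writing $$L$$ for the linear operator: the Gâteaux derivative of $$u \mapsto Lu$$ in the direction $$\phi$$ is

$$
\lim_{\epsilon \to 0} \frac{L(u + \epsilon \phi) - Lu}{\epsilon} = \lim_{\epsilon \to 0} \frac{\epsilon \, L\phi}{\epsilon} = L\phi,
$$

so the derivative at every point is $$L$$ itself.
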
# TODO
Cite this article in step 3.1 of the Split Bregman post.

$$ D^p_J (u, v) $$ compares the value $$J(u)$$ with the tangent plane $$J(v) + \langle p, u-v \rangle$$ (which in 1D is a line). When $$H$$ is differentiable, its subdifferential reduces to the gradient $$\nabla H$$. This is not strictly a distance in the usual sense, since it satisfies neither symmetry nor the triangle inequality, but it retains many properties of a distance (see [8]). Rather than measuring the distance between two points directly, it measures the difference between the function value at $$u$$, namely $$J(u)$$, and the best linear approximation of $$J$$ around the tangent point $$v$$.
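
For intuition, take $$J(u) = \frac{1}{2} \lVert u \rVert^2$$, so that $$p = \nabla J(v) = v$$. Then

$$
D^p_J(u, v) = \frac{1}{2} \lVert u \rVert^2 - \frac{1}{2} \lVert v \rVert^2 - \langle v, u - v \rangle = \frac{1}{2} \lVert u - v \rVert^2,
$$

which recovers the usual squared Euclidean distance.
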
From the figure, it is clear that convexity is required for the linear approximation to be useful: when $$J$$ is convex, its graph lies above every tangent plane, so $$D^p_J(u, v) \geq 0$$. The distance tends to zero as $$v$$ approaches the optimum $$\hat{u}$$. So, given an initial point $$u^0$$ and a parameter $$\gamma > 0$$, the Bregman iteration algorithm is formally:
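
A sketch, assuming the usual formulation in which each step minimizes the Bregman distance to the previous iterate together with the scaled functional $$H$$:

$$
u^{k+1} = \arg\min_u \; D^{p^k}_J(u, u^k) + \gamma H(u), \qquad p^{k+1} = p^k - \gamma \nabla H(u^{k+1}),
$$

starting from $$p^0 = 0$$; the optimality condition of the minimization step guarantees $$p^{k+1} \in \partial J(u^{k+1})$$.
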
## Denoising
We will implement and test the Total Variation (TV) model proposed in [15] using the Split Bregman method in C++. See [16] for another interesting implementation.
### 1. Total Variation Model Functional
The combination of an L2 data fidelity term with **total variation (TV)** leads to a variational model proposed by Rudin, Osher, and Fatemi (ROF model). The functional is expressed as:
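
$$
E(u) = \frac{1}{2} \int (u - f)^2 + \alpha \int |\nabla u|
$$

where $$f$$ is the noisy input image and $$\alpha > 0$$ weighs the total variation term against the data fidelity.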

To minimize this functional, we introduce an auxiliary variable $$ w = (w_1, w_2) $$ that stands in for $$ \nabla u $$, and a Bregman iterative parameter $$ b = (b_1, b_2) $$, transforming the original functional into:

$$
E(u, w, b) = \frac{1}{2} \int (u - f)^2 + \alpha \int |w| + \beta \int (w - \nabla u - b)^2
$$

where $$ |w| = \sqrt{w_1^2 + w_2^2} $$ is the isotropic TV norm.

To minimize this functional, we use an alternating optimization method: each iteration updates $$ u $$ and $$ w $$ in turn, followed by the update of the Bregman parameter $$ b $$.
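
Assuming the standard Split Bregman treatment of the two subproblems, the $$ u $$ update solves the linear Euler–Lagrange equation of the quadratic terms, while the $$ w $$ update is a pointwise shrinkage:

$$
u - 2\beta \Delta u = f - 2\beta \, \operatorname{div}(w - b), \qquad w = \max\left( |s| - \frac{\alpha}{2\beta}, \, 0 \right) \frac{s}{|s|}, \quad s = \nabla u + b.
$$

In practice, the first equation is solved approximately with a few Gauss–Seidel sweeps rather than exactly.
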
### 3. Algorithm and Discretized Equations
The authors provide a pseudocode representation of the algorithm, which closely resembles the implementation presented earlier.
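
As a sketch of the characteristic $$ w $$ step (not the authors' exact listing): since the $$ w $$ subproblem decouples per pixel, it reduces to the isotropic shrinkage formula above. The helper below assumes OpenCV `CV_64F` images, matching the `subtract` calls used elsewhere in this post; the function name `shrink` and the matrix names are illustrative.

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Isotropic shrinkage: w = max(|s| - t, 0) * s / |s| applied per pixel,
// with s = (s1, s2) = grad(u) + b and threshold t = alpha / (2 * beta).
void shrink(const cv::Mat& s1, const cv::Mat& s2, double t,
            cv::Mat& w1, cv::Mat& w2)
{
    w1.create(s1.size(), CV_64F);
    w2.create(s2.size(), CV_64F);
    for (int i = 0; i < s1.rows; ++i) {
        for (int j = 0; j < s1.cols; ++j) {
            const double x = s1.at<double>(i, j);
            const double y = s2.at<double>(i, j);
            const double norm = std::sqrt(x * x + y * y);
            // Shrink to zero when |s| <= t (also avoids division by zero).
            const double scale = (norm > t) ? (norm - t) / norm : 0.0;
            w1.at<double>(i, j) = scale * x;
            w2.at<double>(i, j) = scale * y;
        }
    }
}
```
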
Finally, we update the Bregman iterative parameter b, which serves as an error adjustment term in each iteration to reinforce the constraint $$ w = \nabla u $$:

$$
b \leftarrow b + \nabla u - w
$$

```cpp
// Update the Bregman variables: b <- b + grad(u) - w.
// c1 and c2 are assumed to hold b1 + du/dx and b2 + du/dy from the u
// update, so cv::subtract(src1, src2, dst) leaves b_i = c_i - w_i.
subtract(c1, w1, b1);
subtract(c2, w2, b2);
```
### Results
We conclude by showing original, noisy, and denoised versions of several images, degraded with Gaussian and impulse noise. The model parameters were tuned by hand, in proportion to the amount of noise added.

## References

[13] E. Esser, *Applications of Lagrangian-Based Alternating Direction Methods and Connections to Split Bregman*, UCLA CAM Report 09-21, 2009, [FTP Link](ftp://ftp.math.ucla.edu/pub/camreport/cam09-31.pdf)

[14] Weihong Li et al., *Total Variation Blind Deconvolution Employing Split Bregman Iteration*, 2012, [Online Article](https://www.ipol.im/pub/art/2012/g-tvdc/)

[15] W. Lu, J. Duan, Z. Qiu, Z. Pan, R. W. Liu, and L. Bai, *Implementation of High-Order Variational Models Made Easy for Image Processing*, Mathematical Methods in the Applied Sciences, 39(18), 5371–5387, 2016, [DOI](https://doi.org/10.1002/mma.3858)