
Commit 8a4fc96

Merge pull request #170 from PyLops/dev
Release v0.8.0

2 parents: 435403b + 3be6267

File tree

22 files changed: +534 -93 lines

.github/workflows/build.yaml

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ jobs:
     strategy:
       matrix:
         platform: [ ubuntu-latest, macos-latest ]
-        python-version: ["3.8", "3.9", "3.10"]
+        python-version: ["3.8", "3.9", "3.10", "3.11"]
 
     runs-on: ${{ matrix.platform }}
     steps:

CHANGELOG.md

Lines changed: 8 additions & 0 deletions
@@ -1,3 +1,11 @@
+# 0.8.0
+
+* Added ``pyproximal.projection.L01BallProj`` and ``pyproximal.proximal.L01Ball`` operators
+* Added ``eta`` to ``pyproximal.optimization.primal.ProximalGradient``
+* Added ``eta`` and ``weights`` to ``pyproximal.optimization.primal.GeneralizedProximalGradient``
+* Allow ``epsg`` of ``pyproximal.optimization.primal.ProximalGradient`` to be iteration-dependent
+* Switched from ``lsqr`` to ``cg`` in ``pyproximal.projection.AffineSetProj``
+
 # 0.7.0
 
 * Added ``pyproximal.proximal.RelaxedMumfordShah`` operator

README.md

Lines changed: 4 additions & 8 deletions
@@ -82,10 +82,10 @@ operators and/or algorithms, which present some clear overlap with this project.
 A (possibly not exhaustive) list of other projects is:
 
 * http://proximity-operator.net
-* https://github.com/ganguli-lab/proxalgs/blob/master/proxalgs/operators.py
+* https://github.com/ganguli-lab/proxalgs
 * https://github.com/pmelchior/proxmin
 * https://github.com/comp-imaging/ProxImaL
-* https://github.com/matthieumeo/pycsou
+* https://github.com/pyxu-org/pyxu
 
 All of these projects are self-contained, meaning that they implement both proximal
 and linear operators as needed to solve a variety of problems in different areas
@@ -115,10 +115,6 @@ You need **Python 3.8 or greater**.
 *Note: Versions prior to v0.3.0 work also with Python 3.6 or greater, however they
 require scipy version to be lower than v1.8.0.*
 
-#### From PyPi
-If you want to use PyProximal within your codes,
-install it in your Python environment by typing the following command in your terminal:
-
 To get the most out of PyLops straight out of the box, we recommend `conda` to install PyLops:
 ```bash
 conda install -c conda-forge pyproximal
@@ -127,7 +123,7 @@ conda install -c conda-forge pyproximal
 #### From PyPi
 You can also install pyproximal with `pip`:
 ```bash
-pip install pylops
+pip install pyproximal
 ```
 
 #### From Github
@@ -193,4 +189,4 @@ you are required to rebuild the entire documentation before your changes will be
 * Matteo Ravasi, mrava87
 * Nick Luiken, NickLuiken
 * Eneko Uruñuela, eurunuela
-* Marcus Valtonen Örnhag, marcusvaltonen
+* Marcus Valtonen Örnhag, marcusvaltonen

docs/source/api/index.rst

Lines changed: 2 additions & 0 deletions
@@ -24,6 +24,7 @@ Orthogonal projections
    HyperPlaneBoxProj
    IntersectionProj
    L0BallProj
+   L01BallProj
    L1BallProj
    NuclearBallProj
    SimplexProj
@@ -68,6 +69,7 @@ Convex
    Intersection
    L0
    L0Ball
+   L01Ball
    L1
    L1Ball
    L2

docs/source/changelog.rst

Lines changed: 13 additions & 2 deletions
@@ -3,11 +3,22 @@
 Changelog
 =========
 
+Version 0.8.0
+--------------
+*Released on: 11/03/2024*
+
+* Added :py:class:`pyproximal.projection.L01BallProj` and :py:class:`pyproximal.proximal.L01Ball` operators
+* Added ``eta`` to :py:func:`pyproximal.optimization.primal.ProximalGradient`
+* Added ``eta`` and ``weights`` to :py:func:`pyproximal.optimization.primal.GeneralizedProximalGradient`
+* Allow ``epsg`` of :py:func:`pyproximal.optimization.primal.ProximalGradient` to be iteration-dependent
+* Switched from ``lsqr`` to ``cg`` in :py:func:`pyproximal.projection.AffineSetProj`
+
+
 Version 0.7.0
 --------------
 *Released on: 10/11/2023*
 
-* Added :py:class:`pyproximal.proximal.RelaxedMumfordShah`` operator
+* Added :py:class:`pyproximal.proximal.RelaxedMumfordShah` operator
 * Added cuda version to the proximal operator of :py:class:`pyproximal.proximal.Simplex`
 * Added bilinear update to :py:func:`pyproximal.optimization.primal.ProximalGradient`
 * Modified :py:func:`pyproximal.optimization.pnp.PlugAndPlay` function signature to allow using any proximal solver of choice
@@ -34,7 +45,7 @@ Version 0.5.0
 |:vertical_traffic_light:| |:vertical_traffic_light:|
 
 * Added :py:class:`pyproximal.proximal.Log1` operator
-* Allow ``radius`` parameter of :py:func:`pyproximal.optimization.primal.L0` to be a function
+* Allow ``radius`` parameter of :py:func:`pyproximal.proximal.L0` to be a function
 * Allow ``tau`` parameter of :py:func:`pyproximal.optimization.primal.HQS` to be a vector
   and change over iterations
 * Added ``z0`` to :py:func:`pyproximal.optimization.primal.HQS`

docs/source/index.rst

Lines changed: 2 additions & 2 deletions
@@ -76,10 +76,10 @@ operators and/or algorithms which present some clear overlap with this project.
 A (possibly not exhaustive) list of other projects is:
 
 * http://proximity-operator.net
-* https://github.com/ganguli-lab/proxalgs/blob/master/proxalgs/operators.py
+* https://github.com/ganguli-lab/proxalgs
 * https://github.com/pmelchior/proxmin
 * https://github.com/comp-imaging/ProxImaL
-* https://github.com/matthieumeo/pycsou
+* https://github.com/matthieumeo/pyxu
 
 All of these projects are self-contained, meaning that they implement both proximal
 and linear operators as needed to solve a variety of problems in different areas

docs/source/installation.rst

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ or just clone the repository
 
 .. code-block:: bash
 
-   >> git clone https://github.com/mrava87/pyproximal.git
+   >> git clone https://github.com/PyLops/pyproximal.git
 
 or download the zip file from the repository (green button in the top right corner of the
 main github repo page) and install PyProximal from terminal using the command:

pyproximal/optimization/primal.py

Lines changed: 68 additions & 40 deletions
@@ -102,8 +102,9 @@ def ProximalPoint(prox, x0, tau, niter=10, callback=None, show=False):
     return x
 
 
-def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
-                     epsg=1., niter=10, niterback=100,
+def ProximalGradient(proxf, proxg, x0, epsg=1.,
+                     tau=None, beta=0.5, eta=1.,
+                     niter=10, niterback=100,
                      acceleration=None,
                      callback=None, show=False):
     r"""Proximal gradient (optionally accelerated)
@@ -127,17 +128,19 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
         Proximal operator of g function
     x0 : :obj:`numpy.ndarray`
         Initial vector
+    epsg : :obj:`float` or :obj:`np.ndarray`, optional
+        Scaling factor of g function
     tau : :obj:`float` or :obj:`numpy.ndarray`, optional
         Positive scalar weight, which should satisfy the following condition
         to guarantee convergence: :math:`\tau \in (0, 1/L]` where ``L`` is
         the Lipschitz constant of :math:`\nabla f`. When ``tau=None``,
         backtracking is used to adaptively estimate the best tau at each
-        iteration. Finally note that :math:`\tau` can be chosen to be a vector
+        iteration. Finally, note that :math:`\tau` can be chosen to be a vector
         when dealing with problems with multiple right-hand-sides
     beta : :obj:`float`, optional
         Backtracking parameter (must be between 0 and 1)
-    epsg : :obj:`float` or :obj:`np.ndarray`, optional
-        Scaling factor of g function
+    eta : :obj:`float`, optional
+        Relaxation parameter (must be between 0 and 1, 0 excluded)
     niter : :obj:`int`, optional
         Number of iterations of iterative scheme
     niterback : :obj:`int`, optional
@@ -161,9 +164,8 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
 
     .. math::
 
-        \mathbf{x}^{k+1} = \prox_{\tau^k \epsilon g}(\mathbf{y}^{k+1} -
-        \tau^k \nabla f(\mathbf{y}^{k+1})) \\
+        \mathbf{x}^{k+1} = \mathbf{y}^k + \eta (\prox_{\tau^k \epsilon g}(\mathbf{y}^k -
+        \tau^k \nabla f(\mathbf{y}^k)) - \mathbf{y}^k) \\
         \mathbf{y}^{k+1} = \mathbf{x}^k + \omega^k
         (\mathbf{x}^k - \mathbf{x}^{k-1})
@@ -187,7 +189,7 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
     Different accelerations are provided:
 
     - ``acceleration=None``: :math:`\omega^k = 0`;
-    - `acceleration=vandenberghe`` [1]_: :math:`\omega^k = k / (k + 3)`
+    - ``acceleration=vandenberghe`` [1]_: :math:`\omega^k = k / (k + 3)`
     - ``acceleration=fista``: :math:`\omega^k = (t_{k-1}-1)/t_k` where
       :math:`t_k = (1 + \sqrt{1+4t_{k-1}^{2}}) / 2` [2]_
@@ -197,9 +199,10 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
        Imaging Sciences, vol. 2, pp. 183-202. 2009.
 
     """
-    # check if epgs is a ve
+    # check if epsg is a vector
     if np.asarray(epsg).size == 1.:
-        epsg_print = str(epsg)
+        epsg = epsg * np.ones(niter)
+        epsg_print = str(epsg[0])
     else:
         epsg_print = 'Multi'

@@ -218,7 +221,7 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
               'niterback = %d\tacceleration = %s\n' % (type(proxf), type(proxg),
                                                        'Adaptive' if tau is None else str(tau), beta,
                                                        epsg_print, niter, niterback, acceleration))
-        head = '   Itn       x[0]          f           g       J=f+eps*g'
+        head = '   Itn       x[0]          f           g       J=f+eps*g       tau'
         print(head)
 
     backtracking = False
@@ -237,10 +240,15 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
 
         # proximal step
         if not backtracking:
-            x = proxg.prox(y - tau * proxf.grad(y), epsg * tau)
+            if eta == 1.:
+                x = proxg.prox(y - tau * proxf.grad(y), epsg[iiter] * tau)
+            else:
+                x = x + eta * (proxg.prox(x - tau * proxf.grad(x), epsg[iiter] * tau) - x)
         else:
-            x, tau = _backtracking(y, tau, proxf, proxg, epsg,
+            x, tau = _backtracking(y, tau, proxf, proxg, epsg[iiter],
                                    beta=beta, niterback=niterback)
+            if eta != 1.:
+                x = x + eta * (proxg.prox(x - tau * proxf.grad(x), epsg[iiter] * tau) - x)
 
         # update internal parameters for bilinear operator
         if isinstance(proxf, BilinearOperator):
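The relaxed proximal step introduced above (`x <- x + eta * (prox(...) - x)`, with a scalar `epsg` broadcast to one value per iteration) can be sketched outside the library with plain NumPy. This is a minimal illustration, not PyProximal's API: `soft_threshold` and `relaxed_ista` are hypothetical names, and an L1 regularizer (whose proximal operator is soft-thresholding) stands in for a generic `proxg`.

```python
import numpy as np

def soft_threshold(x, thresh):
    # proximal operator of thresh * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def relaxed_ista(A, b, x0, tau, epsg=1.0, eta=1.0, niter=50):
    # broadcast a scalar epsg to one value per iteration, mirroring the new code path
    if np.asarray(epsg).size == 1:
        epsg = epsg * np.ones(niter)
    x = x0.copy()
    for it in range(niter):
        grad = A.T @ (A @ x - b)  # gradient of f(x) = 0.5 * ||Ax - b||^2
        # relaxed proximal step: x <- x + eta * (prox(x - tau*grad) - x)
        x = x + eta * (soft_threshold(x - tau * grad, epsg[it] * tau) - x)
    return x
```

With `eta=1` this reduces to the classical (unrelaxed) proximal gradient step; for `0 < eta < 1` each iterate is a convex combination of the old iterate and the proximal update.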
@@ -264,10 +272,11 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
         if show:
             if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
                 pf, pg = proxf(x), proxg(x)
-                msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
+                msg = '%6g %12.5e %10.3e %10.3e %10.3e %10.3e' % \
                       (iiter + 1, np.real(to_numpy(x[0])) if x.ndim == 1 else np.real(to_numpy(x[0, 0])),
-                       pf, pg[0] if epsg_print == 'Multi' else pg,
-                       pf + np.sum(epsg * pg))
+                       pf, pg,
+                       pf + np.sum(epsg[iiter] * pg),
+                       tau)
                 print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -296,8 +305,9 @@ def AcceleratedProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
                             callback=callback, show=show)
 
 
-def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
-                                epsg=1., niter=10,
+def GeneralizedProximalGradient(proxfs, proxgs, x0, tau,
+                                epsg=1., weights=None,
+                                eta=1., niter=10,
                                 acceleration=None,
                                 callback=None, show=False):
     r"""Generalized Proximal gradient
@@ -316,24 +326,27 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
 
     Parameters
     ----------
-    proxfs : :obj:`List of pyproximal.ProxOperator`
+    proxfs : :obj:`list of pyproximal.ProxOperator`
         Proximal operators of the :math:`f_i` functions (must have ``grad`` implemented)
-    proxgs : :obj:`List of pyproximal.ProxOperator`
+    proxgs : :obj:`list of pyproximal.ProxOperator`
         Proximal operators of the :math:`g_j` functions
     x0 : :obj:`numpy.ndarray`
         Initial vector
-    tau : :obj:`float` or :obj:`numpy.ndarray`, optional
+    tau : :obj:`float`
         Positive scalar weight, which should satisfy the following condition
         to guarantee convergence: :math:`\tau \in (0, 1/L]` where ``L`` is
-        the Lipschitz constant of :math:`\sum_{i=1}^n \nabla f_i`. When ``tau=None``,
-        backtracking is used to adaptively estimate the best tau at each
-        iteration.
+        the Lipschitz constant of :math:`\sum_{i=1}^n \nabla f_i`.
     epsg : :obj:`float` or :obj:`np.ndarray`, optional
         Scaling factor(s) of ``g`` function(s)
+    weights : :obj:`float`, optional
+        Weighting factors of ``g`` functions. Must sum to 1.
+    eta : :obj:`float`, optional
+        Relaxation parameter (must be between 0 and 1, 0 excluded). Note that
+        this will only be used when ``acceleration=None``.
     niter : :obj:`int`, optional
         Number of iterations of iterative scheme
     acceleration : :obj:`str`, optional
-        Acceleration (``vandenberghe`` or ``fista``)
+        Acceleration (``None``, ``vandenberghe`` or ``fista``)
     callback : :obj:`callable`, optional
         Function with signature (``callback(x)``) to call after each iteration
         where ``x`` is the current model vector
@@ -352,16 +365,27 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
 
     .. math::
         \text{for } j=1,\cdots,n, \\
-        ~~~~\mathbf z_j^{k+1} = \mathbf z_j^{k} + \epsilon_j
-        \left[\prox_{\frac{\tau^k}{\omega_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k}
+        ~~~~\mathbf z_j^{k+1} = \mathbf z_j^{k} + \eta
+        \left[\prox_{\frac{\tau^k \epsilon_j}{w_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k}
         - \tau^k \sum_{i=1}^n \nabla f_i(\mathbf{x}^{k})\right) - \mathbf{x}^{k} \right] \\
-        \mathbf{x}^{k+1} = \sum_{j=1}^n \omega_j f_j \\
-
-    where :math:`\sum_{j=1}^n \omega_j=1`. In the current implementation :math:`\omega_j=1/n`.
+        \mathbf{x}^{k+1} = \sum_{j=1}^n w_j \mathbf z_j^{k+1} \\
+
+    where :math:`\sum_{j=1}^n w_j=1`. In the current implementation, :math:`w_j=1/n` when
+    not provided.
+
     """
+    # check that weights sum to 1
+    if weights is None:
+        weights = np.ones(len(proxgs)) / len(proxgs)
+    if len(weights) != len(proxgs) or np.sum(weights) != 1.:
+        raise ValueError(f'weights={weights} must be an array of size {len(proxgs)} '
+                         f'summing to 1')
+
     # check if epsg is a vector
     if np.asarray(epsg).size == 1.:
         epsg_print = str(epsg)
+        epsg = epsg * np.ones(len(proxgs))
     else:
         epsg_print = 'Multi'
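One caveat in the new ``weights`` check above: ``np.sum(weights) != 1.`` compares floats exactly, so a perfectly reasonable input such as ``[1/3, 1/3, 1/3]`` can be rejected because its sum is not bit-exactly 1 in floating point. A tolerance-based variant might look as follows (a sketch only; ``check_weights`` is a hypothetical helper, not part of the library):

```python
import numpy as np

def check_weights(weights, nprox):
    # default: equal weights summing to one (w_j = 1/n, as in the docstring)
    if weights is None:
        weights = np.ones(nprox) / nprox
    weights = np.asarray(weights, dtype=float)
    # tolerance-based check: an exact `sum(weights) != 1.` can fail for
    # inputs such as [1/3, 1/3, 1/3] due to floating-point rounding
    if weights.size != nprox or not np.isclose(weights.sum(), 1.0):
        raise ValueError(f'weights={weights} must be an array of size {nprox} '
                         f'summing to 1')
    return weights
```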

@@ -403,9 +427,9 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
         x = np.zeros_like(x)
         for i, proxg in enumerate(proxgs):
             ztmp = 2 * y - zs[i] - tau * grad
-            ztmp = proxg.prox(ztmp, tau * len(proxgs))
-            zs[i] += epsg * (ztmp - y)
-            x += zs[i] / len(proxgs)
+            ztmp = proxg.prox(ztmp, tau * epsg[i] / weights[i])
+            zs[i] += eta * (ztmp - y)
+            x += weights[i] * zs[i]
 
         # update y
         if acceleration == 'vandenberghe':
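The rewritten inner loop above maps directly onto the updated docstring formula: each auxiliary variable `z_j` takes a relaxed step driven by the proximal operator of `tau * eps_j / w_j * g_j`, and `x` is rebuilt as the weighted average of the `z_j`. A self-contained NumPy sketch of one such iteration (a hypothetical standalone function; proximal operators and gradients are passed as plain callables rather than PyProximal operators):

```python
import numpy as np

def soft(v, thresh):
    # soft-thresholding: prox of thresh * ||v||_1, used here as an example g_j
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def gpg_step(gradfs, proxgs, x, zs, tau, epsg, weights, eta=1.0):
    # z_j <- z_j + eta * (prox_{tau*eps_j/w_j g_j}(2x - z_j - tau * sum_i grad f_i(x)) - x)
    # x   <- sum_j w_j * z_j
    grad = sum(g(x) for g in gradfs)
    xnew = np.zeros_like(x)
    for j, prox in enumerate(proxgs):
        ztmp = prox(2 * x - zs[j] - tau * grad, tau * epsg[j] / weights[j])
        zs[j] = zs[j] + eta * (ztmp - x)
        xnew += weights[j] * zs[j]
    return xnew, zs
```

With a single `f(x) = 0.5 * ||x - b||^2` and a single L1 term, iterating this step recovers the soft-thresholded solution, matching the plain proximal gradient method.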
@@ -416,7 +440,6 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
             omega = ((told - 1.) / t)
         else:
             omega = 0
-
         y = x + omega * (x - xold)
 
         # run callback
@@ -558,7 +581,8 @@ def HQS(proxf, proxg, x0, tau, niter=10, z0=None, gfirst=True,
         if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
             pf, pg = proxf(x), proxg(x)
             msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-                  (iiter + 1, x[0], pf, pg, pf + pg)
+                  (iiter + 1, np.real(to_numpy(x[0])),
+                   pf, pg, pf + pg)
             print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -683,7 +707,8 @@ def ADMM(proxf, proxg, x0, tau, niter=10, gfirst=False,
         if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
             pf, pg = proxf(x), proxg(x)
             msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-                  (iiter + 1, x[0], pf, pg, pf + pg)
+                  (iiter + 1, np.real(to_numpy(x[0])),
+                   pf, pg, pf + pg)
             print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -784,7 +809,8 @@ def ADMML2(proxg, Op, b, A, x0, tau, niter=10, callback=None, show=False, **kwar
         if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
             pf, pg = 0.5 * np.linalg.norm(Op @ x - b) ** 2, proxg(Ax)
             msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-                  (iiter + 1, x[0], pf, pg, pf + pg)
+                  (iiter + 1, np.real(to_numpy(x[0])),
+                   pf, pg, pf + pg)
             print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -889,7 +915,8 @@ def LinearizedADMM(proxf, proxg, A, x0, tau, mu, niter=10,
         if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
             pf, pg = proxf(x), proxg(Ax)
             msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-                  (iiter + 1, x[0], pf, pg, pf + pg)
+                  (iiter + 1, np.real(to_numpy(x[0])),
+                   pf, pg, pf + pg)
             print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -1037,7 +1064,8 @@ def TwIST(proxg, A, b, x0, alpha=None, beta=None, eigs=None, niter=10,
         if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
             pf, pg = proxf(x), proxg(x)
             msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-                  (iiter + 1, np.real(to_numpy(x[0])), pf, pg, pf + pg)
+                  (iiter + 1, np.real(to_numpy(x[0])),
+                   pf, pg, pf + pg)
             print(msg)
     if show:
         print('\nTotal time (s) = %.2f' % (time.time() - tstart))
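Several of the logging changes above replace a bare `x[0]` with `np.real(to_numpy(x[0]))`, so that complex-valued models and device (e.g. CuPy) arrays print cleanly; `to_numpy` is imported from PyLops in this module. The guard amounts to something like this NumPy-only sketch (`first_entry` is a hypothetical name; the 2D branch mirrors the multiple right-hand-sides case handled in `ProximalGradient`):

```python
import numpy as np

def first_entry(x):
    # real part of the first entry, for 1D or 2D arrays, as logged by the solvers
    v = x[0] if x.ndim == 1 else x[0, 0]
    return float(np.real(v))
```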

0 commit comments