Commit 4029e00

Merge pull request #44 from PerformanceEstimation/feature/doctest
Feature/doctest
2 parents: 1c787f0 + ce98338

File tree

10 files changed, +84 -13 lines

PEPit/functions/convex_indicator.py

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ class ConvexIndicatorFunction(Function):
     implementing interpolation constraints for the class of closed convex indicator functions.

     Attributes:
-        D (float): upper bound on the diameter of the feasible set
+        D (float): upper bound on the diameter of the feasible set, possibly set to np.inf

     Convex indicator functions are characterized by a parameter `D`, hence can be instantiated as

PEPit/functions/smooth_convex_function.py

Lines changed: 7 additions & 0 deletions

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.functions.smooth_strongly_convex_function import SmoothStronglyConvexFunction


@@ -46,3 +47,9 @@ def __init__(self,
                          reuse_gradient=True,
                          mu=0,
                          L=L)
+
+        if self.L == np.inf:
+            print("\033[96m(PEPit) The class of smooth convex functions is necessarily differentiable.\n"
+                  "To instantiate a convex function, please avoid using the class SmoothConvexFunction with\n"
+                  "L == np.inf. Instead, please use the class ConvexFunction (which accounts for the fact\n"
+                  "that there might be several subgradients at the same point).\033[0m")
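The guards added in this commit all follow the same pattern: compare the stored parameter against infinity and print a colored console warning instead of raising. Below is a minimal standalone sketch of that pattern (a hypothetical class, not PEPit's actual implementation; note that `math.inf` compares equal to `np.inf`):

```python
import math

class SmoothFunctionSketch:
    """Hypothetical stand-in for a PEPit-style function class (not PEPit's code)."""

    def __init__(self, L):
        self.L = L
        # Degenerate parameter: warn (in cyan, via ANSI escape codes) but keep
        # going, mirroring the non-fatal style of the warnings in this commit.
        if self.L == math.inf:
            print("\033[96m(PEPit-style warning) L == inf implies no smoothness "
                  "constraint; consider a different function class.\033[0m")

SmoothFunctionSketch(L=1.0)       # silent
SmoothFunctionSketch(L=math.inf)  # prints the cyan warning
```

Printing rather than raising keeps old scripts running while still flagging a class choice that silently weakens or voids the interpolation constraints.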

PEPit/functions/smooth_function.py

Lines changed: 5 additions & 0 deletions

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.function import Function


@@ -53,6 +54,10 @@ def __init__(self,
         # Store L
         self.L = L

+        if self.L == np.inf:
+            print("\033[96m(PEPit) The class of L-smooth functions with L == np.inf implies no constraint:\n"
+                  "it contains all differentiable functions. This might imply issues in your code.\033[0m")
+
     def add_class_constraints(self):
         """
         Formulates the list of interpolation constraints for self (smooth (not necessarily convex) function),

PEPit/functions/smooth_strongly_convex_function.py

Lines changed: 7 additions & 0 deletions

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.function import Function


@@ -57,6 +58,12 @@ def __init__(self,
         self.mu = mu
         self.L = L

+        if self.L == np.inf:
+            print("\033[96m(PEPit) The class of smooth strongly convex functions is necessarily differentiable.\n"
+                  "To instantiate a strongly convex function, please avoid using the class SmoothStronglyConvexFunction\n"
+                  "with L == np.inf. Instead, please use the class StronglyConvexFunction (which accounts for the fact\n"
+                  "that there might be several subgradients at the same point).\033[0m")
+
     def add_class_constraints(self):
         """
         Formulates the list of interpolation constraints for self (smooth strongly convex function); see [1, Theorem 4].

PEPit/operators/cocoercive.py

Lines changed: 8 additions & 1 deletion

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.function import Function


@@ -7,7 +8,7 @@ class CocoerciveOperator(Function):
     implementing the interpolation constraints of the class of cocoercive (and maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         beta (float): cocoercivity parameter

@@ -56,6 +57,12 @@ def __init__(self,
         # Store the beta parameter
         self.beta = beta

+        if self.beta == 0:
+            print("\033[96m(PEPit) The class of cocoercive operators is necessarily continuous.\n"
+                  "To instantiate a monotone operator, please avoid using the class CocoerciveOperator\n"
+                  "with beta == 0. Instead, please use the class MonotoneOperator (which accounts for the fact\n"
+                  "that the image of the operator at certain points might not be a singleton).\033[0m")
+
     def add_class_constraints(self):
         """
         Formulates the list of interpolation constraints for self (cocoercive maximally monotone operator),

PEPit/operators/lipschitz.py

Lines changed: 6 additions & 1 deletion

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.function import Function


@@ -7,7 +8,7 @@ class LipschitzOperator(Function):
     implementing the interpolation constraints of the class of Lipschitz continuous operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         L (float) Lipschitz parameter

@@ -74,6 +75,10 @@ def __init__(self,
         # Store L
         self.L = L

+        if self.L == np.inf:
+            print("\033[96m(PEPit) The class of L-Lipschitz operators with L == np.inf implies no constraint:\n"
+                  "it contains all multi-valued mappings. This might imply issues in your code.\033[0m")
+
     def add_class_constraints(self):
         """
         Formulates the list of interpolation constraints for self (Lipschitz operator),

PEPit/operators/lipschitz_strongly_monotone.py

Lines changed: 8 additions & 1 deletion

@@ -1,3 +1,4 @@
+import numpy as np
 from PEPit.function import Function


@@ -8,7 +9,7 @@ class LipschitzStronglyMonotoneOperator(Function):
     for the class of Lipschitz continuous strongly monotone (and maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Warning:
         Lipschitz strongly monotone operators do not enjoy known interpolation conditions. The conditions implemented

@@ -67,6 +68,12 @@ def __init__(self,
         self.mu = mu
         self.L = L

+        if self.L == np.inf:
+            print("\033[96m(PEPit) The class of Lipschitz strongly monotone operators is necessarily continuous.\n"
+                  "To instantiate an operator, please avoid using the class LipschitzStronglyMonotoneOperator with\n"
+                  "L == np.inf. Instead, please use the class StronglyMonotoneOperator (which accounts for the fact\n"
+                  "that the image of the operator at certain points might not be a singleton).\033[0m")
+
     def add_class_constraints(self):
         """
         Formulates the list of necessary conditions for interpolation of self (Lipschitz strongly monotone and

PEPit/operators/monotone.py

Lines changed: 2 additions & 1 deletion

@@ -7,12 +7,13 @@ class MonotoneOperator(Function):
     implementing interpolation constraints for the class of maximally monotone operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     General maximally monotone operators are not characterized by any parameter, hence can be instantiated as

     Example:
         >>> from PEPit import PEP
+        >>> from PEPit.operators import MonotoneOperator
         >>> problem = PEP()
         >>> h = problem.declare_function(function_class=MonotoneOperator)

PEPit/operators/strongly_monotone.py

Lines changed: 2 additions & 1 deletion

@@ -8,7 +8,7 @@ class StronglyMonotoneOperator(Function):
     (maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         mu (float): strong monotonicity parameter

@@ -18,6 +18,7 @@ class StronglyMonotoneOperator(Function):

     Example:
         >>> from PEPit import PEP
+        >>> from PEPit.operators import StronglyMonotoneOperator
         >>> problem = PEP()
         >>> h = problem.declare_function(function_class=StronglyMonotoneOperator, mu=.1)
2324

docs/source/quickstart.rst

Lines changed: 38 additions & 7 deletions

@@ -69,6 +69,20 @@ From now, you can declare functions thanks to the `declare_function` method.

     func = problem.declare_function(SmoothConvexFunction, L=L)

+.. warning::
+    To enforce the same subgradient to be returned each time one is required,
+    we introduced the attribute `reuse_gradient` in the `Function` class.
+    Some classes of functions contain only differentiable functions (e.g., smooth convex functions).
+    In those, the `reuse_gradient` attribute is set to True by default.
+
+    When the same subgradient is used several times in the same code and when it is difficult to
+    keep track of it (through proximal calls, for instance), it may be useful to set this parameter
+    to True even if the function is not differentiable. This helps reduce the number of constraints
+    and improves the accuracy of the underlying semidefinite program. See for instance the code for
+    `improved interior method
+    <https://pepit.readthedocs.io/en/latest/examples/b.html#improved-interior-method>`_ or
+    `no Lips in Bregman divergence
+    <https://pepit.readthedocs.io/en/latest/examples/b.html#no-lips-in-bregman-divergence>`_.
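The `reuse_gradient` semantics described in the warning above can be sketched as a simple memoization: once a (sub)gradient has been returned at a point, asking again at the same point returns the stored one instead of introducing a fresh variable. The class below is a hypothetical illustration, not PEPit's actual implementation:

```python
class FunctionSketch:
    """Hypothetical illustration of the reuse_gradient flag (not PEPit's code)."""

    def __init__(self, reuse_gradient=True):
        self.reuse_gradient = reuse_gradient
        self._cache = {}
        self._counter = 0

    def gradient(self, point):
        # With reuse_gradient=True, a point queried twice yields the same
        # (sub)gradient variable, adding no new unknown to the PEP.
        if self.reuse_gradient and point in self._cache:
            return self._cache[point]
        self._counter += 1
        g = "g{}".format(self._counter)  # stand-in for a fresh subgradient variable
        self._cache[point] = g
        return g

f = FunctionSketch(reuse_gradient=True)
print(f.gradient("x0"), f.gradient("x0"))  # same variable twice: g1 g1

h = FunctionSketch(reuse_gradient=False)
print(h.gradient("x0"), h.gradient("x0"))  # two distinct subgradients: g1 g2
```

Fewer distinct subgradient variables means fewer interpolation constraints in the resulting semidefinite program, which is why the flag matters for accuracy and solve time.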


You can also define a new point with

@@ -139,6 +153,14 @@ Finally, you can ask PEPit to solve the system for you and return the worst-case

     pepit_tau = problem.solve()

+.. warning::
+    Performance estimation problems consist in reformulating the problem of finding a worst-case scenario as a
+    semidefinite program (SDP). The dimension of the corresponding SDP is directly related to the number of function
+    and gradient evaluations in a given code.
+
+    We encourage users to perform as few function and subgradient evaluations as possible, as the size of the
+    corresponding SDP grows with the number of subgradient/function evaluations at different points.
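The growth warned about above can be made concrete with a back-of-the-envelope count. In a PEP, each distinct point at which a gradient is evaluated contributes one vector to the Gram matrix of the SDP, so the number of scalar unknowns grows quadratically with the number of evaluations. The counts below are illustrative assumptions, not PEPit's exact bookkeeping:

```python
# Hypothetical count, for intuition only (PEPit's exact sizes differ in details):
# k gradient evaluations plus an initial point and a reference optimum give
# roughly k + 2 vectors in the Gram matrix of the SDP.
def sdp_size(n_gradient_evals):
    n_points = n_gradient_evals + 2
    # A symmetric n x n Gram matrix has n * (n + 1) / 2 scalar unknowns.
    n_unknowns = n_points * (n_points + 1) // 2
    return n_points, n_unknowns

print(sdp_size(5))   # a short method stays small
print(sdp_size(50))  # many evaluations inflate the SDP quadratically
```

This is why evaluating the same gradient twice at a new point, rather than reusing it, has a real cost in solver time.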
142164

Derive proofs and adversarial objectives
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -192,25 +214,34 @@ Then, after solving the system, you can require its associated dual variable val
 Output pdf
 ~~~~~~~~~~

-In a latter release, we will provide an option to output a pdf file summarizing all those pieces of information.
+In a later release, we will provide an option to output a pdf file summarizing all those pieces of information.

-Simplify proofs
-^^^^^^^^^^^^^^^
+Simpler worst-case scenarios
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Sometimes, there are several solutions to the PEP problem.
-In order to simplify the proof, one would prefer a low dimension solution.
-To this end, we provide an **heuristic** based on the trace to reduce the dimension of the provided solution.
+To obtain simpler worst-case scenarios, one would prefer a low-dimensional solution to the SDP.
+To this end, we provide **heuristics** based on trace-norm or log-det minimization for reducing
+the dimension of the numerical solution to the SDP.

-You can use it by specifying
+You can use the trace heuristic by specifying

 .. code-block::

     problem.solve(dimension_reduction_heuristic="trace")
+
+You can run n iterations of the log-det heuristic by specifying "logdetn". For example, to use
+5 iterations of the log-det heuristic:
+
+.. code-block::
+
+    problem.solve(dimension_reduction_heuristic="logdet5")
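The heuristic names follow a simple string convention: "trace", or "logdet" followed by an iteration count. A hypothetical sketch of how such a name could be split into method and iteration count (illustrative only, not PEPit's actual parsing code):

```python
# Hypothetical parser for heuristic names like "trace" or "logdet5"
# (illustrative only; PEPit's internal handling may differ).
def parse_heuristic(name):
    if name == "trace":
        return "trace", None
    if name.startswith("logdet") and name[len("logdet"):].isdigit():
        # "logdet5" -> run 5 iterations of the log-det heuristic
        return "logdet", int(name[len("logdet"):])
    raise ValueError("unknown dimension reduction heuristic: {!r}".format(name))

print(parse_heuristic("trace"))    # ('trace', None)
print(parse_heuristic("logdet5"))  # ('logdet', 5)
```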
Finding Lyapunov
^^^^^^^^^^^^^^^^

-In a latter release, we will provide tools to help finding good Lyapunov functions to study a given method.
+In a later release, we will provide tools to help find good Lyapunov functions to study a given method.

 This tool will be based on the very recent work [7].

0 commit comments
