Commit ce98338

AdrienTaylor authored and committed
corrected wording
1 parent ec6e4a7

File tree: 9 files changed (+41, -32 lines)


PEPit/functions/smooth_convex_function.py

Lines changed: 2 additions & 2 deletions
@@ -51,5 +51,5 @@ def __init__(self,
         if self.L == np.inf:
             print("\033[96m(PEPit) The class of smooth convex functions is necessarily differentiable.\n"
                   "To instantiate a convex function, please avoid using the class SmoothConvexFunction with \n"
-                  "L == infinity. Instead, please use the class ConvexFunction that allows to compute several \n"
-                  "subgradients at the same point each time one is required.\033[0m")
+                  "L == np.inf. Instead, please use the class ConvexFunction (which accounts for the fact \n"
+                  "that there might be several subgradients at the same point).\033[0m")

PEPit/functions/smooth_function.py

Lines changed: 2 additions & 2 deletions
@@ -55,8 +55,8 @@ def __init__(self,
         self.L = L

         if self.L == np.inf:
-            print("\033[96m(PEPit) The class of smooth functions is necessarily differentiable. \n"
-                  "When setting L to infinity, you remove all the other constraints on the function.\033[0m")
+            print("\033[96m(PEPit) The class of L-smooth functions with L == np.inf implies no constraint: \n"
+                  "it contains all differentiable functions. This might imply issues in your code.\033[0m")

     def add_class_constraints(self):
         """

PEPit/functions/smooth_strongly_convex_function.py

Lines changed: 2 additions & 2 deletions
@@ -61,8 +61,8 @@ def __init__(self,
         if self.L == np.inf:
             print("\033[96m(PEPit) The class of smooth strongly convex functions is necessarily differentiable.\n"
                   "To instantiate a strongly convex function, please avoid using the class SmoothStronglyConvexFunction\n"
-                  "with L == infinity. Instead, please use the class StronglyConvexFunction that allows to compute \n"
-                  "several subgradients at the same point each time one is required.\033[0m")
+                  "with L == np.inf. Instead, please use the class StronglyConvexFunction (which accounts for the fact\n"
+                  "that there might be several subgradients at the same point).\033[0m")

     def add_class_constraints(self):
         """

PEPit/operators/cocoercive.py

Lines changed: 3 additions & 3 deletions
@@ -8,7 +8,7 @@ class CocoerciveOperator(Function):
     implementing the interpolation constraints of the class of cocoercive (and maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         beta (float): cocoercivity parameter

@@ -60,8 +60,8 @@ def __init__(self,
         if self.beta == 0:
             print("\033[96m(PEPit) The class of cocoercive operators is necessarily continuous. \n"
                   "To instantiate a monotone opetator, please avoid using the class CocoerciveOperator\n"
-                  "with beta == 0. Instead, please use the class Monotone that allows to compute \n"
-                  "several values of the operator at the same point each time one is required.\033[0m")
+                  "with beta == 0. Instead, please use the class Monotone (which accounts for the fact \n"
+                  "that the image of the operator at certain points might not be a singleton).\033[0m")

     def add_class_constraints(self):
         """

PEPit/operators/lipschitz.py

Lines changed: 3 additions & 3 deletions
@@ -8,7 +8,7 @@ class LipschitzOperator(Function):
     implementing the interpolation constraints of the class of Lipschitz continuous operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         L (float) Lipschitz parameter

@@ -76,8 +76,8 @@ def __init__(self,
         self.L = L

         if self.L == np.inf:
-            print("\033[96m(PEPit) The class of Lipschitz operators is necessarily continuous. \n"
-                  "When setting L to infinity, you remove all the other constraints on the operator.\033[0m")
+            print("\033[96m(PEPit) The class of L-Lipschitz operators with L == np.inf implies no constraint: \n"
+                  "it contains all multi-valued mappings. This might imply issues in your code.\033[0m")

     def add_class_constraints(self):
         """

PEPit/operators/lipschitz_strongly_monotone.py

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@ class LipschitzStronglyMonotoneOperator(Function):
     for the class of Lipschitz continuous strongly monotone (and maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Warning:
         Lipschitz strongly monotone operators do not enjoy known interpolation conditions. The conditions implemented

@@ -70,9 +70,9 @@ def __init__(self,

         if self.L == np.inf:
             print("\033[96m(PEPit) The class of Lipschitz strongly monotone operators is necessarily continuous.\n"
-                  "To instantiate an operator, please avoid using the class LipschitzStronglyMonotoneOperator with L == infinity.\n"
-                  " Instead, please use the class StronglyMonotoneOperator that allows to compute several"
-                  "subgradients at the same point each time one is required.\033[0m")
+                  "To instantiate an operator, please avoid using the class LipschitzStronglyMonotoneOperator with\n"
+                  " L == np.inf. Instead, please use the class StronglyMonotoneOperator (which accounts for the fact\n"
+                  "that the image of the operator at certain points might not be a singleton).\033[0m")

     def add_class_constraints(self):
         """

PEPit/operators/monotone.py

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ class MonotoneOperator(Function):
     implementing interpolation constraints for the class of maximally monotone operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     General maximally monotone operators are not characterized by any parameter, hence can be instantiated as

PEPit/operators/strongly_monotone.py

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ class StronglyMonotoneOperator(Function):
     (maximally monotone) operators.

     Note:
-        Operators'values can be requested through `gradient` and `function values` should not be used.
+        Operator values can be requested through `gradient` and `function values` should not be used.

     Attributes:
         mu (float): strong monotonicity parameter
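The recurring wording change in the operator and function files above ("several subgradients/values at the same point") rests on a standard fact: a non-differentiable convex function can have a whole set of subgradients at a kink. A minimal standalone illustration (the helper `abs_subdifferential` is a hypothetical example, not part of PEPit):

```python
def abs_subdifferential(x, tol=1e-12):
    """Subdifferential of f(x) = |x|, returned as an interval (lo, hi)."""
    if x > tol:
        return (1.0, 1.0)    # differentiable: unique gradient +1
    if x < -tol:
        return (-1.0, -1.0)  # differentiable: unique gradient -1
    # At the kink x = 0, every g in [-1, 1] is a valid subgradient,
    # which is why the unparameterized convex/monotone classes must
    # allow several values at the same point.
    return (-1.0, 1.0)

lo, hi = abs_subdifferential(0.0)  # the subgradient at 0 is not unique
```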

docs/source/quickstart.rst

Lines changed: 23 additions & 14 deletions
@@ -73,12 +73,13 @@ From now, you can declare functions thanks to the `declare_function` method.
 To enforce the same subgradient to be returned each time one is required,
 we introduced the attribute `reuse_gradient` in the `Function` class.
 Some classes of functions contain only differentiable functions (e.g. smooth convex function).
-In those, the `reuse_gradient` attribute is per default set to True.
+In those, the `reuse_gradient` attribute is set to True by default.

 When the same subgradient is used several times in the same code and when it is difficult to
-to keep track on it (through proximal calls for instance), it may be useful to set this parameter
+to keep track of it (through proximal calls for instance), it may be useful to set this parameter
 to True even if the function is not differentiable. This helps reducing the number of constraints,
-and improve the accuracy of the worst-case. See for instance the code for `improved interior method
+and improve the accuracy of the underlying semidefinite program. See for instance the code for
+`improved interior method
 <https://pepit.readthedocs.io/en/latest/examples/b.html#improved-interior-method>`_ or
 `no Lips in Bregman divergence
 <https://pepit.readthedocs.io/en/latest/examples/b.html#no-lips-in-bregman-divergence>`_.

@@ -153,10 +154,10 @@ Finally, you can ask PEPit to solve the system for you and return the worst-case
     pepit_tau = problem.solve()

 .. warning::
-    Performance estimation problems consists in reformulating the problem as an optimization problem, convex in a Gram
-    matrix G, and in function values F. The dimension of G is directly related to the number of points at which
-    the gradients are evaluated, and the differentiability of the function.
-
+    Performance estimation problems consist in reformulating the problem of finding a worst-case scenario as a semidefinite
+    program (SDP). The dimension of the corresponding SDP is directly related to the number of function and gradient evaluations
+    in a given code.
+
     We encourage the users to perform as few function and subgradient evaluations as possible, as the size of the
     corresponding SDP grows with the number of subgradient/function evaluations at different points.

@@ -213,26 +214,34 @@ Then, after solving the system, you can require its associated dual variable val
 Output pdf
 ~~~~~~~~~~

-In a latter release, we will provide an option to output a pdf file summarizing all those pieces of information.
+In a later release, we will provide an option to output a pdf file summarizing all those pieces of information.

-Simplify proofs
-^^^^^^^^^^^^^^^
+Simpler worst-case scenarios
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Sometimes, there are several solutions to the PEP problem.
-In order to simplify the proof, one would prefer a low dimension solution.
-To this end, we provide an **heuristic** based on the trace to reduce the dimension of the provided solution.
+For obtaining simpler worst-case scenarios, one would prefer a low dimension solutions to the SDP.
+To this end, we provide **heuristics** based on the trace norm or log det minimization for reducing
+the dimension of the numerical solution to the SDP.

-You can use it by specifying
+You can use the trace heuristic by specifying

 .. code-block::

     problem.solve(dimension_reduction_heuristic="trace")
+
+You can use the n iteration of the log det heuristic by specifying "logdetn". For example, for
+using 5 iterations of the logdet heuristic:
+
+.. code-block::
+
+    problem.solve(dimension_reduction_heuristic="logdet5")

 Finding Lyapunov
 ^^^^^^^^^^^^^^^^

-In a latter release, we will provide tools to help finding good Lyapunov functions to study a given method.
+In a later release, we will provide tools to help finding good Lyapunov functions to study a given method.

 This tool will be based on the very recent work [7].
