Commit 532dacf

[docs] various improvements (#664)
1 parent 30825a2 commit 532dacf

File tree

14 files changed, +248 -252 lines changed


README.md

Lines changed: 11 additions & 11 deletions

@@ -49,7 +49,7 @@ problem = minimize(sumsquares(A * x - b), [x >= 0])
 solve!(problem, SCS.Optimizer)

 # Check the status of the problem
-problem.status # :Optimal, :Infeasible, :Unbounded etc.
+problem.status

 # Get the optimal value
 problem.optval
@@ -102,21 +102,21 @@ settings:
 tol_feas = 1.0e-08, tol_gap_abs = 1.0e-08, tol_gap_rel = 1.0e-08,
 static reg : on, ϵ1 = 1.0e-08, ϵ2 = 4.9e-32
 dynamic reg: on, ϵ = 1.0e-13, δ = 2.0e-07
-iter refine: on, reltol = 1.0e-13, abstol = 1.0e-12,
+iter refine: on, reltol = 1.0e-13, abstol = 1.0e-12,
 max iter = 10, stop ratio = 5.0
 equilibrate: on, min_scale = 1.0e-04, max_scale = 1.0e+04
 max iter = 10

-iter pcost dcost gap pres dres k/t μ step
+iter pcost dcost gap pres dres k/t μ step
 ---------------------------------------------------------------------------------------------
-0 0.0000e+00 4.4359e-01 4.44e-01 8.68e-01 8.16e-02 1.00e+00 1.00e+00 ------
-1 2.2037e+00 2.6563e+00 2.05e-01 7.34e-02 6.03e-03 5.44e-01 1.01e-01 9.33e-01
-2 2.5276e+00 2.6331e+00 4.17e-02 1.43e-02 1.26e-03 1.27e-01 2.26e-02 7.84e-01
-3 2.6758e+00 2.7129e+00 1.39e-02 4.09e-03 3.42e-04 4.35e-02 6.00e-03 7.84e-01
-4 2.7167e+00 2.7178e+00 3.90e-04 1.18e-04 9.82e-06 1.25e-03 1.72e-04 9.80e-01
-5 2.7182e+00 2.7183e+00 9.60e-06 3.39e-06 2.82e-07 3.15e-05 4.95e-06 9.80e-01
-6 2.7183e+00 2.7183e+00 1.92e-07 6.74e-08 5.62e-09 6.29e-07 9.84e-08 9.80e-01
-7 2.7183e+00 2.7183e+00 4.70e-09 1.94e-09 1.61e-10 1.59e-08 2.83e-09 9.80e-01
+0 0.0000e+00 4.4359e-01 4.44e-01 8.68e-01 8.16e-02 1.00e+00 1.00e+00 ------
+1 2.2037e+00 2.6563e+00 2.05e-01 7.34e-02 6.03e-03 5.44e-01 1.01e-01 9.33e-01
+2 2.5276e+00 2.6331e+00 4.17e-02 1.43e-02 1.26e-03 1.27e-01 2.26e-02 7.84e-01
+3 2.6758e+00 2.7129e+00 1.39e-02 4.09e-03 3.42e-04 4.35e-02 6.00e-03 7.84e-01
+4 2.7167e+00 2.7178e+00 3.90e-04 1.18e-04 9.82e-06 1.25e-03 1.72e-04 9.80e-01
+5 2.7182e+00 2.7183e+00 9.60e-06 3.39e-06 2.82e-07 3.15e-05 4.95e-06 9.80e-01
+6 2.7183e+00 2.7183e+00 1.92e-07 6.74e-08 5.62e-09 6.29e-07 9.84e-08 9.80e-01
+7 2.7183e+00 2.7183e+00 4.70e-09 1.94e-09 1.61e-10 1.59e-08 2.83e-09 9.80e-01
 ---------------------------------------------------------------------------------------------
 Terminated with status = solved
 solve time = 941μs

docs/make.jl

Lines changed: 1 addition & 0 deletions

@@ -105,6 +105,7 @@ Documenter.makedocs(
 "Home" => "index.md",
 "introduction/installation.md",
 "introduction/quick_tutorial.md",
+"introduction/dcp.md",
 "introduction/faq.md",
 ],
 "Examples" => examples_nav,

docs/src/examples/supplemental_material/Convex.jl_intro_ISMP2015.jl

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ problem = minimize(
 # Solve the problem by calling `solve!`
 solve!(problem, SCS.Optimizer; silent_solver = true)

-println("problem status is ", problem.status) # :Optimal, :Infeasible, :Unbounded etc.
+println("problem status is ", problem.status)
 println("optimal value is ", problem.optval)

 #-

docs/src/index.md

Lines changed: 0 additions & 55 deletions

@@ -47,58 +47,3 @@ you know where to look for certain things.
 * The **Developer docs** section contains information for people contributing to
   Convex development. Don't worry about this section if you are using Convex to
   formulate and solve problems as a user.
-
-## Extended formulations and the DCP ruleset
-
-Convex.jl works by transforming the problem (which possibly has nonsmooth,
-nonlinear constructions like the nuclear norm, the log determinant, and so
-forth—into) a linear optimization problem subject to conic constraints. This
-reformulation often involves adding auxiliary variables, and is called an
-"extended formulation," since the original problem has been extended with
-additional variables. These formulations rely on the problem being modeled by
-combining Convex.jl's "atoms" or primitives according to certain rules which
-ensure convexity, called the
-[disciplined convex programming (DCP) ruleset](http://cvxr.com/cvx/doc/dcp.html).
-If these atoms are combined in a way that does not ensure convexity, the
-extended formulations are often invalid. As a simple example, consider the problem
-
-```julia
-model = minimize(abs(x), x >= 1, x <= 2)
-```
-
-The optimum occurs at `x=1`, but let us imagine we want to solve this problem
-via Convex.jl using a linear programming (LP) solver.
-
-Since `abs` is a nonlinear function, we need to reformulate the problem to pass
-it to the LP solver. We do this by introducing an auxiliary variable `t` and
-instead solving:
-```julia
-model = minimize(t, x >= 1, x <= 2, t >= x, t >= -x)
-```
-That is, we add the constraints `t >= x` and `t >= -x`, and replace `abs(x)` by
-`t`. Since we are minimizing over `t` and the smallest possible `t` satisfying
-these constraints is the absolute value of `x`, we get the right answer. This
-reformulation worked because we were minimizing `abs(x)`, and that is a valid
-way to use the primitive `abs`.
-
-If we were maximizing `abs`, Convex.jl would error with
-
-> Problem not DCP compliant: objective is not DCP
-
-Why? Well, let us consider the same reformulation for a maximization problem.
-The original problem is:
-```julia
-model = maximize(abs(x), x >= 1, x <= 2)
-```
-and the maximum of 2, obtained at `x = 2`. If we do the same reformulation as
-above, however, we arrive at the problem:
-```julia
-maximize(t, x >= 1, x <= 2, t >= x, t >= -x)
-```
-whose solution is infinity.
-
-In other words, we got the wrong answer by using the reformulation, since the
-extended formulation was only valid for a minimization problem. Convex.jl always
-performs these reformulations, but they are only guaranteed to be valid when the
-DCP ruleset is followed. Therefore, Convex.jl programmatically checks the
-whether or not these rules were satisfied and errors if they were not.
docs/src/introduction/dcp.md

Lines changed: 88 additions & 0 deletions

@@ -0,0 +1,88 @@
+# Extended formulations and the DCP ruleset
+
+Convex.jl works by transforming the problem (which possibly has nonsmooth,
+nonlinear constructions like the nuclear norm, the log determinant, and so
+forth) into a linear optimization problem subject to conic constraints.
+
+The transformed problem often involves adding auxiliary variables, and it is
+called an "extended formulation," since the original problem has been extended
+with additional variables.
+
+Creating an extended formulation relies on the problem being modeled by
+combining Convex.jl's "atoms" or primitives according to certain rules which
+ensure convexity, called the
+[disciplined convex programming (DCP) ruleset](http://cvxr.com/cvx/doc/dcp.html).
+If these atoms are combined in a way that does not ensure convexity, the
+extended formulations are often invalid.
+
+## A valid formulation
+
+As a simple example, consider the problem:
+```@repl
+using Convex, SCS
+x = Variable();
+model_min = minimize(abs(x), [x >= 1, x <= 2]);
+solve!(model_min, SCS.Optimizer; silent_solver = true)
+x.value
+```
+
+The optimum occurs at `x = 1`, but let us imagine we want to solve this problem
+via Convex.jl using a linear programming (LP) solver.
+
+Since `abs` is a nonlinear function, we need to reformulate the problem to pass
+it to the LP solver. We do this by introducing an auxiliary variable `t` and
+instead solving:
+```@repl
+using Convex, SCS
+x = Variable();
+t = Variable();
+model_min_extended = minimize(t, [x >= 1, x <= 2, t >= x, t >= -x]);
+solve!(model_min_extended, SCS.Optimizer; silent_solver = true)
+x.value
+```
+That is, we add the constraints `t >= x` and `t >= -x`, and replace `abs(x)` by
+`t`. Since we are minimizing over `t` and the smallest possible `t` satisfying
+these constraints is the absolute value of `x`, we get the right answer. This
+reformulation worked because we were minimizing `abs(x)`, and that is a valid
+way to use the primitive `abs`.
+
+## An invalid formulation
+
+The reformulation of `abs(x)` works only if we are minimizing `t`.
+
+Why? Well, let us consider the same reformulation for a maximization problem.
+The original problem is:
+```@repl
+using Convex
+x = Variable();
+model_max = maximize(abs(x), [x >= 1, x <= 2])
+```
+This time, `problem is DCP` reports `false`. If we attempt to solve the problem,
+an error is thrown:
+```julia
+julia> solve!(model_max, SCS.Optimizer; silent_solver = true)
+┌ Warning: Problem not DCP compliant: objective is not DCP
+└ @ Convex ~/.julia/dev/Convex/src/problems.jl:73
+ERROR: DCPViolationError: Expression not DCP compliant. This either means that your problem is not convex, or that we could not prove it was convex using the rules of disciplined convex programming. For a list of supported operations, see https://jump.dev/Convex.jl/stable/operations/. For help writing your problem as a disciplined convex program, please post a reproducible example on https://jump.dev/forum.
+Stacktrace:
+[...]
+```
+
+The error is thrown because, if we do the same reformulation as before, we
+arrive at the problem:
+```@repl
+using Convex, SCS
+x = Variable();
+t = Variable();
+model_max_extended = maximize(t, [x >= 1, x <= 2, t >= x, t >= -x]);
+solve!(model_max_extended, SCS.Optimizer; silent_solver = true)
+```
+whose solution is unbounded.
+
+In other words, we can get the wrong answer by using the extended reformulation,
+because the extended formulation was only valid for a minimization problem.
+
+Convex.jl always creates these extended reformulations, but because they are
+only guaranteed to be valid when the DCP ruleset is followed, Convex.jl
+programmatically checks whether the DCP rules were satisfied and errors if they
+were not.
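The minimize/maximize contrast on this new page can be sanity-checked outside Convex.jl. Here is a small Python sketch (not part of this commit; SciPy's `linprog` stands in for the LP solver) that solves the same extended formulation directly, with variables `z = [x, t]`:

```python
# Extended formulation for abs(x): minimize (or maximize) t subject to
# x >= 1, x <= 2, t >= x, t >= -x, written as A_ub @ z <= b_ub with z = [x, t].
from scipy.optimize import linprog

A_ub = [
    [-1, 0],   # -x     <= -1   (x >= 1)
    [1, 0],    #  x     <=  2
    [1, -1],   #  x - t <=  0   (t >= x)
    [-1, -1],  # -x - t <=  0   (t >= -x)
]
b_ub = [-1, 2, 0, 0]
bounds = [(None, None), (None, None)]  # x and t are free variables

# Minimizing t recovers min abs(x) = 1, attained at x = 1.
res_min = linprog(c=[0, 1], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res_min.fun)  # optimal value 1.0

# Maximizing t (i.e. minimizing -t) under the SAME constraints is unbounded,
# which is exactly why the reformulation is invalid for maximization.
res_max = linprog(c=[0, -1], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res_max.status)  # status 3 = problem is unbounded
```

The sketch mirrors the page's argument: the extended formulation agrees with `abs(x)` only when `t` is being pushed down by minimization.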

docs/src/introduction/faq.md

Lines changed: 25 additions & 23 deletions

@@ -2,16 +2,17 @@

 ## Where can I get help?

-For usage questions, please contact us via the
-[Julia Discourse](https://discourse.julialang.org/c/domain/opt). If you're
-running into bugs or have feature requests, please use the [GitHub Issue
-Tracker](https://github.com/jump-dev/Convex.jl/issues).
+For usage questions, please start a new post on the
+[Julia Discourse](https://discourse.julialang.org/c/domain/opt).
+
+If you have a reproducible example of a bug or if you have a feature request,
+please open a [GitHub issue](https://github.com/jump-dev/Convex.jl/issues/new).

 ## How does Convex.jl differ from JuMP?

 Convex.jl and JuMP are both modelling languages for mathematical programming
-embedded in Julia, and both interface with solvers via MathOptInterface, so many
-of the same solvers are available in both.
+embedded in Julia, and both interface with solvers via
+[MathOptInterface](https://github.com/jump-dev/MathOptInterface.jl).

 Convex.jl converts problems to a standard conic form. This approach requires
 (and certifies) that the problem is convex and DCP compliant, and guarantees
@@ -26,45 +27,46 @@ formulation.

 For linear programming, the difference is more stylistic. JuMP's syntax is
 scalar-based and similar to AMPL and GAMS making it easy and fast to create
-constraints by indexing and summation (like `sum(x[i] for i in 1:numLocation)`).
+constraints by indexing and summation (like `sum(x[i] for i in 1:n)`).

 Convex.jl allows (and prioritizes) linear algebraic and functional constructions
-(like `max(x,y) <= A*z`); indexing and summation are also supported in Convex.jl,
+(like `max(x, y) <= A * z`); indexing and summation are also supported in Convex.jl,
 but are somewhat slower than in JuMP.

 JuMP also lets you efficiently solve a sequence of problems when new constraints
 are added or when coefficients are modified, whereas Convex.jl parses the
-problem again whenever the [solve!]{.title-ref} method is called.
+problem again whenever the [solve!](@ref) method is called.

 ## Where can I learn more about Convex Optimization?

 See the freely available book [Convex Optimization](http://web.stanford.edu/~boyd/cvxbook/)
-by Boyd and Vandenberghe for general background on convex optimization. For help
-understanding the rules of Disciplined Convex Programming, we recommend this
+by Boyd and Vandenberghe for general background on convex optimization.
+
+For help understanding the rules of Disciplined Convex Programming, see the
 [DCP tutorial website](http://dcp.stanford.edu/).

 ## Are there similar packages available in other languages?

-You might use [CVXPY](http://www.cvxpy.org) in Python, or [CVX](http://cvxr.com/)
-in Matlab.
+See [CVXPY](http://www.cvxpy.org) in Python and [CVX](http://cvxr.com/) in
+Matlab.

 ## How does Convex.jl work?

 For a detailed discussion of how Convex.jl works, see [our paper](http://www.arxiv.org/abs/1410.4821).

 ## How do I cite this package?

-If you use Convex.jl for published work, we encourage you to cite the
-software using the following BibTeX citation: :
+If you use Convex.jl for published work, we encourage you to cite the software
+using the following BibTeX citation:

 ```
 @article{convexjl,
-title = {Convex Optimization in {J}ulia},
-author ={Udell, Madeleine and Mohan, Karanveer and Zeng, David and Hong, Jenny and Diamond, Steven and Boyd, Stephen},
-year = {2014},
-journal = {SC14 Workshop on High Performance Technical Computing in Dynamic Languages},
-archivePrefix = "arXiv",
-eprint = {1410.4821},
-primaryClass = "math-oc",
-}
+    title = {Convex Optimization in {J}ulia},
+    author = {Udell, Madeleine and Mohan, Karanveer and Zeng, David and Hong, Jenny and Diamond, Steven and Boyd, Stephen},
+    year = {2014},
+    journal = {SC14 Workshop on High Performance Technical Computing in Dynamic Languages},
+    archivePrefix = "arXiv",
+    eprint = {1410.4821},
+    primaryClass = "math-oc",
+}
 ```

docs/src/introduction/installation.md

Lines changed: 3 additions & 4 deletions

@@ -1,14 +1,13 @@
 # Installation

-Installing Convex.jl is a one step process. Open up Julia and type:
+Install Convex.jl using the Julia package manager:
 ```julia
 using Pkg
-Pkg.update()
 Pkg.add("Convex")
 ```

-This does not install any solvers. If you don't have a solver installed
-already, you will want to install a solver such as [SCS](https://github.com/jump-dev/SCS.jl)
+This does not install any solvers. If you don't have a solver installed already,
+you will want to install a solver such as [SCS](https://github.com/jump-dev/SCS.jl)
 by running:
 ```julia
 Pkg.add("SCS")

docs/src/introduction/quick_tutorial.md

Lines changed: 3 additions & 16 deletions

@@ -15,27 +15,14 @@ with variable $x\in \mathbf{R}^{n}$, and problem data
 $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^{m}$.

 This problem can be solved in Convex.jl as follows:
-```@example
-# Make the Convex.jl module available
+```@repl
 using Convex, SCS
-
-# Generate random problem data
 m = 4; n = 5
 A = randn(m, n); b = randn(m)
-
-# Create a (column vector) variable of size n x 1.
 x = Variable(n)
-
-# The problem is to minimize ||Ax - b||^2 subject to x >= 0
-# This can be done by: minimize(objective, constraints)
 problem = minimize(sumsquares(A * x - b), [x >= 0])
-
-# Solve the problem by calling solve!
 solve!(problem, SCS.Optimizer; silent_solver = true)
-
-# Check the status of the problem
-problem.status # :Optimal, :Infeasible, :Unbounded etc.
-
-# Get the optimum value
+problem.status
 problem.optval
+x.value
 ```
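The tutorial's problem, minimize $\|Ax - b\|^2$ subject to $x \geq 0$, is a standard nonnegative least-squares problem, so the answer can be cross-checked outside Convex.jl. A small Python sketch (not part of this commit; it uses SciPy's `nnls` on random data of the same shape as the tutorial):

```python
# Nonnegative least squares: minimize ||Ax - b||_2 subject to x >= 0,
# solved directly with SciPy instead of going through a conic reformulation.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n = 4, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, rnorm = nnls(A, b)  # x is the minimizer, rnorm = ||Ax - b||_2 at x

print(x)      # every entry is nonnegative
print(rnorm)  # the optimal residual norm
```

The tutorial's `problem.optval` is the squared residual, so it would correspond to `rnorm**2` here.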
