Commit f71d351

Transurgeon and claude committed
Merge PR #135: Sync from CVXPY upstream master
Resolves merge conflicts:
- README.md: Keep DNLP-specific README
- cvxpy/atoms/pnorm.py: Keep DNLP _hess_vec methods, add upstream PnormApprox class
- cvxpy/utilities/citations.py: Keep IPOPT/UNO citations, add MOREAU citation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
2 parents 6af3241 + d44d359 commit f71d351

131 files changed: +6443, -1040 lines


.github/workflows/docs.yml

Lines changed: 2 additions & 2 deletions
@@ -36,15 +36,15 @@ jobs:
           echo DEPLOY_LATEST=$(python continuous_integration/doc_is_latest.py --c "$CURRENT_VERSION" --l "$LATEST_DEPLOYED_VERSION") >> $GITHUB_ENV

       - name: Deploy this version
-        uses: JamesIves/github-pages-deploy-action@v4.7.6
+        uses: JamesIves/github-pages-deploy-action@v4.8.0
         with:
           branch: gh-pages # The branch the action should deploy to.
           folder: doc/build/html # The folder the action should deploy.
           target-folder: ${{ env.VERSION_PATH }}

       - name: Deploy to root if latest version
         if: ${{env.DEPLOY_LATEST == 'True'}}
-        uses: JamesIves/github-pages-deploy-action@v4.7.6
+        uses: JamesIves/github-pages-deploy-action@v4.8.0
         with:
           branch: gh-pages # The branch the action should deploy to.
           folder: doc/build/html # The folder the action should deploy.

.github/workflows/scorecards.yml

Lines changed: 1 addition & 1 deletion
@@ -51,6 +51,6 @@ jobs:
       # Upload the results to GitHub's code scanning dashboard (optional).
       # Commenting out will disable upload of results to your repo's Code Scanning dashboard
       - name: "Upload to code-scanning"
-        uses: github/codeql-action/upload-sarif@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v2.16.4
+        uses: github/codeql-action/upload-sarif@19b2f06db2b6f5108140aeb04014ef02b648f789 # v2.16.4
         with:
           sarif_file: results.sarif

PROCEDURES.md

Lines changed: 7 additions & 0 deletions
@@ -113,6 +113,13 @@ If this file has changed between versions, the old patch will fail to apply and
 ## Creating a release on GitHub
 Go to the [Releases](https://github.com/cvxpy/cvxpy/releases) tab and click "Draft a new release". Select the previously created tag and write release notes. For minor releases, this includes a summary of new features and deprecations. Additionally, we mention the PRs contained in the release and their contributors. Take care to select the "set as the latest release" only for minor releases or patches to the most recent major release.

+To generate the list of PRs and contributors, use the `tools/release_notes.py` script:
+```
+python tools/release_notes.py v1.8.0 # minor release
+python tools/release_notes.py v1.7.5 # patch release
+```
+For minor releases, the script automatically excludes PRs that were cherry-picked into the previous release branch's patch releases. For patch releases, it compares against the previous patch tag.
+
 ## Deploying updated documentation to gh-pages

 The web documentation is built and deployed using a GitHub action that can be found [here](https://github.com/cvxpy/cvxpy/blob/master/.github/workflows/docs.yml).

README.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # DNLP — Disciplined Nonlinear Programming
 The DNLP package is an extension of [CVXPY](https://www.cvxpy.org/) to general nonlinear programming (NLP).
-DNLP allows smooth functions to be freely mixed with nonsmooth convex and concave functions,
+DNLP allows smooth functions to be freely mixed with nonsmooth convex and concave functions,
 with some rules governing how the nonsmooth functions can be used. For details, see our paper [Disciplined Nonlinear Programming](https://web.stanford.edu/~boyd/papers/dnlp.html).

 ---
@@ -27,7 +27,7 @@ pip install .
 Below we give a toy example where we maximize a convex quadratic function subject to a nonlinear equality constraint. Many more examples, including the ones in the paper, can be found at [DNLP-examples](https://github.com/cvxgrp/dnlp-examples).
 ```python
 import cvxpy as cp
-import numpy as np
+import numpy as np
 import cvxpy as cp

 # problem data

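The hunk above stops at the start of the README's toy example, so the full listing is not shown here. Purely as an illustration (not the README's actual code, which lives in the DNLP repository), a problem of the kind described, maximizing a convex quadratic subject to a nonlinear equality constraint, could be set up along these lines; the data, shapes, and the solve call are assumptions, and DNLP's actual solver interface may differ:

```python
# Illustrative sketch only; DNLP's real example and solver call may differ.
import cvxpy as cp
import numpy as np

n = 3
P = np.eye(n)                 # PSD matrix, so x^T P x is a convex quadratic
x = cp.Variable(n)

objective = cp.Maximize(cp.quad_form(x, P))   # maximizing a convex quadratic is nonconvex
constraints = [cp.sum_squares(x) == 1]        # nonlinear equality constraint
problem = cp.Problem(objective, constraints)
# problem.solve(...)  # under DNLP this would be handed to an NLP solver such as IPOPT
```
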
continuous_integration/install_dependencies.sh

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ fi

 uv pip install pytest pytest-cov hypothesis "setuptools>65.5.1"

-uv pip install scs clarabel osqp
+uv pip install scs clarabel osqp highspy

 if [[ "$RUNNER_OS" != "macOS" ]]; then
   uv pip install mkl

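The new `highspy` dependency provides the HiGHS solver backend for the CI runs. A minimal sketch of exercising it, assuming this CVXPY build registers HiGHS under the solver name "HIGHS":

```python
# Minimal check that the HiGHS backend (from the highspy package) is available.
# Assumes CVXPY registers it as "HIGHS"; otherwise solve() raises a SolverError.
import cvxpy as cp

x = cp.Variable(2, nonneg=True)
prob = cp.Problem(cp.Minimize(x[0] + 2 * x[1]), [x[0] + x[1] >= 1])
print("HIGHS" in cp.installed_solvers())  # True once highspy is importable
prob.solve(solver="HIGHS")
print(prob.status, prob.value)
```
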
cvxpy/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -41,6 +41,9 @@
     enable_warnings as enable_warnings,
     warnings_enabled as warnings_enabled,
 )
+from cvxpy.utilities.warn import (
+    CvxpyDeprecationWarning as CvxpyDeprecationWarning,
+)
 from cvxpy.expressions.constants import (
     CallbackParam as CallbackParam,
     Constant as Constant,
@@ -59,6 +62,7 @@
     partial_optimize as partial_optimize,
     suppfunc as suppfunc,
 )
+from cvxpy import logic as logic
 from cvxpy.reductions.solvers.defines import installed_solvers as installed_solvers
 from cvxpy.settings import (
     CBC as CBC,

cvxpy/atoms/__init__.py

Lines changed: 9 additions & 7 deletions
@@ -65,7 +65,7 @@
 from cvxpy.atoms.elementwise.minimum import minimum
 from cvxpy.atoms.elementwise.neg import neg
 from cvxpy.atoms.elementwise.pos import pos
-from cvxpy.atoms.elementwise.power import power
+from cvxpy.atoms.elementwise.power import Power, PowerApprox, power
 from cvxpy.atoms.elementwise.rel_entr import rel_entr
 from cvxpy.atoms.elementwise.scalene import scalene
 from cvxpy.atoms.elementwise.sqrt import sqrt
@@ -75,7 +75,7 @@
 from cvxpy.atoms.elementwise.hyperbolic import sinh, asinh, tanh, atanh
 from cvxpy.atoms.eye_minus_inv import eye_minus_inv, resolvent
 from cvxpy.atoms.gen_lambda_max import gen_lambda_max
-from cvxpy.atoms.geo_mean import geo_mean
+from cvxpy.atoms.geo_mean import GeoMean, GeoMeanApprox, geo_mean
 from cvxpy.atoms.gmatmul import gmatmul
 from cvxpy.atoms.harmonic_mean import harmonic_mean
 from cvxpy.atoms.inv_prod import inv_prod
@@ -98,7 +98,7 @@
 from cvxpy.atoms.one_minus_pos import diff_pos, one_minus_pos
 from cvxpy.atoms.perspective import perspective
 from cvxpy.atoms.pf_eigenvalue import pf_eigenvalue
-from cvxpy.atoms.pnorm import Pnorm, pnorm
+from cvxpy.atoms.pnorm import Pnorm, PnormApprox, pnorm
 from cvxpy.atoms.prod import Prod, prod
 from cvxpy.atoms.quad_form import QuadForm, quad_form
 from cvxpy.atoms.quad_over_lin import quad_over_lin
@@ -117,16 +117,18 @@
 # TODO(akshayka): Perhaps couple this information with the atom classes
 # themselves.
 SOC_ATOMS = [
-    geo_mean,
-    pnorm,
-    Pnorm,
+    GeoMeanApprox,
+    PnormApprox,
     QuadForm,
     quad_over_lin,
-    power,
+    PowerApprox,
     huber,
     std,
 ]

+POWCONE_ATOMS = [Pnorm, Power]
+POWCONE_ND_ATOMS = [GeoMean]
+
 EXP_ATOMS = [
     log_sum_exp,
     log_det,

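The reworked lists separate atoms that canonicalize exactly via power cones (`Pnorm`, `Power`, `GeoMean`) from their SOC approximations (`PnormApprox`, `PowerApprox`, `GeoMeanApprox`). A hedged sketch of how lists like these are typically consulted, by collecting the atom classes appearing in a problem and testing membership; CVXPY's solving chain does something similar internally, and the snippet below is illustrative only:

```python
# Illustrative use of the atom lists: which cone families does a problem touch?
import cvxpy as cp
from cvxpy.atoms import EXP_ATOMS, POWCONE_ATOMS, SOC_ATOMS

x = cp.Variable(3, pos=True)
problem = cp.Problem(cp.Minimize(cp.pnorm(x, 3) + cp.log_sum_exp(x)),
                     [cp.sum(x) == 1])

atoms = problem.atoms()  # atom classes appearing anywhere in the problem
print("SOC atoms present:       ", any(a in SOC_ATOMS for a in atoms))
print("power-cone atoms present:", any(a in POWCONE_ATOMS for a in atoms))
print("exp-cone atoms present:  ", any(a in EXP_ATOMS for a in atoms))
```
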
cvxpy/atoms/affine/binary_operators.py

Lines changed: 47 additions & 5 deletions
@@ -26,6 +26,7 @@
 import cvxpy.utilities as u
 from cvxpy.atoms.affine.add_expr import AddExpression
 from cvxpy.atoms.affine.affine_atom import AffAtom
+from cvxpy.atoms.affine.broadcast_to import broadcast_to
 from cvxpy.atoms.affine.conj import conj
 from cvxpy.atoms.affine.promote import Promote
 from cvxpy.atoms.affine.reshape import deep_flatten, reshape
@@ -131,6 +132,52 @@ class MulExpression(BinaryOperator):
     OP_NAME = "@"
     OP_FUNC = op.mul

+    def __init__(self, lh_exp, rh_exp) -> None:
+        # Broadcast batch dimensions for ND matmul
+        lh_exp, rh_exp = self._broadcast_batch_dims(lh_exp, rh_exp)
+        super(MulExpression, self).__init__(lh_exp, rh_exp)
+
+    @staticmethod
+    def _broadcast_batch_dims(lh_exp, rh_exp):
+        """
+        Broadcast batch dimensions for ND matrix multiplication.
+
+        For A @ B where A has shape (...a, m, k) and B has shape (...b, k, n),
+        broadcasts both to have batch shape broadcast(...a, ...b).
+        """
+        lh_exp = Expression.cast_to_const(lh_exp)
+        rh_exp = Expression.cast_to_const(rh_exp)
+
+        lh_shape = lh_exp.shape
+        rh_shape = rh_exp.shape
+
+        # Only apply batch broadcasting for ND arrays (ndim > 2)
+        if len(lh_shape) <= 2 and len(rh_shape) <= 2:
+            return lh_exp, rh_exp
+
+        # Extract batch dimensions (all but last 2)
+        lh_batch = lh_shape[:-2] if len(lh_shape) > 2 else ()
+        rh_batch = rh_shape[:-2] if len(rh_shape) > 2 else ()
+
+        # Compute broadcast batch shape
+        try:
+            broadcast_batch = np.broadcast_shapes(lh_batch, rh_batch)
+        except ValueError:
+            # Let shape validation handle the error with a clearer message
+            return lh_exp, rh_exp
+
+        # Broadcast lhs if needed
+        if lh_batch != broadcast_batch:
+            target_shape = broadcast_batch + lh_shape[-2:]
+            lh_exp = broadcast_to(lh_exp, target_shape)
+
+        # Broadcast rhs if needed
+        if rh_batch != broadcast_batch:
+            target_shape = broadcast_batch + rh_shape[-2:]
+            rh_exp = broadcast_to(rh_exp, target_shape)
+
+        return lh_exp, rh_exp
+
     def numeric(self, values):
         """Matrix multiplication.
         """
@@ -139,11 +186,6 @@ def numeric(self, values):
         else:
             return values[0] @ values[1]

-    def validate_arguments(self):
-        """Validate that the arguments can be multiplied together."""
-        if self.args[0].ndim > 2 or self.args[1].ndim > 2:
-            raise ValueError("Multiplication with N-d arrays is not yet supported")
-
     def shape_from_args(self) -> Tuple[int, ...]:
         """Returns the (row, col) shape of the expression.
         """

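For reference, the batch-broadcasting rule that `_broadcast_batch_dims` implements mirrors NumPy's matmul semantics: only the leading batch dimensions are broadcast, while the trailing matrix dimensions must already be compatible. A small NumPy illustration; the commented CVXPY lines are an assumption about how the new N-d matmul support behaves:

```python
# NumPy reference for batched matmul broadcasting
import numpy as np

A = np.ones((5, 1, 2, 3))  # batch shape (5, 1), matrix shape (2, 3)
B = np.ones((4, 3, 6))     # batch shape (4,),   matrix shape (3, 6)

batch = np.broadcast_shapes(A.shape[:-2], B.shape[:-2])
print(batch)            # (5, 4)
print((A @ B).shape)    # (5, 4, 2, 6)

# With validate_arguments removed and batch broadcasting in MulExpression,
# the analogous CVXPY expression is expected to take the same shape:
# import cvxpy as cp
# X = cp.Variable((4, 3, 6))
# print((A @ X).shape)  # expected: (5, 4, 2, 6)
```
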
cvxpy/atoms/affine/conv.py

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,6 @@
 limitations under the License.
 """

-import warnings
 from typing import List, Tuple

 import numpy as np
@@ -26,6 +25,7 @@
 from cvxpy.atoms.affine.affine_atom import AffAtom
 from cvxpy.constraints.constraint import Constraint
 from cvxpy.expressions.constants.parameter import is_param_free
+from cvxpy.utilities.warn import CvxpyDeprecationWarning, warn


 class conv(AffAtom):
@@ -48,7 +48,7 @@ class conv(AffAtom):
     """

     def __init__(self, lh_expr, rh_expr) -> None:
-        warnings.warn("conv is deprecated. Use convolve instead.", DeprecationWarning)
+        warn("conv is deprecated. Use convolve instead.", CvxpyDeprecationWarning)
        super(conv, self).__init__(lh_expr, rh_expr)

     @AffAtom.numpy_numeric

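Because the deprecation now goes through CVXPY's own warning class, which this commit also re-exports at the package top level, downstream code can target it specifically rather than silencing every `DeprecationWarning`. A sketch under the assumption that `cvxpy.utilities.warn.warn` honors the standard `warnings` filters:

```python
# Escalate only CVXPY's deprecation warnings to errors (assumed filter behavior).
import warnings

import numpy as np
import cvxpy as cp

with warnings.catch_warnings():
    warnings.simplefilter("error", cp.CvxpyDeprecationWarning)
    try:
        cp.conv(np.array([1.0, 2.0]), cp.Variable(3))  # deprecated; use cp.convolve
    except cp.CvxpyDeprecationWarning:
        print("caught CVXPY deprecation as an error")
```
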
cvxpy/atoms/affine/cumsum.py

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,6 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
-import warnings
 from typing import Optional, Tuple

 import numpy as np
@@ -23,6 +22,7 @@
 from cvxpy.atoms.affine.affine_atom import AffAtom
 from cvxpy.atoms.axis_atom import AxisAtom
 from cvxpy.expressions.expression import Expression
+from cvxpy.utilities.warn import warn


 def _sparse_triu_ones(dim: int) -> sp.csc_array:
@@ -58,7 +58,7 @@ def validate_arguments(self) -> None:
         """Validate axis, but handle 0D arrays specially."""
         if self.args[0].ndim == 0:
             if self.axis is not None:
-                warnings.warn(
+                warn(
                     "cumsum on 0-dimensional arrays currently returns a scalar, "
                     "but in a future CVXPY version it will return a 1-element "
                     "array to match numpy.cumsum behavior. Additionally, only "

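For context, the behavior the reworded warning points to: `numpy.cumsum` on a 0-dimensional input returns a 1-element array rather than a scalar.

```python
import numpy as np

print(np.cumsum(np.array(3.0)))        # array([3.])
print(np.cumsum(np.array(3.0)).shape)  # (1,)
```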