
Commit 29ad039

Merge pull request tensorly#600 from acotino-ignitioncomputing/main
Escape characters compatible with python >= 3.12
2 parents d53c3ff + da76092
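Why the change was needed (context, not part of the commit): starting with Python 3.12, unrecognized escape sequences in ordinary string literals, such as the \_ and \l appearing in these docstrings, emit a SyntaxWarning at compile time rather than the older, usually hidden DeprecationWarning, so they surface whenever the module is imported. A minimal sketch of the behavior in a Python >= 3.12 session (transcript illustrative):

    >>> doc = "dual\_var"        # "\_" is not a recognized escape sequence
    <stdin>:1: SyntaxWarning: invalid escape sequence '\_'
    >>> doc = "dual\\_var"       # doubling the backslash keeps a literal backslash, no warning
    >>> print(doc)
    dual\_var

Doubling the backslash (or using a raw string such as r"\lambda_s") preserves the literal backslash that Sphinx needs for the math markup, which is what this commit does throughout the affected docstrings.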


2 files changed: +7, -7 lines


tensorly/solvers/admm.py

Lines changed: 5 additions & 5 deletions
@@ -96,20 +96,20 @@ def admm(
 
     .. math:: x_{split} = argmin_{x_{split}}~ f(x_{split}) + (\\rho/2)\\|Ax_{split} + Bx - c\\|_2^2
 
     .. math:: x = argmin_x~ g(x) + (\\rho/2)\\|Ax_{split} + Bx - c\\|_2^2
-    .. math:: dual\_var = dual\_var + (Ax + Bx_{split} - c)
+    .. math:: dual\\_var = dual\\_var + (Ax + Bx_{split} - c)
 
     where rho is a constant defined by the user.
 
     Let us define a least square problem such as :math:`\\|Ux - M\\|^2 + r(x)`.
 
     ADMM can be adapted to this least square problem as following
 
-    .. math:: x_{split} = (UtU + \\rho\\times I)\\times(UtM + \\rho\\times(x + dual\_var)^T)
-    .. math:: x = argmin_{x}~ r(x) + (\\rho/2)\\|x - x_{split}^T + dual\_var\\|_2^2
-    .. math:: dual\_var = dual\_var + x - x_{split}^T
+    .. math:: x_{split} = (UtU + \\rho\\times I)\\times(UtM + \\rho\\times(x + dual\\_var)^T)
+    .. math:: x = argmin_{x}~ r(x) + (\\rho/2)\\|x - x_{split}^T + dual\\_var\\|_2^2
+    .. math:: dual\\_var = dual\\_var + x - x_{split}^T
 
     where r is the regularization operator. Here, x can be updated by using proximity operator
-    of :math:`x_{split}^T - dual\_var`.
+    of :math:`x_{split}^T - dual\\_var`.
 
     References
     ----------
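To make the three updates in this docstring concrete, here is a minimal NumPy sketch of the least-squares ADMM iteration it describes. This is not tensorly's actual admm implementation; it assumes r(x) is a nonnegativity constraint, so the proximity operator in the x update reduces to an elementwise clip, and all variable names (U, M, rho, dual_var) simply mirror the formulas above.

    import numpy as np

    def admm_ls_sketch(U, M, rho=1.0, n_iter=50):
        # Illustrative ADMM for min_x ||Ux - M||^2 + r(x), following the
        # docstring updates; r(x) is taken as nonnegativity here.
        UtU = U.T @ U
        UtM = U.T @ M
        n = UtU.shape[0]
        x = np.zeros((M.shape[1], n))   # x kept transposed, matching the ^T in the formulas
        dual_var = np.zeros_like(x)
        for _ in range(n_iter):
            # x_split = (UtU + rho * I)^{-1} (UtM + rho * (x + dual_var)^T)
            x_split = np.linalg.solve(UtU + rho * np.eye(n),
                                      UtM + rho * (x + dual_var).T)
            # x = prox of r at (x_split^T - dual_var); for nonnegativity, clip at 0
            x = np.clip(x_split.T - dual_var, 0.0, None)
            # dual_var = dual_var + x - x_split^T
            dual_var += x - x_split.T
        return x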

tensorly/solvers/nnls.py

Lines changed: 2 additions & 2 deletions
@@ -100,14 +100,14 @@ def hals_nnls(
 
     This problem can also be defined by adding respectively a sparsity coefficient and a ridge coefficients
 
-    .. math:: \lambda_s, \lambda_r
+    .. math:: \\lambda_s, \\lambda_r
 
     enhancing sparsity or smoothness in the solution [2]. In this sparse/ridge version, the update rule becomes
 
     .. math::
 
         \\begin{equation}
-        V[k,:]_{(j+1)} = V[k,:]_{(j)} + (UtM[k,:] - UtU[k,:]\\times V_{(j)} - \lambda_s)/(UtU[k,k]+2\lambda_r)
+        V[k,:]_{(j+1)} = V[k,:]_{(j)} + (UtM[k,:] - UtU[k,:]\\times V_{(j)} - \\lambda_s)/(UtU[k,k]+2\\lambda_r)
         \\end{equation}
 
     Note that the data fitting is halved but not the ridge penalization.
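For reference, a minimal sketch of the sparse/ridge HALS row update shown in this hunk. It is not the library's hals_nnls code; it assumes the usual HALS nonnegativity projection (a clip) after each row update, and the factor 2*lambda_r in the denominator matches the note above that the data-fitting term is halved but the ridge penalization is not.

    import numpy as np

    def hals_row_update_sketch(UtU, UtM, V, lambda_s=0.0, lambda_r=0.0):
        # One sweep of the sparse/ridge HALS update over the rows of V,
        # following the equation above.
        for k in range(V.shape[0]):
            # V[k,:] <- V[k,:] + (UtM[k,:] - UtU[k,:] @ V - lambda_s) / (UtU[k,k] + 2*lambda_r)
            step = (UtM[k, :] - UtU[k, :] @ V - lambda_s) / (UtU[k, k] + 2 * lambda_r)
            V[k, :] = np.clip(V[k, :] + step, 0.0, None)
        return V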
