is a convex piecewise linear-quadratic loss function. You can find built-in loss functions in the `Loss <./loss.rst>`_ section.
- :math:`\mathbf{A}` is a :math:`d\times(k+1)` matrix and :math:`\mathbf{b}` is a :math:`d`-dimensional vector
  representing :math:`d` linear constraints. See `Constraints <./constraint.rst>`_ for more details.
- :math:`\Omega` is a user-item collection that records all training data
- :math:`n` is the number of users and :math:`m` is the number of items
- :math:`k` is the length of the latent factors (the rank of the MF)
- :math:`C` is the regularization parameter, and :math:`\rho` balances the regularization strength between users and items (the snippet after this list shows how these symbols map to constructor arguments)
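
To connect the notation to code, here is a minimal sketch of how these symbols correspond to the :code:`plqMF_Ridge` arguments used later on this page (the specific values and :code:`user_num`/:code:`item_num` are placeholders):

.. code-block:: python

    # Symbol-to-argument mapping: n -> n_users, m -> n_items,
    # k -> rank, and C -> C.
    clf = plqMF_Ridge(
        C=0.001,               # C: regularization parameter
        rank=6,                # k: length of the latent factors
        loss={'name': 'mae'},  # the piecewise linear-quadratic loss
        n_users=user_num,      # n: number of users
        n_items=item_num,      # m: number of items
    )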

The model complexity is mainly controlled by :code:`C` and :code:`rank`. A typical workflow is to scan several ranks and compare the held-out error:

.. code-block:: python

    mae = mean_absolute_error(y_test, y_pred)
    print(f"rank={rank_value}: MAE = {mae:.3f}")
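
For context, a minimal sketch of the full scan around this snippet (assuming a scikit-learn-style :code:`predict` method and a pre-split :code:`X_train`/:code:`X_test`) might look like:

.. code-block:: python

    from sklearn.metrics import mean_absolute_error

    for rank_value in [2, 4, 6, 8]:  # candidate ranks to compare
        clf = plqMF_Ridge(
            C=0.001,
            rank=rank_value,
            loss={'name': 'mae'},
            n_users=user_num,
            n_items=item_num,
        )
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)  # assumed sklearn-style predict
        mae = mean_absolute_error(y_test, y_pred)
        print(f"rank={rank_value}: MAE = {mae:.3f}")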

Convergence Tracking
^^^^^^^^^^^^^^^^^^^^

You can customize the optimization process by setting your preferred iteration counts and tolerance levels.
Training progress can be monitored either by enabling :code:`verbose` output during fitting or by examining the :code:`history` attribute after fitting.

.. code-block:: python

    clf = plqMF_Ridge(
        C=0.001,
        rank=6,
        loss={'name': 'mae'},
        n_users=user_num,
        n_items=item_num,
        max_iter_CD=15,  # outer CD iterations
        tol_CD=1e-5,     # outer CD tolerance
        max_iter=8000,   # ReHLine solver iterations
        tol=1e-2,        # ReHLine solver tolerance
        verbose=1,       # enable progress output
    )
    clf.fit(X_train, y_train)

    print(clf.history)  # training trace of the cumulative loss and objective value

Different Gaussian initial conditions can be set manually via :code:`init_mean` and :code:`init_sd`:

.. code-block:: python

    # Initialize the model with a positively shifted normal distribution
    clf = plqMF_Ridge(
        C=0.001,
        rank=6,
        loss={'name': 'mae'},
        n_users=user_num,
        n_items=item_num,
        init_mean=1.0,  # mean of the initialization normal distribution
        init_sd=0.5,    # standard deviation of the initialization normal distribution
    )

Practical Guidance
^^^^^^^^^^^^^^^^^^

- The first column of :code:`X` corresponds to **users** and the second column corresponds to **items**; an example layout is shown after this list. Make sure this ordering matches your :code:`n_users` and :code:`n_items` parameters.
- The default penalty strength is relatively weak; it is recommended to start with a relatively small :code:`C` value.
- When using larger :code:`C` values, consider increasing :code:`max_iter` to avoid a ConvergenceWarning.
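
As an illustration of the expected input layout (a hypothetical toy dataset; the names mirror the examples above):

.. code-block:: python

    import numpy as np

    # Each row of X is one observed rating: (user index, item index).
    # Column 0 holds user indices (must be < n_users) and
    # column 1 holds item indices (must be < n_items).
    X_train = np.array([
        [0, 2],  # user 0 rated item 2
        [1, 0],  # user 1 rated item 0
        [3, 1],  # user 3 rated item 1
    ])
    y_train = np.array([4.0, 3.5, 5.0])  # the corresponding ratings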

Regularization Conversion
-------------------------

The regularization in this algorithm is tuned via :math:`C` and :math:`\rho`. If you prefer to set the penalty strengths directly, you can convert between the two parameterizations with the following formulas:

.. math::

    \lambda_{\text{user}} = \frac{\rho}{Cn}
    \quad\text{and}\quad
    \lambda_{\text{item}} = \frac{1 - \rho}{Cm}

and, conversely,

.. math::

    C = \frac{1}{m \cdot \lambda_{\text{item}} + n \cdot \lambda_{\text{user}}}
    \quad\text{and}\quad
    \rho = \frac{1}{\frac{m \cdot \lambda_{\text{item}}}{n \cdot \lambda_{\text{user}}} + 1}
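
A minimal sketch of these conversions in code (the function names and arguments are illustrative, not part of the library API):

.. code-block:: python

    def lambdas_to_C_rho(lam_user, lam_item, n, m):
        """Convert per-factor penalties to the (C, rho) parameterization."""
        C = 1.0 / (m * lam_item + n * lam_user)
        rho = 1.0 / ((m * lam_item) / (n * lam_user) + 1.0)
        return C, rho

    def C_rho_to_lambdas(C, rho, n, m):
        """Convert (C, rho) back to per-factor penalties."""
        lam_user = rho / (C * n)
        lam_item = (1.0 - rho) / (C * m)
        return lam_user, lam_item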