 is a convex piecewise linear-quadratic loss function. You can find built-in loss functions in the `Loss <./loss.rst>`_ section.
-- :math:`\mathbf{A}` is a :math:`d \times (k+1)` matrix and :math:`\mathbf{b}` is a :math:`d`-dimensional vector
-  representing :math:`d` linear constraints. See `Constraints <./constraint.rst>`_ for more details.
+- :math:`\mathbf{A}_{\text{user}}` is a :math:`d \times (k+1)` matrix and :math:`\mathbf{b}_{\text{user}}` is a :math:`d`-dimensional vector
+  representing :math:`d` linear constraints on the user-side parameters. See `Constraints <./constraint.rst>`_ for more details.
+
+- :math:`\mathbf{A}_{\text{item}}` is a :math:`d \times (k+1)` matrix and :math:`\mathbf{b}_{\text{item}}` is a :math:`d`-dimensional vector
+  representing :math:`d` linear constraints on the item-side parameters. See `Constraints <./constraint.rst>`_ for more details.
 
 - :math:`\Omega`
   is a user-item collection that records all training data
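As a hedged illustration of the constraint notation above (assuming constraints of the form :math:`\mathbf{A}\mathbf{p} \ge \mathbf{b}` applied to each :math:`(k+1)`-dimensional parameter row), the built-in `'>=0'` nonnegativity constraint corresponds to :math:`\mathbf{A} = \mathbf{I}` and :math:`\mathbf{b} = \mathbf{0}`. The inequality direction and the example vector below are assumptions for illustration:

```python
import numpy as np

# Hedged sketch: assuming each side's constraint has the form A @ p >= b,
# where p is one (k+1)-dimensional parameter row (k factors plus a bias).
k = 6
A = np.eye(k + 1)    # the nonnegativity constraint '>=0' as A = I
b = np.zeros(k + 1)  # ...with b = 0, i.e. every entry of p must be >= 0

p = np.array([0.2, 0.0, 1.5, 0.3, 0.7, 0.1, 0.4])  # example parameter row
satisfied = bool(np.all(A @ p >= b))
print(satisfied)  # True: p is entrywise nonnegative
```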
@@ -93,11 +96,11 @@ Basic Usage
 
     # 3. Model Construction
     clf = plqMF_Ridge(
-        C=0.001, ## Regularization strength
-        rank=6, ## Latent factor dimension
-        loss={'name': 'mae'}, ## Use absolute loss
-        n_users=user_num, ## Number of users
-        n_items=item_num, ## Number of items
+        C=0.001,              ## Regularization strength
+        rank=6,               ## Latent factor dimension
+        loss={'name': 'mae'}, ## Use absolute loss
+        n_users=user_num,     ## Number of users
+        n_items=item_num,     ## Number of items
     )
     clf.fit(X_train, y_train)
 
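The snippet above assumes `user_num`, `item_num`, `X_train`, and `y_train` already exist. A minimal sketch of one plausible input layout (the (user_id, item_id)-pair encoding is an assumption for illustration, not the library's documented format):

```python
import numpy as np

# Assumed layout: each row of X_train is a (user_id, item_id) index pair
# and y_train holds the corresponding observed ratings.
X_train = np.array([[0, 0], [0, 1], [1, 0], [2, 2]])
y_train = np.array([5.0, 3.0, 4.0, 1.0])

user_num = int(X_train[:, 0].max()) + 1  # users indexed 0..user_num-1
item_num = int(X_train[:, 1].max()) + 1  # items indexed 0..item_num-1
print(user_num, item_num)  # 3 3
```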
@@ -118,19 +121,19 @@ Choosing different `loss functions <./loss.rst>`_ through :code:`loss`:
     clf_mse = plqMF_Ridge(
         C=0.001,
         rank=6,
-        loss={'name': 'mse'}, ## Choose square loss
+        loss={'name': 'mse'}, ## Choose square loss
         n_users=user_num,
         n_items=item_num)
 
     # Hinge loss (suitable for binary data)
     clf_hinge = plqMF_Ridge(
         C=0.001,
         rank=6,
-        loss={'name': 'hinge'}, ## Choose hinge loss
+        loss={'name': 'hinge'}, ## Choose hinge loss
         n_users=user_num,
         n_items=item_num)
 
-`Linear constraints <./constraint.rst>`_ can be applied via :code:`constraint`:
+`Linear constraints <./constraint.rst>`_ can be applied via :code:`constraint_user` and :code:`constraint_item`:
 
 .. code-block:: python
 
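The choice between the absolute and square loss matters most when the data contain outliers: the square loss penalizes a large residual quadratically, the absolute loss only linearly. A quick numerical illustration:

```python
import numpy as np

residuals = np.array([0.5, -0.3, 8.0])  # one large outlier
mae = np.mean(np.abs(residuals))        # absolute ('mae') loss
mse = np.mean(residuals ** 2)           # square ('mse') loss
print(round(mae, 2), round(mse, 2))     # 2.93 21.45
```

The single outlier dominates the square loss, which is why the absolute loss is often preferred for noisy rating data.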
@@ -141,7 +144,8 @@ Choosing different `loss functions <./loss.rst>`_ through :code:`loss`:
         loss={'name': 'mae'},
         n_users=user_num,
         n_items=item_num,
-        constraint=[{'name': '>=0'}] ## Use nonnegative constraint
+        constraint_user=[{'name': '>=0'}], ## Use nonnegative constraint
+        constraint_item=[{'name': '>=0'}]
     )
 
 The algorithm includes bias terms :math:`\mathbf{\alpha}` and :math:`\mathbf{\beta}` by default. To disable them, i.e. :math:`\mathbf{\alpha} = \mathbf{0}` and :math:`\mathbf{\beta} = \mathbf{0}`, set :code:`biased=False`:
@@ -155,7 +159,7 @@ The algorithm includes bias terms :math:`\mathbf{\alpha}` and :math:`\mathbf{\be
         loss={'name': 'mae'},
         n_users=user_num,
         n_items=item_num,
-        biased=False  ## Disable bias terms
+        biased=False  ## Disable bias terms
     )
 
 Imposing different strengths of regularization on items/users through :code:`rho`:
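To see what the bias terms contribute, here is a hedged sketch of a common biased matrix-factorization prediction rule, :math:`\hat{r}_{ui} = \mathbf{p}_u \cdot \mathbf{q}_i + \alpha_u + \beta_i`; the exact rule used by `plqMF_Ridge` is assumed here, and all arrays are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 6))   # user factors (assumed shapes)
Q = rng.normal(size=(4, 6))   # item factors
alpha = rng.normal(size=3)    # user biases
beta = rng.normal(size=4)     # item biases

def predict(u, i, biased=True):
    score = P[u] @ Q[i]
    if biased:
        score += alpha[u] + beta[i]
    return score

# biased=False behaves as if alpha = 0 and beta = 0:
assert predict(1, 2, biased=False) == P[1] @ Q[2]
```

Disabling the biases forces every user and item to share the same baseline, which usually hurts accuracy on rating data but can be appropriate for centered or binary targets.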
@@ -169,7 +173,7 @@ Imposing different strengths of regularization on items/users through :code:`rho
         loss={'name': 'mae'},
         n_users=user_num,
         n_items=item_num,
-        rho=0.7  ## Add heavier penalties for user parameters
+        rho=0.7  ## Add heavier penalties for user parameters
     )
 
 Parameter Tuning
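Before tuning, a hedged sketch of what an asymmetric penalty like :code:`rho=0.7` could mean; the exact weighting inside `plqMF_Ridge` is library-defined, and the split below is only one plausible convention, shown for intuition:

```python
import numpy as np

# Illustrative assumption: rho weights the user-side ridge penalty and
# (1 - rho) the item-side penalty.
C, rho = 0.001, 0.7
P = np.ones((3, 6))  # user parameters (example values)
Q = np.ones((4, 6))  # item parameters

user_penalty = rho * C * np.sum(P ** 2)        # heavier: rho = 0.7
item_penalty = (1 - rho) * C * np.sum(Q ** 2)  # lighter: 1 - rho = 0.3
print(user_penalty > item_penalty)  # True
```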
@@ -182,7 +186,7 @@ The model complexity is mainly controlled by :code:`C` and :code:`rank`.
 
     for C_value in [0.0002, 0.001, 0.005]:
         clf = plqMF_Ridge(
-            C=C_value, ## Try different regularization strengths
+            C=C_value, ## Try different regularization strengths
             rank=6,
             loss={'name': 'mae'},
             n_users=user_num,
@@ -197,7 +201,7 @@ The model complexity is mainly controlled by :code:`C` and :code:`rank`.
     for rank_value in [4, 8, 12]:
         clf = plqMF_Ridge(
             C=0.001,
-            rank=rank_value, ## Try different latent factor dimensions
+            rank=rank_value, ## Try different latent factor dimensions