Commit e28ae5b

Update example with more evaluation metrics (#339)
1 parent 6613192 commit e28ae5b

File tree

2 files changed: +21 −17 lines


README.md

Lines changed: 16 additions & 13 deletions

````diff
@@ -100,30 +100,33 @@ Define metrics used to evaluate the models:
 ```python
 mae = cornac.metrics.MAE()
 rmse = cornac.metrics.RMSE()
-recall = cornac.metrics.Recall(k=[10, 20])
-ndcg = cornac.metrics.NDCG(k=[10, 20])
+prec = cornac.metrics.Precision(k=10)
+recall = cornac.metrics.Recall(k=10)
+ndcg = cornac.metrics.NDCG(k=10)
 auc = cornac.metrics.AUC()
-
+mAP = cornac.metrics.MAP()
 ```

 Put everything together into an experiment and run it:

 ```python
-cornac.Experiment(eval_method=rs,
-                  models=[mf, pmf, bpr],
-                  metrics=[mae, rmse, recall, ndcg, auc],
-                  user_based=True).run()
+cornac.Experiment(
+    eval_method=rs,
+    models=[mf, pmf, bpr],
+    metrics=[mae, rmse, recall, ndcg, auc, mAP],
+    user_based=True
+).run()
 ```

 **Output:**

-|                          |    MAE |   RMSE |    AUC | NDCG@10 | NDCG@20 | Recall@10 | Recall@20 | Train (s) | Test (s) |
-| ------------------------ | -----: | -----: | -----: | ------: | ------: | --------: | --------: | --------: | -------: |
-| [MF](cornac/models/mf)   | 0.7430 | 0.8998 | 0.7445 |  0.0479 |  0.0556 |    0.0352 |    0.0654 |      0.13 |     1.57 |
-| [PMF](cornac/models/pmf) | 0.7534 | 0.9138 | 0.7744 |  0.0617 |  0.0719 |    0.0479 |    0.0880 |      2.18 |     1.64 |
-| [BPR](cornac/models/bpr) |    N/A |    N/A | 0.8695 |  0.0975 |  0.1129 |    0.0891 |    0.1449 |      3.74 |     1.49 |
+|                          |    MAE |   RMSE |    AUC |    MAP | NDCG@10 | Precision@10 | Recall@10 | Train (s) | Test (s) |
+| ------------------------ | -----: | -----: | -----: | -----: | ------: | -----------: | --------: | --------: | -------: |
+| [MF](cornac/models/mf)   | 0.7430 | 0.8998 | 0.7445 | 0.0407 |  0.0479 |       0.0437 |    0.0352 |      0.13 |     1.57 |
+| [PMF](cornac/models/pmf) | 0.7534 | 0.9138 | 0.7744 | 0.0491 |  0.0617 |       0.0533 |    0.0479 |      2.18 |     1.64 |
+| [BPR](cornac/models/bpr) |    N/A |    N/A | 0.8695 | 0.0753 |  0.0975 |       0.0727 |    0.0891 |      3.74 |     1.49 |

-For more details, please take a look at our [examples](examples).
+For more details, please take a look at our [examples](examples) as well as [tutorials](tutorials).

 ## Models
````
examples/first_example.py

Lines changed: 5 additions & 4 deletions

````diff
@@ -39,15 +39,16 @@
 # Define metrics used to evaluate the models
 mae = cornac.metrics.MAE()
 rmse = cornac.metrics.RMSE()
-recall = cornac.metrics.Recall(k=[10, 20])
-ndcg = cornac.metrics.NDCG(k=[10, 20])
+prec = cornac.metrics.Precision(k=10)
+recall = cornac.metrics.Recall(k=10)
+ndcg = cornac.metrics.NDCG(k=10)
 auc = cornac.metrics.AUC()
+mAP = cornac.metrics.MAP()

 # Put it together into an experiment and run
 cornac.Experiment(
     eval_method=rs,
     models=[mf, pmf, bpr],
-    metrics=[mae, rmse, recall, ndcg, auc],
+    metrics=[mae, rmse, prec, recall, ndcg, auc, mAP],
     user_based=True,
 ).run()
-
````
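MAP, the other metric this commit adds, condenses each user's ranking into a single average-precision score and then averages across users. A minimal sketch of the standard IR formulation follows; the per-user data is made up, and this is not Cornac's exact code:

```python
# Average precision: precision is sampled at the rank of each relevant
# item that was retrieved, then averaged over all relevant items.

def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this hit's rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(per_user):
    """MAP: mean of average precision over (ranked, relevant) pairs."""
    return sum(average_precision(r, rel) for r, rel in per_user) / len(per_user)

users = [
    (["a", "b", "c"], {"a", "c"}),  # hits at ranks 1 and 3 -> AP = 5/6
    (["x", "y", "z"], {"y"}),       # hit at rank 2 -> AP = 1/2
]
print(mean_average_precision(users))  # (5/6 + 1/2) / 2 = 2/3
```

Unlike Precision@k, average precision has no cutoff parameter and rewards placing relevant items as early as possible, which is why it appears in the output table without an `@k` suffix.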
0 commit comments

Comments
 (0)