MovieLens-1M is a widely used movie dataset with explicit ratings (from 1 to 5). The MeLU paper uses MAE to evaluate rating-prediction error, but your paper seems to adopt ranking metrics (Precision@5 and MAP@5) instead. Could you clarify this choice?
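For context, here is a minimal sketch (not taken from either paper) of one common way to apply these ranking metrics to explicit ratings: binarize relevance with a threshold (e.g. treat true ratings >= 4 as relevant) and rank items by predicted score. The item IDs, threshold, and data below are hypothetical.

```python
def precision_at_k(ranked_items, relevant, k=5):
    """Fraction of the top-k ranked items that are relevant."""
    top_k = ranked_items[:k]
    return sum(1 for item in top_k if item in relevant) / k

def average_precision_at_k(ranked_items, relevant, k=5):
    """AP@k: average of precision values at each relevant rank in the top k."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k)

# Hypothetical user: items ranked by predicted rating;
# "relevant" = items whose true rating is >= 4.
ranked = ["m1", "m2", "m3", "m4", "m5"]
relevant = {"m1", "m3", "m5"}
p5 = precision_at_k(ranked, relevant)           # 3/5 = 0.6
ap5 = average_precision_at_k(ranked, relevant)  # (1/1 + 2/3 + 3/5) / 3
```

MAP@5 would then be the mean of AP@5 over all test users. Whether this thresholding matches the paper's protocol is exactly the question above.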