We explore a novel use case for Large Language Models (LLMs) in recommendation: generating natural language user taste profiles from listening histories. Unlike traditional opaque embeddings, these profiles are interpretable, editable, and give users greater transparency and control over their personalization. However, it is unclear whether users actually recognize themselves in these profiles, and whether some users or items are systematically better represented than others. Understanding this is crucial for trust, usability, and fairness in LLM-based recommender systems.
To study this, we generate profiles using three different LLMs and evaluate them along two dimensions: self-identification, through a user study with 64 participants, and recommendation performance in a downstream task. We analyze how both are affected by user attributes (e.g., age, taste diversity, mainstreamness) and item features (e.g., genre, country of origin). Our results show that profile quality varies across users and items, and that self-identification and recommendation performance are only weakly correlated. These findings highlight both the promise and the limitations of scrutable, LLM-based profiling in personalized systems.
On music streaming services, listening sessions are often composed of a balance of familiar and new tracks. Recently, sequential recommender systems have adopted cognitively informed approaches, such as Adaptive Control of Thought-Rational (ACT-R), to successfully improve the prediction of the most relevant tracks for a user's next session. However, one limitation of using a model inspired by human memory (i.e., the past) is that it struggles to recommend new tracks that users have not previously listened to. To bridge this gap, we propose a model that leverages audio information to predict in advance the ACT-R-like activation of new tracks and incorporates them into the recommendation scoring process. We demonstrate the empirical effectiveness of the proposed model using proprietary data, which we publicly release along with the model's source code to foster future research in this field.
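To make the "ACT-R-like activation" concrete: in ACT-R's standard base-level learning equation, an item's activation is the log of a sum of power-law-decayed traces of its past occurrences, so tracks played often and recently score higher. Below is a minimal sketch of that equation; the function name and the example timestamps are illustrative, and the decay parameter d = 0.5 is ACT-R's conventional default, not a value from the paper.

```python
import math

def base_level_activation(ages, d=0.5):
    """ACT-R base-level activation: B = ln(sum_j t_j^(-d)),
    where each t_j is the time elapsed since the j-th play
    of the track (in any consistent unit, e.g. hours)."""
    assert all(t > 0 for t in ages), "ages must be positive"
    return math.log(sum(t ** -d for t in ages))

# A track played often and recently outscores a stale one:
recent = base_level_activation([1, 5, 24])    # played 1h, 5h, 24h ago
stale = base_level_activation([240, 500])     # played ~10 and ~21 days ago
print(recent, stale)
```

For a brand-new track there are no past plays, so this sum is empty and the activation is undefined; the model described above sidesteps this by predicting the activation from the track's audio features instead.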