
Conversation

@zhenghaoz
Collaborator

No description provided.

@codecov

codecov bot commented Dec 21, 2025

Codecov Report

❌ Patch coverage is 95.34884% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.92%. Comparing base (0a9dbff) to head (772a623).

Files with missing lines Patch % Lines
dataset/dataset.go 95.23% 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1129      +/-   ##
==========================================
+ Coverage   72.87%   72.92%   +0.05%     
==========================================
  Files          79       79              
  Lines       15199    15236      +37     
==========================================
+ Hits        11076    11111      +35     
- Misses       3019     3020       +1     
- Partials     1104     1105       +1     

☔ View full report in Codecov by Sentry.

Copilot AI (Contributor) left a comment

Pull request overview

This PR implements LLM benchmarking capabilities for the Gorse recommendation system by adding timestamp tracking to feedback data and creating evaluation functions for different model types (collaborative filtering, attentive factorization machines, and LLM-based rankers).

Changes:

  • Added timestamp tracking to the dataset structure to enable temporal data splitting
  • Implemented SplitLatest() method to create train/test splits based on most recent feedback per user
  • Created comprehensive benchmark command with evaluation functions for CF models, AFM, and LLM rankers

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 9 comments.

Summary per file:

  • dataset/dataset.go: Added timestamps field and tracking, implemented SplitLatest method for temporal splitting, added GetItems to CFSplit interface
  • dataset/dataset_test.go: Updated tests for AddFeedback signature change, added test coverage for SplitLatest method
  • model/cf/evaluator_test.go: Updated AddFeedback call to include timestamp parameter
  • master/tasks.go: Updated AddFeedback call to pass actual timestamp from feedback
  • logics/chat.go: Added ChatTemplateKwargs to disable thinking mode for chat completions
  • common/parallel/parallel_test.go: Increased test sleep duration from 100ms to 1 second
  • cmd/gorse-benchmark/main.go: Implemented comprehensive benchmark command with EvaluateCF, EvaluateAFM, EvaluateLLM functions and CTR dataset splitting logic
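
To make the temporal-splitting idea concrete, here is a minimal, self-contained Go sketch of a leave-latest-out split on a toy dataset. The Feedback struct, the splitLatest helper, and the handling of each user's earlier feedback are illustrative stand-ins only; they do not mirror the actual types or edge-case handling in dataset/dataset.go.

package main

import (
	"fmt"
	"time"
)

// Feedback is a minimal stand-in for a feedback record; the real dataset
// structures in dataset/dataset.go are more involved.
type Feedback struct {
	User string
	Item string
	Time time.Time
}

// splitLatest moves each user's most recent feedback into the test set and
// keeps everything earlier for training, so the model never trains on
// interactions that happen after the ones it is evaluated on.
func splitLatest(all []Feedback) (train, test []Feedback) {
	latest := make(map[string]int) // user -> index of that user's most recent feedback
	for i, f := range all {
		if j, ok := latest[f.User]; !ok || f.Time.After(all[j].Time) {
			latest[f.User] = i
		}
	}
	for i, f := range all {
		if latest[f.User] == i {
			test = append(test, f)
		} else {
			train = append(train, f)
		}
	}
	return train, test
}

func main() {
	day := func(d int) time.Time { return time.Date(2025, 1, d, 0, 0, 0, 0, time.UTC) }
	all := []Feedback{
		{"u1", "a", day(1)}, {"u1", "b", day(3)}, {"u2", "c", day(2)}, {"u2", "d", day(4)},
	}
	train, test := splitLatest(all)
	fmt.Println("train:", train) // u1/a and u2/c (earlier feedback)
	fmt.Println("test: ", test)  // u1/b and u2/d (each user's latest feedback)
}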


}
count.Add(1)
time.Sleep(100 * time.Millisecond)
time.Sleep(time.Second)
Copilot AI Jan 22, 2026

The sleep time has been increased from 100ms to 1 second, making this test 10x slower. This significant increase in test duration could slow down the test suite considerably. If this change is necessary for test reliability, consider whether the test design could be improved to avoid such a long delay, or add a comment explaining why this longer duration is required.

Suggested change
time.Sleep(time.Second)
time.Sleep(100 * time.Millisecond)

Comment on lines +228 to +238
addedNegative := make(map[int32]bool)
for i := 0; i < ctrDataset.Count(); i++ {
user := ctrDataset.Users[i]
item := ctrDataset.Items[i]
target := ctrDataset.Target[i]
key := makeKey(user, item)
if target > 0 && testPositive.Contains(key) {
appendSample(testSet, user, item, target)
} else if target <= 0 && !addedNegative[user] {
appendSample(testSet, user, item, target)
addedNegative[user] = true
Copilot AI Jan 22, 2026

The logic here only adds one negative sample per user to the test set. This could lead to an imbalanced test set where users with multiple negative samples in the original dataset only get one in the test set. Consider whether this is the intended behavior, or if all negative samples matching the test users should be included. If intentional, a comment explaining this design choice would be helpful.

Suggested change
addedNegative := make(map[int32]bool)
for i := 0; i < ctrDataset.Count(); i++ {
user := ctrDataset.Users[i]
item := ctrDataset.Items[i]
target := ctrDataset.Target[i]
key := makeKey(user, item)
if target > 0 && testPositive.Contains(key) {
appendSample(testSet, user, item, target)
} else if target <= 0 && !addedNegative[user] {
appendSample(testSet, user, item, target)
addedNegative[user] = true
for i := 0; i < ctrDataset.Count(); i++ {
user := ctrDataset.Users[i]
item := ctrDataset.Items[i]
target := ctrDataset.Target[i]
key := makeKey(user, item)
if target > 0 && testPositive.Contains(key) {
// Positive feedback in the CF test split goes to the CTR test set.
appendSample(testSet, user, item, target)
} else if target <= 0 && testNegative.Contains(key) {
// Include all sampled negative feedback for test users in the CTR test set.
appendSample(testSet, user, item, target)

pCtx.Detach()
result, err := chat.Rank(context.Background(), &data.User{}, feedback, candidates)
if err != nil {
if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 421 {
Copilot AI Jan 22, 2026

HTTP status code 421 (Misdirected Request) is an unusual status code to handle for API throttling. Status code 429 (Too Many Requests) is the standard HTTP status code for rate limiting. Verify that the OpenAI API actually returns 421 for throttling, or if this should be 429. If 421 is intentional, consider adding a comment explaining why this specific status code is being handled.

Suggested change
if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 421 {
if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 429 {
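
If 429 is the intended throttling signal, a common alternative to skipping throttled requests is to retry with exponential backoff. Below is a minimal sketch, assuming the go-openai error type already used in this code; the retryOnThrottle helper, its package name, and the retry limits are illustrative and not part of this PR.

package benchmark

import (
	"errors"
	"net/http"
	"time"

	"github.com/sashabaranov/go-openai"
)

// retryOnThrottle retries fn with exponential backoff while the OpenAI client
// reports HTTP 429 (Too Many Requests). Illustrative only; names and limits
// are not taken from this PR.
func retryOnThrottle(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		err = fn()
		var apiErr *openai.APIError
		if errors.As(err, &apiErr) && apiErr.HTTPStatusCode == http.StatusTooManyRequests {
			time.Sleep(time.Duration(1<<attempt) * time.Second) // back off 1s, 2s, 4s, ...
			continue
		}
		return err
	}
	return err
}

A call site could then wrap the ranking call in such a helper instead of branching on the status code inline.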

return trainSet, testSet
}

// SplitLatest splits dataset by moving the most recent feedback of all users into the test set to avoid leakage.
Copilot AI Jan 22, 2026

The documentation comment for SplitLatest should be more detailed. It currently only mentions "to avoid leakage" but doesn't explain what kind of leakage. Consider expanding the documentation to clarify that this method prevents temporal data leakage by ensuring that training data comes from earlier time periods than test data, which is crucial for realistic evaluation of recommendation models.

Suggested change
// SplitLatest splits dataset by moving the most recent feedback of all users into the test set to avoid leakage.
// SplitLatest splits the dataset by moving the most recent feedback of each user into the test set.
// This enforces a temporal ordering between training and test data: the training set only contains
// earlier interactions, while the test set contains the latest interactions per user. By ensuring
// that the model is trained only on feedback that predates the test feedback, SplitLatest prevents
// temporal data leakage and provides a more realistic evaluation setting for recommendation models.

)
})

score := sum.Load() / count.Load()
Copilot AI Jan 22, 2026

There's a potential division by zero issue here. If count.Load() is zero (which could happen if all LLM API calls fail or are throttled), this will cause a panic. Consider adding a check to handle the case when count is zero, for example by returning 0 or handling the error appropriately.

Suggested change
score := sum.Load() / count.Load()
c := count.Load()
if c == 0 {
log.Logger().Warn("no successful LLM evaluations, returning default score", zap.Float32("score", 0))
return 0
}
score := sum.Load() / c

Comment on lines +175 to 178
ndcg := float32(0)
if ndcgUsers > 0 {
ndcg = sumNDCG / ndcgUsers
}
Copilot AI Jan 22, 2026

There's a potential division by zero issue here. If ndcgUsers is zero (which could happen if no valid users are found with positive targets), the function will return 0 by default. However, it would be clearer to explicitly handle this edge case at the return statement to avoid confusion about whether ndcg was computed or defaulted.
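
One way to make the edge case explicit at the return site is sketched below, assuming sumNDCG and ndcgUsers are float32 accumulators as in the surrounding code and that the enclosing function returns the metric; the log message is illustrative.

if ndcgUsers == 0 {
	// No user had a positive target, so NDCG is undefined; return 0 explicitly.
	log.Logger().Warn("no users with positive targets, NDCG defaulted to 0")
	return 0
}
return sumNDCG / ndcgUsers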

test.SampleUserNegatives(dataset, 99)
// go EvaluateCF(train, test, &scores)
EvaluateAFM(ctrDataset, train, test)
// EvaluateLLM(cfg, train, test)
Copilot AI Jan 22, 2026

These commented-out lines should be either removed or uncommented for production code. Leaving commented code in the codebase reduces code clarity and can lead to confusion about which functionality is active. If these are temporary for development/testing purposes, consider using a feature flag or configuration option instead.

Suggested change
// EvaluateLLM(cfg, train, test)
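
One possible shape for such a switch is sketched below using the standard flag package; the flag names, defaults, and placement of flag.Parse() are hypothetical, while the evaluator calls are the ones already present in this file.

var (
	evalCF  = flag.Bool("eval-cf", false, "run the collaborative filtering evaluation")
	evalAFM = flag.Bool("eval-afm", true, "run the attentive factorization machine evaluation")
	evalLLM = flag.Bool("eval-llm", false, "run the LLM ranker evaluation")
)

// ... after flag.Parse() and dataset preparation ...
if *evalCF {
	EvaluateCF(train, test, &scores)
}
if *evalAFM {
	EvaluateAFM(ctrDataset, train, test)
}
if *evalLLM {
	EvaluateLLM(cfg, train, test)
}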

// EvaluateLLM(cfg, train, test, aux.GetItems())
var scores sync.Map
train, test := dataset.SplitLatest()
test.SampleUserNegatives(dataset, 99)
Copilot AI Jan 22, 2026

The result of SampleUserNegatives is not being used. This call generates negative samples but the return value is discarded. If negative samples are needed for evaluation, the result should be stored. If this is intentionally pre-populating negatives for later use, consider adding a comment to clarify the intent. Note that both EvaluateLLM and SplitCTRDataset call SampleUserNegatives separately with train as the excludeSet, which may be the intended usage pattern.

Comment on lines +291 to +293
testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
Copilot AI Jan 22, 2026

The order of operations has been changed here. Lines 291 and 293 should be in the same order as lines 272-273, where userFeedback is updated before itemFeedback. The current order (timestamps, itemFeedback, userFeedback) is inconsistent with the pattern elsewhere in this function and could lead to confusion. Consider reordering to maintain consistency: userFeedback first, then itemFeedback, then timestamps.

Suggested change
testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
