implement LLM benchmark #1129

base: master
Conversation
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

    @@           Coverage Diff           @@
    ##           master    #1129   +/-  ##
    ========================================
    + Coverage   72.87%   72.92%   +0.05%
    ========================================
      Files          79       79
      Lines       15199    15236      +37
    ========================================
    + Hits        11076    11111      +35
    - Misses       3019     3020       +1
    - Partials     1104     1105       +1

View full report in Codecov by Sentry.
…to CFSplit interface
Pull request overview
This PR implements LLM benchmarking capabilities for the Gorse recommendation system by adding timestamp tracking to feedback data and creating evaluation functions for different model types (collaborative filtering, attentive factorization machines, and LLM-based rankers).
Changes:
- Added timestamp tracking to the dataset structure to enable temporal data splitting
- Implemented a SplitLatest() method to create train/test splits based on the most recent feedback per user
- Created a comprehensive benchmark command with evaluation functions for CF models, AFM, and LLM rankers
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 9 comments.
Summary per file:
| File | Description |
|---|---|
| dataset/dataset.go | Added timestamps field and tracking, implemented SplitLatest method for temporal splitting, added GetItems to CFSplit interface |
| dataset/dataset_test.go | Updated tests for AddFeedback signature change, added test coverage for SplitLatest method |
| model/cf/evaluator_test.go | Updated AddFeedback call to include timestamp parameter |
| master/tasks.go | Updated AddFeedback call to pass actual timestamp from feedback |
| logics/chat.go | Added ChatTemplateKwargs to disable thinking mode for chat completions |
| common/parallel/parallel_test.go | Increased test sleep duration from 100ms to 1 second |
| cmd/gorse-benchmark/main.go | Implemented comprehensive benchmark command with EvaluateCF, EvaluateAFM, EvaluateLLM functions and CTR dataset splitting logic |
In common/parallel/parallel_test.go:

      }
      count.Add(1)
    - time.Sleep(100 * time.Millisecond)
    + time.Sleep(time.Second)
Copilot (AI) · Jan 22, 2026
The sleep time has been increased from 100ms to 1 second, making this test 10x slower. This significant increase in test duration could slow down the test suite considerably. If this change is necessary for test reliability, consider whether the test design could be improved to avoid such a long delay, or add a comment explaining why this longer duration is required.
Suggested change:

    - time.Sleep(time.Second)
    + time.Sleep(100 * time.Millisecond)
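If the longer wait is only compensating for goroutine scheduling, polling is usually a better fix than a bigger sleep: the test returns as soon as the condition holds, so the one-second bound is a worst case rather than a cost paid on every run. Below is a minimal sketch using testify's assert.Eventually, assuming the suite already uses testify; the test body is illustrative, not the PR's actual test:

```go
package parallel_test

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestCountEventually(t *testing.T) {
	var count atomic.Int64
	go count.Add(1)
	// Poll the condition every 10ms instead of sleeping a fixed second:
	// the assertion succeeds as soon as the goroutine has run, and only
	// times out (and fails) after the full second in the worst case.
	assert.Eventually(t, func() bool { return count.Load() == 1 },
		time.Second, 10*time.Millisecond)
}
```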
In cmd/gorse-benchmark/main.go:

    addedNegative := make(map[int32]bool)
    for i := 0; i < ctrDataset.Count(); i++ {
        user := ctrDataset.Users[i]
        item := ctrDataset.Items[i]
        target := ctrDataset.Target[i]
        key := makeKey(user, item)
        if target > 0 && testPositive.Contains(key) {
            appendSample(testSet, user, item, target)
        } else if target <= 0 && !addedNegative[user] {
            appendSample(testSet, user, item, target)
            addedNegative[user] = true
Copilot (AI) · Jan 22, 2026
The logic here only adds one negative sample per user to the test set. This could lead to an imbalanced test set where users with multiple negative samples in the original dataset only get one in the test set. Consider whether this is the intended behavior, or if all negative samples matching the test users should be included. If intentional, a comment explaining this design choice would be helpful.
Suggested change:

    - addedNegative := make(map[int32]bool)
      for i := 0; i < ctrDataset.Count(); i++ {
          user := ctrDataset.Users[i]
          item := ctrDataset.Items[i]
          target := ctrDataset.Target[i]
          key := makeKey(user, item)
          if target > 0 && testPositive.Contains(key) {
    +         // Positive feedback in the CF test split goes to the CTR test set.
              appendSample(testSet, user, item, target)
    -     } else if target <= 0 && !addedNegative[user] {
    +     } else if target <= 0 && testNegative.Contains(key) {
    +         // Include all sampled negative feedback for test users in the CTR test set.
              appendSample(testSet, user, item, target)
    -         addedNegative[user] = true
In cmd/gorse-benchmark/main.go:

    pCtx.Detach()
    result, err := chat.Rank(context.Background(), &data.User{}, feedback, candidates)
    if err != nil {
        if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 421 {
Copilot (AI) · Jan 22, 2026
HTTP status code 421 (Misdirected Request) is an unusual status code to handle for API throttling. Status code 429 (Too Many Requests) is the standard HTTP status code for rate limiting. Verify that the OpenAI API actually returns 421 for throttling, or if this should be 429. If 421 is intentional, consider adding a comment explaining why this specific status code is being handled.
Suggested change:

    - if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 421 {
    + if apiError, ok := err.(*openai.APIError); ok && apiError.HTTPStatusCode == 429 {
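If 429 turns out to be the correct status to treat as throttling, here is a minimal sketch of the check, assuming the sashabaranov/go-openai client whose *openai.APIError the diff already asserts on; errors.As is used instead of a bare type assertion so wrapped errors still match:

```go
package bench

import (
	"errors"
	"net/http"

	openai "github.com/sashabaranov/go-openai"
)

// isThrottled reports whether err is an OpenAI API error carrying the
// standard rate-limit status, 429 Too Many Requests. If the benchmark
// genuinely needs to treat 421 as throttling, that deserves a comment
// at the call site instead.
func isThrottled(err error) bool {
	var apiError *openai.APIError
	return errors.As(err, &apiError) &&
		apiError.HTTPStatusCode == http.StatusTooManyRequests
}
```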
In dataset/dataset.go:

        return trainSet, testSet
    }

    // SplitLatest splits dataset by moving the most recent feedback of all users into the test set to avoid leakage.
Copilot (AI) · Jan 22, 2026
The documentation comment for SplitLatest should be more detailed. It currently only mentions "to avoid leakage" but doesn't explain what kind of leakage. Consider expanding the documentation to clarify that this method prevents temporal data leakage by ensuring that training data comes from earlier time periods than test data, which is crucial for realistic evaluation of recommendation models.
Suggested change:

    - // SplitLatest splits dataset by moving the most recent feedback of all users into the test set to avoid leakage.
    + // SplitLatest splits the dataset by moving the most recent feedback of each user into the test set.
    + // This enforces a temporal ordering between training and test data: the training set only contains
    + // earlier interactions, while the test set contains the latest interactions per user. By ensuring
    + // that the model is trained only on feedback that predates the test feedback, SplitLatest prevents
    + // temporal data leakage and provides a more realistic evaluation setting for recommendation models.
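For readers new to this kind of split, here is a self-contained sketch of the idea using simplified, illustrative types; the PR's Dataset stores feedback in parallel slices rather than a map like this:

```go
package bench

import (
	"sort"
	"time"
)

// Interaction is an illustrative stand-in for one feedback record.
type Interaction struct {
	Item      int32
	Timestamp time.Time
}

// splitLatest moves each user's most recent interaction into the test set
// and keeps the rest for training, so training data strictly predates the
// test data it is evaluated against.
func splitLatest(users map[int32][]Interaction) (train, test map[int32][]Interaction) {
	train = make(map[int32][]Interaction)
	test = make(map[int32][]Interaction)
	for user, history := range users {
		if len(history) < 2 {
			// Too little history to split: keep it all for training.
			train[user] = history
			continue
		}
		// Sort by time so the last element is the most recent interaction.
		sort.Slice(history, func(i, j int) bool {
			return history[i].Timestamp.Before(history[j].Timestamp)
		})
		train[user] = history[:len(history)-1]
		test[user] = history[len(history)-1:]
	}
	return train, test
}
```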
In cmd/gorse-benchmark/main.go:

        )
    })

    score := sum.Load() / count.Load()
Copilot (AI) · Jan 22, 2026
There's a potential division by zero issue here. If count.Load() is zero (which could happen if all LLM API calls fail or are throttled), this will cause a panic. Consider adding a check to handle the case when count is zero, for example by returning 0 or handling the error appropriately.
Suggested change:

    - score := sum.Load() / count.Load()
    + c := count.Load()
    + if c == 0 {
    +     log.Logger().Warn("no successful LLM evaluations, returning default score", zap.Float32("score", 0))
    +     return 0
    + }
    + score := sum.Load() / c
In cmd/gorse-benchmark/main.go:

    ndcg := float32(0)
    if ndcgUsers > 0 {
        ndcg = sumNDCG / ndcgUsers
    }
Copilot (AI) · Jan 22, 2026
There's a potential division by zero issue here. If ndcgUsers is zero (which could happen if no valid users are found with positive targets), the function will return 0 by default. However, it would be clearer to explicitly handle this edge case at the return statement to avoid confusion about whether ndcg was computed or defaulted.
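One way to make the edge case explicit at the return, sketched as a small helper; the ok flag is an illustrative addition, not part of the PR's signature:

```go
package bench

// meanNDCG distinguishes a computed average from a defaulted one: the ok
// flag tells the caller whether any user actually contributed, instead of
// silently reporting 0 when NDCG is undefined.
func meanNDCG(sumNDCG, ndcgUsers float32) (score float32, ok bool) {
	if ndcgUsers == 0 {
		return 0, false // no users with positive targets
	}
	return sumNDCG / ndcgUsers, true
}
```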
In cmd/gorse-benchmark/main.go:

    test.SampleUserNegatives(dataset, 99)
    // go EvaluateCF(train, test, &scores)
    EvaluateAFM(ctrDataset, train, test)
    // EvaluateLLM(cfg, train, test)
Copilot (AI) · Jan 22, 2026
These commented-out lines should be either removed or uncommented for production code. Leaving commented code in the codebase reduces code clarity and can lead to confusion about which functionality is active. If these are temporary for development/testing purposes, consider using a feature flag or configuration option instead.
Suggested change:

    // EvaluateLLM(cfg, train, test)
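One way to avoid commenting calls in and out is to select evaluators from the command line. Below is a sketch using the standard flag package, with stubs standing in for the PR's EvaluateCF, EvaluateAFM, and EvaluateLLM; the flag names and defaults are illustrative:

```go
package main

import (
	"flag"
	"fmt"
)

// Stubs standing in for the benchmark's real evaluators.
func evaluateCF()  { fmt.Println("evaluating CF models") }
func evaluateAFM() { fmt.Println("evaluating AFM") }
func evaluateLLM() { fmt.Println("evaluating LLM ranker") }

func main() {
	runCF := flag.Bool("cf", false, "evaluate collaborative filtering models")
	runAFM := flag.Bool("afm", true, "evaluate the attentive factorization machine")
	runLLM := flag.Bool("llm", false, "evaluate the LLM ranker")
	flag.Parse()
	if *runCF {
		evaluateCF()
	}
	if *runAFM {
		evaluateAFM()
	}
	if *runLLM {
		evaluateLLM()
	}
}
```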
In cmd/gorse-benchmark/main.go:

    // EvaluateLLM(cfg, train, test, aux.GetItems())
    var scores sync.Map
    train, test := dataset.SplitLatest()
    test.SampleUserNegatives(dataset, 99)
Copilot (AI) · Jan 22, 2026
The result of SampleUserNegatives is not being used. This call generates negative samples but the return value is discarded. If negative samples are needed for evaluation, the result should be stored. If this is intentionally pre-populating negatives for later use, consider adding a comment to clarify the intent. Note that both EvaluateLLM and SplitCTRDataset call SampleUserNegatives separately with train as the excludeSet, which may be the intended usage pattern.
In dataset/dataset.go:

    testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
    testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
    testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
Copilot (AI) · Jan 22, 2026
The order of operations has been changed here. Lines 291 and 293 should be in the same order as lines 272-273, where userFeedback is updated before itemFeedback. The current order (timestamps, itemFeedback, userFeedback) is inconsistent with the pattern elsewhere in this function and could lead to confusion. Consider reordering to maintain consistency: userFeedback first, then itemFeedback, then timestamps.
Suggested change:

    - testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
    - testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
    - testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
    + testSet.userFeedback[userIndex] = append(testSet.userFeedback[userIndex], d.userFeedback[userIndex][k])
    + testSet.itemFeedback[d.userFeedback[userIndex][k]] = append(testSet.itemFeedback[d.userFeedback[userIndex][k]], userIndex)
    + testSet.timestamps[userIndex] = append(testSet.timestamps[userIndex], d.timestamps[userIndex][k])
No description provided.