🚨 Add tests for contest table providers (#1856) #1857
Conversation
Walkthrough

This pull request introduces a new test suite for contest table providers using the Vitest framework. The tests validate three providers (abcLatest20Rounds, abc319Onwards, and fromAbc212ToAbc318) by checking filtering logic, table structure, metadata, and round label correctness with mock implementations. Additionally, a helper module is added, containing a default task result and a function to generate task results with a specific table index for contests ranging from abc212 to abc397.
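To make the described behavior concrete, here is a minimal stand-alone sketch of what a "latest 20 ABC rounds" filter could do. This is illustrative only, not the project's actual implementation: `TaskResultLite` is a simplified stand-in for the project's `TaskResults` type, and `filterLatest20AbcRounds` is a hypothetical name.

```typescript
// Simplified stand-in for the project's TaskResult type (illustrative only).
type TaskResultLite = { contest_id: string; task_id: string };

// Sketch of a "latest 20 ABC rounds" filter: keep only ABC tasks,
// then restrict them to the 20 highest round numbers.
function filterLatest20AbcRounds(tasks: TaskResultLite[]): TaskResultLite[] {
  const abcTasks = tasks.filter((t) => /^abc\d+$/.test(t.contest_id));
  const latestRounds = Array.from(new Set(abcTasks.map((t) => t.contest_id)))
    .sort() // lexicographic order works while round numbers share a digit width
    .slice(-20);
  const keep = new Set(latestRounds);
  return abcTasks.filter((t) => keep.has(t.contest_id));
}
```

Given 25 contests abc300 through abc324, this keeps only tasks from abc305 through abc324.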
Actionable comments posted: 0
🧹 Nitpick comments (7)
src/test/lib/utils/contest_table_provider.test.ts (5)
46-54: Consider refining the test assertion logic for limiting results

The test verifies that the number of unique contests is less than or equal to 20, but doesn't confirm it's exactly 20 (which would be expected for a "latest 20 rounds" provider). If the provider isn't correctly limiting results, this test could still pass with fewer rounds.

```diff
-expect(uniqueContests.size).toBeLessThanOrEqual(20);
+expect(uniqueContests.size).toBe(20);
```

If there aren't exactly 20 unique contests in your test data, consider either:
- Asserting that it equals the expected count based on your test data
- Ensuring your test data contains at least 20 unique contests
90-97: Consider extracting the contest round parsing logic into a helper function

The code that extracts and parses the round number is duplicated in multiple test cases (here and in the test for the fromAbc212ToAbc318 provider). This is a good opportunity to apply the DRY principle.

```diff
+const getContestRound = (contestId: string): number => {
+  return parseInt(contestId.replace('abc', ''), 10);
+};
+
 expect(filtered.every((task) => task.contest_id.startsWith('abc'))).toBeTruthy();
 expect(
   filtered.every((task) => {
-    const round = parseInt(task.contest_id.replace('abc', ''), 10);
+    const round = getContestRound(task.contest_id);
     return round >= 319 && round <= 999;
   }),
 ).toBeTruthy();
```
123-126: Reuse the same contest round parsing logic here as well

The same logic for extracting and parsing the contest round number is used here, which could benefit from the helper function suggested above.

```diff
 filtered.every((task) => {
-  const round = parseInt(task.contest_id.replace('abc', ''), 10);
+  const round = getContestRound(task.contest_id);
   return round >= 212 && round <= 318;
 }),
```
148-164: Use test data helpers for consistent testing

The test uses manually defined task results rather than leveraging the mockTaskResults from the imported test data helper. Using consistent test data across tests improves maintainability.

```diff
 test('expects to get contest round IDs correctly', () => {
   const provider = contestTableProviders.abcLatest20Rounds;
-  const filtered = [
-    { contest_id: 'abc397', task_id: 'a', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'a', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'b', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'c', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'e', status_name: 'ac_with_editorial' },
-    { contest_id: 'abc319', task_id: 'f', status_name: 'wa' },
-    { contest_id: 'abc319', task_id: 'g', status_name: 'ns' },
-    { contest_id: 'abc318', task_id: 'a', status_name: 'ac' },
-  ] as TaskResults;
+  // Use a subset of the mock data that covers the relevant contest IDs
+  const filtered = mockTaskResults.filter((task) =>
+    ['abc397', 'abc319', 'abc318'].includes(task.contest_id),
+  );

   const roundIds = provider.getContestRoundIds(filtered);

   expect(roundIds).toEqual(['abc397', 'abc319', 'abc318']);
 });
```
166-186: Similarly, use test data helpers here for consistency

The same observation applies to the test for getting header IDs: it uses manually defined task results rather than leveraging the existing test data.

```diff
 test('expects to get header IDs for tasks correctly', () => {
   const provider = contestTableProviders.abcLatest20Rounds;
-  const filtered = [
-    { contest_id: 'abc319', task_id: 'abc319_a', task_table_index: 'A', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'abc319_b', task_table_index: 'B', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'abc319_c', task_table_index: 'C', status_name: 'ac' },
-    { contest_id: 'abc319', task_id: 'abc319_d', task_table_index: 'D', status_name: 'ac' },
-    {
-      contest_id: 'abc319',
-      task_id: 'abc319_e',
-      task_table_index: 'E',
-      status_name: 'ac_with_editorial',
-    },
-    { contest_id: 'abc319', task_id: 'abc319_f', task_table_index: 'F', status_name: 'wa' },
-    { contest_id: 'abc319', task_id: 'abc319_g', task_table_index: 'G', status_name: 'ns' },
-  ] as TaskResults;
+  // Filter just abc319 tasks from mock data, or create a well-defined subset
+  const filtered = mockTaskResults.filter((task) => task.contest_id === 'abc319');
+
+  // If the filtered data doesn't contain all A-G tasks, you may need to use a more specific
+  // subset that guarantees all expected task table indices are present

   const headerIds = provider.getHeaderIdsForTask(filtered);

   expect(headerIds).toEqual(['A', 'B', 'C', 'D', 'E', 'F', 'G']);
 });
```

src/test/lib/utils/test_cases/contest_table_provider.ts (2)
68-94: Consider adding missing task indices in the test data

For ABC319, you've created tasks A, B, F, and G, but C, D, and E are missing. However, in contest_table_provider.test.ts (lines 169-178), these tasks are referenced in the tests. For completeness and to avoid potential test failures, consider adding these missing task indices.

```diff
 // ABC319 - : 7 tasks (A, B, C, D, E, F and G)
 const abc319_a = createTaskResultWithTaskTableIndex('abc319', 'abc319_a', 'A', AC);
 const abc319_b = createTaskResultWithTaskTableIndex('abc319', 'abc319_b', 'B', AC);
+const abc319_c = createTaskResultWithTaskTableIndex('abc319', 'abc319_c', 'C', AC);
+const abc319_d = createTaskResultWithTaskTableIndex('abc319', 'abc319_d', 'D', AC);
+const abc319_e = createTaskResultWithTaskTableIndex('abc319', 'abc319_e', 'E', AC_WITH_EDITORIAL);
 const abc319_f = createTaskResultWithTaskTableIndex('abc319', 'abc319_f', 'F', TRYING);
 const abc319_g = createTaskResultWithTaskTableIndex('abc319', 'abc319_g', 'G', PENDING);
```

Also update the export at the end of the file to include these new constants:

```diff
 export const taskResultsForContestTableProvider: TaskResults = [
   // ...existing entries
   abc319_a,
   abc319_b,
+  abc319_c,
+  abc319_d,
+  abc319_e,
   abc319_f,
   abc319_g,
   // ...remaining entries
 ];
```
51-67: Consider adding comments to clarify the test data organization

While you have good comments for the ABC ranges, adding a brief explanation about what each variable represents in the test scenario would improve readability for future developers.

```diff
 // ABC212 - ABC232: 8 tasks (A, B, C, D, E, F, G and H)
+// Mix of different submission statuses to test various filtering and display scenarios
 const abc212_a = createTaskResultWithTaskTableIndex('abc212', 'abc212_a', 'A', AC);
 const abc212_b = createTaskResultWithTaskTableIndex('abc212', 'abc212_b', 'B', AC);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
taskResultsForContestTableProvider (96-136)
🔇 Additional comments (6)
src/test/lib/utils/contest_table_provider.test.ts (2)
1-3: LGTM: Good use of test import structure

The test imports are well-organized, with Vitest imports separate from application code imports.
9-32: Well-structured mocks for isolated testing

Your mock implementations are focused and appropriate for testing only the necessary functionality. The mocking approach effectively isolates the providers from their dependencies.
src/test/lib/utils/test_cases/contest_table_provider.ts (4)
3-20: Good job with the default task result setup

The default task result is well-structured with clear comments explaining design decisions. Using the Unix epoch for updated_at and documenting that most fields are empty strings because they're not relevant for testing shows good testing practices.
22-25: LGTM: Good use of constants for status names

Using constants instead of magic strings improves code readability and maintainability.
27-49: Well-documented helper function

The createTaskResultWithTaskTableIndex function is well-documented with JSDoc comments that clearly explain parameters and purpose. The implementation correctly uses the spread operator to create a new object from the default and properly derives the is_ac field.
96-136: Well-organized export of test data

The taskResultsForContestTableProvider export is well-structured and provides a comprehensive set of test data covering various contests and tasks. This approach makes it easy to maintain and extend the test data as needed.
Actionable comments posted: 0
🧹 Nitpick comments (7)
src/test/lib/utils/contest_table_provider.test.ts (4)
36-38: Consider making the getContestRound function more robust

The function assumes all contest IDs start with "abc" followed by a number. Consider adding validation for edge cases like non-numeric parts or different contest types to improve robustness.

```diff
 const getContestRound = (contestId: string): number => {
-  return parseInt(contestId.replace('abc', ''), 10);
+  const roundStr = contestId.replace(/^\D+/, '');
+  const round = parseInt(roundStr, 10);
+  if (isNaN(round)) {
+    throw new Error(`Invalid contest ID format: ${contestId}`);
+  }
+  return round;
 };
```
49-57: Enhance the test to verify the latest 20 rounds

The test confirms there are 20 unique contests but doesn't verify they're the latest ones. Consider adding additional checks to ensure the filtering logic selects the most recent rounds.

```diff
 test('expects to limit results to the latest 20 rounds', () => {
   const provider = contestTableProviders.abcLatest20Rounds;
   const largeDataset = [...mockTaskResults];
   const filtered = provider.filter(largeDataset);
   const uniqueContests = new Set(filtered.map((task) => task.contest_id));
   expect(uniqueContests.size).toBe(20);
+
+  // Verify these are the latest 20 rounds
+  const contestRounds = Array.from(uniqueContests)
+    .map((id) => getContestRound(id))
+    .sort((a, b) => b - a); // Sort in descending order
+
+  // Check if the rounds are sequential and latest
+  const highestRound = Math.max(...contestRounds);
+  const expectedRounds = Array.from({ length: 20 }, (_, i) => highestRound - i);
+  expect(contestRounds).toEqual(expectedRounds);
 });
```
59-69: Expand table structure testing for better coverage

The test only checks one specific contest and task. Consider expanding it to verify multiple contests and their tasks for more comprehensive validation of the table structure.

```diff
 test('expects to generate correct table structure', () => {
   const provider = contestTableProviders.abcLatest20Rounds;
   const filtered = provider.filter(mockTaskResults);
   const table = provider.generateTable(filtered);

   expect(table).toHaveProperty('abc378');
   expect(table.abc378).toHaveProperty('G');
   expect(table.abc378.G).toEqual(
     expect.objectContaining({ contest_id: 'abc378', task_id: 'abc378_g' }),
   );
+
+  // Additional structure validations for multiple contests
+  expect(table).toHaveProperty('abc397');
+  expect(table.abc397).toHaveProperty('G');
+
+  // Verify table shape for a contest with multiple tasks
+  expect(table).toHaveProperty('abc319');
+  expect(table.abc319).toHaveProperty('A');
+  expect(table.abc319).toHaveProperty('B');
+  expect(table.abc319).toHaveProperty('C');
+
+  // Verify correct task mapping
+  expect(table.abc319.A).toEqual(
+    expect.objectContaining({ contest_id: 'abc319', task_id: 'abc319_a' }),
+  );
 });
```
152-161: Use dynamic contest identification for more resilient tests

The test filters for very specific contest IDs which could make it fragile if test data changes. Consider using a more dynamic approach based on contest properties rather than hardcoded values.

```diff
 test('expects to get contest round IDs correctly', () => {
   const provider = contestTableProviders.abcLatest20Rounds;
-  // Use a subset of the mock data that covers the relevant contest IDs
-  const filtered = mockTaskResults.filter((task) =>
-    ['abc397', 'abc319', 'abc318'].includes(task.contest_id),
-  );
+  // Use a more dynamic approach: select tasks from different ranges
+  const filtered = mockTaskResults.filter((task) => {
+    const round = getContestRound(task.contest_id);
+    // Select one from each range: latest, middle, and early
+    return (
+      round >= 390 || // Latest
+      (round >= 315 && round <= 320) || // Middle
+      (round >= 212 && round <= 215) // Early
+    );
+  });

   const roundIds = provider.getContestRoundIds(filtered);

-  expect(roundIds).toEqual(['abc397', 'abc319', 'abc318']);
+  // Verify we have at least one ID from each range
+  const highRange = roundIds.some((id) => getContestRound(id) >= 390);
+  const midRange = roundIds.some((id) => {
+    const round = getContestRound(id);
+    return round >= 315 && round <= 320;
+  });
+  const lowRange = roundIds.some((id) => {
+    const round = getContestRound(id);
+    return round >= 212 && round <= 215;
+  });
+
+  expect(highRange).toBeTruthy();
+  expect(midRange).toBeTruthy();
+  expect(lowRange).toBeTruthy();
 });
```

src/test/lib/utils/test_cases/contest_table_provider.ts (3)
7-20: Consider adding JSDoc comments to the defaultTaskResult

Adding JSDoc comments to the defaultTaskResult object would provide better documentation about the purpose of each field, especially for those initialized as empty strings.

```diff
+/**
+ * Default task result object used as a template for test data.
+ * Most fields are initialized as empty strings as they're not relevant for these tests.
+ * @type {TaskResult}
+ */
 const defaultTaskResult: TaskResult = {
   is_ac: false,
   user_id: '',
   status_name: '',
   status_id: '',
   submission_status_image_path: '',
   submission_status_label_name: '',
   contest_id: '',
   task_table_index: '',
   task_id: '',
   title: '',
   grade: '',
   updated_at: new Date(0), // Use the Unix epoch as the default value.
 };
```
22-25: Add JSDoc comments to status constants for better documentation

Adding JSDoc comments to the status constants would help clarify their purpose and usage.

```diff
+/** Represents a fully accepted submission status */
 const AC = 'ac';
+/** Represents a submission that was accepted with reference to the editorial */
 const AC_WITH_EDITORIAL = 'ac_with_editorial';
+/** Represents a submission with wrong answer (in progress) */
 const TRYING = 'wa';
+/** Represents a not-submitted status */
 const PENDING = 'ns';
```
51-98: Refactor repetitive task creation for better maintainability

The current approach creates many individual task results with repetitive code. Consider using loops or array methods to reduce repetition and improve maintainability.

```diff
 // ABC212 - ABC232: 8 tasks (A, B, C, D, E, F, G and H)
-// // Mix of different submission statuses to test various filtering and display scenarios.
+// Mix of different submission statuses to test various filtering and display scenarios.
-const abc212_a = createTaskResultWithTaskTableIndex('abc212', 'abc212_a', 'A', AC);
-const abc212_b = createTaskResultWithTaskTableIndex('abc212', 'abc212_b', 'B', AC);
-const abc212_f = createTaskResultWithTaskTableIndex('abc212', 'abc212_f', 'F', AC_WITH_EDITORIAL);
-const abc212_g = createTaskResultWithTaskTableIndex('abc212', 'abc212_g', 'G', TRYING);
-const abc212_h = createTaskResultWithTaskTableIndex('abc212', 'abc212_h', 'H', PENDING);
+// Define a structure for contest tasks
+const createContestTasks = (
+  contestId: string,
+  taskConfigs: Array<{ index: string; status: string }>,
+) => {
+  return taskConfigs.map((config) => {
+    const taskId = `${contestId}_${config.index.toLowerCase()}`;
+    return createTaskResultWithTaskTableIndex(contestId, taskId, config.index, config.status);
+  });
+};
+
+// ABC212 tasks
+const abc212Tasks = createContestTasks('abc212', [
+  { index: 'A', status: AC },
+  { index: 'B', status: AC },
+  { index: 'F', status: AC_WITH_EDITORIAL },
+  { index: 'G', status: TRYING },
+  { index: 'H', status: PENDING },
+]);
+
+const [abc212_a, abc212_b, abc212_f, abc212_g, abc212_h] = abc212Tasks;

 // Additional tasks can be created similarly
 const abc213_h = createTaskResultWithTaskTableIndex('abc213', 'abc213_h', 'H', PENDING);
 const abc232_h = createTaskResultWithTaskTableIndex('abc232', 'abc232_h', 'H', TRYING);
 // ...
```

For the many similar abc3XX_g entries, you could use a loop:

```diff
+const highContestTasks = Array.from({ length: 22 }, (_, i) => {
+  const contestNum = 376 + i;
+  const contestId = `abc${contestNum}`;
+  const taskId = `${contestId}_g`;
+  // Alternating statuses for variety
+  let status;
+  if (i % 4 === 0) status = AC;
+  else if (i % 4 === 1) status = AC_WITH_EDITORIAL;
+  else if (i % 4 === 2) status = TRYING;
+  else status = PENDING;
+
+  return createTaskResultWithTaskTableIndex(contestId, taskId, 'G', status);
+});
+
+// Extract individual variables if needed
+const [abc376_g, abc377_g, abc378_g /* ... and so on */] = highContestTasks;
```
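To show the factory pattern from this comment in isolation, here is a runnable sketch. The statuses and the result shape are simplified stand-ins for the project's helpers, and `createContestTasks` here builds plain objects rather than calling the real `createTaskResultWithTaskTableIndex`.

```typescript
// Simplified stand-ins for the project's status constants and result shape.
const AC = 'ac';
const AC_WITH_EDITORIAL = 'ac_with_editorial';

type TaskConfig = { index: string; status: string };
type TaskResultLite = {
  contest_id: string;
  task_id: string;
  task_table_index: string;
  status_name: string;
};

// Factory that builds one task result per configured index for a contest.
// The task_id is derived from the contest ID and the lowercased index.
function createContestTasks(contestId: string, taskConfigs: TaskConfig[]): TaskResultLite[] {
  return taskConfigs.map((config) => ({
    contest_id: contestId,
    task_id: `${contestId}_${config.index.toLowerCase()}`,
    task_table_index: config.index,
    status_name: config.status,
  }));
}
```

For instance, `createContestTasks('abc212', [{ index: 'A', status: AC }])` yields one task with `task_id` `'abc212_a'`.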
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
taskResultsForContestTableProvider (100-143)
🔇 Additional comments (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
100-143: LGTM! Well-structured export of test data

The array of task results is well-structured and provides a comprehensive set of test cases for the contest table providers.
Actionable comments posted: 0
🧹 Nitpick comments (3)
src/test/lib/utils/contest_table_provider.test.ts (1)
37-46: Consider moving helper function to shared utilities

The getContestRound function extracts and parses the numeric part of a contest ID, which seems like functionality that might be useful elsewhere in the codebase. Consider extracting this to a shared utility file to promote reuse, especially if similar contest ID parsing is needed in other parts of the application.
src/test/lib/utils/test_cases/contest_table_provider.ts (2)
1-26: Well-documented default task result

The default task result object and its documentation clearly explain its purpose as a template for test data. Consider making this object readonly to prevent accidental modifications.

```diff
-const defaultTaskResult: TaskResult = {
+const defaultTaskResult: Readonly<TaskResult> = {
```
123-165: Consider enhancing readability of large array creation

The array creation for contests abc376 to abc397 is functional but could be more readable with a dedicated function name. Consider extracting this to a named function like createContestsRange to make the intention clearer:

```diff
-const [
-  abc376_g,
-  abc377_g,
-  ...
-] = Array.from({ length: 22 }, (_, i) => {
+function createContestsRange(startContest: number, count: number, taskIndex: string) {
+  return Array.from({ length: count }, (_, i) => {
     const contestNum = 376 + i;
     // ... rest of the implementation
+  });
+}
+
+const [abc376_g, abc377_g, /* ... */] = createContestsRange(376, 22, 'G');
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
taskResultsForContestTableProvider (166-209)
🔇 Additional comments (14)
src/test/lib/utils/contest_table_provider.test.ts (8)
1-7: Good setup with necessary imports

The imports are well organized, getting test utilities from Vitest, application types, and the contest table providers to be tested. The import of test case data from a separate file is a good practice for test maintainability.
9-33: Well-structured mocks for isolated testing

The mock implementations effectively isolate the tests from external dependencies. The mock for classifyContest handles ABC contests specifically while providing a default for others, and getTaskTableHeaderName returns the task's table index, which is sufficient for these tests.
48-74: Good test coverage for filtering and limits

The tests for the abcLatest20Rounds provider thoroughly validate both the filtering of ABC contests and the limit to exactly 20 most recent rounds. The approach to verify that the selected rounds are the latest ones is particularly thorough.
76-92: Comprehensive table structure validation

The test effectively validates that the generated table structure contains the expected contest entries and that the task data is correctly mapped to the appropriate table cells.
94-108: Good metadata and label formatting tests

The tests for metadata and contest round label formatting are concise and verify all aspects of the provider's metadata, including title, button label, and ARIA label for accessibility.
111-140: Complete test coverage for abc319Onwards provider

The tests for the abc319Onwards provider properly validate its filtering logic (contests ≥ 319), metadata values, and label formatting. This ensures the provider behaves as expected across all its functionality.
142-171: Thorough validation for fromAbc212ToAbc318 provider

Similar to the other providers, these tests comprehensively validate the filtering range (212-318), metadata, and label formatting. The consistency in test structure across providers is commendable.
173-193: Good coverage of common functionality

Testing the common provider functionality separately ensures that these shared methods work consistently across all provider implementations. The tests for contest round IDs and header IDs are clear and focused.
src/test/lib/utils/test_cases/contest_table_provider.ts (6)
28-35: Clear constants for submission statuses

Using named constants instead of magic strings for the submission statuses improves code readability and maintainability. The comments describing each status are helpful.
37-59: Well-implemented task result creation function

The createTaskResultWithTaskTableIndex function is well-documented with JSDoc comments and efficiently creates task results by extending the default template. The logic for setting is_ac based on status is clear.
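A minimal stand-alone sketch of that spread-and-derive pattern follows. The full TaskResult shape is simplified here, and the exact is_ac rule (counting both plain and with-editorial acceptance) is an assumption based on the status constants described in this review, not confirmed project code.

```typescript
// Simplified default template; the real defaultTaskResult has more fields.
const defaultTaskResult = {
  is_ac: false,
  contest_id: '',
  task_id: '',
  task_table_index: '',
  status_name: '',
};

// Create a task result by spreading the default and deriving is_ac from the status.
function createTaskResultWithTaskTableIndex(
  contestId: string,
  taskId: string,
  taskTableIndex: string,
  statusName: string,
) {
  return {
    ...defaultTaskResult,
    contest_id: contestId,
    task_id: taskId,
    task_table_index: taskTableIndex,
    status_name: statusName,
    // Assumption: a task counts as accepted whether or not the editorial was consulted.
    is_ac: statusName === 'ac' || statusName === 'ac_with_editorial',
  };
}
```

Spreading the frozen template first and then overriding specific fields keeps each generated result independent, so mutating one test fixture cannot leak into another.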
61-85: Good factory function for creating contest tasks

The createContestTasks function efficiently generates task results for a given contest based on the provided configurations. The code is clean and the JSDoc comments are thorough.
87-108: Well-organized test data creation

The test data creation for contests ABC212 through ABC318 is well-structured with descriptive comments explaining the task structure for different contest ranges.
109-121: Clear test data for ABC319 onwards

The creation of test data for ABC319 with all task indices (A through G) provides good coverage for testing the filtering logic of the providers.
166-209: Comprehensive test data export

The exported array includes a good variety of task results covering different contests, task indices, and submission statuses, which provides thorough test coverage for the contest table providers.
KATO-Hiro left a comment
LGTM
close #1856