
Conversation

@KATO-Hiro (Collaborator) commented on Mar 22, 2025

close #1856

Summary by CodeRabbit

  • Tests
    • Introduced a new test suite for validating contest table providers, ensuring accurate filtering, metadata, and round label formatting.
  • Added functionality to create default task result objects and generate multiple instances for various contests, enhancing data integrity in tests.

@coderabbitai (bot) commented on Mar 22, 2025

Walkthrough

This pull request introduces a new test suite for contest table providers using the Vitest framework. The tests validate three providers—abcLatest20Rounds, abc319Onwards, and fromAbc212ToAbc318—by checking filtering logic, table structure, metadata, and round label correctness with mock implementations. Additionally, a helper module is added, containing a default task result and a function to generate task results with a specific table index for contests ranging from abc212 to abc397.
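For orientation, a minimal sketch of the shape the new suite takes; the provider API (filter, contest_id fields) follows the diffs quoted below, while the import paths are assumptions about the repository layout.

import { describe, expect, test } from 'vitest';

// Import paths are assumptions; adjust to the actual repository layout.
import { contestTableProviders } from '$lib/utils/contest_table_provider';
import { taskResultsForContestTableProvider as mockTaskResults } from './test_cases/contest_table_provider';

describe('contest table providers', () => {
  test('abcLatest20Rounds keeps only ABC contests', () => {
    const filtered = contestTableProviders.abcLatest20Rounds.filter(mockTaskResults);

    // Every surviving task result should belong to an ABC round.
    expect(filtered.every((task) => task.contest_id.startsWith('abc'))).toBeTruthy();
  });
});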

Changes

  • src/.../contest_table_provider.test.ts: Added a new test suite using Vitest to validate contest table providers (abcLatest20Rounds, abc319Onwards, fromAbc212ToAbc318) with tests for filtering, metadata, and labels.
  • src/.../test_cases/contest_table_provider.ts: Introduced a helper module defining a default task result and the function createTaskResultWithTaskTableIndex; exports an array of task results for contests abc212 to abc397.

Assessment against linked issues

  • Test contest table helper functions (#1856): Addressed ✅

Poem

Oh, I hopped through code with delight,
Crafting tests that sparkle in the light.
Helpers now jump to a brand-new beat,
In contests and tables, everything's neat!
With whiskers twitching, my joy overflows—code magic, rabbit-style! 🐰💻


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 095b393 and 6fccadb.

📒 Files selected for processing (1)
  • src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🔇 Additional comments (7)
src/test/lib/utils/test_cases/contest_table_provider.ts (7)

1-31: Well-structured task result initialization

The default task result is well-defined with clear documentation. Setting the updated_at field to Unix epoch is a good choice for a default timestamp that's easily identifiable in tests.


33-40: Verify constant naming for clarity

There's a slight discrepancy in the comment for the TRYING constant: the comment says "a challenge is underway", but the value is "wa", which typically means "wrong answer". If this is intentional based on your application's terminology, it's fine; otherwise, consider renaming for clarity.


42-64: Well-implemented task result creation function

The createTaskResultWithTaskTableIndex function is well-documented and correctly implements the creation of task results with appropriate properties and derived values (like the is_ac flag).
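Based on the usage quoted in this conversation, the helper presumably looks something like the following sketch; the exact signature, and whether 'ac_with_editorial' counts as accepted, are assumptions.

import type { TaskResult } from '$lib/types/task_results'; // path is an assumption

// defaultTaskResult is the template object described above (lines 1-31).
const createTaskResultWithTaskTableIndex = (
  contestId: string,
  taskId: string,
  taskTableIndex: string,
  statusName: string,
): TaskResult => ({
  ...defaultTaskResult,
  contest_id: contestId,
  task_id: taskId,
  task_table_index: taskTableIndex,
  status_name: statusName,
  // Derive the is_ac flag from the submission status (assumed rule).
  is_ac: statusName === 'ac' || statusName === 'ac_with_editorial',
});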


66-90: Good task generation utility

The createContestTasks function provides a clean abstraction for generating multiple task results for a contest, with appropriate task IDs derived from the contest ID and task table index.
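As a rough sketch of that abstraction, with the config shape borrowed from the refactor suggestion quoted later in this conversation:

// createTaskResultWithTaskTableIndex is the module's own helper (see above).
const createContestTasks = (
  contestId: string,
  taskConfigs: Array<{ index: string; status: string }>,
) =>
  taskConfigs.map(({ index, status }) =>
    // Derive the task ID from the contest ID and table index, e.g. 'abc212_a'.
    createTaskResultWithTaskTableIndex(contestId, `${contestId}_${index.toLowerCase()}`, index, status),
  );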


92-126: Clean test data implementation

The test data creation is well-organized with clear comments explaining the contest structures. The mix of different submission statuses will enable thorough testing of various filtering and display scenarios.


141-161: Excellent utility for generating sequential test data

The createContestsRange function is a great addition that reduces code duplication when creating many similar contest results. The alternating status pattern provides good test coverage across different submission states.
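A sketch of what such a range helper could look like, assuming sequential contest IDs and a rotation through the four status constants; the rotation order is an assumption.

// AC, AC_WITH_EDITORIAL, TRYING, PENDING and createTaskResultWithTaskTableIndex
// are the module's own definitions quoted elsewhere in this conversation.
function createContestsRange(startContest: number, count: number, taskIndex: string) {
  const statuses = [AC, AC_WITH_EDITORIAL, TRYING, PENDING];

  return Array.from({ length: count }, (_, i) => {
    const contestId = `abc${startContest + i}`;
    const taskId = `${contestId}_${taskIndex.toLowerCase()}`;

    // Rotate through the statuses so the range covers every submission state.
    return createTaskResultWithTaskTableIndex(contestId, taskId, taskIndex, statuses[i % statuses.length]);
  });
}

// For example, the G-task results for abc376 through abc397:
const latestGTasks = createContestsRange(376, 22, 'G');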


188-231: Well-organized exported test data

The exported array includes all necessary test cases in a structured manner. This comprehensive dataset will enable thorough testing of contest table providers with various configurations.



@coderabbitai (bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (7)
src/test/lib/utils/contest_table_provider.test.ts (5)

46-54: Consider refining the test assertion logic for limiting results

The test verifies that the number of unique contests is less than or equal to 20, but doesn't confirm it's exactly 20 (which would be expected for a "latest 20 rounds" provider). If the provider isn't correctly limiting results, this test could still pass with fewer rounds.

-expect(uniqueContests.size).toBeLessThanOrEqual(20);
+expect(uniqueContests.size).toBe(20);

If there aren't exactly 20 unique contests in your test data, consider either:

  1. Asserting that it equals the expected count based on your test data
  2. Ensuring your test data contains at least 20 unique contests

90-97: Consider extracting the contest round parsing logic into a helper function

The code that extracts and parses the round number is duplicated in multiple test cases (here and in the test for the fromAbc212ToAbc318 provider). This is a good opportunity to apply the DRY principle.

+  const getContestRound = (contestId: string): number => {
+    return parseInt(contestId.replace('abc', ''), 10);
+  };
+
   expect(filtered.every((task) => task.contest_id.startsWith('abc'))).toBeTruthy();
   expect(
     filtered.every((task) => {
-      const round = parseInt(task.contest_id.replace('abc', ''), 10);
+      const round = getContestRound(task.contest_id);
       return round >= 319 && round <= 999;
     }),
   ).toBeTruthy();

123-126: Reuse the same contest round parsing logic here as well

The same logic for extracting and parsing the contest round number is used here, which could benefit from the helper function suggested above.

     filtered.every((task) => {
-      const round = parseInt(task.contest_id.replace('abc', ''), 10);
+      const round = getContestRound(task.contest_id);
       return round >= 212 && round <= 318;
     }),

148-164: Use test data helpers for consistent testing

The test uses manually defined task results rather than leveraging the mockTaskResults from the imported test data helper. Using consistent test data across tests improves maintainability.

   test('expects to get contest round IDs correctly', () => {
     const provider = contestTableProviders.abcLatest20Rounds;
-    const filtered = [
-      { contest_id: 'abc397', task_id: 'a', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'a', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'b', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'c', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'e', status_name: 'ac_with_editorial' },
-      { contest_id: 'abc319', task_id: 'f', status_name: 'wa' },
-      { contest_id: 'abc319', task_id: 'g', status_name: 'ns' },
-      { contest_id: 'abc318', task_id: 'a', status_name: 'ac' },
-    ] as TaskResults;
+    // Use a subset of the mock data that covers the relevant contest IDs
+    const filtered = mockTaskResults.filter(task => 
+      ['abc397', 'abc319', 'abc318'].includes(task.contest_id)
+    );

     const roundIds = provider.getContestRoundIds(filtered);

     expect(roundIds).toEqual(['abc397', 'abc319', 'abc318']);
   });

166-186: Similarly, use test data helpers here for consistency

The same observation applies to the test for getting header IDs: it uses manually defined task results rather than leveraging the existing test data.

   test('expects to get header IDs for tasks correctly', () => {
     const provider = contestTableProviders.abcLatest20Rounds;
-    const filtered = [
-      { contest_id: 'abc319', task_id: 'abc319_a', task_table_index: 'A', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'abc319_b', task_table_index: 'B', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'abc319_c', task_table_index: 'C', status_name: 'ac' },
-      { contest_id: 'abc319', task_id: 'abc319_d', task_table_index: 'D', status_name: 'ac' },
-      {
-        contest_id: 'abc319',
-        task_id: 'abc319_e',
-        task_table_index: 'E',
-        status_name: 'ac_with_editorial',
-      },
-      { contest_id: 'abc319', task_id: 'abc319_f', task_table_index: 'F', status_name: 'wa' },
-      { contest_id: 'abc319', task_id: 'abc319_g', task_table_index: 'G', status_name: 'ns' },
-    ] as TaskResults;
+    // Filter just abc319 tasks from mock data, or create a well-defined subset
+    const filtered = mockTaskResults.filter(task => 
+      task.contest_id === 'abc319'
+    );
+    
+    // If the filtered data doesn't contain all A-G tasks, you may need to use a more specific subset
+    // that guarantees all expected task table indices are present

     const headerIds = provider.getHeaderIdsForTask(filtered);

     expect(headerIds).toEqual(['A', 'B', 'C', 'D', 'E', 'F', 'G']);
   });
src/test/lib/utils/test_cases/contest_table_provider.ts (2)

68-94: Consider adding missing task indices in the test data

For ABC319, you've created tasks A, B, F, and G, but C, D, and E are missing. However, in contest_table_provider.test.ts (lines 169-178), these tasks are referenced in the tests. For completeness and to avoid potential test failures, consider adding these missing task indices.

// ABC319 - : 7 tasks (A, B, C, D, E, F and G)
const abc319_a = createTaskResultWithTaskTableIndex('abc319', 'abc319_a', 'A', AC);
const abc319_b = createTaskResultWithTaskTableIndex('abc319', 'abc319_b', 'B', AC);
+const abc319_c = createTaskResultWithTaskTableIndex('abc319', 'abc319_c', 'C', AC);
+const abc319_d = createTaskResultWithTaskTableIndex('abc319', 'abc319_d', 'D', AC);
+const abc319_e = createTaskResultWithTaskTableIndex('abc319', 'abc319_e', 'E', AC_WITH_EDITORIAL);
const abc319_f = createTaskResultWithTaskTableIndex('abc319', 'abc319_f', 'F', TRYING);
const abc319_g = createTaskResultWithTaskTableIndex('abc319', 'abc319_g', 'G', PENDING);

Also update the export at the end of the file to include these new constants:

export const taskResultsForContestTableProvider: TaskResults = [
  // ...existing entries
  abc319_a,
  abc319_b,
+ abc319_c,
+ abc319_d,
+ abc319_e,
  abc319_f,
  abc319_g,
  // ...remaining entries
];

51-67: Consider adding comments to clarify the test data organization

While you have good comments for the ABC ranges, adding a brief explanation about what each variable represents in the test scenario would improve readability for future developers.

// ABC212 - ABC232: 8 tasks (A, B, C, D, E, F, G and H)
+// Mix of different submission statuses to test various filtering and display scenarios
const abc212_a = createTaskResultWithTaskTableIndex('abc212', 'abc212_a', 'A', AC);
const abc212_b = createTaskResultWithTaskTableIndex('abc212', 'abc212_b', 'B', AC);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc548de and e1a3739.

📒 Files selected for processing (2)
  • src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
  • src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
  • taskResultsForContestTableProvider (96-136)
🔇 Additional comments (6)
src/test/lib/utils/contest_table_provider.test.ts (2)

1-3: LGTM: Good use of test import structure

The test imports are well-organized, with Vitest imports separate from application code imports.


9-32: Well-structured mocks for isolated testing

Your mock implementations are focused and appropriate for testing only the necessary functionality. The mocking approach effectively isolates the providers from their dependencies.

src/test/lib/utils/test_cases/contest_table_provider.ts (4)

3-20: Good job with the default task result setup

The default task result is well-structured with clear comments explaining design decisions. Using the Unix epoch for updated_at and documenting that most fields are empty strings because they're not relevant for testing shows good testing practices.


22-25: LGTM: Good use of constants for status names

Using constants instead of magic strings improves code readability and maintainability.


27-49: Well-documented helper function

The createTaskResultWithTaskTableIndex function is well-documented with JSDoc comments that clearly explain parameters and purpose. The implementation correctly uses the spread operator to create a new object from the default and properly derives the is_ac field.


96-136: Well-organized export of test data

The taskResultsForContestTableProvider export is well-structured and provides a comprehensive set of test data covering various contests and tasks. This approach makes it easy to maintain and extend the test data as needed.

@coderabbitai (bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (7)
src/test/lib/utils/contest_table_provider.test.ts (4)

36-38: Consider making the getContestRound function more robust.

The function assumes all contest IDs start with "abc" followed by a number. Consider adding validation for edge cases like non-numeric parts or different contest types to improve robustness.

const getContestRound = (contestId: string): number => {
-  return parseInt(contestId.replace('abc', ''), 10);
+  const roundStr = contestId.replace(/^\D+/, '');
+  const round = parseInt(roundStr, 10);
+  if (isNaN(round)) {
+    throw new Error(`Invalid contest ID format: ${contestId}`);
+  }
+  return round;
};

49-57: Enhance the test to verify the latest 20 rounds.

The test confirms there are 20 unique contests but doesn't verify they're the latest ones. Consider adding additional checks to ensure the filtering logic selects the most recent rounds.

test('expects to limit results to the latest 20 rounds', () => {
  const provider = contestTableProviders.abcLatest20Rounds;

  const largeDataset = [...mockTaskResults];
  const filtered = provider.filter(largeDataset);
  const uniqueContests = new Set(filtered.map((task) => task.contest_id));

  expect(uniqueContests.size).toBe(20);
+  
+  // Verify these are the latest 20 rounds
+  const contestRounds = Array.from(uniqueContests)
+    .map(id => getContestRound(id))
+    .sort((a, b) => b - a); // Sort in descending order
+  
+  // Check if the rounds are sequential and latest
+  const highestRound = Math.max(...contestRounds);
+  const expectedRounds = Array.from({ length: 20 }, (_, i) => highestRound - i);
+  expect(contestRounds).toEqual(expectedRounds);
});

59-69: Expand table structure testing for better coverage.

The test only checks one specific contest and task. Consider expanding it to verify multiple contests and their tasks for more comprehensive validation of the table structure.

test('expects to generate correct table structure', () => {
  const provider = contestTableProviders.abcLatest20Rounds;
  const filtered = provider.filter(mockTaskResults);
  const table = provider.generateTable(filtered);

  expect(table).toHaveProperty('abc378');
  expect(table.abc378).toHaveProperty('G');
  expect(table.abc378.G).toEqual(
    expect.objectContaining({ contest_id: 'abc378', task_id: 'abc378_g' }),
  );
+  
+  // Additional structure validations for multiple contests
+  expect(table).toHaveProperty('abc397');
+  expect(table.abc397).toHaveProperty('G');
+  
+  // Verify table shape for a contest with multiple tasks
+  expect(table).toHaveProperty('abc319');
+  expect(table.abc319).toHaveProperty('A');
+  expect(table.abc319).toHaveProperty('B');
+  expect(table.abc319).toHaveProperty('C');
+  
+  // Verify correct task mapping
+  expect(table.abc319.A).toEqual(
+    expect.objectContaining({ contest_id: 'abc319', task_id: 'abc319_a' }),
+  );
});

152-161: Use dynamic contest identification for more resilient tests.

The test filters for very specific contest IDs which could make it fragile if test data changes. Consider using a more dynamic approach based on contest properties rather than hardcoded values.

test('expects to get contest round IDs correctly', () => {
  const provider = contestTableProviders.abcLatest20Rounds;
-  // Use a subset of the mock data that covers the relevant contest IDs
-  const filtered = mockTaskResults.filter((task) =>
-    ['abc397', 'abc319', 'abc318'].includes(task.contest_id),
-  );
+  // Use a more dynamic approach: select tasks from different ranges
+  const filtered = mockTaskResults.filter((task) => {
+    const round = getContestRound(task.contest_id);
+    // Select one from each range: latest, middle, and early
+    return round >= 390 || // Latest
+           (round >= 315 && round <= 320) || // Middle
+           (round >= 212 && round <= 215); // Early
+  });

  const roundIds = provider.getContestRoundIds(filtered);

-  expect(roundIds).toEqual(['abc397', 'abc319', 'abc318']);
+  // Verify we have at least one ID from each range
+  const highRange = roundIds.some(id => getContestRound(id) >= 390);
+  const midRange = roundIds.some(id => {
+    const round = getContestRound(id);
+    return round >= 315 && round <= 320;
+  });
+  const lowRange = roundIds.some(id => {
+    const round = getContestRound(id);
+    return round >= 212 && round <= 215;
+  });
+  
+  expect(highRange).toBeTruthy();
+  expect(midRange).toBeTruthy();
+  expect(lowRange).toBeTruthy();
});
src/test/lib/utils/test_cases/contest_table_provider.ts (3)

7-20: Consider adding JSDoc comments to the defaultTaskResult.

Adding JSDoc comments to the defaultTaskResult object would provide better documentation about the purpose of each field, especially for those initialized as empty strings.

/**
+ * Default task result object used as a template for test data.
+ * Most fields are initialized as empty strings as they're not relevant for these tests.
+ * @type {TaskResult}
+ */
const defaultTaskResult: TaskResult = {
  is_ac: false,
  user_id: '',
  status_name: '',
  status_id: '',
  submission_status_image_path: '',
  submission_status_label_name: '',
  contest_id: '',
  task_table_index: '',
  task_id: '',
  title: '',
  grade: '',
  updated_at: new Date(0), // Use the Unix epoch as the default value.
};

22-25: Add JSDoc comments to status constants for better documentation.

Adding JSDoc comments to the status constants would help clarify their purpose and usage.

+/** Represents a fully accepted submission status */
const AC = 'ac';
+/** Represents a submission that was accepted with reference to the editorial */
const AC_WITH_EDITORIAL = 'ac_with_editorial';
+/** Represents a submission with wrong answer (in progress) */
const TRYING = 'wa';
+/** Represents a not-submitted status */
const PENDING = 'ns';

51-98: Refactor repetitive task creation for better maintainability.

The current approach creates many individual task results with repetitive code. Consider using loops or array methods to reduce repetition and improve maintainability.

// ABC212 - ABC232: 8 tasks (A, B, C, D, E, F, G and H)
-// // Mix of different submission statuses to test various filtering and display scenarios.
+// Mix of different submission statuses to test various filtering and display scenarios.
-const abc212_a = createTaskResultWithTaskTableIndex('abc212', 'abc212_a', 'A', AC);
-const abc212_b = createTaskResultWithTaskTableIndex('abc212', 'abc212_b', 'B', AC);
-const abc212_f = createTaskResultWithTaskTableIndex('abc212', 'abc212_f', 'F', AC_WITH_EDITORIAL);
-const abc212_g = createTaskResultWithTaskTableIndex('abc212', 'abc212_g', 'G', TRYING);
-const abc212_h = createTaskResultWithTaskTableIndex('abc212', 'abc212_h', 'H', PENDING);
+// Define a structure for contest tasks
+const createContestTasks = (
+  contestId: string,
+  taskConfigs: Array<{ index: string; status: string }>
+) => {
+  return taskConfigs.map(config => {
+    const taskId = `${contestId}_${config.index.toLowerCase()}`;
+    return createTaskResultWithTaskTableIndex(
+      contestId,
+      taskId,
+      config.index,
+      config.status
+    );
+  });
+};
+
+// ABC212 tasks
+const abc212Tasks = createContestTasks('abc212', [
+  { index: 'A', status: AC },
+  { index: 'B', status: AC },
+  { index: 'F', status: AC_WITH_EDITORIAL },
+  { index: 'G', status: TRYING },
+  { index: 'H', status: PENDING }
+]);
+
+const [abc212_a, abc212_b, abc212_f, abc212_g, abc212_h] = abc212Tasks;

// Additional tasks can be created similarly
const abc213_h = createTaskResultWithTaskTableIndex('abc213', 'abc213_h', 'H', PENDING);
const abc232_h = createTaskResultWithTaskTableIndex('abc232', 'abc232_h', 'H', TRYING);

// ...

// For the many similar abc3XX_g entries, you could use a loop:
+const highContestTasks = Array.from({ length: 22 }, (_, i) => {
+  const contestNum = 376 + i;
+  const contestId = `abc${contestNum}`;
+  const taskId = `${contestId}_g`;
+  // Alternating statuses for variety
+  let status;
+  if (i % 4 === 0) status = AC;
+  else if (i % 4 === 1) status = AC_WITH_EDITORIAL;
+  else if (i % 4 === 2) status = TRYING;
+  else status = PENDING;
+  
+  return createTaskResultWithTaskTableIndex(contestId, taskId, 'G', status);
+});
+
+// Extract individual variables if needed
+const [
+  abc376_g, abc377_g, abc378_g, /* ... and so on */
+] = highContestTasks;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e1a3739 and 511219e.

📒 Files selected for processing (2)
  • src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
  • src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
  • taskResultsForContestTableProvider (100-143)
🔇 Additional comments (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)

100-143: LGTM! Well-structured export of test data.

The array of task results is well-structured and provides a comprehensive set of test cases for the contest table providers.

@coderabbitai (bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
src/test/lib/utils/contest_table_provider.test.ts (1)

37-46: Consider moving helper function to shared utilities.

The getContestRound function extracts and parses the numeric part of a contest ID, which seems like functionality that might be useful elsewhere in the codebase.

Consider extracting this to a shared utility file to promote reuse, especially if similar contest ID parsing is needed in other parts of the application.

src/test/lib/utils/test_cases/contest_table_provider.ts (2)

1-26: Well-documented default task result.

The default task result object and its documentation clearly explain its purpose as a template for test data. Consider making this object readonly to prevent accidental modifications.

-const defaultTaskResult: TaskResult = {
+const defaultTaskResult: Readonly<TaskResult> = {

123-165: Consider enhancing readability of large array creation.

The array creation for contests abc376 to abc397 is functional but could be more readable with a dedicated function name.

Consider extracting this to a named function like createContestsRange to make the intention clearer:

-const [
-  abc376_g,
-  abc377_g,
-  ...
-] = Array.from({ length: 22 }, (_, i) => {
+function createContestsRange(startContest: number, count: number, taskIndex: string) {
+  return Array.from({ length: count }, (_, i) => {
-    const contestNum = 376 + i;
+    const contestNum = startContest + i;
     // ... rest of the implementation
+  });
+}
+
+const [abc376_g, abc377_g, /* ... */] = createContestsRange(376, 22, 'G');
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 511219e and 095b393.

📒 Files selected for processing (2)
  • src/test/lib/utils/contest_table_provider.test.ts (1 hunks)
  • src/test/lib/utils/test_cases/contest_table_provider.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
src/test/lib/utils/contest_table_provider.test.ts (1)
src/test/lib/utils/test_cases/contest_table_provider.ts (1)
  • taskResultsForContestTableProvider (166-209)
🔇 Additional comments (14)
src/test/lib/utils/contest_table_provider.test.ts (8)

1-7: Good setup with necessary imports.

The imports are well organized, getting test utilities from Vitest, application types, and the contest table providers to be tested. The import of test case data from a separate file is a good practice for test maintainability.


9-33: Well-structured mocks for isolated testing.

The mock implementations effectively isolate the tests from external dependencies. The mock for classifyContest handles ABC contests specifically while providing a default for others, and getTaskTableHeaderName returns the task's table index, which is sufficient for these tests.
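A hedged sketch of what those mocks might look like in Vitest; the mocked module paths and the exact signatures of classifyContest and getTaskTableHeaderName are assumptions inferred from this description.

import { vi } from 'vitest';

// Module paths and return values below are assumptions.
vi.mock('$lib/utils/contest', () => ({
  // Classify ABC rounds explicitly and fall back to a default for the rest.
  classifyContest: vi.fn((contestId: string) => (contestId.startsWith('abc') ? 'ABC' : 'OTHERS')),
}));

vi.mock('$lib/utils/task', () => ({
  // Returning the task's own table index is sufficient for these tests.
  getTaskTableHeaderName: vi.fn((task: { task_table_index: string }) => task.task_table_index),
}));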


48-74: Good test coverage for filtering and limits.

The tests for the abcLatest20Rounds provider thoroughly validate both the filtering of ABC contests and the limit to exactly 20 most recent rounds. The approach to verify that the selected rounds are the latest ones is particularly thorough.


76-92: Comprehensive table structure validation.

The test effectively validates that the generated table structure contains the expected contest entries and that the task data is correctly mapped to the appropriate table cells.


94-108: Good metadata and label formatting tests.

The tests for metadata and contest round label formatting are concise and verify all aspects of the provider's metadata, including title, button label, and ARIA label for accessibility.


111-140: Complete test coverage for abc319Onwards provider.

The tests for the abc319Onwards provider properly validate its filtering logic (contests ≥ 319), metadata values, and label formatting. This ensures the provider behaves as expected across all its functionality.


142-171: Thorough validation for fromAbc212ToAbc318 provider.

Similar to the other providers, these tests comprehensively validate the filtering range (212-318), metadata, and label formatting. The consistency in test structure across providers is commendable.


173-193: Good coverage of common functionality.

Testing the common provider functionality separately ensures that these shared methods work consistently across all provider implementations. The tests for contest round IDs and header IDs are clear and focused.

src/test/lib/utils/test_cases/contest_table_provider.ts (6)

28-35: Clear constants for submission statuses.

Using named constants instead of magic strings for the submission statuses improves code readability and maintainability. The comments describing each status are helpful.


37-59: Well-implemented task result creation function.

The createTaskResultWithTaskTableIndex function is well-documented with JSDoc comments and efficiently creates task results by extending the default template. The logic for setting is_ac based on status is clear.


61-85: Good factory function for creating contest tasks.

The createContestTasks function efficiently generates task results for a given contest based on the provided configurations. The code is clean and the JSDoc comments are thorough.


87-108: Well-organized test data creation.

The test data creation for contests ABC212 through ABC318 is well-structured with descriptive comments explaining the task structure for different contest ranges.


109-121: Clear test data for ABC319 onwards.

The creation of test data for ABC319 with all task indices (A through G) provides good coverage for testing the filtering logic of the providers.


166-209: Comprehensive test data export.

The exported array includes a good variety of task results covering different contests, task indices, and submission statuses, which provides thorough test coverage for the contest table providers.

@KATO-Hiro (Collaborator, Author) left a comment

LGTM

@KATO-Hiro KATO-Hiro merged commit 61a63c4 into staging Mar 22, 2025
3 checks passed
@KATO-Hiro KATO-Hiro deleted the #1856 branch March 22, 2025 13:26


Development

Successfully merging this pull request may close these issues.

Test the contest table helper functions (#1856)

2 participants