
Conversation

@F-park
Contributor

@F-park F-park commented Jan 19, 2024

New feature

Support adding pyright config directives to test_code, as shown below.

## End of test code ##
# pyright: analyzeUnannotatedFunctions=false
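
For context, a hypothetical challenge file using this layout might look like the sketch below. The function and test lines are made up for illustration; only the section markers and the directive format come from this PR.

# Hypothetical challenge file; function name and tests are illustrative only.
# TODO: Add type hints to `double` so it only accepts an `int`.
def double(x):
    return x * 2

## End of your code ##
double(1)
double("1")  # expect-type-error

## End of test code ##
# pyright: analyzeUnannotatedFunctions=false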

Bug fix

Prevent users from modifying the pyright config.

Summary by CodeRabbit

  • New Features

    • Added support for Pyright configuration directives in challenge test code
    • Improved test code partitioning to separate configuration from executable test sections
    • Enhanced error reporting with refined line-number mapping for type-checking results
  • Documentation

    • Updated contribution guidelines with optional test code section formatting instructions


@F-park
Contributor Author

F-park commented Jan 19, 2024

How can I fix the test?

I should rewrite the test to make it pass

@laike9m
Owner

laike9m commented Jan 19, 2024

Could you describe what problems you're trying to solve, and what other ways you have considered (if any)? Thanks.

@F-park
Contributor Author

F-park commented Jan 19, 2024

> Could you describe what problems you're trying to solve, and what other ways you have considered (if any)? Thanks.

class MyClass:
    def __init__(self, x: int) -> None:
        self.x = x

    # TODO: Fix the type hints of `copy` to make it type check
    def copy(self):
        copied_object = MyClass(x=self.x)
        return copied_object

It can unexpectedly pass the challenge without the return value being annotated.


And in #89 you said that enabling strict mode would break other challenges; this PR can solve that.

@laike9m
Owner

laike9m commented Jan 26, 2024

Thanks for the PR.

Surely there's value in enabling the configs; however, it also brings more complexity. At this point, I don't think it's worth adding just to fix a tiny problem.

With that said, in the future it might become more useful, so I'll keep this open and re-evaluate when the time comes. Hope you can understand.

@laike9m laike9m added the enhancement New feature or request label Jan 26, 2024
@laike9m
Owner

laike9m commented Jan 12, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Jan 12, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai

coderabbitai bot commented Jan 12, 2026

📝 Walkthrough

Walkthrough

This PR introduces test code partitioning to separate user-written test code from Pyright configuration directives. A new _partition_test_code static method processes test code, extracting configuration blocks marked by "## End of test code ##" comments. The test infrastructure is refactored to use a ChallengeManager fixture pattern, and error reporting is updated to map diagnostic lines to user code, test code, or configuration sections.
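
As a rough illustration of the partitioning step described above (an assumption-based sketch, not the PR's actual _partition_test_code in views/challenge.py, whose signature and return shape may differ):

# Illustrative sketch only; not the real implementation in views/challenge.py.
TEST_CODE_SPLITTER = "\n## End of test code ##\n"

def partition_test_code(test_code: str) -> tuple[str, str]:
    """Split test code into (executable tests, trailing pyright directives)."""
    before, sep, after = test_code.partition(TEST_CODE_SPLITTER)
    if not sep:
        # No marker: everything is executable test code, no extra config.
        return test_code, ""
    return before, after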

Changes

Test Code Partitioning Core (views/challenge.py, views/views.py)
  • views/challenge.py: Adds the PYRIGHT_BASIC_CONFIG constant and the _partition_test_code() static method to extract trailing Pyright directives. Updates PYRIGHT_MESSAGE_REGEX with named capture groups. Reworks error line mapping to distinguish user code, test code, and config sections; marks config-related errors as non-failing.
  • views/views.py: Truncates test_code at the "## End of test code ##" marker before template rendering.

Test Scaffolding & Documentation (challenges/advanced-forward/question.py, docs/Contribute.md, tests/assets/challenges/basic-foo-pyright-config/question.py)
  • challenges/advanced-forward/question.py: Adds a test scaffolding comment and a Pyright directive.
  • docs/Contribute.md: Documents the optional "## End of test code ##" comment pattern and the Pyright configuration format.
  • tests/assets/challenges/basic-foo-pyright-config/question.py: New test asset demonstrating Pyright config directive usage with test code.

Test Infrastructure & Fixtures (tests/conftest.py, tests/test_challenge.py, tests/test_identical.py, tests/test_questions.py)
  • tests/conftest.py: Introduces an mgr fixture providing a ChallengeManager instance.
  • tests/test_challenge.py: Replaces the assets_dir fixture with an mgr parameter; adds a test_partition_test_code() test; updates TestChallengeWithHints to use the new fixture.
  • tests/test_identical.py: Adds a get_test_code() helper with a potential file-opening bug (it reads from solution_file instead of the provided path).
  • tests/test_questions.py: Removes the unused pytest import.
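
To make the "named capture groups" item above concrete, a regex over pyright's plain-text diagnostic lines could look something like the sketch below; the actual PYRIGHT_MESSAGE_REGEX in views/challenge.py is not shown in this thread and likely differs in detail.

import re

# Hypothetical pattern for diagnostic lines such as "  main.py:12:5 - error: ...".
PYRIGHT_MESSAGE_REGEX = re.compile(
    r"^\s*(?P<file>\S+):(?P<line>\d+):(?P<column>\d+)"
    r"\s*-\s*(?P<severity>error|warning|information):\s*(?P<message>.*)$"
)

m = PYRIGHT_MESSAGE_REGEX.match("  main.py:12:5 - error: Expected type annotation")
if m:
    print(m.group("line"), m.group("severity"), m.group("message"))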

Sequence Diagram

sequenceDiagram
    participant User as Challenge Code
    participant CM as ChallengeManager
    participant Part as _partition_test_code()
    participant PR as Pyright
    participant Map as Error Mapper
    participant Report as Error Report

    User->>CM: run_type_check(test_code)
    CM->>Part: _partition_test_code(test_code)
    Part-->>CM: (user_code, config_block)
    CM->>CM: Merge config + BASIC_CONFIG
    CM->>PR: Execute Pyright on augmented code
    PR-->>CM: Error diagnostics with line numbers
    CM->>Map: Map line numbers to sections
    Map->>Map: Identify user code / test code / config lines
    Map-->>CM: Mapped errors with source origin
    CM->>Report: Categorize errors (config errors non-failing)
    Report-->>User: Unified error count & diagnostics

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


🐰 Test code split with care,
Pyright config tucked away there,
Error maps redrawn,
From dusk until dawn,
Cleaner boundaries everywhere!

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 7.69%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title directly and accurately summarizes the main change: adding support for pyright-config comments in test code, which aligns with all file modifications across the codebase.


✨ Finishing touches
  • 📝 Generate docstrings

Warning

Tools execution failed with the following error:

Failed to run tools: Ping-pong health check failed



@coderabbitai

coderabbitai bot commented Jan 12, 2026

Note

Docstrings generation - SUCCESS
Generated docstrings for this pull request at #131

coderabbitai bot added a commit that referenced this pull request Jan 12, 2026
Docstrings generation was requested by @laike9m.

* #98 (comment)

The following files were modified:

* `tests/assets/challenges/basic-foo-pyright-config/question.py`
* `tests/conftest.py`
* `tests/test_identical.py`
* `views/challenge.py`
* `views/views.py`

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @docs/Contribute.md:
- Line 45: The sentence fragment should be rewritten so it reads as a complete
instruction: change the line that currently reads about the optional comment
marker and pyright config to something like: "Optionally add a comment `## End
of test code ##`. You may also include pyright configuration options using the
format `# pyright: <config_name>=<value>`." Update the wording around the `##
End of test code ##` marker and the `# pyright: <config_name>=<value>` example
so the two clauses form complete, grammatical sentences.

In @tests/test_identical.py:
- Around line 11-24: get_test_code currently ignores its path arg and always
reads solution_file, so both solution_test and question_test come from the same
file; open and read the provided path parameter instead of solution_file inside
get_test_code (fix the file open call used when building challenge_code and test
extraction), keeping the rest of the function logic the same so solution_test
and question_test are derived from their respective paths.

In @views/challenge.py:
- Around line 261-268: The error count currently includes non-blocking
pyright-config lines and the singular/plural grammar is wrong; filter out lines
starting with "[pyright-config]" into a blocking_errors list and use that to
determine passed (set passed = True when len(blocking_errors) == 0) and to build
the summary message, e.g., append f"\nFound {n} error" vs "errors" based on n
where n = len(blocking_errors), or include both totals like "Found X errors (Y
blocking)"; update references to error_lines and the append call to use the new
counts.
🧹 Nitpick comments (2)
tests/conftest.py (1)

26-29: LGTM!

The fixture correctly creates a ChallengeManager instance using the test assets directory.

Consider adding scope="module" or scope="session" if the ChallengeManager is stateless and reads from disk, to avoid recreating it for each test function:

♻️ Optional optimization
-@pytest.fixture()
+@pytest.fixture(scope="module")
 def mgr(assets_dir: Path):
     return ChallengeManager(assets_dir / "challenges")
views/challenge.py (1)

175-186: Clarify the pyright config override behavior.

The comment on line 179 states the goal is to "limit user to modify the config." However, appending pyright_config after PYRIGHT_BASIC_CONFIG (line 185) allows challenge authors to override the basic settings, since later pyright inline comments take precedence.

If this is intentional (challenge authors can customize, but end-users submitting code cannot), consider updating the comment to clarify this distinction.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1e6e332 and 8676543.

📒 Files selected for processing (9)
  • challenges/advanced-forward/question.py
  • docs/Contribute.md
  • tests/assets/challenges/basic-foo-pyright-config/question.py
  • tests/conftest.py
  • tests/test_challenge.py
  • tests/test_identical.py
  • tests/test_questions.py
  • views/challenge.py
  • views/views.py
💤 Files with no reviewable changes (1)
  • tests/test_questions.py
🔇 Additional comments (7)
challenges/advanced-forward/question.py (1)

17-19: LGTM!

The test scaffolding correctly demonstrates the new pyright-config feature with the ## End of test code ## marker followed by a valid pyright directive.

views/views.py (1)

62-62: LGTM!

The partition() approach handles both cases correctly: it returns the substring before the marker when present, or the full string if absent. The newline-wrapped marker "\n## End of test code ##\n" ensures exact matching of the standalone comment line.
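
For reference, str.partition behaves exactly as described; a quick standalone demonstration:

marker = "\n## End of test code ##\n"

with_marker = "foo(1)\n## End of test code ##\n# pyright: analyzeUnannotatedFunctions=false\n"
without_marker = "foo(1)\n"

# Marker present: partition()[0] is everything before the marker.
print(repr(with_marker.partition(marker)[0]))      # 'foo(1)'
# Marker absent: partition()[0] is the whole string, unchanged.
print(without_marker.partition(marker)[0] == without_marker)  # True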

tests/assets/challenges/basic-foo-pyright-config/question.py (1)

10-11: Verify the test expectation for foo(1).

foo() is defined without parameters, so both foo(1) and foo(1, 2) should produce type errors. However, only line 11 is marked with # expect-type-error. Is this intentional for testing specific pyright config behavior, or should line 10 also be marked?

tests/test_challenge.py (1)

16-22: LGTM!

The test correctly verifies the _partition_test_code behavior by checking that the returned pyright config ends with the expected configuration line from PYRIGHT_BASIC_CONFIG.

views/challenge.py (3)

14-36: LGTM!

The pyright configuration covers appropriate strictness settings for a type-checking challenge platform.


244-251: LGTM!

The error line mapping correctly translates absolute line numbers from the merged code back to their respective sections (user code, test code, or pyright-config).


255-259: Verify: Line number adjustment assumes missing expected errors are always in test code.

Line 258 unconditionally subtracts user_code_lines_len from the line number. If an expect-type-error comment could appear in user code (line numbers ≤ user_code_lines_len), this would produce incorrect (zero or negative) line numbers in the error message.

If expect-type-error is only valid in test code, this is fine. Otherwise, consider applying the same section-based mapping used for other errors (lines 246-251).
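
For illustration, a section-based mapping of the kind suggested here could look like the sketch below; the helper name, parameters, and section labels are hypothetical and not taken from views/challenge.py.

def map_line_to_section(
    lineno: int, user_code_lines_len: int, test_code_lines_len: int
) -> tuple[str, int]:
    """Map an absolute line number in the merged code back to (section, relative line).

    Assumes the merged code is laid out as user code, then test code,
    then pyright-config lines, in that order.
    """
    if lineno <= user_code_lines_len:
        return "user code", lineno
    if lineno <= user_code_lines_len + test_code_lines_len:
        return "test code", lineno - user_code_lines_len
    return "pyright-config", lineno - user_code_lines_len - test_code_lines_len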

- Describe the challenge, make sure people understand what they need to accomplish (i.e. the `TODO:` part)
- A comment `## End of your code ##`. This is mandatory, just copy and paste it.
- Several test cases. Add a comment `# expect-type-error` after the lines where type errors should be thrown.
- (Optional) Add a comment `## End of test code ##`. Several [pyright-config](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#type-check-diagnostics-settings) with the format `# pyright: <config_name>=<value>`


⚠️ Potential issue | 🟡 Minor

Minor grammatical issue in the documentation.

The sentence structure is incomplete. After describing the comment marker, it reads "Several [pyright-config]..." which doesn't form a complete sentence.

📝 Suggested fix
-   - (Optional) Add a comment `## End of test code ##`. Several [pyright-config](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#type-check-diagnostics-settings) with the format `# pyright: <config_name>=<value>`
+   - (Optional) Add a comment `## End of test code ##`, followed by [pyright-config](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#type-check-diagnostics-settings) directives with the format `# pyright: <config_name>=<value>`

Comment on lines +11 to +24
    def get_test_code(path: Path):
        TEST_SPLITTER = "\n## End of test code ##\n"
        level, challenge_name = path.parent.name.split("-", maxsplit=1)

        with solution_file.open() as f:
            challenge_code = f.read()
        challenge = Challenge(
            name=challenge_name, level=Level(level), code=challenge_code
        )

        return challenge.test_code.partition(TEST_SPLITTER)[0]

    solution_test = get_test_code(solution_file)
    question_test = get_test_code(solution_file.parent / "question.py")


⚠️ Potential issue | 🔴 Critical

Critical bug: get_test_code always reads from solution_file regardless of the path argument.

Line 15 opens solution_file instead of the path parameter. This means both solution_test and question_test are derived from the same file, causing the test to always pass trivially without actually verifying that solution and question test codes are identical.

🐛 Proposed fix
     def get_test_code(path: Path):
         TEST_SPLITTER = "\n## End of test code ##\n"
         level, challenge_name = path.parent.name.split("-", maxsplit=1)

-        with solution_file.open() as f:
+        with path.open() as f:
             challenge_code = f.read()
         challenge = Challenge(
             name=challenge_name, level=Level(level), code=challenge_code
         )

         return challenge.test_code.partition(TEST_SPLITTER)[0]

Comment on lines +261 to +268
        # Error for pyright-config will not fail the challenge
        passed = True
        for error_line in error_lines:
            if error_line.startswith("[pyright-config]"):
                continue
            passed = False

        error_lines.append(f"\nFound {len(error_lines)} errors")

@coderabbitai coderabbitai bot Jan 12, 2026


⚠️ Potential issue | 🟡 Minor

Minor: Error count includes non-failing pyright-config errors, which may confuse users.

A user might see "Found 1 errors" but still pass the challenge if the only error is from pyright-config. Consider either:

  1. Excluding pyright-config errors from the count, or
  2. Clarifying in the message (e.g., "Found 1 errors (0 blocking)")

Also, minor grammar nit: "Found 1 errors" should be "Found 1 error" for singular.

💡 Suggested improvement
-        error_lines.append(f"\nFound {len(error_lines)} errors")
+        blocking_count = sum(1 for e in error_lines if not e.startswith("[pyright-config]"))
+        error_lines.append(f"\nFound {blocking_count} error{'s' if blocking_count != 1 else ''}")

Owner


Well I don't agree



@laike9m Understood! I'll defer to your judgment on this. Thanks for the feedback.

Owner


The suggested solution seems complicated. Can you work on a better solution?



@laike9m Sure! Here's a simpler approach that adds a counter to the existing loop where you're already checking for pyright-config errors:

# Error for pyright-config will not fail the challenge
passed = True
blocking_errors = 0
for error_line in error_lines:
    if error_line.startswith("[pyright-config]"):
        continue
    blocking_errors += 1
    passed = False

error_word = "error" if blocking_errors == 1 else "errors"
error_lines.append(f"\nFound {blocking_errors} {error_word}")

This reuses the loop you already have, so there's no extra iteration. It counts only the blocking errors and fixes the singular/plural grammar issue. Does this work better for you?
