
Conversation

@cjluo-nv (Collaborator) commented Sep 4, 2025

Relax tensor compare threshold

What does this PR do?

Fixes flaky unit tests: seeds randomness before each test and relaxes tensor comparison tolerances.

Summary by CodeRabbit

  • Tests
    • Make test runs deterministic by automatically seeding randomness before each test, improving reproducibility across environments and CI (see the fixture sketch after this list).
    • Relax numeric comparison tolerances in quantized GEMM assertions (dynamic and calibration paths) to reduce spurious failures from floating‑point variability on different hardware/toolchains.
    • No impact on runtime behavior; changes are confined to test stability and reliability.
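
A minimal sketch of what such an autouse seeding fixture can look like, assuming a set_seed helper along the lines of the one in tests/_test_utils/torch_misc.py; the seed value and helper body here are illustrative, not the repository's exact code:

```python
import random

import numpy as np
import pytest
import torch


def set_seed(seed: int = 1234) -> None:
    # Illustrative helper: seed every RNG the tests may touch so runs
    # are reproducible. The real set_seed lives in tests/_test_utils.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices


@pytest.fixture(autouse=True)
def setup_seed():
    # autouse=True runs this before every test in the module, so each
    # test starts from the same RNG state regardless of execution order.
    set_seed()
```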


copy-pr-bot bot commented Sep 4, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@cjluo-nv force-pushed the chenjiel/cjluo-nv-patch-1 branch from f1d24fe to b11b6fc on September 4, 2025 16:28
Relax tensor compare threshold

Signed-off-by: Chenjie Luo <[email protected]>
@cjluo-nv force-pushed the chenjiel/cjluo-nv-patch-1 branch from b11b6fc to e500c75 on September 4, 2025 18:39

coderabbitai bot commented Sep 4, 2025

Walkthrough

Adds an autouse pytest fixture to set deterministic seeds before each test and relaxes tolerance thresholds in test_dynamic_gemm comparisons from atol/3 to atol/2 within the same test module.

Changes

Cohort / File(s): Tests — Quantization GEMM (tests/gpu/torch/quantization/backends/test_gemm_common.py)
Summary: Added an autouse fixture, @pytest.fixture(autouse=True) def setup_seed(): set_seed(), and relaxed the tolerance in test_dynamic_gemm comparisons from atol/3 to atol/2 for four output-pair checks.
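
A hedged illustration of the relaxed bound, using made-up tensors and an example atol (the real test computes its own values); it only shows why atol/2 tolerates differences that atol/3 rejects:

```python
import torch

torch.manual_seed(0)  # deterministic, in the spirit of this PR

atol = 0.1  # illustrative absolute tolerance, not the test's actual value

a = torch.randn(4, 4)
b = a + 0.04  # every element differs from a by exactly 0.04

# The new bound (atol / 2 = 0.05) absorbs the 0.04 difference ...
assert torch.allclose(a, b, atol=atol / 2)
# ... while the old, tighter bound (atol / 3 ~= 0.033) rejects it.
assert not torch.allclose(a, b, atol=atol / 3)
```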

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I twitch my whiskers, seeds aligned just right,
Deterministic dawn, no flutters in the night.
With softer bounds, our numbers gently hum—
From third to half, the tolerances succumb.
Hop, hop! The tests all neatly chime,
A gemm of peace in quantized time.


@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/gpu/torch/quantization/backends/test_gemm_common.py (2)

32-36: Autouse seed fixture is good; consider removing redundant module-level seeding.

@pytest.fixture(autouse=True) ensures per-test determinism. You also call set_seed() at import time (Line 29). Double seeding is harmless but redundant and can confuse ordering. Prefer using the fixture only, or add a brief comment justifying the import-time call.


266-272: Threshold relaxation looks fine; consider adding rtol and better failure diagnostics.

Keeping rtol consistent with earlier checks and printing diffs will make failures actionable while staying flaky-resistant.

-        assert torch.allclose(output_dynamic_quant_gemm, output_dynamic_quant, atol=atol / 2)
-        assert torch.allclose(output_calib_quant_gemm, output_calib_quant, atol=atol / 2)
+        assert torch.allclose(output_dynamic_quant_gemm, output_dynamic_quant, atol=atol / 2, rtol=rtol), (
+            f"dynamic: max|diff|={((output_dynamic_quant_gemm - output_dynamic_quant).abs()).amax()}"
+        )
+        assert torch.allclose(output_calib_quant_gemm, output_calib_quant, atol=atol / 2, rtol=rtol), (
+            f"calib: max|diff|={((output_calib_quant_gemm - output_calib_quant).abs()).amax()}"
+        )
         assert torch.allclose(
-            output_dynamic_quant_gemm, output_dynamic_quant_compressed, atol=atol / 2
+            output_dynamic_quant_gemm, output_dynamic_quant_compressed, atol=atol / 2, rtol=rtol
         )
-        assert torch.allclose(output_calib_quant_gemm, output_calib_quant_compressed, atol=atol / 2)
+        assert torch.allclose(
+            output_calib_quant_gemm, output_calib_quant_compressed, atol=atol / 2, rtol=rtol
+        )
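
For context on the suggestion: torch.allclose passes when |actual - expected| <= atol + rtol * |expected|, so rtol scales the allowed error with the magnitude of the expected value. A small illustration with made-up numbers (not the test's constants):

```python
import torch

atol, rtol = 0.1, 1e-2  # illustrative values only

expected = torch.tensor([100.0])
actual = torch.tensor([100.6])

# With atol alone, the 0.6 difference exceeds atol / 2 = 0.05 ...
assert not torch.allclose(actual, expected, atol=atol / 2, rtol=0.0)
# ... but rtol grows the bound to 0.05 + 0.01 * 100 = 1.05, so it passes.
assert torch.allclose(actual, expected, atol=atol / 2, rtol=rtol)
```
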
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between d0372f4 and 29c2435.

📒 Files selected for processing (1)
  • tests/gpu/torch/quantization/backends/test_gemm_common.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/gpu/torch/quantization/backends/test_gemm_common.py (1)
tests/_test_utils/torch_misc.py (1)
  • set_seed (33-40)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: code-quality
  • GitHub Check: wait-checks / wait
  • GitHub Check: build-docs

@cjluo-nv requested a review from sugunav14 on September 5, 2025 00:47
@cjluo-nv merged commit 2b52759 into main on September 5, 2025
9 checks passed
@cjluo-nv deleted the chenjiel/cjluo-nv-patch-1 branch on September 5, 2025 16:03