
Conversation

@njzjz (Member) commented Jan 5, 2026

Consistent with other backends.

Summary by CodeRabbit

  • Bug Fixes
    • Improved energy RMSE metric display by normalizing reported values per atom, ensuring more accurate representation in loss monitoring.


Copilot AI (Contributor) left a comment

Pull request overview

This PR fixes the RMSE energy loss calculation in the dpmodel backend to normalize by the number of atoms, making it consistent with other backends (PyTorch, PaddlePaddle, and TensorFlow).

Key Changes:

  • Normalized rmse_e metric by multiplying with atom_norm_ener (1.0 / natoms)
  • Ensures cross-backend consistency for energy RMSE reporting
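A minimal sketch of the change described above, assuming the names quoted in this PR discussion (`l2_ener_loss`, `atom_norm_ener`, `rmse_e`); the actual code in `deepmd/dpmodel/loss/ener.py` may differ in structure:

```python
import numpy as np


def display_rmse_e(energy_pred, energy_label, natoms):
    """Sketch of the displayed per-atom energy RMSE after this PR."""
    # Mean squared error over the energy labels (loss accumulation unchanged).
    l2_ener_loss = np.mean(np.square(energy_pred - energy_label))
    # Normalization factor: 1 / number of atoms, as in the other backends.
    atom_norm_ener = 1.0 / natoms
    # Before the fix the displayed value was sqrt(l2_ener_loss);
    # after the fix it is scaled per atom.
    return np.sqrt(l2_ener_loss) * atom_norm_ener
```

Only the reported metric changes; the quantity entering the optimization is untouched.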



coderabbitai bot commented Jan 5, 2026

📝 Walkthrough

The change modifies the EnergyLoss.call method in deepmd/dpmodel/loss/ener.py to scale the observable RMSE for energy by atom_norm_ener (1.0 / natoms) when displaying the value. The underlying loss calculation and accumulation remain unchanged.

Changes

  • Energy Loss Observable Display (deepmd/dpmodel/loss/ener.py): Scale the RMSE energy observable (more_loss["rmse_e"]) by atom_norm_ener for display purposes in EnergyLoss.call.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

  • fix(dpmodel): fix rmse_* in more_loss #5105 — Also modifies displayed rmse_e in the same file; that PR changes rmse_* to use sqrt(l2_*), while this PR additionally scales rmse_e by atom_norm_ener for per-atom normalization.

Suggested labels

Python

Suggested reviewers

  • wanghan-iapcm

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main change: scaling the rmse_e loss observable by n_atoms in the EnergyLoss implementation.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

📜 Recent review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed between the base of the PR (b98f6c5) and the head (877ff5d).

📒 Files selected for processing (1)
  • deepmd/dpmodel/loss/ener.py
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: 1azyking
Repo: deepmodeling/deepmd-kit PR: 4169
File: deepmd/pt/loss/ener_hess.py:341-348
Timestamp: 2024-10-08T15:32:11.479Z
Learning: In `deepmd/pt/loss/ener_hess.py`, the `label` uses the key `"atom_ener"` intentionally to maintain consistency with the forked version.
Learnt from: 1azyking
Repo: deepmodeling/deepmd-kit PR: 4169
File: deepmd/utils/argcheck.py:1982-2117
Timestamp: 2024-10-05T03:06:02.372Z
Learning: The `loss_ener_hess` and `loss_ener` functions should remain separate to avoid confusion, despite code duplication.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (30)
  • GitHub Check: CodeQL analysis (python)
  • GitHub Check: Agent
  • GitHub Check: Test Python (6, 3.10)
  • GitHub Check: Test Python (3, 3.10)
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Test Python (6, 3.13)
  • GitHub Check: Test Python (3, 3.13)
  • GitHub Check: Test Python (4, 3.13)
  • GitHub Check: Test Python (1, 3.10)
  • GitHub Check: Test Python (5, 3.13)
  • GitHub Check: Test Python (4, 3.10)
  • GitHub Check: Test Python (2, 3.10)
  • GitHub Check: Test Python (2, 3.13)
  • GitHub Check: Test Python (1, 3.13)
  • GitHub Check: Test Python (5, 3.10)
  • GitHub Check: Test C++ (false, true, true, false)
  • GitHub Check: Test C++ (true, false, false, true)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Analyze (python)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (true, true, true, false)
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Test C++ (false, false, false, true)
  • GitHub Check: Build C++ (cpu, cpu)
🔇 Additional comments (1)
deepmd/dpmodel/loss/ener.py (1)

182-184: Approve change; rmse_e normalization aligns across backends.

The change correctly normalizes the displayed rmse_e by atom_norm_ener (1/natoms), making it consistent with the PyTorch, TensorFlow, and JAX backends:

  • PyTorch: rmse_e = l2_ener_loss.sqrt() * atom_norm (ener.py:232, ener_spin.py:167)
  • TensorFlow: rmse_e = np.sqrt(error_e) / natoms[0] (ener.py:421, 760, 1002)
  • JAX: rmse_e = xp.sqrt(l2_ener_loss) * atom_norm_ener (ener.py:183)

All backends now display per-atom energy RMSE consistently.
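To illustrate that the three formulas quoted above report the same quantity, here is a small numeric check with hypothetical values; multiplying by atom_norm_ener = 1/natoms (PyTorch/JAX style) and dividing by natoms (TensorFlow style) are equivalent:

```python
import numpy as np

l2_ener_loss = 16.0  # hypothetical mean squared energy error
natoms = 8           # hypothetical number of atoms in the frame

# PyTorch / JAX style: multiply by atom_norm_ener = 1 / natoms
atom_norm_ener = 1.0 / natoms
rmse_pt = np.sqrt(l2_ener_loss) * atom_norm_ener

# TensorFlow style: divide by natoms
rmse_tf = np.sqrt(l2_ener_loss) / natoms

# Both report the same per-atom energy RMSE.
assert np.isclose(rmse_pt, rmse_tf)
```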



codecov bot commented Jan 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.15%. Comparing base (b98f6c5) to head (877ff5d).
⚠️ Report is 3 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5124   +/-   ##
=======================================
  Coverage   82.15%   82.15%           
=======================================
  Files         709      709           
  Lines       72468    72468           
  Branches     3616     3615    -1     
=======================================
  Hits        59535    59535           
  Misses      11769    11769           
  Partials     1164     1164           


@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Jan 6, 2026
@njzjz njzjz added this pull request to the merge queue Jan 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 6, 2026
@njzjz njzjz added this pull request to the merge queue Jan 6, 2026
@njzjz njzjz removed this pull request from the merge queue due to the queue being cleared Jan 6, 2026
@njzjz njzjz added this pull request to the merge queue Jan 7, 2026
Merged via the queue into deepmodeling:master with commit cc85a6d Jan 7, 2026
67 checks passed
@njzjz njzjz deleted the fix-dpmodel-rmse-e branch January 7, 2026 15:19