
Conversation

Collaborator

@Chengqian-Zhang Chengqian-Zhang commented Nov 8, 2025

In this PR:

  1. Support writing the fitting stat to stat_file and loading it back from stat_file
  2. Ensure the fitting stat calculation is correct when using default_fparam
  3. Support sharing the fitting stat when using share_fitting in multitask mode
  4. Log the progress of the fitting stat calculation via log.info

Summary by CodeRabbit

  • New Features

    • Default frame parameters now automatically populate samples when absent
    • Computed per-parameter statistics can be saved to and restored from disk
    • Multitask training supports probability-weighted parameter sharing and protection
  • Refactor

    • Parameter-sharing flows reorganized to propagate statistics and defaults across components
  • Tests

    • Extensive new tests covering statistic computation, persistence, and multitask sharing

@github-actions github-actions bot added the Python label Nov 8, 2025
@Chengqian-Zhang Chengqian-Zhang marked this pull request as draft November 8, 2025 10:16
Contributor

coderabbitai bot commented Nov 8, 2025

📝 Walkthrough

Walkthrough

Adds default frame-parameter (fparam) exposure and population, extends fitting statistics computation with on-disk persistence and NumPy aggregation, and threads multitask-aware parameter-sharing probabilities and protection factors through the model wrapper, trainer, and fitting layers.

Changes

  • DP atomic & model API (deepmd/pt/model/atomic_model/dp_atomic_model.py, deepmd/pt/model/model/make_model.py): Added get_default_fparam() accessors; wrapped_sampler now populates missing fparam from the default when available; passes stat_file_path into fitting_net.compute_input_stats.
  • Fitting/statistics core (deepmd/pt/model/task/fitting.py): Extended share_params() to accept a model probability and protection factor; added stat persistence/restoration (save_to_file_fparam/aparam, restore_*); compute_input_stats() now accepts stat_file_path, uses NumPy aggregation, and optionally saves/loads per-type stat files; added get_stats() and get_default_fparam().
  • Training orchestration (deepmd/pt/train/training.py): Computes and normalizes per-model probabilities (model_prob); computes and validates a common data_stat_protect; passes model_key_prob_map and data_stat_protect into share_params; propagates the default fparam into the DataRequirementItem for "fparam".
  • Wrapper parameter sharing (deepmd/pt/train/wrapper.py): Expanded the share_params() signature to accept model_key_prob_map and data_stat_protect; computes frac_prob per link and forwards model_prob and the protection factor into the underlying share_params calls.
  • Stat utilities (deepmd/utils/env_mat_stat.py): Added __mul__(self, scalar: float) to StatItem for scalar multiplication.
  • Tests & test data (source/tests/pt/model/water/data/..., source/tests/pt/test_fitting_stat.py): Added raw test data files and extensive tests covering stat computation, file I/O, multitask weighting, and fparam handling.

Sequence Diagram(s)

sequenceDiagram
    actor Trainer as Training
    participant Wrapper as ModelWrapper
    participant Fit as Fitting
    participant Atom as DPAtomicModel

    Training->>Training: compute model_key_prob_map & data_stat_protect
    Training->>Wrapper: share_params(shared_links, model_key_prob_map, data_stat_protect)
    activate Wrapper
    Wrapper->>Wrapper: for each link compute frac_prob = prob_link/prob_base
    Wrapper->>Fit: share_params(base_class, shared_level, model_prob=frac_prob, protection=data_stat_protect, resume)
    deactivate Wrapper

    Training->>Atom: compute_or_load_stat(stat_file_path)
    activate Atom
    Atom->>Fit: compute_input_stats(merged, protection, stat_file_path)
    activate Fit
    alt stat_file_path exists
        Fit->>Fit: restore_fparam/aparam_from_file(stat_file_path)
    else
        Fit->>Fit: aggregate stats from data (NumPy), apply protection
        Fit->>Fit: save_to_file_fparam/aparam(stat_file_path)
    end
    Fit->>Atom: return stats/default_fparam
    deactivate Fit
    Atom->>Training: provide default fparam for data requirements
    deactivate Atom
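In code, the alt branch in the diagram corresponds roughly to the following (a minimal sketch: compute_input_stats, restore_fparam_from_file, and save_to_file_fparam come from the walkthrough, while self.fparam_stats, the _aggregate_fparam_stats helper, and the stat file name are assumptions):

import numpy as np

def compute_input_stats(self, merged, protection=1e-2, stat_file_path=None):
    # Sketch of the load-or-compute pattern; the real method lives in
    # deepmd/pt/model/task/fitting.py and handles aparam the same way.
    if stat_file_path is not None and (stat_file_path / "fparam_stat").is_file():
        self.restore_fparam_from_file(stat_file_path)    # reuse persisted StatItems
    else:
        self._aggregate_fparam_stats(merged)             # hypothetical helper: NumPy aggregation over frames
        if stat_file_path is not None:
            self.save_to_file_fparam(stat_file_path)     # persist for later runs
    avg = np.array([s.compute_avg() for s in self.fparam_stats])
    std = np.array([s.compute_std(protection=protection) for s in self.fparam_stats])
    self.fparam_avg, self.fparam_inv_std = avg, 1.0 / std  # protection keeps std away from 0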

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:

  • deepmd/pt/model/task/fitting.py — stats aggregation, file I/O format, NumPy ↔ torch conversions, buffer/linking semantics.
  • deepmd/pt/train/wrapper.py & deepmd/pt/train/training.py — correctness of probability normalization and frac_prob propagation.
  • cross-file propagation of stat_file_path, model_key_prob_map, data_stat_protect, and default fparam.

Suggested reviewers

  • njzjz
  • wanghan-iapcm
  • anyangml

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 51.35%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title directly addresses the main changes, calculating correct fitting statistics when using default fparam and share fitting, which aligns with the PR's core objectives.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4c3072e and 6f18144.

📒 Files selected for processing (8)
  • deepmd/pt/model/atomic_model/dp_atomic_model.py (2 hunks)
  • deepmd/pt/model/model/make_model.py (1 hunks)
  • deepmd/pt/model/task/fitting.py (5 hunks)
  • deepmd/pt/train/training.py (2 hunks)
  • deepmd/pt/train/wrapper.py (2 hunks)
  • source/tests/pt/model/water/data/data_1/type.raw (1 hunks)
  • source/tests/pt/model/water/data/data_1/type_map.raw (1 hunks)
  • source/tests/pt/test_fitting_stat.py (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • source/tests/pt/model/water/data/data_1/type_map.raw
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/pt/model/atomic_model/dp_atomic_model.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Always run ruff check . and ruff format . before committing changes to Python code

Files:

  • source/tests/pt/test_fitting_stat.py
  • deepmd/pt/model/model/make_model.py
  • deepmd/pt/train/wrapper.py
  • deepmd/pt/train/training.py
  • deepmd/pt/model/task/fitting.py
🧬 Code graph analysis (5)
source/tests/pt/test_fitting_stat.py (5)
deepmd/utils/argcheck.py (1)
  • normalize (3498-3503)
deepmd/utils/compat.py (1)
  • update_deepmd_input (375-394)
deepmd/utils/path.py (1)
  • DPPath (28-158)
deepmd/pt/model/task/fitting.py (1)
  • compute_input_stats (236-344)
deepmd/pd/train/wrapper.py (1)
  • state_dict (199-203)
deepmd/pt/model/model/make_model.py (2)
deepmd/pt/model/task/fitting.py (1)
  • get_default_fparam (643-644)
deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
  • get_default_fparam (356-357)
deepmd/pt/train/wrapper.py (1)
deepmd/pt/model/task/fitting.py (1)
  • share_params (65-144)
deepmd/pt/train/training.py (3)
deepmd/pt/model/task/fitting.py (4)
  • share_params (65-144)
  • get_default_fparam (643-644)
  • has_default_fparam (639-641)
  • get_dim_fparam (635-637)
deepmd/pt/train/wrapper.py (1)
  • share_params (63-152)
deepmd/utils/data.py (1)
  • DataRequirementItem (1017-1097)
deepmd/pt/model/task/fitting.py (3)
deepmd/utils/env_mat_stat.py (3)
  • StatItem (26-98)
  • compute_avg (58-73)
  • compute_std (75-98)
deepmd/utils/path.py (16)
  • DPPath (28-158)
  • mkdir (149-158)
  • mkdir (270-282)
  • mkdir (472-490)
  • save_numpy (70-77)
  • save_numpy (200-211)
  • save_numpy (358-370)
  • load_numpy (50-57)
  • load_numpy (180-188)
  • load_numpy (335-343)
  • is_dir (115-116)
  • is_dir (249-251)
  • is_dir (439-445)
  • is_file (111-112)
  • is_file (245-247)
  • is_file (433-437)
deepmd/pt/utils/utils.py (6)
  • to_numpy_array (224-224)
  • to_numpy_array (228-228)
  • to_numpy_array (231-247)
  • to_torch_tensor (251-251)
  • to_torch_tensor (255-255)
  • to_torch_tensor (258-276)
🪛 Ruff (0.14.3)
source/tests/pt/test_fitting_stat.py

114-114: Local variable f is assigned to but never used

Remove assignment to unused variable f

(F841)

deepmd/pt/model/task/fitting.py

160-160: Avoid specifying long messages outside the exception class

(TRY003)


186-186: Avoid specifying long messages outside the exception class

(TRY003)


349-349: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Analyze (c-cpp)
🔇 Additional comments (16)
source/tests/pt/model/water/data/data_1/type.raw (1)

1-192: LGTM! Test data file added correctly.

The atom type data follows the expected format for water model tests (64 oxygen atoms, 128 hydrogen atoms), consistent with the H₂O molecular structure.

deepmd/pt/model/model/make_model.py (1)

533-534: LGTM! Delegation pattern correctly applied.

The new get_default_fparam method follows the same delegation pattern as neighboring methods (get_dim_fparam, has_default_fparam) and exposes the default frame parameter from the atomic model level.
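The delegation itself is one line per layer; a sketch of the pattern (the Optional return type is assumed):

from typing import Optional
import torch

def get_default_fparam(self) -> Optional[torch.Tensor]:
    # Model level: defer to the atomic model, which in turn defers to the
    # fitting net, mirroring get_dim_fparam and has_default_fparam.
    return self.atomic_model.get_default_fparam()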

deepmd/pt/train/wrapper.py (1)

63-69: LGTM! Signature expanded to support probabilistic parameter sharing.

The new parameters enable per-model probability weighting and data protection during multitask parameter sharing, aligning with the PR objectives.
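A sketch of the per-link weighting (the frac_prob formula comes from the walkthrough's sequence diagram; loop variables and the zero-probability guard are assumptions):

base_prob = model_key_prob_map[base_model_key]
for model_key in linked_model_keys:
    # Weight each linked branch by its probability relative to the base branch;
    # guard against a zero-probability base model.
    frac_prob = model_key_prob_map[model_key] / base_prob if base_prob > 0.0 else 0.0
    link_class.share_params(
        base_class,
        shared_level,
        model_prob=frac_prob,
        protection=data_stat_protect,
        resume=resume,
    )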

source/tests/pt/test_fitting_stat.py (4)

108-116: LGTM! Test setup correctly initializes temporary H5 file.

The unused variable f at line 114 flagged by static analysis is a false positive—it's required for the h5py.File context manager to create and properly close the file.


118-207: LGTM! Comprehensive test coverage for fitting statistics lifecycle.

The test effectively validates three critical paths:

  1. Computing and applying statistics (lines 132-143)
  2. Persisting statistics to disk (lines 154-182)
  3. Loading statistics from disk with a callable that raises to force the load path (lines 184-206)

209-226: LGTM! Weighted statistics computation correctly implemented.

The helper function properly computes probability-weighted averages and standard deviations across multiple tasks, applying the protection floor to prevent numerical instability.
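For reference, the weighted combination such a test can check is expressible as follows (a hypothetical standalone helper, not the test's actual code; avgs, stds, and numbers are per-task values and probs the task probabilities):

import numpy as np

def expected_weighted_stats(avgs, stds, numbers, probs, protection=1e-2):
    # Reconstruct each task's (count, sum, squared sum) and combine with weights.
    number = sum(p * n for p, n in zip(probs, numbers))
    total = sum(p * n * a for p, n, a in zip(probs, numbers, avgs))
    sq = sum(p * n * (s**2 + a**2) for p, n, a, s in zip(probs, numbers, avgs, stds))
    avg = total / number
    std = np.sqrt(sq / number - avg**2)
    return avg, np.maximum(std, protection)  # protection floor on the std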


229-487: LGTM! Multitask fitting statistics tests are thorough.

The test class validates:

  • Multitask parameter sharing with explicit fparam data
  • Multitask parameter sharing with default fparam fallback
  • State dictionary consistency across shared models
  • Per-model statistics file persistence and correctness
  • Weighted statistics computation in shared fitting nets
deepmd/pt/train/training.py (3)

619-632: LGTM! Multitask probability initialization correctly implemented.

The logic properly initializes model probabilities from either explicit config values or training data sizes, validates the sum is positive, and normalizes to ensure probabilities sum to 1.0. Note that while the assertion at line 631 prevents all-zero probabilities, individual model probabilities can still be zero after normalization, which is handled appropriately downstream (though it triggers the division-by-zero guard needed in wrapper.py).
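A sketch of that initialization (function shape and variable names are assumptions; the fallback-to-data-size and normalization behavior are as described):

from typing import Optional
import numpy as np

def normalize_model_probs(configured: list[Optional[float]], data_sizes: list[int]) -> np.ndarray:
    # Fall back to the training-data size when no explicit probability is configured.
    raw = np.array([c if c is not None else float(n) for c, n in zip(configured, data_sizes)])
    assert raw.sum() > 0.0, "The sum of model probabilities must be positive"
    return raw / raw.sum()  # normalized to 1.0; individual entries may still be 0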


636-649: LGTM! Data protection parameter validation correctly enforced.

The code properly extracts data_stat_protect from each model configuration and asserts consistency across all branches—a necessary requirement for shared parameter statistics. The common protection value and probability map are then correctly passed to wrapper.share_params.


1348-1381: LGTM! Default fparam handling correctly implemented.

The logic properly retrieves the default fparam tensor from the model (if available), converts it to numpy, and propagates it as the default value in the data requirement. When a default exists, must=False allows the data system to omit fparam.npy files, falling back to the configured default. This aligns with the PR objective to support default fparam in fitting statistics.
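A sketch of the requirement construction described above (the exact DataRequirementItem keyword arguments and the surrounding model variable are assumptions):

from deepmd.utils.data import DataRequirementItem
from deepmd.pt.utils.utils import to_numpy_array

default_fparam = model.get_default_fparam()  # None when not configured
fparam_item = DataRequirementItem(
    "fparam",
    ndof=model.get_dim_fparam(),
    atomic=False,
    must=default_fparam is None,  # fparam.npy may be omitted when a default exists
    default=to_numpy_array(default_fparam) if default_fparam is not None else 0.0,
)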

deepmd/pt/model/task/fitting.py (6)

40-49: LGTM! Required imports added for statistics persistence.

The imports of StatItem and DPPath support the new fitting statistics persistence and restoration capabilities introduced in this PR.


65-144: LGTM! Probabilistic parameter sharing correctly implemented.

The updated share_params method properly implements weighted statistics aggregation for multitask training:

  • Accumulates per-branch statistics weighted by model_prob (lines 89, 118)
  • Computes protected averages and standard deviations (lines 90-93, 119-122)
  • Links buffers to enable shared parameter updates (lines 109-110, 138-139)

The implementation aligns with the PR objectives to support share_fitting with correct statistical propagation.
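The weighting step can be pictured as follows (a hypothetical standalone helper; the real code operates on linked buffers inside share_params):

from deepmd.utils.env_mat_stat import StatItem

def merge_branch_stats(base_stats: list[StatItem], branch_stats: list[StatItem],
                       model_prob: float) -> None:
    # Fold one branch's per-dimension stats into the shared base stats,
    # weighted by the branch's model probability (StatItem.__add__ plus the new __mul__).
    for dim, stat in enumerate(branch_stats):
        base_stats[dim] += stat * model_prob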


146-235: LGTM! Statistics persistence methods correctly implemented.

The save/restore methods properly:

  • Serialize StatItem objects to 3-column numpy arrays (number, sum, squared_sum)
  • Use DPPath abstraction for filesystem/HDF5 compatibility
  • Validate array shapes during restoration
  • Log operations for observability

The format is simple and sufficient for the statistics being persisted.
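The format round-trips like this (a sketch; file naming aside, the 3-column layout is as described):

import numpy as np
from deepmd.utils.env_mat_stat import StatItem

def stats_to_array(stats: list[StatItem]) -> np.ndarray:
    # One row per parameter dimension: (number, sum, squared_sum).
    return np.array([[s.number, s.sum, s.squared_sum] for s in stats])

def array_to_stats(arr: np.ndarray) -> list[StatItem]:
    assert arr.ndim == 2 and arr.shape[1] == 3  # shape validation, as in the restore path
    return [StatItem(number=n, sum=s, squared_sum=q) for n, s, q in arr]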


236-344: LGTM! Statistics computation enhanced with persistence support.

The compute_input_stats method now properly:

  • Accepts optional stat_file_path for persistence (line 240)
  • Loads statistics from file when available (lines 268-273, 306-311)
  • Computes statistics from data when files don't exist (lines 275-293, 313-335)
  • Saves computed statistics for future reuse (lines 292-293, 334-335)
  • Applies protection parameter to prevent numerical instability (lines 296-297, 338-339)

The indentation issue flagged in past reviews has been resolved.


346-350: LGTM! Statistics accessor method appropriately implemented.

The get_stats method provides safe access to computed statistics with clear error messaging when statistics haven't been computed yet.


643-644: LGTM! Default fparam accessor correctly implemented.

The get_default_fparam method exposes the default frame parameter tensor, enabling downstream code (e.g., training.py) to retrieve the configured default value for data requirements.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
deepmd/pt/model/model/make_model.py (1)

9-9: Remove unused numpy import.

The numpy import is not used anywhere in this file.

Apply this diff:

-import numpy as np
deepmd/pt/train/training.py (1)

636-642: Fix unnecessary f-string prefix.

The assertion message on line 637 uses an f-string without any placeholders.

Apply this diff:

-            assert np.allclose(_data_stat_protect, _data_stat_protect[0]), f"Model key 'data_stat_protect' must be the same in each branch when multitask!"
+            assert np.allclose(_data_stat_protect, _data_stat_protect[0]), "Model key 'data_stat_protect' must be the same in each branch when multitask!"

The logic correctly validates consistency and propagates the protection value to parameter sharing.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25fa707 and 4c3072e.

📒 Files selected for processing (6)
  • deepmd/pt/model/atomic_model/dp_atomic_model.py (3 hunks)
  • deepmd/pt/model/model/make_model.py (2 hunks)
  • deepmd/pt/model/task/fitting.py (6 hunks)
  • deepmd/pt/train/training.py (2 hunks)
  • deepmd/pt/train/wrapper.py (2 hunks)
  • deepmd/utils/env_mat_stat.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Always run ruff check . and ruff format . before committing changes to Python code

Files:

  • deepmd/pt/train/wrapper.py
  • deepmd/pt/model/atomic_model/dp_atomic_model.py
  • deepmd/utils/env_mat_stat.py
  • deepmd/pt/train/training.py
  • deepmd/pt/model/task/fitting.py
  • deepmd/pt/model/model/make_model.py
🧬 Code graph analysis (5)
deepmd/pt/train/wrapper.py (1)
deepmd/pt/model/task/fitting.py (1)
  • share_params (66-128)
deepmd/pt/model/atomic_model/dp_atomic_model.py (4)
deepmd/pt/model/model/make_model.py (2)
  • has_default_fparam (530-532)
  • get_default_fparam (535-536)
deepmd/pt/model/task/fitting.py (3)
  • has_default_fparam (599-601)
  • get_default_fparam (603-604)
  • compute_input_stats (208-269)
deepmd/pd/model/atomic_model/dp_atomic_model.py (2)
  • has_default_fparam (414-416)
  • wrapped_sampler (387-397)
deepmd/pt/model/atomic_model/base_atomic_model.py (1)
  • has_default_fparam (138-140)
deepmd/pt/train/training.py (4)
deepmd/pt/model/task/fitting.py (4)
  • share_params (66-128)
  • get_default_fparam (603-604)
  • has_default_fparam (599-601)
  • get_dim_fparam (595-597)
deepmd/pt/train/wrapper.py (1)
  • share_params (63-139)
deepmd/pt/model/atomic_model/dp_atomic_model.py (3)
  • get_default_fparam (355-356)
  • has_default_fparam (351-353)
  • get_dim_fparam (347-349)
deepmd/utils/data.py (1)
  • DataRequirementItem (745-825)
deepmd/pt/model/task/fitting.py (5)
deepmd/utils/path.py (13)
  • DPPath (28-158)
  • mkdir (149-158)
  • mkdir (270-282)
  • mkdir (472-490)
  • save_numpy (70-77)
  • save_numpy (200-211)
  • save_numpy (358-370)
  • load_numpy (50-57)
  • load_numpy (180-188)
  • load_numpy (335-343)
  • is_dir (115-116)
  • is_dir (249-251)
  • is_dir (439-445)
deepmd/utils/env_mat_stat.py (3)
  • StatItem (26-98)
  • compute_avg (58-73)
  • compute_std (75-98)
deepmd/pt/utils/utils.py (6)
  • to_numpy_array (224-224)
  • to_numpy_array (228-228)
  • to_numpy_array (231-247)
  • to_torch_tensor (251-251)
  • to_torch_tensor (255-255)
  • to_torch_tensor (258-276)
deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
  • get_default_fparam (355-356)
deepmd/pt/model/model/make_model.py (1)
  • get_default_fparam (535-536)
deepmd/pt/model/model/make_model.py (3)
deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
  • get_default_fparam (355-356)
deepmd/pt/model/task/fitting.py (1)
  • get_default_fparam (603-604)
deepmd/pt/model/network/network.py (1)
  • Tensor (36-37)
🪛 Ruff (0.14.3)
deepmd/pt/train/training.py

637-637: f-string without any placeholders

Remove extraneous f prefix

(F541)

deepmd/pt/model/task/fitting.py

269-270: Expected an indented block after if statement

(invalid-syntax)


272-272: unindent does not match any outer indentation level

(invalid-syntax)


272-272: Expected a statement

(invalid-syntax)


272-272: Expected a statement

(invalid-syntax)


272-273: Expected a statement

(invalid-syntax)


273-273: Unexpected indentation

(invalid-syntax)


297-297: unindent does not match any outer indentation level

(invalid-syntax)


298-298: Unexpected indentation

(invalid-syntax)


304-304: unindent does not match any outer indentation level

(invalid-syntax)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Analyze (python)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C library (2.14, >=2.5.0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (13)
deepmd/utils/env_mat_stat.py (1)

51-56: LGTM!

The scalar multiplication operator correctly scales all statistical components for probability-weighted aggregation in multitask training. The implementation properly supports the weighted averaging workflow where statistics from multiple models are combined using probability weights.

deepmd/pt/model/model/make_model.py (1)

534-536: LGTM!

The method correctly delegates to the atomic model and follows the established pattern for other similar accessors in this class.

deepmd/pt/train/wrapper.py (1)

63-63: LGTM!

The extended signature correctly supports probability-weighted parameter sharing for multitask training. The parameters align with the updated share_params implementation in the fitting net.

deepmd/pt/model/atomic_model/dp_atomic_model.py (2)

329-337: LGTM!

The logic correctly populates missing fparam with default values when available. The check for both "find_fparam" and "fparam" ensures proper handling of data loading states.
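A sketch of the population step (the dict keys and tensor shapes are assumptions based on the description; the real check also consults "find_fparam"):

import torch

def populate_default_fparam(sampled: list[dict], default_fparam: torch.Tensor) -> None:
    # Tile the configured default over all frames of any system lacking fparam.
    for system in sampled:
        if "fparam" not in system:
            nframes = system["coord"].shape[0]
            system["fparam"] = default_fparam.unsqueeze(0).expand(nframes, -1).clone()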


342-342: LGTM!

The stat_file_path propagation enables proper persistence of fparam/aparam statistics, and the get_default_fparam method correctly delegates to the fitting net.

Also applies to: 355-356

deepmd/pt/train/training.py (2)

619-632: LGTM!

The model probability calculation correctly supports both explicit configuration and data-driven defaults, with proper normalization and validation to ensure a valid probability distribution.


1344-1351: LGTM!

The default fparam handling correctly retrieves and converts the default value from the model, passing it to the data requirement with proper type conversion.

deepmd/pt/model/task/fitting.py (6)

66-128: LGTM!

The extended share_params correctly implements probability-weighted parameter sharing for multitask training. The logic properly accumulates weighted statistics for fparam/aparam buffers and links them to the base class.


130-206: LGTM!

The persistence methods correctly save and restore fparam/aparam statistics using numpy arrays, with proper path handling and logging.


208-266: LGTM!

The fparam statistics computation correctly implements the load-or-compute pattern with proper persistence and type conversions.


304-310: LGTM!

The get_stats method properly validates that statistics have been computed before returning them.


603-604: LGTM!

The method correctly exposes the default fparam tensor and aligns with the existing has_default_fparam accessor.


11-11: LGTM!

The new imports are properly used throughout the file for type hints and statistics handling.

Also applies to: 45-50

@codecov
Copy link

codecov bot commented Nov 8, 2025

Codecov Report

❌ Patch coverage is 88.74172% with 17 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.19%. Comparing base (7f25e16) to head (6f18144).

Files with missing lines:
  • deepmd/pt/model/task/fitting.py: 87.80% patch coverage, 15 lines missing ⚠️
  • deepmd/pt/train/training.py: 86.66% patch coverage, 2 lines missing ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##            devel    #5038    +/-   ##
========================================
  Coverage   84.19%   84.19%            
========================================
  Files         709      709            
  Lines       70216    70326   +110     
  Branches     3621     3618     -3     
========================================
+ Hits        59116    59213    +97     
- Misses       9933     9945    +12     
- Partials     1167     1168     +1     

☔ View full report in Codecov by Sentry.

@Chengqian-Zhang Chengqian-Zhang marked this pull request as ready for review November 9, 2025 12:20
Comment on lines +51 to +56
def __mul__(self, scalar: float) -> "StatItem":
return StatItem(
number=self.number * scalar,
sum=self.sum * scalar,
squared_sum=self.squared_sum * scalar,
)
Member


There are some type issues here: number is int and scalar is float. Since int * float = float, the product cannot be assigned to number, which expects an int.
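One possible resolution (a sketch, not necessarily the fix the author intends): widen the annotation so fractional, probability-weighted counts type-check, since compute_avg and compute_std only ever divide by number.

class StatItem:
    def __init__(self, number: float = 0, sum: float = 0, squared_sum: float = 0) -> None:
        # number as float: probability weighting makes fractional counts meaningful
        self.number = number
        self.sum = sum
        self.squared_sum = squared_sum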
