Bug Report: next_eval() in ParallelDataManager uses iter_train_raybundles instead of iter_eval_raybundles #3731

@BestSushi


First of all, thank you to the Nerfstudio team for your amazing work and continuous improvements!

Your efforts make this project incredibly valuable for the community. πŸ™


Description

While reviewing the implementation of ParallelDataManager, I noticed that the next_eval() method currently calls:

```python
ray_bundle, batch = next(self.iter_train_raybundles)[0]
```

This means evaluation batches are drawn from the training iterator rather than the evaluation iterator (iter_eval_raybundles), even though setup_eval() initializes iter_eval_raybundles for exactly this purpose.


Expected Behavior

next_eval() should consume from self.iter_eval_raybundles:

```python
ray_bundle, batch = next(self.iter_eval_raybundles)[0]
```

Impact

  • Evaluation steps never touch the eval dataset.
  • Metrics and losses reported during evaluation are computed on training data.
  • Validation curves can therefore be misleading, since overfitting to the training set goes undetected.
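To make the impact concrete, here is a minimal self-contained sketch (with illustrative toy names, not Nerfstudio's actual classes or data) showing why consuming the training iterator inside next_eval() leaks training data into evaluation, and how switching to the eval iterator fixes it:

```python
from itertools import cycle

class ToyDataManager:
    """Illustrative stand-in for ParallelDataManager (hypothetical names)."""

    def __init__(self):
        # Each iterator yields a list containing one (ray_bundle, batch) pair,
        # mirroring the next(...)[0] access pattern quoted in the report.
        self.iter_train_raybundles = cycle([[("train_bundle", {"split": "train"})]])
        self.iter_eval_raybundles = cycle([[("eval_bundle", {"split": "eval"})]])

    def next_eval_buggy(self, step: int):
        # Bug: draws from the training iterator during evaluation.
        ray_bundle, batch = next(self.iter_train_raybundles)[0]
        return ray_bundle, batch

    def next_eval_fixed(self, step: int):
        # Fix: draws from the evaluation iterator, as setup_eval() intends.
        ray_bundle, batch = next(self.iter_eval_raybundles)[0]
        return ray_bundle, batch

dm = ToyDataManager()
print(dm.next_eval_buggy(0))  # → ('train_bundle', {'split': 'train'}): training data leaks into eval
print(dm.next_eval_fixed(0))  # → ('eval_bundle', {'split': 'eval'}): eval data, as intended
```

Any eval metric computed on the buggy path scores the model on data it was trained on, which is why validation curves would look healthier than they should.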
