First of all, thank you to the Nerfstudio team for your amazing work and continuous improvements!
Your efforts make this project incredibly valuable for the community.
Description
While reviewing the implementation of ParallelDataManager, I noticed that the next_eval() method currently calls:
ray_bundle, batch = next(self.iter_train_raybundles)[0]
This means evaluation batches are drawn from the training iterator rather than from iter_eval_raybundles, even though setup_eval() initializes that iterator.
Expected Behavior
next_eval() should consume from self.iter_eval_raybundles:
ray_bundle, batch = next(self.iter_eval_raybundles)[0]
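For reference, a minimal sketch of the corrected method is below. Only the changed line is taken from the code above; the file path, the eval_count counter, and the return signature are assumptions about the surrounding code (mirroring what next_train() typically does), not quotes from the repository.

```python
# Sketch of the proposed one-line fix.
# Assumed location: nerfstudio/data/datamanagers/parallel_datamanager.py
from typing import Dict, Tuple

from nerfstudio.cameras.rays import RayBundle


def next_eval(self, step: int) -> Tuple[RayBundle, Dict]:
    """Return the next (RayBundle, batch) pair drawn from the eval iterator."""
    self.eval_count += 1  # assumed counter, mirroring next_train()
    # Before: ray_bundle, batch = next(self.iter_train_raybundles)[0]
    ray_bundle, batch = next(self.iter_eval_raybundles)[0]
    return ray_bundle, batch
```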
Impact
- Evaluation steps do not use the eval dataset.
- Metrics and loss computed during eval are based on training data.
- This can produce misleading validation curves, since the model is effectively evaluated on data it has already seen during training (a quick way to check this is sketched below).
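One way to confirm the behavior described in the last bullet is to wrap both iterators with a small counter and watch which one next_eval() actually advances. This is an illustrative sketch only: `dm` stands for a hypothetical, already-constructed ParallelDataManager instance, and the attribute names are taken from the issue text rather than verified against the source.

```python
def count_pulls(it, counts, key):
    """Yield items from `it` while counting how many are drawn from it."""
    for item in it:
        counts[key] += 1
        yield item


# `dm` is a hypothetical, already-constructed ParallelDataManager instance.
pull_counts = {"train": 0, "eval": 0}
dm.iter_train_raybundles = count_pulls(dm.iter_train_raybundles, pull_counts, "train")
dm.iter_eval_raybundles = count_pulls(dm.iter_eval_raybundles, pull_counts, "eval")

for step in range(10):
    dm.next_eval(step)

# With the current code, pull_counts["eval"] stays at 0 while
# pull_counts["train"] increases, i.e. eval batches come from training data.
print(pull_counts)
```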