# Change Log
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).

## [0.5.0] - 2023-04-04

This is a major upgrade in which 90% of the code has been rewritten. The highlights
of this version are summarized below.

Links:
- Documentation: https://www.nerfacc.com/en/v0.5.0/
- ArXiv Report: Coming Soon.

Methodologies:
- Upgrade the Occupancy Grid to support multiple levels.
- Support the Proposal Network from Mip-NeRF 360.
- Update the examples on unbounded scenes to use the Multi-level Occupancy Grid or the Proposal Network.
- Contraction for the Occupancy Grid is no longer supported due to its inefficiency for ray traversal.

API Changes:
- [Changed] `OccupancyGrid()` -> `OccGridEstimator()`.
  - [Added] Argument `levels=1` for multi-level support.
  - [Added] Function `self.sampling()`, which does essentially the same thing as the old `nerfacc.ray_marching` (see the sampling/rendering sketch after this list).
  - [Renamed] Function `self.every_n_step()` -> `self.update_every_n_steps()`.
- [Added] `PropNetEstimator()`, with functions `self.sampling()`, `self.update_every_n_steps()`
and `self.compute_loss()`.
- [Removed] `ray_marching()`. Ray marching is now done by calling `sampling()` on
the `OccGridEstimator()` / `PropNetEstimator()`.
- [Changed] `ray_aabb_intersect()` now supports multiple AABBs and accepts the new arguments `near_plane`, `far_plane`, and `miss_value`.
- [Changed] `render_*_from_*()`. The input shape changes from `(all_samples, 1)` to `(all_samples)`, and the functions now return all intermediate results, so the return value may be a tuple.
- [Changed] `rendering()`. The input shape changes from `(all_samples, 1)` to `(all_samples)`, including the shapes expected by `rgb_sigma_fn` and `rgb_alpha_fn`. Be aware of this shape change.
- [Changed] `accumulate_along_rays()`. The `weights` input should now have shape `(all_samples)`.
- [Removed] `unpack_info()`, `pack_data()`, `unpack_data()` are temporarily removed due to incompatibility
with the new backend implementation. They will be added back later.
- [Added] Basic functions that support both batched and flattened tensors: `inclusive_prod()`, `inclusive_sum()`, `exclusive_prod()`, `exclusive_sum()`, `importance_sampling()`, `searchsorted()` (see the scan-utilities sketch after this list).
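
Since the old `ray_marching()` entry point is gone, here is a minimal sketch of the new estimator-based workflow. It is illustrative only: the density and color functions are toy stand-ins for a real radiance field, and a CUDA device is assumed; only the `nerfacc` calls are the point.

```python
import torch
import nerfacc

device = "cuda"
n_rays = 1024
rays_o = torch.zeros(n_rays, 3, device=device)  # ray origins
rays_d = torch.nn.functional.normalize(torch.randn(n_rays, 3, device=device), dim=-1)

# Multi-level occupancy grid estimator (replaces the old OccupancyGrid).
estimator = nerfacc.OccGridEstimator(
    roi_aabb=[-1.0, -1.0, -1.0, 1.0, 1.0, 1.0], resolution=128, levels=1
).to(device)

def sigma_fn(t_starts, t_ends, ray_indices):
    # Toy density query at sample midpoints; note the flattened (all_samples,) shapes.
    t_mid = (t_starts + t_ends) / 2.0
    x = rays_o[ray_indices] + t_mid[:, None] * rays_d[ray_indices]
    return torch.relu(1.0 - x.norm(dim=-1))

def rgb_sigma_fn(t_starts, t_ends, ray_indices):
    # Toy color + density query; a real radiance field network would go here.
    t_mid = (t_starts + t_ends) / 2.0
    x = rays_o[ray_indices] + t_mid[:, None] * rays_d[ray_indices]
    return torch.sigmoid(x), torch.relu(1.0 - x.norm(dim=-1))

# In a training loop the grid is refreshed periodically; here a single update at step 0.
estimator.update_every_n_steps(
    step=0, occ_eval_fn=lambda x: torch.relu(1.0 - x.norm(dim=-1)) * 1e-2
)

# Ray marching is now done through the estimator's sampling().
ray_indices, t_starts, t_ends = estimator.sampling(
    rays_o, rays_d, sigma_fn=sigma_fn, render_step_size=1e-2
)

# rendering() consumes the flattened samples and returns intermediate results in `extras`.
rgb, opacity, depth, extras = nerfacc.rendering(
    t_starts, t_ends, ray_indices, n_rays=n_rays, rgb_sigma_fn=rgb_sigma_fn
)
```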
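
The new scan utilities accept either a batched tensor of shape `(n_rays, n_samples)` or a flattened tensor accompanied by a `packed_info` tensor of per-ray `[start, count]` pairs. The sketch below illustrates that dual interface; the exact `packed_info` layout and dtype are assumptions, so check the docs before relying on them.

```python
import torch
import nerfacc

device = "cuda"

# Batched form: one row per ray, shape (n_rays, n_samples).
weights = torch.rand(4, 8, device=device)
accum_batched = nerfacc.inclusive_sum(weights)

# Flattened form: all samples concatenated into (all_samples,), with packed_info
# giving each ray's [start index, sample count] (assumed layout, int32).
flat = weights.flatten()
starts = torch.arange(0, 32, 8, device=device, dtype=torch.int32)
counts = torch.full((4,), 8, device=device, dtype=torch.int32)
packed_info = torch.stack([starts, counts], dim=-1)
accum_flat = nerfacc.inclusive_sum(flat, packed_info=packed_info)

# Both forms compute the same per-ray cumulative sums.
assert torch.allclose(accum_batched.flatten(), accum_flat)
```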

Examples & Benchmarks:
- More benchmarks and examples. See the `examples/` and `benchmarks/` folders.

## [0.3.5] - 2023-02-23

A stable version that achieves:
- The vanilla NeRF model with 8-layer MLPs can be trained to better quality (+~0.5 PSNR) in 1 hour rather than 1~2 days as in the paper.
- The Instant-NGP NeRF model can be trained to equal quality in 4.5 minutes, compared to the official pure-CUDA implementation.
- The D-NeRF model for dynamic objects can also be trained in 1 hour rather than 2 days as in the paper, and with better quality (+~2.5 PSNR).
- Both bounded and unbounded scenes are supported.

Links:
- Documentation: https://www.nerfacc.com/en/v0.3.5/
- ArXiv Report: https://arxiv.org/abs/2210.04847v2/

Methodologies:
- Single-resolution `nerfacc.OccupancyGrid` for synthetic scenes.
- Contraction methods `nerfacc.ContractionType` for unbounded scenes (see the sketch below).
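
For reference, a rough sketch of how these pieces fit together in the 0.3.x API. The `sigma_fn` is a toy stand-in and the exact defaults are assumptions; in this version samples carry `(n_samples, 1)` shapes.

```python
import torch
import nerfacc

device = "cuda"
rays_o = torch.zeros(1024, 3, device=device)
rays_d = torch.nn.functional.normalize(torch.randn(1024, 3, device=device), dim=-1)

# Single-resolution occupancy grid; the contraction type selects bounded vs. unbounded handling.
grid = nerfacc.OccupancyGrid(
    roi_aabb=[-1.0, -1.0, -1.0, 1.0, 1.0, 1.0],
    resolution=128,
    contraction_type=nerfacc.ContractionType.AABB,
).to(device)

def sigma_fn(t_starts, t_ends, ray_indices):
    # Toy density query; shapes here are (n_samples, 1) in the 0.3.x API.
    t_mid = (t_starts + t_ends) / 2.0
    x = rays_o[ray_indices] + t_mid * rays_d[ray_indices]
    return torch.relu(1.0 - x.norm(dim=-1, keepdim=True))

# Refresh the grid (renamed to update_every_n_steps() in 0.5.0).
grid.every_n_step(
    step=0, occ_eval_fn=lambda x: torch.relu(1.0 - x.norm(dim=-1, keepdim=True)) * 1e-2
)

# Skip-empty-space ray marching against the grid.
ray_indices, t_starts, t_ends = nerfacc.ray_marching(
    rays_o, rays_d,
    scene_aabb=torch.tensor([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0], device=device),
    grid=grid, sigma_fn=sigma_fn, render_step_size=1e-2,
)
```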