Please first `cd data/`, then download the datasets into `data/` following the instructions below. The resulting organization of the datasets should match the layout shown above.
#### (a) **Our captures**
We captured 67 forward-facing scenes (each scene contains 20-60 images). To download our data [ibrnet_collected.zip](https://drive.google.com/file/d/1dZZChihfSt9iIzcQICojLziPvX1vejkp/view?usp=sharing) (4.1G) for training, run:
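The command itself did not survive in this copy of the README. A minimal sketch, assuming `gdown` is installed (`pip install gdown`) and using the file ID taken from the Drive link above:

```
# File ID taken from the ibrnet_collected.zip link above
gdown 1dZZChihfSt9iIzcQICojLziPvX1vejkp
unzip ibrnet_collected.zip
```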
P.S. We've captured some more scenes in [ibrnet_collected_more.zip](https://drive.google.com/file/d/1Xsi2170hvm1fpIaP6JI_d9oa0LGThJ7E/view?usp=sharing), but we didn't include them in training. Feel free to download them if you would like more scenes for your task; they are not needed to reproduce our results.
#### (b) [**LLFF**](https://bmild.github.io/llff/) released scenes
Download and process [real_iconic_noface.zip](https://drive.google.com/file/d/1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl/view?usp=sharing) (6.6G) using the following commands:
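The original command block did not survive extraction. A minimal download sketch, assuming `gdown` and using the file ID from the link above; any additional processing the original commands performed is not reproduced here:

```
# File ID taken from the real_iconic_noface.zip link above
gdown 1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl
unzip real_iconic_noface.zip
```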
The mapping between our renderings and the public Google Scanned Objects can be found in [this spreadsheet](https://docs.google.com/spreadsheets/d/1JGqJ9vKgZf9gLLUM-KIiRr_ePzJ-2CYRs5daB0qNIPo/edit?usp=sharing&resourcekey=0-aZfNVJQSm9GEIzT1afvx8Q).
### 2. Evaluation datasets
Download the evaluation datasets by running:

```
bash download_eval_data.sh
```
## Evaluation
First, download our pretrained model to the project root directory:
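The download command itself was not preserved in this copy. A minimal sketch, assuming `gdown`, with the Drive file ID left as a hypothetical placeholder (substitute the real ID from the project page):

```
# Hypothetical placeholder -- the actual file ID was not preserved here
gdown <pretrained-model-file-id>
```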
- Our current implementation is not well optimized for inference-time efficiency: rendering a 1000x800 image can take from 30 seconds to over a minute depending on the GPU model. To reduce inference time, maximize GPU memory utilization by increasing the chunk size. You can also decrease the number of input source views, at the cost of some rendering quality.
- If you want to create and train on your own datasets, implement your own Dataset class following the examples in `ibrnet/data_loaders/` (a minimal skeleton is sketched after this list). You can verify your camera poses with `data_verifier.py` in the same directory.
- Since the evaluation datasets are either object-centric or forward-facing scenes, our provided view selection methods are very simple (based on either viewpoints or camera locations). If you want to evaluate our method on new scenes with other kinds of camera distributions, you might need to implement your own view selection method to identify the most effective source views; the sketch below includes a simple distance-based example.
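As referenced in the notes above, here is a minimal, hypothetical sketch of a custom dataset with a simple camera-distance-based source-view selection. The class and field names (`rgb_paths`, `poses`, etc.) are illustrative assumptions, not IBRNet's actual data-loader interface; follow the real examples in `ibrnet/data_loaders/` for the exact fields the renderer expects.

```
import numpy as np
from torch.utils.data import Dataset

class MyScenesDataset(Dataset):
    """Illustrative skeleton only -- field names are assumptions,
    not IBRNet's actual data-loader interface."""

    def __init__(self, rgb_paths, poses, num_source_views=10):
        self.rgb_paths = rgb_paths   # list of image file paths
        self.poses = poses           # (N, 3, 4) camera-to-world matrices
        self.num_source_views = num_source_views

    def __len__(self):
        return len(self.rgb_paths)

    def select_source_views(self, target_idx):
        # Simple selection: pick the nearest cameras by center distance.
        # For unstructured captures, viewing-direction similarity may work
        # better than (or in addition to) camera-center distance.
        centers = self.poses[:, :3, 3]
        dists = np.linalg.norm(centers - centers[target_idx], axis=-1)
        nearest = np.argsort(dists)
        nearest = nearest[nearest != target_idx]
        return nearest[: self.num_source_views]

    def __getitem__(self, idx):
        src_ids = self.select_source_views(idx)
        return {"target": self.rgb_paths[idx],
                "target_pose": self.poses[idx],
                "src_rgbs": [self.rgb_paths[i] for i in src_ids],
                "src_poses": self.poses[src_ids]}
```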