
Commit 41807b2

update google drive links
1 parent 066a422 commit 41807b2

1 file changed: +11 -11 lines changed

README.md

Lines changed: 11 additions & 11 deletions
@@ -6,7 +6,7 @@ PyTorch implementation of paper "IBRNet: Learning Multi-View Image-Based Renderi
 > CVPR 2021
 >

-#### [project page](https://ibrnet.github.io/) | [paper](http://arxiv.org/abs/2102.13090) | [data & model](https://drive.google.com/drive/folders/1qfcPffMy8-rmZjbapLAtdrKwg3AV-NJe?usp=sharing)
+#### [project page](https://ibrnet.github.io/) | [paper](http://arxiv.org/abs/2102.13090) | [data & model](https://drive.google.com/drive/folders/1I2MTWAJPCoseyaPOmRvpWkxIZq3c5lCu?usp=sharing)

 ![Demo](assets/ancient.gif)

@@ -39,18 +39,18 @@ conda activate ibrnet
 Please first `cd data/`, and then download datasets into `data/` following the instructions below. The organization of the datasets should be the same as above.

 #### (a) **Our captures**
-We captured 67 forward-facing scenes (each scene contains 20-60 images). To download our data [ibrnet_collected.zip](https://drive.google.com/file/d/1rkzl3ecL3H0Xxf5WTyc2Swv30RIyr1R_/view?usp=sharing) (4.1G) for training, run:
+We captured 67 forward-facing scenes (each scene contains 20-60 images). To download our data [ibrnet_collected.zip](https://drive.google.com/file/d/1dZZChihfSt9iIzcQICojLziPvX1vejkp/view?usp=sharing) (4.1G) for training, run:
 ```
-gdown https://drive.google.com/uc?id=1rkzl3ecL3H0Xxf5WTyc2Swv30RIyr1R_
+gdown https://drive.google.com/uc?id=1dZZChihfSt9iIzcQICojLziPvX1vejkp
 unzip ibrnet_collected.zip
 ```

-P.S. We've captured some more scenes in [ibrnet_collected_more.zip](https://drive.google.com/file/d/1Uxw0neyiIn3Ve8mpRsO6A06KfbqNrWuq/view?usp=sharing), but we didn't include them for training. Feel free to download them if you would like more scenes for your task, but you wouldn't need them to reproduce our results.
+P.S. We've captured some more scenes in [ibrnet_collected_more.zip](https://drive.google.com/file/d/1Xsi2170hvm1fpIaP6JI_d9oa0LGThJ7E/view?usp=sharing), but we didn't include them for training. Feel free to download them if you would like more scenes for your task, but you wouldn't need them to reproduce our results.
 #### (b) [**LLFF**](https://bmild.github.io/llff/) released scenes
-Download and process [real_iconic_noface.zip](https://drive.google.com/drive/folders/1M-_Fdn4ajDa0CS8-iqejv0fQQeuonpKF) (6.6G) using the following commands:
+Download and process [real_iconic_noface.zip](https://drive.google.com/file/d/1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl/view?usp=sharing) (6.6G) using the following commands:
 ```angular2
 # download
-gdown https://drive.google.com/uc?id=1ThgjloNt58ZdnEuiCeRf9tATJ-HI0b01
+gdown https://drive.google.com/uc?id=1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl
 unzip real_iconic_noface.zip

 # [IMPORTANT] remove scenes that appear in the test set
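
For convenience, the two archives whose Drive IDs change in the hunk above can also be fetched with gdown's Python API. A minimal sketch, assuming `pip install gdown` and that it is run from `data/`; the IDs are the updated ones introduced by this commit:

```python
# Minimal sketch (not part of the repo): fetch the updated archives with gdown's
# Python API instead of the CLI. Assumes `pip install gdown` and cwd == data/.
import gdown

archives = {
    "ibrnet_collected.zip": "1dZZChihfSt9iIzcQICojLziPvX1vejkp",    # (a) our captures, 4.1G
    "real_iconic_noface.zip": "1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl",  # (b) LLFF released scenes, 6.6G
}

for filename, file_id in archives.items():
    gdown.download(f"https://drive.google.com/uc?id={file_id}", filename, quiet=False)
```

Unzipping and the test-scene removal step still follow the README commands shown in the hunk.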
@@ -86,12 +86,12 @@ cd ../
 Google Scanned Objects contain 1032 diffuse objects with various shapes and appearances.
 We use [gaps](https://github.com/tomfunkhouser/gaps) to render these objects for training. Each object is rendered at 512 × 512 pixels
 from viewpoints on a quarter of the sphere. We render 250
-views for each object. To download [our renderings](https://drive.google.com/file/d/1w1Cs0yztH6kE3JIz7mdggvPGCwIKkVi2/view?usp=sharing) (7.5GB), run:
+views for each object. To download [our renderings](https://drive.google.com/file/d/1tKHhH-L1viCvTuBO1xg--B_ioK7JUrrE/view?usp=sharing) (7.5GB), run:
 ```
-gdown https://drive.google.com/uc?id=1w1Cs0yztH6kE3JIz7mdggvPGCwIKkVi2
+gdown https://drive.google.com/uc?id=1tKHhH-L1viCvTuBO1xg--B_ioK7JUrrE
 unzip google_scanned_objects_renderings.zip
 ```
-The mapping between our renderings and the public Google Scanned Objects can be found in [this spreadsheet](https://docs.google.com/spreadsheets/d/14FivSzpjtqraR8IFmKOWWFXRUh4JsmTJqF2hr_ZY2R4/edit?usp=sharing&resourcekey=0-vVIKfNOVddY20NhBWr2ipQ).
+The mapping between our renderings and the public Google Scanned Objects can be found in [this spreadsheet](https://docs.google.com/spreadsheets/d/1JGqJ9vKgZf9gLLUM-KIiRr_ePzJ-2CYRs5daB0qNIPo/edit?usp=sharing&resourcekey=0-aZfNVJQSm9GEIzT1afvx8Q).

 ### 2. Evaluation datasets
 ```
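
The rendering setup described in the context lines above (250 views per object from viewpoints on a quarter of the sphere) can be pictured with a small sketch. This only illustrates that viewpoint distribution under assumptions of area-uniform sampling and cameras looking at the origin; the actual renderings were produced with gaps, not this code:

```python
# Illustration only: sample 250 camera centers on a quarter of a sphere,
# roughly matching the viewpoint distribution described in the README text.
import numpy as np

def quarter_sphere_viewpoints(n_views=250, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    azimuth = rng.uniform(0.0, np.pi, n_views)   # half of the full azimuth range
    z = rng.uniform(0.0, 1.0, n_views)           # upper hemisphere, area-uniform in z
    r_xy = np.sqrt(1.0 - z ** 2)
    xyz = np.stack([r_xy * np.cos(azimuth), r_xy * np.sin(azimuth), z], axis=-1)
    return radius * xyz  # (n_views, 3) camera centers looking toward the origin

print(quarter_sphere_viewpoints().shape)  # (250, 3)
```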
@@ -108,7 +108,7 @@ bash download_eval_data.sh
 ## Evaluation
 First download our pretrained model under the project root directory:
 ```
-gdown https://drive.google.com/uc?id=165Et85R8YnL-5NcehG0fzqsnAUN8uxUJ
+gdown https://drive.google.com/uc?id=1wNkZkVQGx7rFksnX7uVX3NazrbjqaIgU
 unzip pretrained_model.zip
 ```

@@ -148,7 +148,7 @@ python -m torch.distributed.launch --nproc_per_node=2 train.py --config configs/
 - Our current implementation is not well-optimized in terms of the time efficiency at inference. Rendering a 1000x800 image can take from 30s to over a minute depending on specific GPU models. Please make sure to maximize the GPU memory utilization by increasing the size of the chunk to reduce inference time. You can also try to decrease the number of input source views (but subject to performance loss).
 - If you want to create and train on your own datasets, you can implement your own Dataset class following our examples in `ibrnet/data_loaders/`. You can verify the camera poses using `data_verifier.py` in `ibrnet/data_loaders/`.
 - Since the evaluation datasets are either object-centric or forward-facing scenes, our provided view selection methods are very simple (based on either viewpoints or camera locations). If you want to evaluate our method on new scenes with other kinds of camera distributions, you might need to implement your own view selection methods to identify the most effective source views.
-- If you have any questions, you can contact [email protected].
+- If you have any questions, you can contact [email protected].
 ## Citation
 ```
 @inproceedings{wang2021ibrnet,

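The first note in the hunk above recommends increasing the chunk size to cut inference time. As a generic illustration of why (this is not the repo's actual renderer), rays are processed in fixed-size chunks, so a larger chunk trades more GPU memory per step for fewer, larger batches:

```python
# Generic illustration: render rays in chunks so GPU memory stays bounded;
# a larger chunk_size usually means faster inference but more memory per step.
import torch

def render_in_chunks(render_fn, rays, chunk_size=4096):
    """rays: (N, D) tensor; render_fn maps a (chunk, D) batch to (chunk, 3) colors."""
    outputs = []
    for i in range(0, rays.shape[0], chunk_size):
        with torch.no_grad():
            outputs.append(render_fn(rays[i:i + chunk_size]))
    return torch.cat(outputs, dim=0)
```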
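The note about custom datasets points to the loaders in `ibrnet/data_loaders/` as the reference. Purely as a hypothetical sketch of the general shape such a loader can take (the file layout, field names, and `poses.npy` convention here are illustrative assumptions, not the repo's actual interface):

```python
# Hypothetical sketch only: follow ibrnet/data_loaders/ for the real interface.
import os
import numpy as np
import imageio.v2 as imageio
from torch.utils.data import Dataset

class MyScenesDataset(Dataset):
    """Loads one RGB image and its camera pose per sample (illustrative layout)."""

    def __init__(self, root_dir):
        self.rgb_paths = sorted(
            os.path.join(root_dir, f) for f in os.listdir(root_dir) if f.endswith(".png")
        )
        # Assumed convention: one camera-to-world pose per image in poses.npy, shape (N, 3, 4).
        self.poses = np.load(os.path.join(root_dir, "poses.npy"))

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        rgb = imageio.imread(self.rgb_paths[idx]).astype(np.float32) / 255.0
        return {"rgb": rgb, "pose": self.poses[idx]}
```

Poses loaded this way can then be sanity-checked with `data_verifier.py`, as the note suggests.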

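Likewise, the view-selection note describes picking source views by viewpoint or camera location. A minimal sketch of the camera-location variant, illustrative only and not the repo's own selection code:

```python
# Illustrative sketch: choose source views whose camera centers are nearest
# to the target camera center; the repo's actual view selection may differ.
import numpy as np

def select_nearest_views(target_pose, source_poses, num_select=10):
    """target_pose: (3 or 4, 4) camera-to-world; source_poses: (N, 3 or 4, 4)."""
    target_center = np.asarray(target_pose)[:3, 3]
    source_centers = np.asarray(source_poses)[:, :3, 3]
    dists = np.linalg.norm(source_centers - target_center, axis=-1)
    return np.argsort(dists)[:num_select]  # indices of the selected source views
```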