# UnMix-NeRF

<h4>Spectral Unmixing Meets Neural Radiance Fields</h4>

```{button-link} https://www.arxiv.org/pdf/2506.21884
:color: primary
:outline:
Paper
```

```{button-link} https://www.factral.co/UnMix-NeRF/
:color: primary
:outline:
Project Page
```

<div style="text-align: center;">

TL;DR _We propose UnMix-NeRF, the first method integrating spectral unmixing into NeRF, enabling hyperspectral view synthesis, accurate unsupervised material segmentation, and intuitive material-based scene editing, significantly outperforming existing methods._

_ICCV 2025_

<img src="https://www.factral.co/UnMix-NeRF/assets/intro.jpg" alt="UnMix-NeRF Overview" style="width:50% !important;"/><br>

</div>

## Visual Results

<h3 style="text-align: center;">Hotdog Scene</h3>
<table style="width:100%; border: none;">
  <tr style="border: none;">
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/hotdog/rgb/rgb_hotdog.mp4" type="video/mp4">
      </video>
      <p>RGB</p>
    </td>
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/hotdog/seg/seg.mp4" type="video/mp4">
      </video>
      <p>Unsupervised Material Segmentation</p>
    </td>
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/hotdog/edit1/edit1.mp4" type="video/mp4">
      </video>
      <p>Scene Editing</p>
    </td>
  </tr>
</table>

<h3 style="text-align: center;">Ajar Scene</h3>
<table style="width:100%; border: none;">
  <tr style="border: none;">
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/ajar/seg_1_segment0.mp4" type="video/mp4">
      </video>
      <p>RGB</p>
    </td>
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/ajar/seg_1_segment2.mp4" type="video/mp4">
      </video>
      <p>Unsupervised Material Segmentation</p>
    </td>
    <td style="text-align: center; border: none; padding: 5px;">
      <video autoplay="autoplay" loop="loop" muted="muted" playsinline="playsinline" style="width:100%">
        <source src="https://www.factral.co/UnMix-NeRF/assets/ajar/seg_1_segment1.mp4" type="video/mp4">
      </video>
      <p>PCA Visualization</p>
    </td>
  </tr>
</table>

## Installation

Install the nerfstudio dependencies following the [installation guide](https://docs.nerf.studio/quickstart/installation.html).

Then install UnMix-NeRF:

```bash
git clone https://github.com/Factral/UnMix-NeRF
cd UnMix-NeRF
pip install -r requirements.txt
pip install .
```

## Running UnMix-NeRF

Basic training command:

```bash
ns-train unmixnerf \
  --data <path_to_data> \
  --pipeline.num_classes <number_of_materials> \
  --pipeline.model.spectral_loss_weight 5.0 \
  --pipeline.model.temperature 0.4 \
  --experiment-name my_experiment
```

## Method

### Overview
Existing Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties. This limitation restricts accurate material perception, which is crucial for robotics, augmented reality, simulation, and other applications. We introduce [UnMix-NeRF](https://www.arxiv.org/pdf/2506.21884), a framework that integrates spectral unmixing into NeRF, enabling joint hyperspectral novel view synthesis and unsupervised material segmentation.

Our method models spectral reflectance via diffuse and specular components, where a learned dictionary of global endmembers represents pure material signatures and per-point abundances capture their distribution. For material segmentation, we compare predicted spectral signatures against the learned endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF enables scene editing by modifying the learned endmember dictionary for flexible material-based appearance manipulation. Extensive experiments validate our approach, demonstrating superior spectral reconstruction and material segmentation compared to existing methods.
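The linear mixing model behind this design can be sketched in a few lines: each point's spectrum is a convex combination of global endmember signatures, with the mixing weights (abundances) produced by a temperature-scaled softmax. A minimal NumPy illustration, with random values standing in for the learned dictionary (this is not the paper's implementation):

```python
import numpy as np

# Hypothetical dictionary of K pure material signatures over B spectral bands.
rng = np.random.default_rng(0)
K, B = 4, 32
endmembers = rng.uniform(0.0, 1.0, size=(K, B))  # rows: material spectra

# Per-point abundance logits -> convex weights via a temperature-scaled softmax.
logits = np.array([2.0, 0.5, -1.0, 0.0])
temperature = 0.4
abundances = np.exp(logits / temperature)
abundances /= abundances.sum()                   # non-negative, sums to 1

# Linear mixing: the point's diffuse spectrum is the abundance-weighted sum.
spectrum = abundances @ endmembers               # shape (B,)
```

Because the abundances form a convex combination, the mixed spectrum always stays inside the hull spanned by the endmember signatures, which is what makes the learned dictionary interpretable as pure materials.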

### Pipeline

*(pipeline overview figure)*

## Data Format

UnMix-NeRF extends standard nerfstudio data conventions to support hyperspectral data:

### Required Structure

```
data/
├── transforms.json        # Camera poses (standard)
├── images/                # RGB images (standard)
│   ├── frame_00001.jpg
│   └── ...
├── hyperspectral/         # Hyperspectral data (NEW)
│   ├── frame_00001.npy    # Shape: (H, W, B)
│   └── ...
└── segmentation/          # Ground truth (optional)
    ├── frame_00001.png
    └── ...
```

### Hyperspectral Data

- **Format**: `.npy` files with dimensions `(H, W, B)`
- **Values**: normalized between 0 and 1
- **Bands**: `B` is the number of spectral channels
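A synthetic frame matching this convention can be generated with NumPy; the dimensions below are hypothetical:

```python
import numpy as np

# Hypothetical frame: 64x64 pixels, 128 spectral bands.
H, W, B = 64, 64, 128
cube = np.random.rand(H, W, B).astype(np.float32)  # rand() already lies in [0, 1)

np.save("frame_00001.npy", cube)  # one .npy file per frame

# Sanity-check the convention: shape (H, W, B), values in [0, 1].
loaded = np.load("frame_00001.npy")
assert loaded.shape == (H, W, B)
assert loaded.min() >= 0.0 and loaded.max() <= 1.0
```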

Update your `transforms.json`:

```json
{
  "frames": [
    {
      "file_path": "./images/frame_00001.jpg",
      "hyperspectral_file_path": "./hyperspectral/frame_00001.npy",
      "seg_file_path": "./segmentation/frame_00001.png",
      "transform_matrix": [...]
    }
  ]
}
```
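If you already have a standard `transforms.json`, the extra per-frame keys can be filled in with a short script. The sketch below builds a minimal one-frame file first so it is self-contained; the directory names follow the structure above:

```python
import json
from pathlib import Path

# Minimal stand-in transforms.json (transform_matrix omitted for brevity).
path = Path("transforms.json")
path.write_text(json.dumps({"frames": [{"file_path": "./images/frame_00001.jpg"}]}))

transforms = json.loads(path.read_text())
for frame in transforms["frames"]:
    stem = Path(frame["file_path"]).stem  # e.g. "frame_00001"
    frame["hyperspectral_file_path"] = f"./hyperspectral/{stem}.npy"
    frame["seg_file_path"] = f"./segmentation/{stem}.png"  # optional

path.write_text(json.dumps(transforms, indent=2))
```

This assumes hyperspectral and segmentation files share the RGB frames' base names, as in the directory tree above.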

## Key Parameters

| Parameter                               | Description                       | Default |
| --------------------------------------- | --------------------------------- | ------- |
| `--pipeline.num_classes`                | Number of material endmembers     | 6       |
| `--pipeline.model.spectral_loss_weight` | Weight for the spectral loss      | 5.0     |
| `--pipeline.model.temperature`          | Temperature for abundance softmax | 0.4     |
| `--pipeline.model.load_vca`             | Initialize with VCA endmembers    | False   |
| `--pipeline.model.pred_specular`        | Enable specular component         | True    |
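As a rough intuition for the `temperature` parameter: lower values sharpen the abundance softmax so each point commits to a single endmember, while higher values yield softer mixtures. A small illustration (not the training code):

```python
import numpy as np

def abundances(logits, temperature):
    """Temperature-scaled softmax producing convex material weights."""
    z = np.exp(np.asarray(logits, dtype=float) / temperature)
    return z / z.sum()

logits = [2.0, 1.0, 0.0]
soft = abundances(logits, temperature=4.0)  # near-uniform mixture
hard = abundances(logits, temperature=0.4)  # dominated by one endmember

assert hard.max() > soft.max()              # lower temperature -> sharper weights
```

The default of 0.4 is thus on the "peaked" side, encouraging each 3D point to explain its spectrum with few materials.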

## Citation

```bibtex
@inproceedings{perez2025unmix,
  title={UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields},
  author={Perez, Fabian and Rojas, Sara and Hinojosa, Carlos and Rueda-Chac{\'o}n, Hoover and Ghanem, Bernard},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}
```