Commit 01a271d

wnet_review_update
1 parent 63ada86 commit 01a271d

File tree

1 file changed

+6
-8
lines changed

guide/14-deep-learning/how_wnet_cgan_works.ipynb

Lines changed: 6 additions & 8 deletions
@@ -18,14 +18,14 @@
1818
"cell_type": "markdown",
1919
"metadata": {},
2020
"source": [
21-
"In this guide, we will focus on WNet_cGAN [[4](https://arxiv.org/abs/1904.09935)], that blends spectral and height information in one network. This model was developed to refine or extract level of details 2 (LoD2) DSM for builings from previously available DSM using an addtional raster such as panchromatic(PAN) imagery. This approach could help in extraction of refined building structures with higher level of details from a raw DSM and high resolution imagery for urban cities, other usage also include use of imageries from two different domains, to generate imagery of third domain, for example, use of asending and desending SAR to generate digital elevation model (DEM)."
21+
"In this guide, we will focus on WNet_cGAN [[4](https://arxiv.org/abs/1904.09935)], which blends spectral and height information in one network. This model was developed to refine or extract a level of detail 2 (LoD2) DSM for buildings from a previously available DSM using an additional raster, such as panchromatic (PAN) imagery. This approach can help extract refined building structures with a higher level of detail from a raw DSM and high-resolution imagery for urban areas."
2222
]
2323
},
2424
{
2525
"cell_type": "markdown",
2626
"metadata": {},
2727
"source": [
28-
"To follow the guide below, we assume that you have some basic understanding of the convolutional neural networks (CNN) concept. You can refresh your CNN knowledge by going through this short paper [“A guide to convolution arithmetic for deep learning”](https://arxiv.org/pdf/1603.07285.pdf) and course on Convolutional Neural Networks for Visual Recognition [[2](http://cs231n.stanford.edu/)]. Also, we recommend to read this paper about [Generative Adversarial Networks: An Overview](https://arxiv.org/abs/1710.07035) and go through fast.ai course on [GANs](https://course18.fast.ai/lessons/lesson12.html) before reading this one. "
28+
"To follow the guide below, we recommend first reading the guide [How Pix2Pix works?](https://developers.arcgis.com/python/guide/how-pix2pix-works/)."
2929
]
3030
},
3131
{
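The idea of conditioning one network on two input domains (height from the DSM, spectral detail from the PAN image) can be made concrete with a tiny NumPy sketch. This is purely illustrative — the chip size, dtypes, and channel layout here are assumptions, not the internals of `arcgis.learn`:

```python
import numpy as np

# Illustrative only: two co-registered chips from different domains.
# Shapes and dtypes are assumptions, not arcgis.learn internals.
chip_size = 256
dsm_chip = np.random.rand(chip_size, chip_size).astype(np.float32)  # height information
pan_chip = np.random.rand(chip_size, chip_size).astype(np.float32)  # spectral information

# A cGAN conditioned on both domains sees them as paired inputs;
# here we simply stack them along a new leading (channel) axis.
conditioned_input = np.stack([dsm_chip, pan_chip], axis=0)  # shape (2, 256, 256)
```

The point of the sketch is only that the two rasters must be co-registered and chip-aligned so each training pair carries both height and spectral evidence for the same location.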
@@ -127,7 +127,7 @@
127127
"source": [
128128
"The data folder should currently have two folders named `train_A_C` and `train_B`. Initially, we have to export the image chips in the `Export tiles` metadata format using the `Export Training Data For Deep Learning` tool available in ArcGIS Pro, providing the two domains of imagery as `Input Raster` and `Additional Input Raster`. \n",
129129
"\n",
130-
"This is done two times, First, with DSM as input raster, labels or LOD2 DSM as additional raster and output folder name `train_A_C`. Second, with only panchromatic raster or other multispectral raster as input raster and output folder name `train_B`. Then the path is provided to `prepare_data` function in `arcgis.learn` to create a databunch.\n",
130+
"This is done twice. First, with the DSM as the input raster, the labels or LoD2 DSM as the additional raster, and the output folder name `train_A_C`. Second, with only the panchromatic raster (or another multispectral raster) as the input raster and the output folder name `train_B`. The rasters used for data export should have a similar cell size. Then, the path is provided to the `prepare_data` function in `arcgis.learn` to create a databunch.\n",
131131
"\n",
132132
"`data = arcgis.learn.prepare_data(path=r\"path/to/exported/data\")`"
133133
]
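The two export runs described above produce a specific folder layout that `prepare_data` expects. As a minimal stdlib sketch (the folder names come from the text; the check itself is an assumption, not part of `arcgis.learn`), one could verify the layout before calling `prepare_data`:

```python
from pathlib import Path

def check_export_layout(root):
    """Verify the two folders produced by the export runs exist.

    `train_A_C` holds the DSM + LoD2 DSM chips; `train_B` holds the
    panchromatic (or other multispectral) chips.
    """
    root = Path(root)
    return [name for name in ("train_A_C", "train_B")
            if not (root / name).is_dir()]  # empty list means the layout looks right

# Example with a temporary directory standing in for the export folder:
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "train_A_C").mkdir()
    (Path(tmp) / "train_B").mkdir()
    assert check_export_layout(tmp) == []
```

A check like this catches a common mistake — pointing `prepare_data` at one of the export folders instead of their common parent.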
@@ -140,7 +140,7 @@
140140
"\n",
141141
"`model = arcgis.learn.WNet_cGAN(data=data)`\n",
142142
"\n",
143-
"Here, `data` is a fastai databunch, object returned from `prepare_data` function, more explanation can be found at fast.ai's docs [[6](https://fastai1.fast.ai/index.html)]\n",
143+
"Here, `data` is a databunch object returned by the `prepare_data` function.\n",
144144
"\n",
145145
"Then we can continue with the basic `arcgis.learn` workflow.\n",
146146
"\n",
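The basic workflow — prepare the data, create the model from the databunch, then train and save — can be sketched as one function. This is a hedged sketch, not a definitive recipe: it assumes `arcgis` is installed, the `lr_find`/`fit`/`save` calls follow the usual `arcgis.learn` pattern, and the epoch count and model name are placeholder examples:

```python
def train_wnet_cgan(data_path, epochs=10, model_name="wnet_cgan_model"):
    """Sketch of the basic arcgis.learn workflow for WNet_cGAN.

    Assumes `arcgis` is installed; `epochs` and `model_name` are
    placeholder examples, not recommendations.
    """
    from arcgis.learn import prepare_data, WNet_cGAN  # imported lazily

    data = prepare_data(path=data_path)  # databunch from the exported chips
    model = WNet_cGAN(data=data)         # create the model from the databunch
    lr = model.lr_find()                 # suggest a learning rate
    model.fit(epochs, lr)                # train
    model.save(model_name)               # write the trained model to disk
    return model
```

After training, the saved model can be used from ArcGIS Pro or the API for inference on new DSM/PAN pairs.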
@@ -176,9 +176,7 @@
176176
"2. CS231n: Convolutional Neural Networks for Visual Recognition. http://cs231n.stanford.edu/\n",
177177
"3. Bittner, K., Körner, M. and Reinartz, P., 2019, July. DSM building shape refinement from combined remote sensing images based on WNET-CGANS. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 783-786). IEEE.\n",
178178
"4. Bittner, Ksenia, Peter Reinartz, and Marco Korner. \"Late or earlier information fusion from depth and spectral data? large-scale digital surface model refinement by hybrid-cgan.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 0-0. 2019.\n",
179-
"5. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. \"Generative adversarial nets.\" In Advances in neural information processing systems, pp. 2672-2680. 2014.\n",
180-
"6. Fast.ai docs. https://fastai1.fast.ai/index.html. Accessed 27 November 2020.\n",
181-
"7. Fast.ai's course on GANs. https://course18.fast.ai/lessons/lesson12.html. Accessed 27 November 2020."
179+
"5. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. \"Generative adversarial nets.\" In Advances in neural information processing systems, pp. 2672-2680. 2014."
182180
]
183181
}
184182
],
@@ -198,7 +196,7 @@
198196
"name": "python",
199197
"nbconvert_exporter": "python",
200198
"pygments_lexer": "ipython3",
201-
"version": "3.8.12"
199+
"version": "3.8.13"
202200
}
203201
},
204202
"nbformat": 4,
