|
80 | 80 | "metadata": {},
|
81 | 81 | "source": [
|
82 | 82 | "Since the [RESISC45 Dataset](https://arxiv.org/abs/1703.00121) is publicly available, we will download the data from the [TensorFlow website](https://www.tensorflow.org/datasets/catalog/resisc45).\n",
|
83 | | - "The name of the dataset we will be downloading is <b>NWPU-RESISC45.rar</b> <br>\n",

84 | | - "- Citation : `@article{cheng2017remote,\n",

85 | | - " title={Remote sensing image scene classification: Benchmark and state of the art},\n",

86 | | - " author={Cheng, Gong and Han, Junwei and Lu, Xiaoqiang},\n",

87 | | - " journal={Proceedings of the IEEE},\n",

88 | | - " volume={105},\n",

89 | | - " number={10},\n",

90 | | - " pages={1865--1883},\n",

91 | | - " year={2017},\n",

92 | | - " publisher={IEEE}\n",

93 | | - "}`"
| 83 | + "The name of the dataset we will be downloading is <b>NWPU-RESISC45.rar</b> <br>" |
94 | 84 | ]
|
95 | 85 | },
|
96 | 86 | {
|
|
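The extracted archive is expected in an Imagenet-style layout, i.e. one sub-folder per scene class with the images inside, which is what `dataset_type='Imagenet'` assumes. A minimal stdlib sketch of that layout (the root path is a temporary directory and only three of the 45 RESISC45 classes are shown):

```python
from pathlib import Path
import tempfile

# Imagenet-style layout: <root>/<class_name>/<image files>.
# "airplane", "forest", and "harbor" are three of the 45 RESISC45 classes;
# the root path here is a throwaway temp directory, not the real extract path.
root = Path(tempfile.mkdtemp()) / "NWPU-RESISC45"
for cls in ["airplane", "forest", "harbor"]:
    (root / cls).mkdir(parents=True)
    (root / cls / "img_001.jpg").touch()  # placeholder file standing in for an image

# The class list is simply the set of sub-folder names under the root.
classes = sorted(p.name for p in root.iterdir() if p.is_dir())
print(classes)  # → ['airplane', 'forest', 'harbor']
```

`prepare_data` infers the class labels from these folder names, so the folder structure after extraction is all the labeling the dataset needs.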
151 | 141 | "- `dataset_type` : The type of dataset getting passed to the Feature Classifier.\n",
|
152 | 142 | "- `batch_size`: Number of images your model will train on each step inside an epoch. <b>This directly depends on the memory of your graphic card</b>. 128 worked for us on a 32GB GPU.\n",
|
153 | 143 | "\n",
|
154 | | - "Since we are using the dataset from external source for training our `FeatureClassifier`, we will be using <b>Imagenet</b> as dataset_type."
| 144 | + "Since we are using a dataset from an external source to train our `FeatureClassifier`, we will use <b>Imagenet</b> as the `dataset_type`."
155 | 145 | ]
|
156 | 146 | },
|
157 | 147 | {
|
|
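The claim that `batch_size` "directly depends on the memory of your graphic card" can be sanity-checked with rough arithmetic. A sketch assuming 224×224 RGB float32 inputs (a common backbone input size; this counts only the input tensors, while activations and gradients dominate in practice, so real usage is far higher):

```python
# Back-of-envelope GPU memory estimate for one input batch.
h, w, channels, bytes_per_float = 224, 224, 3, 4
batch_size = 128  # the value that worked on a 32 GB GPU in the notebook

per_image = h * w * channels * bytes_per_float  # bytes for one image tensor
per_batch_mib = batch_size * per_image / 2**20  # MiB for the whole batch
print(round(per_batch_mib, 1))  # → 73.5
```

The input batch itself is small; it is the per-layer activations kept for backpropagation that scale with `batch_size` and exhaust GPU memory first, which is why halving `batch_size` is the usual fix for out-of-memory errors.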
448 | 438 | "cell_type": "markdown",
|
449 | 439 | "metadata": {},
|
450 | 440 | "source": [
|
451 | | - "Here, with only 20 epochs, we can see reasonable results, as both training and validation losses have decreased considerably, indicating that the model is learning to classify image scenes."
| 441 | + "Here, after only 20 epochs, both training and validation losses have decreased considerably, indicating that the model is learning to classify image scenes."
452 | 442 | ]
|
453 | 443 | },
|
454 | 444 | {
|
|
487 | 477 | "model.show_results(rows=4)"
|
488 | 478 | ]
|
489 | 479 | },
|
| 480 | + { |
| 481 | + "cell_type": "markdown", |
| 482 | + "metadata": {}, |
| 483 | + "source": [ |
| 484 | + "Here, with only 20 epochs, we can see reasonable results." |
| 485 | + ] |
| 486 | + }, |
490 | 487 | {
|
491 | 488 | "cell_type": "markdown",
|
492 | 489 | "metadata": {},
|
|
696 | 693 | "source": [
|
697 | 694 | "In this notebook, we demonstrated how to use the `FeatureClassifier` model from the `ArcGIS API for Python` to classify image scenes using training data from an external source."
|
698 | 695 | ]
|
| 696 | + }, |
| 697 | + { |
| 698 | + "cell_type": "markdown", |
| 699 | + "metadata": {}, |
| 700 | + "source": [ |
| 701 | + "- Citation: `@article{cheng2017remote,\n",
| 702 | + " title={Remote sensing image scene classification: Benchmark and state of the art},\n", |
| 703 | + " author={Cheng, Gong and Han, Junwei and Lu, Xiaoqiang},\n", |
| 704 | + " journal={Proceedings of the IEEE},\n", |
| 705 | + " volume={105},\n", |
| 706 | + " number={10},\n", |
| 707 | + " pages={1865--1883},\n", |
| 708 | + " year={2017},\n", |
| 709 | + " publisher={IEEE}\n", |
| 710 | + "}`" |
| 711 | + ] |
699 | 712 | }
|
700 | 713 | ],
|
701 | 714 | "metadata": {
|
|
714 | 727 | "name": "python",
|
715 | 728 | "nbconvert_exporter": "python",
|
716 | 729 | "pygments_lexer": "ipython3",
|
717 | | - "version": "3.7.11"
| 730 | + "version": "3.9.11" |
718 | 731 | }
|
719 | 732 | },
|
720 | 733 | "nbformat": 4,
|
|