
Commit be4c9f1

committed
modify: Pretraining_VGG_from_scratch.rst
apply suggestion from pull request #2971
1 parent bb15e46 commit be4c9f1

File tree

1 file changed: +19, -57 lines changed


beginner_source/Pretraining_Vgg_from_scratch.rst

Lines changed: 19 additions & 57 deletions
@@ -1,23 +1,4 @@
 
-.. DO NOT EDIT.
-.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
-.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
-.. "beginner/Pretraining_Vgg_from_scratch.py"
-.. LINE NUMBERS ARE GIVEN BELOW.
-
-.. only:: html
-
-    .. note::
-        :class: sphx-glr-download-link-note
-
-        Click :ref:`here <sphx_glr_download_beginner_Pretraining_Vgg_from_scratch.py>`
-        to download the full example code
-
-.. rst-class:: sphx-glr-example-title
-
-.. _sphx_glr_beginner_Pretraining_Vgg_from_scratch.py:
-
-
 Pre-training VGG from scratch
 ============================
 
@@ -73,7 +54,7 @@ VGG within the training time suggested in the paper.
 Setup
 --------
 
-.. note:: if you are running this in Google Colab, install ``albumentations`` by running:
+.. note:: If you are running this in Google Colab, install ``albumentations`` by running:
 
 .. code-block:: python
 
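
The install command itself sits outside this hunk. A minimal sketch of what such a cell can look like, assuming the standard PyPI package (the guard mirrors the removed script output "albumentations are already installed" further down this diff):

    import subprocess
    import sys

    # Install albumentations from PyPI only if it is not already importable
    # (illustrative; the tutorial's own install cell may differ).
    try:
        import albumentations  # noqa: F401
        print("albumentations are already installed")
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", "albumentations"])
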
@@ -82,7 +63,6 @@ Setup
 
 First, let's import the required dependencies:
 
-.. GENERATED FROM PYTHON SOURCE LINES 67-92
 
 .. code-block:: default
 
@@ -110,22 +90,6 @@ First, let's import the required dependencies:
 
     device = 'cuda' if torch.cuda.is_available() else 'cpu'
 
-
-
-
-
-
-.. rst-class:: sphx-glr-script-out
-
- .. code-block:: none
-
-    albumentations are already installed
-
-
-
-
-.. GENERATED FROM PYTHON SOURCE LINES 93-100
-
 VGG Configuration
 -----------------
 
@@ -134,7 +98,7 @@ We use the CIFAR100 dataset. The authors of the VGG paper scale images ``isotrop
 which means increasing the size of an image while maintaining its proportions,
 preventing distortion and maintaining the consistency of the object.
 
-.. GENERATED FROM PYTHON SOURCE LINES 100-140
+
 
 .. code-block:: default
 
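
The preprocessing code lives in the elided block under that directive; as a rough illustration of isotropic rescaling with ``albumentations``, with transform names and target sizes that are assumptions rather than the tutorial's values:

    import albumentations as A
    from albumentations.pytorch import ToTensorV2

    # Isotropic rescaling: resize the shorter side to a fixed length so the aspect
    # ratio (and object proportions) is preserved, then crop a square patch.
    train_transform = A.Compose([
        A.SmallestMaxSize(max_size=256),      # scale shorter side to 256, no distortion
        A.RandomCrop(height=224, width=224),  # square training crop
        A.HorizontalFlip(p=0.5),
        A.Normalize(),                        # per-channel mean/std normalization
        ToTensorV2(),                         # HWC numpy array -> CHW torch tensor
    ])
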
@@ -185,7 +149,7 @@ preventing distortion and maintaining the consistency of the object.
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 141-147
+
 
 .. note:: In the code above, we have defined the batch size as 32,
    which is recommended for Google Colab. However, if you are
@@ -194,7 +158,7 @@ preventing distortion and maintaining the consistency of the object.
    size according to your preference and hardware capabilities.
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 149-174
+
 
 Defining the dataset
 --------------------
@@ -222,7 +186,7 @@ To apply preprocessing, we need to override the CIFAR100 class that we have impo
 ``torchvision.datasets`` with a custom class:
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 174-227
+
 
 .. code-block:: default
 
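
The custom class itself is in the elided block; a minimal sketch of the overriding idea, with illustrative names that are not the tutorial's:

    import numpy as np
    from torchvision.datasets import CIFAR100

    class AlbumentationsCIFAR100(CIFAR100):
        """CIFAR100 that applies an albumentations pipeline inside __getitem__."""

        def __init__(self, root, albumentations_transform=None, **kwargs):
            super().__init__(root, transform=None, **kwargs)  # bypass torchvision transforms
            self.albumentations_transform = albumentations_transform

        def __getitem__(self, index):
            image, label = self.data[index], self.targets[index]  # HWC uint8 array, int label
            if self.albumentations_transform is not None:
                image = self.albumentations_transform(image=np.asarray(image))["image"]
            return image, label
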
@@ -286,7 +250,7 @@ To apply preprocessing, we need to override the CIFAR100 class that we have impo
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 228-238
+
 
 Define Model
 ------------
@@ -299,7 +263,7 @@ We will use two main components to define the model:
 * ``Config_channels``: This refers to the number of output channels for each layer.
 * ``Config_kernels``: This refers to the kernel size (or filter size) for each layer.
 
-.. GENERATED FROM PYTHON SOURCE LINES 238-266
+
 
 .. code-block:: default
 
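
The actual configuration lists are in the elided block; to make the idea concrete, here is an illustrative sketch in which integers are convolution output channels, ``'M'`` marks a max-pooling stage, and each kernel size lines up with one convolution (the values and the helper are assumptions, not copied from the tutorial):

    import torch.nn as nn

    # Illustrative VGG-style configuration (one variant only).
    config_channels = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]
    config_kernels = [3, 3, 3, 3, 3, 3, 3, 3]  # one kernel size per conv entry above

    def make_features(channels, kernels, in_channels=3):
        layers, k = [], 0
        for c in channels:
            if c == "M":
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers.append(nn.Conv2d(in_channels, c, kernel_size=kernels[k],
                                        padding=kernels[k] // 2))
                layers.append(nn.ReLU(inplace=True))
                in_channels, k = c, k + 1
        return nn.Sequential(*layers)
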
@@ -338,12 +302,10 @@ We will use two main components to define the model:
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 267-269
 
-Next, we define a model class that generates a model with a choice of six versions.
 
+Next, we define a model class that generates a model with a choice of six versions.
 
-.. GENERATED FROM PYTHON SOURCE LINES 269-363
 
 .. code-block:: default
 
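
The six versions correspond to configurations A through E (plus A-LRN) in the VGG paper. A hedged sketch of a version-selecting class, reusing the ``make_features`` helper sketched above (the class name, version keys, and classifier head are assumptions, and only two configurations are shown):

    import torch.nn as nn

    VGG_VERSIONS = {
        "A": [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"],
        "D": [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
              512, 512, 512, "M", 512, 512, 512, "M"],
    }

    class VGG(nn.Module):
        def __init__(self, version="D", num_classes=100):
            super().__init__()
            cfg = VGG_VERSIONS[version]
            n_convs = sum(isinstance(c, int) for c in cfg)
            self.features = make_features(cfg, [3] * n_convs)  # helper sketched above
            self.classifier = nn.Sequential(                   # assumes 224x224 inputs
                nn.Flatten(),
                nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
                nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
                nn.Linear(4096, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))
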
@@ -448,7 +410,7 @@ Next, we define a model class that generates a model with a choice of six versio
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 364-377
+
 
 Initializing Model Weights
 ----------------------------
@@ -464,7 +426,7 @@ to initialize the model weights. Specifically, we will apply Xavier
 initialization to the first few layers and the last few layers, while using
 random initialization for the remaining layers.
 
-.. GENERATED FROM PYTHON SOURCE LINES 377-399
+
 
 .. code-block:: default
 
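
As a rough illustration of that scheme (how many layers count as the "first few" and "last few", and the standard deviation of the random initialization, are assumptions here, not the tutorial's values):

    import torch.nn as nn

    def init_weights(model, num_edge_layers=2):
        """Xavier-init the first/last few conv and linear layers, normal-init the rest."""
        layers = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
        for i, layer in enumerate(layers):
            if i < num_edge_layers or i >= len(layers) - num_edge_layers:
                nn.init.xavier_uniform_(layer.weight)
            else:
                nn.init.normal_(layer.weight, mean=0.0, std=0.01)
            if layer.bias is not None:
                nn.init.zeros_(layer.bias)
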
@@ -497,15 +459,15 @@ random initialization for the remaining layers.
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 400-405
+
 
 Training the Model
 ------------------
 
 First, let's define top-k error.
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 405-422
+
 
 .. code-block:: default
 
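
The tutorial's own definition is in the elided block; the usual approach with ``torch.topk`` looks roughly like this (function name and signature are illustrative):

    import torch

    def topk_error(output, target, k=5):
        """Fraction of samples whose true label is not among the k highest-scoring classes."""
        _, pred = output.topk(k, dim=1)                # (batch, k) predicted class indices
        hit = pred.eq(target.unsqueeze(1)).any(dim=1)  # True where target is in the top-k
        return 1.0 - hit.float().mean().item()
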
@@ -533,13 +495,13 @@ First, let's define top-k error.
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 423-426
+
 
 Next, we initiate the model and loss function, optimizer and schedulers. In the VGG model,
 they use a softmax output, Momentum Optimizer, and scheduling based on accuracy.
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 426-434
+
 
 .. code-block:: default
 
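
A hedged sketch of that setup with SGD-with-momentum and an accuracy-driven ``ReduceLROnPlateau`` scheduler (the hyperparameter values are assumptions, not the tutorial's; ``VGG`` and ``device`` refer to the sketches and the import block above):

    import torch
    import torch.nn as nn

    model = VGG(version="D", num_classes=100).to(device)
    criterion = nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood on the logits
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=5e-4)
    # Drop the learning rate when validation accuracy stops improving.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="max",
                                                           factor=0.1, patience=5)
    # After each validation pass: scheduler.step(val_accuracy)
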
@@ -630,12 +592,12 @@ they use a softmax output, Momentum Optimizer, and scheduling based on accuracy.
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 435-437
+
 
 As mentioned above, we are using the ``CIFAR100`` dataset and set gradient
 clipping to 1.0 to prevent gradient exploding.
 
-.. GENERATED FROM PYTHON SOURCE LINES 437-570
+
 
 .. code-block:: default
 
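
In a training step, clipping to a maximum gradient norm of 1.0 sits between the backward pass and the optimizer step; a minimal sketch (the loader and the other names are the illustrative objects set up above, not the tutorial's loop):

    import torch

    for images, labels in train_loader:  # DataLoader over the CIFAR100 wrapper
        images, labels = images.to(device), labels.to(device)

        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()

        # Clip the global gradient norm to 1.0 to prevent exploding gradients.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

        optimizer.step()
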
@@ -3432,14 +3394,14 @@ clipping to 1.0 to prevent gradient exploding.
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 571-575
+
 
 (Optional) Additional Exercise: ImageNet
 --------------------------------------------
 
 You can apply the same model that we have trained above with another popular dataset called ImageNet:
 
-.. GENERATED FROM PYTHON SOURCE LINES 575-644
+
 
 .. code-block:: default
 
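
Switching to ImageNet mostly means a different dataset class and 1000 output classes; a hedged sketch with ``torchvision`` (the path and the use of ``ImageFolder`` are assumptions, and ImageNet itself has to be obtained separately):

    from torch.utils.data import DataLoader
    from torchvision import transforms
    from torchvision.datasets import ImageFolder

    train_tf = transforms.Compose([
        transforms.Resize(256),      # isotropic rescale of the shorter side
        transforms.RandomCrop(224),
        transforms.ToTensor(),
    ])
    # ImageNet is not downloadable through torchvision; point ImageFolder at a local copy.
    imagenet_train = ImageFolder("/path/to/imagenet/train", transform=train_tf)
    imagenet_loader = DataLoader(imagenet_train, batch_size=32, shuffle=True, num_workers=4)

    model = VGG(version="D", num_classes=1000).to(device)  # 1000 ImageNet classes
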
@@ -3519,7 +3481,7 @@ You can apply the same model that we have trained above with another popular dat
 
 
 
-.. GENERATED FROM PYTHON SOURCE LINES 645-660
+
 
 Conclusion
 ----------