
Commit e8311bd

modify: Pretraining_Vgg_from_scratch.rst
delete ".. GENERATED" comment lines

1 parent b4d6d9c commit e8311bd

File tree

1 file changed: +0 -38 lines changed


beginner_source/Pretraining_Vgg_from_scratch.rst

Lines changed: 0 additions & 38 deletions
@@ -90,11 +90,6 @@ First, let's import the required dependencies:
 
 albumentations are already installed
 
-
-
-
-.. GENERATED FROM PYTHON SOURCE LINES 93-100
-
 VGG Configuration
 -----------------
 

@@ -103,8 +98,6 @@ We use the CIFAR100 dataset. The authors of the VGG paper scale images ``isotropically``,
 which means increasing the size of an image while maintaining its proportions,
 preventing distortion and maintaining the consistency of the object.
 
-.. GENERATED FROM PYTHON SOURCE LINES 100-140
-
 .. code-block:: default
 
 
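
The isotropic scaling described in this hunk can be sketched with albumentations (a minimal sketch: the target sizes and the ``SmallestMaxSize``/``RandomCrop`` pairing are illustrative assumptions, not the tutorial's exact transform):

.. code-block:: python

    import albumentations as A

    # Isotropic rescaling: resize the shorter side to a target size while
    # preserving the aspect ratio, so objects are not distorted.
    isotropic_transform = A.Compose([
        A.SmallestMaxSize(max_size=256),      # shorter side -> 256, proportions kept
        A.RandomCrop(height=224, width=224),  # crop to the network input size
    ])

    # usage: scaled = isotropic_transform(image=img)["image"]  # img is an HWC numpy array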
@@ -146,17 +139,12 @@ preventing distortion and maintaining the consistency of the object.
 
     model_layers =None
 
-
-.. GENERATED FROM PYTHON SOURCE LINES 141-147
-
 .. note:: In the code above, we have defined the batch size as 32,
    which is recommended for Google Colab. However, if you are
    running this code on a machine with 24GB of GPU memory,
    you can set the batch size to 128. You can modify the batch
    size according to your preference and hardware capabilities.
 
-.. GENERATED FROM PYTHON SOURCE LINES 149-174
-
 Defining the dataset
 --------------------
 
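
As a usage sketch of that batch-size note (``train_dataset`` is assumed to be an instance of the custom CIFAR100 class defined later in this file):

.. code-block:: python

    from torch.utils.data import DataLoader

    # batch_size=32 suits Google Colab; with ~24GB of GPU memory the note
    # above suggests 128. Adjust to your hardware.
    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True,
                              num_workers=2, pin_memory=True)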
@@ -182,9 +170,6 @@ thus improving its performance on both test data and in real-world applications.
 To apply preprocessing, we need to override the CIFAR100 class that we have imported from the
 ``torchvision.datasets`` with a custom class:
 
-
-.. GENERATED FROM PYTHON SOURCE LINES 174-227
-
 .. code-block:: default
 
 
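
That override can be sketched as follows (the class name and the transform handling are assumptions; only the ``transpose`` and ``return`` lines appear verbatim in the next hunk):

.. code-block:: python

    from torchvision.datasets import CIFAR100

    class AlbumentationsCIFAR100(CIFAR100):  # hypothetical name
        """Apply an albumentations transform and return a CHW array."""

        def __getitem__(self, index):
            img, target = self.data[index], self.targets[index]  # img: HWC uint8
            if self.transform is not None:
                img = self.transform(image=img)["image"]  # albumentations call style
            img = img.transpose((2, 0, 1))  # HWC -> CHW, as PyTorch expects
            return img, target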
@@ -240,9 +225,6 @@ To apply preprocessing, we need to override the CIFAR100 class that we have imported from the
             img=img.transpose((2,0,1))
             return img, target
 
-
-.. GENERATED FROM PYTHON SOURCE LINES 228-238
-
 Define Model
 ------------
 
@@ -254,8 +236,6 @@ We will use two main components to define the model:
 * ``Config_channels``: This refers to the number of output channels for each layer.
 * ``Config_kernels``: This refers to the kernel size (or filter size) for each layer.
 
-.. GENERATED FROM PYTHON SOURCE LINES 238-266
-
 .. code-block:: default
 
 
@@ -285,13 +265,8 @@ We will use two main components to define the model:
         "E":[3,3,2,3,3,2,3,3,3,3,2,3,3,3,3,2,3,3,3,3,2],
     }
 
-.. GENERATED FROM PYTHON SOURCE LINES 267-269
-
 Next, we define a model class that generates a model with a choice of six versions.
 
-
-.. GENERATED FROM PYTHON SOURCE LINES 269-363
-
 .. code-block:: default
 
 
@@ -300,7 +275,6 @@ Next, we define a model class that generates a model with a choice of six versions.
             in_channels = 3
             i = 1
             for out_channels , kernel in zip(cfg_c,cfg_k) :
-                # print(f"{i} th layer {out_channels} processing")
                 if out_channels == "M" :
                     feature_extract += [nn.MaxPool2d(kernel,2) ]
                 elif out_channels == "LRN":
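
A minimal sketch of how the two config lists drive layer construction, reconstructed around the loop in this hunk (the ``make_layers`` helper and the ``padding=1`` value are assumptions, and the ``"LRN"`` branch is omitted for brevity):

.. code-block:: python

    import torch.nn as nn

    def make_layers(cfg_c, cfg_k, in_channels=3):
        layers = []
        for out_channels, kernel in zip(cfg_c, cfg_k):
            if out_channels == "M":  # max-pooling entry
                layers += [nn.MaxPool2d(kernel, 2)]
            else:                    # convolutional entry
                layers += [nn.Conv2d(in_channels, out_channels, kernel, padding=1),
                           nn.ReLU(inplace=True)]
                in_channels = out_channels
        return nn.Sequential(*layers)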
@@ -372,8 +346,6 @@ Next, we define a model class that generates a model with a choice of six versions.
                 self.last_xavier+=1
                 nn.init.constant_(m.bias, 0)
 
-.. GENERATED FROM PYTHON SOURCE LINES 364-377
-
 Initializing Model Weights
 ----------------------------
 
@@ -388,8 +360,6 @@ to initialize the model weights. Specifically, we will apply Xavier
 initialization to the first few layers and the last few layers, while using
 random initialization for the remaining layers.
 
-.. GENERATED FROM PYTHON SOURCE LINES 377-399
-
 .. code-block:: default
 
     # .. note::
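
The scheme that paragraph describes can be sketched like this (the helper name is an assumption, and it applies Xavier unconditionally for brevity, whereas the tutorial restricts it to the first and last few layers):

.. code-block:: python

    import torch.nn as nn

    def xavier_init(m):
        # Xavier (Glorot) initialization for conv/linear weights, zero bias,
        # matching the nn.init calls visible elsewhere in this diff.
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.xavier_uniform_(m.weight)
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)

    # model.apply(xavier_init)  # the tutorial applies this only to selected layers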
@@ -413,15 +383,11 @@ random initialization for the remaining layers.
     #
     # These values have been found to work well in practice.
 
-.. GENERATED FROM PYTHON SOURCE LINES 400-405
-
 Training the Model
 ------------------
 
 First, let's define top-k error.
 
-.. GENERATED FROM PYTHON SOURCE LINES 405-422
-
 .. code-block:: default
 
     def accuracy(output, target, topk=(1,)):
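
For reference, the full function can be reconstructed around the fragments visible in this hunk (``res.append(correct_k)``, ``return res``); the body below follows the common PyTorch top-k pattern and may differ in detail from the tutorial:

.. code-block:: python

    import torch

    def accuracy(output, target, topk=(1,)):
        """For each k in topk, count how many targets appear among the
        k highest-scoring predictions (top-k accuracy)."""
        with torch.no_grad():
            maxk = max(topk)
            _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)  # (N, maxk)
            pred = pred.t()                                                # (maxk, N)
            correct = pred.eq(target.view(1, -1).expand_as(pred))
            res = []
            for k in topk:
                correct_k = correct[:k].reshape(-1).float().sum(0)
                res.append(correct_k)
            return res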
@@ -439,13 +405,9 @@ First, let's define top-k error.
             res.append(correct_k)
         return res
 
-.. GENERATED FROM PYTHON SOURCE LINES 423-426
-
 Next, we initialize the model, the loss function, the optimizer, and the schedulers. Following
 the VGG paper, we use a softmax output, a momentum optimizer, and accuracy-based scheduling.
 
-.. GENERATED FROM PYTHON SOURCE LINES 426-434
-
 .. code-block:: default
 
     model_version='B'
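
A sketch of that setup: cross-entropy (softmax) loss, SGD with momentum, and a scheduler keyed to accuracy. The hyperparameter values are illustrative assumptions, and ``model`` refers to the VGG instance created above:

.. code-block:: python

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()  # softmax + negative log-likelihood in one loss
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=5e-4)
    # Reduce the learning rate when validation accuracy stops improving;
    # mode="max" because the monitored quantity is accuracy.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.1, patience=5)

    # each epoch: scheduler.step(val_top1_accuracy)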
